--- abstract: 'We present results of the search for supersolid $^4$He using low-frequency, low-level mechanical excitation of a solid sample grown and cooled at fixed volume. We have observed low frequency non-linear resonances that constitute anomalous features. These features, which appear below $\sim$ 0.8 K, are absent in $^3$He. The frequency, the amplitude at which the nonlinearity sets in, and the upper temperature limit of existence of these resonances depend markedly on the sample history.' address: | CEA-DRECAM, Service de Physique de l’État Condensé,\ Centre d’Études de Saclay, 91191 Gif-sur-Yvette Cedex (France)\ $^*$CNRS-Laboratoire de Physique des Solides,\ Bât. 510, Université Paris-Sud, 91405 Orsay (France) author: - 'Yu. Mukharsky, O. Avenel, and E. Varoquaux$^*$' title: '**[Search for supersolidity in $^4$He in low-frequency sound experiments.]{}**' --- Introduction ============ The observation of an anomalous behaviour in the moment of inertia of solid $^4$He first reported by Kim and Chan[@Kim:04; @Kim:04a] has recently been confirmed by a number of different groups using the same torsional oscillator technique, Keiya Shirahama [*et al.*]{} from Keio University,[@Shirahama:06] Minoru Kubota [*et al.*]{} from the University of Kyoto,[@Kubota:06] and Rittner and Reppy from Cornell University.[@Rittner:06] The last group, however, reports a marked dependence of the supersolid response upon sample annealing. Other experiments probing the mechanical properties of solid $^4$He at temperatures below 1 K by a number of different techniques have concluded, with increasing certainty as the techniques were refined, that no [*dc*]{} superflow was taking place on a scale that would match that reported by Kim and Chan (see [@Day:06] and references therein). A possible exception is a recent gravitational flow experiment conducted on the liquid-solid coexistence curve by Sasaki [*et al.*]{}[@Sasaki:06] We report here measurements of the response of solid $^4$He samples to low-frequency, low-level mechanical excitations that were carried out with the goal of detecting the presence of a possible supermobile component. It can be expected that such a component would give rise to a hydrodynamic mode with a lower propagation velocity than that of ordinary (first) sound. Such a mode, if associated with a Bose condensate, should show little internal damping at low temperature, and, in all likelihood, should display a critical velocity above which the supermobile property breaks down. The experiment ============== ![\[cell\] Schematic view of the experimental cell. The flexible membrane makes a partition between the bottom and the top chambers. As shown in the blown-up view, this membrane is separated by small gaps from the solid walls in which the flat electrodynamic pickup coil (top) and the capacitor counter-electrode (bottom) are embedded.](cell02.eps){width="7cm"} The experimental cell used in these [*ac*]{} stress-strain measurements is built from components of the hydromechanical resonator previously used for phase slippage experiments in the $^4$He and $^3$He superfluids.[@Avenel:87; @Varoquaux:87] Pressure is applied to the solid by a flexible membrane positioned between two solid walls as shown in Fig.\[cell\]. The two flat cylindrical chambers above and below the diaphragm have a height of approximately 150 $\mu$m and a diameter of 8 mm. The top chamber is open to the main volume of the cell by a 0.6 mm diameter cylindrical vent of height 0.5 mm.
The bottom chamber has a larger opening, 1.5 mm in diameter, 5 mm in length. The path around the cell is $\sim$ 2.6 cm in length. The volume around the cell has a largest dimension of 3.5 cm. The corresponding $\lambda/2$ resonance frequency for a longitudinal sound velocity of 440 m/sec is 6.3 kHz, which should fix the low end of the acoustic resonance spectrum in the cell. Several flexible membranes have been used, made of 7.5 $\mu$m thick Kapton coated with a thin superconducting film of either Al or Nb. The membrane is electrostatically actuated by applying between it and ground an [*ac*]{} voltage of up to 7 volts rms superposed on a [*dc*]{} bias of typically 150 volts. The corresponding force per unit area, of the order of 1 Pa, is much weaker than in other experiments probing solid $^4$He.[@Day:06] The displacement of the diaphragm is measured with an electrodynamic sensor using a [*dc*]{}-SQUID as front-end amplifier to achieve a sensitivity of $\sim 10^{-15}$ m, significantly higher than in other attempts to observe supersolidity in $^4$He, apart possibly from torsional oscillator experiments. Experimental observations ========================= We have observed low frequency non-linear resonances that constitute anomalous features in the sense that they appear below a sample-dependent onset temperature $T_{\rm r}$ and that their frequencies are too low to be usual acoustic resonances. These resonances have not been observed in a pure solid $^3$He sample. When the $^4$He samples, which are obtained by cooling along the melting curve at constant volume, are grown and cooled very gently, these resonances are either weak or not seen at all. Efforts to produce $^4$He samples showing these features in a reproducible manner have so far failed. In all likelihood, the solid sample ends up badly fractured at low temperature, which greatly affects its low-frequency stress-strain response. Slowly raising and lowering the temperature while remaining in the hcp phase changes, or even suppresses, these resonances. Such a resonance is shown in Fig. \[results\], where the displacement of the flexible membrane when the frequency is swept through resonance at constant excitation drive level is plotted as a function of frequency (i) for various temperatures (left frame) at the $\times70$ drive level; (ii) for various drive levels at 12.6 mK (right frame). When the temperature is raised, the resonance amplitude decreases, the width increases and the frequency shifts upward. At 90 mK, the resonance features are lost in the baseline correction uncertainties; the resonance disappears completely above 90 mK. When the drive level increases, the resonance amplitude also increases but not in proportion to the drive, the resonance frequency shifts downward, the width increases and the shape becomes distorted. The non-linear behaviour does not appear to set in very abruptly. At low drive level and low temperature, the resonance is fairly sharp, with a quality factor $Q$ of the order of 100. The data shown in Fig. \[results\] were obtained after a fast cooldown from 1 K. The solid was formed by the blocked capillary method starting from a pressure of 47 bars at a temperature of 2.3 K with nominal purity $^4$He containing typically 1 part in $10^7$ of $^3$He impurities. Besides the resonance at 524 Hz shown in the figure, weaker resonances were also seen at 593, 808, 924, 1088, and 1392 Hz.
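The anomalously low values of these frequencies can be checked against the geometric estimate quoted above. The short script below is a minimal numerical sketch using only the numbers given in the text (440 m/s longitudinal sound velocity, 3.5 cm largest dimension); it merely confirms that all observed resonances lie well below the lowest ordinary acoustic mode expected for the cell.

```python
# Minimal check: observed resonances vs. the lambda/2 acoustic floor of the cell.
v_sound = 440.0    # longitudinal sound velocity in solid 4He, m/s (from text)
L_max = 3.5e-2     # largest dimension of the volume around the cell, m (from text)

f_floor = v_sound / (2.0 * L_max)   # lowest ordinary acoustic resonance, ~6.3 kHz

observed_hz = [524, 593, 808, 924, 1088, 1392]
print(f"acoustic floor ~ {f_floor / 1e3:.1f} kHz")
for f in observed_hz:
    print(f"  {f:4d} Hz = {f / f_floor:.2f} of the acoustic floor")
```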
These resonances are attached to a particular sample configuration and usually remain unchanged as long as the temperature is kept well below 1 K, although some slow relaxation was occasionally observed at low temperature. Raising the temperature above 1 K and cooling again gives a different sample configuration, resulting in a different set of resonance frequencies and a different onset temperature. Low frequency resonances have been observed at temperatures up to and above 700 mK. When cooling very slowly from 1.5 K, these features are either weak or completely absent. Discussion ========== The sample-history dependence of the low-frequency resonances is most probably linked to the polycrystalline state that forms in the solid when it is cooled in the fixed volume container. Thermal gradients develop and the solid sample breaks under thermal stress. Annealing becomes very slow below $\sim$ 1.2 K and the defects created during the cool-down remain frozen. It has been known for some time [@Lie-zhao:86] that layers of superfluid liquid persist at pressures up to 50 bars in very small pores of Vycor glass, while two amorphous solid layers are adsorbed on the glass surface and solid forms in the bulk of the pores. This type of interfacial superfluidity has been observed by Yamamoto [*et al.*]{}[@Yamamoto:04] in pores as small as 2.5 nm, in which it persists up to 35 bars at low temperature. It has been suggested by Beamish [@Beamish:04] and by Dash and Wettlaufer [@Dash:05] that such superfluid layers may provide mechanical decoupling between the $^4$He solid and the torsional oscillator walls and be a possible explanation for the experiments of Kim and Chan.[@Kim:04; @Kim:04a] This slip mechanism would however not explain why defects in the bulk of the crystal sample appear to play an important role in the moment-of-inertia anomaly.[@Rittner:06] A number of authors have pointed out that a perfect crystal cannot possess off-diagonal long range order, a result established by Penrose and Onsager in 1956.[@Penrose:56a] It was then speculated that the superfluid layers forming at the grain boundaries in a polycrystalline sample could play a major role in the supersolidity.[@Burovski:05; @Prokof'ev:05; @Khairallah:05] It seems quite possible that the stress-strain anomalies of solid $^4$He formed at constant volume reported here are related to the existence of pre-melted superfluid films at grain boundaries. According to Dash and Wettlaufer, the non-crystalline interface between grains may have a thickness of 4$\sim$8 atomic layers.[@Dash:05] Only a fraction of these layers forms a 2D superfluid, the rest being amorphous solid. A Kosterlitz-Thouless transition temperature $T_{\rm KT}\sim0.1$ K in a $^4$He film at saturated vapour pressure would correspond to an equivalent superfluid fraction of 3% of an atomic layer.[@Rudnick:78] The critical velocity of such a dilute 2D superfluid is in the range of a few cm/s.[@Gillis:89] If actual superflow is thought to take place, this would imply that there are many interfaces in parallel and that the grains are very small, in the range of 1 to 10 $\mu$m in size. In such a case, the properties of this solid-superfluid slurry would be more homogeneous than the wide variation of resonance frequencies and onset temperatures would suggest.
If there are indeed fewer crystallites, the pressure change induced by the membrane can still be transmitted in the superfluid layers by fourth sound, the velocity of which is, at low temperature, of the order of the first sound velocity in the bulk. In a very dilute Bose condensate, the first sound velocity is low. It is expressed in terms of the scattering length $a_0$ and the condensate number density $n_0$ by $c=(2\pi \hbar/m_4)\sqrt{n_0a_0/\pi}$, $\hbar$ being the reduced Planck constant and $m_4$ the atomic mass of $^4$He.[@Mewes:96] Taking $a_0$ of the order of the hard core radius of the $^4$He atomic potential (2.5 Å) yields an estimate of 27 m/s for the layer in which 3 % of the atoms are in the condensate, a value low enough to account for the observed resonances. The frequency also depends on the geometry of the actual path between grains. So far, no systematic pattern of behaviour for frequencies and critical temperatures has emerged in our experiments. To conclude, we have observed low frequency resonance modes that depend on the presence of defects in the solid $^4$He sample. This observation provides evidence that there exists a component of the sample that transmits low-velocity, low-dissipation pressure waves through the sample. The superfluid layers that are believed to coat the grain boundaries may constitute such a component. We acknowledge useful correspondence with John Reppy and Sébastien Balibar. [10]{} E. Kim and M. Chan, Nature [**427**]{}, 225 (2004). E. Kim and M. Chan, Science [**305**]{}, 1942 (2004). K. Shirahama, M. Kondo, S. Takada, and Y. Shibayama, Bull. Am. Phys. Soc. (2005). , ScienceNow (2006), quoted by A. Cho in the issue of March 15. A.-S. Rittner and J. Reppy, arXiv preprint cond-mat/0604528, 2006. J. Day and J. Beamish, Phys. Rev. Lett. [**96**]{}, 105304 (2006). S. Balibar, private communication. O. Avenel and E. Varoquaux, Jpn. J. Appl. Phys. [**26**]{}, 1798 (1987). E. Varoquaux, O. Avenel, and M. Meisel, Can. J. Phys. [**65**]{}, 1377 (1987). C. Lie-zhao, D. F. Brewer, C. Girit, E. N. Smith, and J. D. Reppy, Phys. Rev. B [**33**]{}, 106 (1986). K. Yamamoto, H. N. Y. Shibayama, and K. Shirahama, Phys. Rev. Lett. [**93**]{}, 075302 (2004). J. Beamish, Nature (London) [**427**]{}, 204 (2004). J. Dash and J. Wettlaufer, Phys. Rev. Lett. [**94**]{}, 235301 (2005). O. Penrose and L. Onsager, Phys. Rev. [**104**]{}, 576 (1956). E. Burovski, E. Kozik, A. Kuklov, N. Prokof’ev, and B. Svistunov, Phys. Rev. Lett. [**94**]{}, 165301 (2005). N. Prokof’ev and B. Svistunov, Phys. Rev. Lett. [**94**]{}, 155302 (2005). S. Khairallah and D. Ceperley, Superfluidity of dense $^4$[He]{} in Vycor, arXiv:physics/0502039, 8 Feb. 2005. I. Rudnick, Phys. Rev. Lett. [**40**]{}, 1454 (1978). K. Gillis, S. Volz, and J. Mochel, Phys. Rev. B [**40**]{}, 6684 (1989). M.-O. Mewes et al., Phys. Rev. Lett. [**77**]{}, 988 (1996).
--- abstract: 'Recently, the inverse magnetocaloric effect has been observed in different compounds. However, manifestations of the effect in manganites are very rare. We have found an inverse magnetocaloric effect in polycrystalline La$_{0.125}$Ca$_{0.875}$MnO$_{3}$. The phenomenon is attributed to the stabilization of an antiferromagnetic state associated with the inherent magnetic inhomogeneous phases of this compound.' author: - Anis Biswas - Tapas Samanta - 'S. Banerjee' - 'I. Das' title: 'Inverse magnetocaloric effect in polycrystalline La$_{0.125}$Ca$_{0.875}$MnO$_{3}$' --- The study of the magnetocaloric effect (MCE) has been a subject of intense research owing to its possible application in magnetic refrigeration [@pecharsky1; @phan; @pecharsky2; @nature; @tsapl1; @tsapl2; @guo; @tsjap; @phanapl; @tsjpcm; @sun; @tapas; @abmce1; @abmce2; @phanjap; @abmce3; @abmce4; @jap]. It has been widely observed that the magnetic entropy is reduced by the application of a magnetic field in different materials, including some paramagnetic salts [@pecharsky1]. Cooling can be achieved by exposing such materials to a magnetic field and subsequently demagnetizing them adiabatically. However, there are recent reports on the discovery of materials showing the inverse situation, in which the magnetic configuration entropy increases upon application of a magnetic field [@nature]. Such an effect is known as the inverse magnetocaloric effect (IMCE). For materials exhibiting IMCE, low temperatures can be attained simply by magnetizing them adiabatically [@nature]. Some examples of such materials are NiMnSn, FeRh, TbNiAl$_{4}$, DySb, Tb$_{2}$Ni$_{2}$Sn, NiMnSb, etc. [@nature; @nimnsn; @ferh; @tbnial4; @tbnial4a; @dysb; @tbni2sn; @nimnsb]. Materials which show IMCE can be used as a heat sink for the heat generated when a conventional magnetocaloric material is magnetized, before cooling by demagnetization under adiabatic conditions [@nature; @jap]. The refrigeration efficiency can be enhanced by using materials exhibiting IMCE in composites with conventional magnetic refrigerants [@nature]. Therefore, the search for suitable materials displaying IMCE is an important issue in the ongoing research related to magnetic refrigeration. Our present study is based on the magnetocaloric properties of polycrystalline La$_{0.125}$Ca$_{0.875}$MnO$_{3}$. We have observed IMCE with a quite large value of the magnetic entropy change (-$\Delta{S}$) in this compound. The perovskite manganites with general formula R$_{1-x}$B$_{x}$MnO$_{3}$ (R is a rare earth, B is a bivalent ion) are considered potential magnetic refrigerants [@phan; @abmce1; @abmce2; @phanjap; @abmce3; @abmce4]. La$_{1-x}$Ca$_{x}$MnO$_{3}$ is a manganite system which has a very rich phase diagram depending on the value of x [@tokura]. Although some works regarding the magnetocaloric properties of this system with x$\sim$ $0.2-0.5$ have been reported, there is hardly any such study for high doping concentrations of the bivalent ion [@phan]. In fact, little attention has been paid to the study of MCE for other manganite systems in the high doping region (i.e., high values of x) as well. We have chosen La$_{0.125}$Ca$_{0.875}$MnO$_{3}$ for two main reasons. Firstly, it is a system with a high doping concentration of the bivalent ion. Secondly, its position in the phase diagram of La$_{1-x}$Ca$_{x}$MnO$_{3}$ is at the phase boundary between the antiferromagnetic (AFM) and canted antiferromagnetic (CAF) phases [@tokura].
Therefore, this compound also provides an opportunity to study the effect of the phase boundary on the magnetocaloric property of a system. There are reports of the observation of IMCE in charge-ordered systems such as Pr$_{0.5}$Sr$_{0.5}$MnO$_{3}$ and Nd$_{0.5}$Sr$_{0.5}$MnO$_{3}$ [@chenepl; @sande]. For those systems, the charge-order transition and the antiferromagnetic transition occur simultaneously [@chenepl; @sande]. In complete contrast to those cases, in the present system charge ordering hardly occurs. In spite of this, the system exhibits IMCE. The polycrystalline La$_{0.125}$Ca$_{0.875}$MnO$_{3}$ was prepared by the sol-gel method. The details of the sol-gel method have been described in our previous article [@ab1]. At the end of the sol-gel process, the decomposed gel was annealed at $1400^{\circ}$C for $36$ hours. The x-ray powder diffraction study has confirmed the formation of the sample in a single crystallographic phase with the Pnma space group. The lattice parameters are determined as a = $5.347 \AA$, b = $7.442 \AA$, and c = $5.318 \AA$. A commercial SQUID magnetometer was utilized for the magnetization study. The temperature dependence of the dc susceptibility (Fig. 1) shows a clear antiferromagnetic transition at $\sim 120$ K. The transition temperature is consistent with the phase diagram of the sample [@tokura]. The magnetization measurement has been performed in the presence of a $100$ Oe magnetic field in the zero-field-cooled protocol. We have also carried out a specific heat study using the semi-adiabatic heat pulse method. The transition is manifested by an observed maximum in the temperature dependence of the specific heat as well (inset of Fig. 1). The isothermal magnetic field dependence of the magnetization \[M(H)\] at different temperatures has been studied for the sample (Fig. 2). To get intricate details of M(H), we have examined Banerjee’s plot [@banerjee], i.e., the H/M vs. M$^{2}$ behavior around the transition temperature (inset, Fig. 2). The negative slope of Banerjee’s plot is evident in the high magnetic field region (above $\sim 35$ kOe), which is a characteristic of a first order transition [@banerjee]. From the isothermal M(H) curves, the change of the magnetic entropy (-$\Delta{S}$) was estimated for various magnetic fields by using the Maxwell relation [@pecharsky1], $$\left(\frac{\partial{S}}{\partial{H}}\right)_{T} = \left( \frac{\partial{M}}{\partial{T}}\right)_{H}$$ The temperature dependence of -$\Delta{S}$ for different magnetic fields is shown in Fig. 3. A minimum in -$\Delta{S}$(T) has been observed at $\sim 120$ K, where the antiferromagnetic transition occurs. One important feature of -$\Delta{S}$(T) is that the value of -$\Delta{S}$ remains negative for all magnetic fields at the transition temperature for this sample. The negative value of -$\Delta{S}$ increases with increasing magnetic field (i.e., the magnetic configuration entropy increases) and it reaches $\sim -6.4$ J/kg K for the magnetic field change $0-70$ kOe. The increase of the negative value of -$\Delta{S}$ with magnetic field at the transition temperature is shown in inset (b) of Fig. 3. From the magnetocaloric behavior of the sample, it seems that although there is a change of slope in Banerjee’s plot at high magnetic field, the antiferromagnetism still persists in fields up to $\sim 70$ kOe.
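The -$\Delta{S}$ values quoted above follow from a numerical integration of the Maxwell relation over the measured isotherms. The following is a minimal sketch of that procedure, assuming the magnetization has already been interpolated onto a regular (T, H) grid; the grid, the placeholder array, and the units below are illustrative, not the actual data.

```python
import numpy as np

# Placeholder (T, H) grid; in practice M[i, j] is the measured magnetization
# at temperature T[i] and field H[j], interpolated from the M(H) isotherms.
T = np.linspace(100.0, 140.0, 21)        # K
H = np.linspace(0.0, 70.0e3, 36)         # Oe
M = np.zeros((T.size, H.size))           # fill with measured data

# Maxwell relation (dS/dH)_T = (dM/dT)_H, integrated over the field change 0 -> Hmax:
#   -Delta S(T) = -int_0^Hmax (dM/dT)_H dH
dMdT = np.gradient(M, T, axis=0)                                   # (dM/dT)_H on the grid
minus_dS = -0.5 * ((dMdT[:, 1:] + dMdT[:, :-1]) * np.diff(H)).sum(axis=1)

for Ti, dSi in zip(T, minus_dS):
    print(f"T = {Ti:6.1f} K   -Delta S = {dSi: .3e}  (units set by M and H)")
```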
Recently, Ranke et al. put forward a theoretical framework for the magnetocaloric properties of antiferromagnetic systems [@ranke]. The temperature dependence of -$\Delta{S}$ for La$_{0.125}$Ca$_{0.875}$MnO$_{3}$ follows that theoretical model quite convincingly. Previously, we have also observed a small negative -$\Delta{S}$ at the antiferromagnetic transition temperature for another manganite system, for magnetic fields below the field required for quenching the antiferromagnetism [@abmce2]. Now the question arises about the origin of the enhancement of the magnetic configuration entropy with increasing magnetic field. According to the phase diagram (temperature vs. x) of La$_{1-x}$Ca$_{x}$MnO$_{3}$, La$_{0.125}$Ca$_{0.875}$MnO$_{3}$ is situated at the phase boundary between the antiferromagnetic and the inhomogeneous canted antiferromagnetic (CAF) state [@tokura]. It is therefore likely that the CAF phase influences the magnetic state of the sample. As a result, a magnetically inhomogeneous phase with mixed magnetic exchange interactions can be stabilized at the antiferromagnetic transition temperature. The enhancement of the magnetic configuration entropy with the application of a magnetic field can occur for such a system, giving rise to IMCE [@nature] with a large value of -$\Delta{S}$. We have also studied the magnetocaloric property of another La$_{1-x}$Ca$_{x}$MnO$_{3}$ compound with a slightly different value of x (x$\sim 0.83$). That sample is also polycrystalline and was prepared under conditions similar to those for La$_{0.125}$Ca$_{0.875}$MnO$_{3}$. IMCE is also observed for that compound at its antiferromagnetic transition temperature \[inset (a), Fig. 3\]. However, the value of -$\Delta{S}$ is considerably smaller for La$_{0.17}$Ca$_{0.83}$MnO$_{3}$ in comparison with La$_{0.125}$Ca$_{0.875}$MnO$_{3}$. According to the phase diagram of La$_{1-x}$Ca$_{x}$MnO$_{3}$, the magnetic transition in the case of La$_{0.17}$Ca$_{0.83}$MnO$_{3}$ is from paramagnetic to antiferromagnetic, with no influence of the CAF phase [@tokura]. From the comparison of the magnetocaloric properties of the two compounds, it can be argued that the presence of the magnetically inhomogeneous CAF state plays a vital role in the IMCE and that the magnetic entropy change becomes significantly enhanced because of the influence of such a state. To summarize, we have observed IMCE in polycrystalline La$_{0.125}$Ca$_{0.875}$MnO$_{3}$ with a large value of -$\Delta{S}$ at the antiferromagnetic transition temperature. Possibly, the stabilization of an inhomogeneous magnetic state in this compound at its antiferromagnetic transition temperature causes the increase in magnetic entropy in the presence of a magnetic field. The observation of an enhancement of the magnetic configuration entropy with increasing magnetic field is very rare, especially for manganite systems. [abc]{} K. A. Gschneidner, Jr., V. K. Pecharsky, and A. O. Tsokol, Rep. Prog. Phys., [**68**]{}, 1479 (2005). M. H. Phan, S. Yu, J. Magn. Magn. Mat., [**308**]{}, 325 (2007). V. K. Pecharsky, K. A. Gschneidner, Jr., Phys. Rev. Lett., [**78**]{}, 4494 (1997). T. Krenke, E. Duman, M. Acet, E. Wassermann, X. Moya, L. Manosa, and A. Planes, Nature Materials, [**4**]{}, 450 (2005). Tapas Samanta, I. Das, and S. Banerjee, Appl. Phys. Lett., [**91**]{}, 082511 (2007). Tapas Samanta, I. Das, S. Banerjee, Appl. Phys. Lett., [**91**]{}, 152506 (2007). Z. B. Guo, Y. W. Du, J. S. Zhu, H. Huang, W. P. Ding, and D. Feng, Phys. Rev. Lett., [**78**]{}, 1142 (1997). Tapas Samanta, I. Das, S. Banerjee, J. Appl. Phys., [**104**]{}, 123901 (2008). M. Phan, S. C. Yu, N. Hur, Appl. Phys. Lett., [**86**]{}, 072504 (2005). Tapas Samanta, I. Das, S. Banerjee, J. Phys.: Cond. Mat., [**21**]{}, 026010 (2009).
Y. Sun, M. Salamon, S. Chun, J. Appl. Phys., [**92**]{}, 3235 (2002). Tapas Samanta, I. Das, Phys. Rev. B, [**74**]{}, 132405 (2006). Anis Biswas, Tapas Samanta, S. Banerjee, I. Das, J. Appl. Phys., [**103**]{}, 013912 (2008). Anis Biswas, Tapas Samanta, S. Banerjee, I. Das, Appl. Phys. Lett., [**92**]{}, 012502 (2008). M. Phan, S. Yu, N. Hur, Y. Yeong, J. Appl. Phys., [**96**]{}, 1154 (2004). Anis Biswas, Tapas Samanta, S. Banerjee, I. Das, Appl. Phys. Lett., [**92**]{}, 212502 (2008). Anis Biswas, Tapas Samanta, S. Banerjee, I. Das, AIP Conf. Proc., [**1003**]{}, 109 (2008). R. J. Joenk, J. Appl. Phys., [**34**]{}, 1097 (1963). S. Chatterjee, S. Giri, S. Majumdar, S. K. De, J. Phys. D: Appl. Phys., [**42**]{}, 065001 (2009). M. P. Annaorazov, S. A. Nikitin, A. L. Tyurin, K. A. Asatryan, A. K. Dovletov, J. Appl. Phys., [**79**]{}, 1689 (1996). L. Li, K. Nishimura, W. D. Hutchison, Solid State Commun., [**149**]{}, 932 (2009). L. Li, K. Nishimura, W. D. Hutchison, J. Phys.: Conf. Ser., [**150**]{}, 042113 (2009). W. J. Hu, J. Du, B. Li, Q. Zhang, Z. D. Zhang, Appl. Phys. Lett., [**92**]{}, 192505 (2005). P. Kumar, N. Singh, K. G. Suresh, A. K. Nigam, arXiv:cond-mat/0609335. W. J. Feng, J. Du, B. Li, W. J. Hu, Z. D. Zhang, X. H. Li, Y. F. Deng, J. Phys. D: Appl. Phys., [**42**]{}, 125003 (2009). Edited by Y. Tokura (Gordon and Breach Science Publishers, 2000). P. Chen, Y. W. Du, G. Ni, Europhys. Lett., [**52**]{}, 589 (2000). P. Sande, L. Hueso, D. Miguens, J. Rivas, F. Rivadulla, M. A. Lopez-Quintela, Appl. Phys. Lett., [**79**]{}, 2040 (2001). Anis Biswas, I. Das, C. Majumdar, J. Appl. Phys., [**98**]{}, 124310 (2005). S. K. Banerjee, Phys. Lett., [**12**]{}, 16 (1964). P. J. Ranke, N. Oliveira, B. P. Alho, E. Plaza, V. S. Sousa, L. Caron, M. S. Reis, J. Phys.: Cond. Mat., [**21**]{}, 056004 (2009).
--- abstract: 'We study the methods, and their accuracies, for determining $\tan\beta$ in two Higgs doublet models at future lepton colliders. In addition to the previously proposed methods using direct production of additional Higgs bosons, we propose a method using the precision measurement of the decay branching ratio of the standard-model (SM)-like Higgs boson. The method is available if there is a deviation from the SM in the coupling constants of the Higgs boson with the weak gauge bosons. We find that, depending on the type of Yukawa interactions, this method can give the best sensitivity in a wide range of $\tan\beta$.' author: - Hiroshi Yokoya title: ' $\tan\beta$ determination from the Higgs boson decay at the International Linear Collider[^1] ' --- Introduction ============ A Higgs boson has been discovered at the LHC [@Ref:atlas; @Ref:cms]. No indication has been found so far of a deviation from the standard model (SM) in the nature of the Higgs boson, such as the decay width, spin-parity, and also the coupling constants with SM fermions, up to the current experimental accuracies [@Aad:2013wqa; @Aad:2013xqa; @Chatrchyan:2013iaa; @Chatrchyan:2013mxa]. In spite of this situation, the Higgs sector is not yet established to be composed of only one Higgs doublet. Namely, there are various other possibilities which are still consistent with the current experimental data, and moreover, plenty of models with extended Higgs sectors have been proposed to explain phenomena which may indicate physics at a new energy scale beyond the SM, such as the hierarchy problem, neutrino masses, dark matter, etc. Searches for evidence of extended Higgs sectors are of primary importance at future experiments. At the second stage of the LHC experiment with $\sqrt{s}=13$ or 14 TeV, direct searches for additional Higgs bosons can be performed and the energy reach for new particles will be extended. On the other hand, precise measurements of the couplings of the Higgs boson can be performed at the future International Linear Collider (ILC) experiment [@Asner:2013psa; @Dawson:2013bba], and evidence of non-standard Higgs models can be detected as a deviation from the SM in the coupling constants of the SM-like Higgs boson. Furthermore, some parameter regions which the direct searches at the LHC cannot cover can be complemented by the searches at the ILC [@Kanemura:2014dea]. The model discrimination can be performed through the direct measurement of the properties of additional Higgs bosons and/or by fingerprinting the pattern of the deviations in various coupling measurements [@KTYY]. In this talk, we discuss collider methods for the determination of $\tan\beta$, the ratio of the vacuum expectation values of the two doublets, in the two Higgs doublet model (THDM) as a benchmark model for the extended Higgs sector. We propose a new method based on the measurements of the branching ratios of the SM-like Higgs boson at future lepton colliders. The method is applicable as long as there exist deviations in the couplings of the SM-like Higgs boson to gauge bosons, even in the case where the additional Higgs bosons are too heavy to be detected directly. We study the sensitivity of determining $\tan\beta$ at the ILC, and compare it with those of the previously proposed methods which utilize direct production of the additional Higgs bosons [@Ref:TanB]. Two Higgs Doublet Model ======================= In this section, we briefly review the THDM with a softly-broken discrete $Z_2$ symmetry.
This model has two preferable features which help it naturally avoid phenomenological constraints on the extended Higgs sector. One is that any multi-doublet model predicts the electroweak rho parameter, $\rho=m_W^2/(m_Z^2\cos^2\theta_W)$, to be unity at the tree level. Since the experimental constraint on the rho parameter is quite strict, $\rho_{\rm exp}=1.0004^{+0.0003}_{-0.0004}$ [@Beringer:1900zz], such models may be regarded as natural extensions of the SM. The second is that the $Z_2$ symmetry can suppress the flavor changing neutral currents (FCNCs), which are also severely constrained by flavor experiments. Under the $Z_2$ symmetry, each fermion couples to only one Higgs field, so that Higgs-mediated FCNCs are prevented at the tree level and the constraints are relaxed to the loop level [@Ref:GW]. In the THDM with $Z_2$ symmetry, depending on the assignment of the $Z_2$ parity to each fermion, four types of Yukawa interaction can be constructed [@Barger:1989fj; @Grossman:1994jb; @Ref:AKTY]. Among the four types, we focus on the so-called Type-II and Type-X (lepton specific) THDMs, since these attract much interest from the viewpoint of constructing models for physics beyond the SM. The Type-II THDM is well known as the Higgs sector of the minimal supersymmetric extension of the SM, where up-type quarks couple to one Higgs doublet while down-type quarks and charged leptons couple to the other Higgs doublet. The Type-X THDM is sometimes employed in models for neutrino masses, etc., where quarks couple to one Higgs doublet while charged leptons couple to the other Higgs doublet. For simplicity, we restrict ourselves to the CP-conserving scenario, where the CP-even $H$, the CP-odd $A$ and the charged $H^\pm$ Higgs bosons appear as mass eigenstates in addition to the light CP-even $h$, which we assume to be the observed Higgs boson with $m_h=125$ GeV. The mixing angle $\alpha$ diagonalizes the neutral CP-even states, while the angle $\beta$ rotates $A$ and the neutral component $z$ of the Nambu-Goldstone bosons, as well as $H^\pm$ and the charged components $w^\pm$ of the Nambu-Goldstone bosons. $\beta$ satisfies $\tan\beta=v_2/v_1$, where $v_i$ are the vacuum expectation values of the two Higgs fields.
             $\xi_h^u$            $\xi_h^d$             $\xi_h^\ell$          $\xi_H^u$            $\xi_H^d$            $\xi_H^\ell$         $\xi_A^u$     $\xi_A^d$      $\xi_A^\ell$
  --------- -------------------- --------------------- --------------------- -------------------- -------------------- -------------------- ------------- -------------- --------------
  Type-I     $c_\alpha/s_\beta$   $c_\alpha/s_\beta$    $c_\alpha/s_\beta$    $s_\alpha/s_\beta$   $s_\alpha/s_\beta$   $s_\alpha/s_\beta$   $\cot\beta$   $-\cot\beta$   $-\cot\beta$
  Type-II    $c_\alpha/s_\beta$   $-s_\alpha/c_\beta$   $-s_\alpha/c_\beta$   $s_\alpha/s_\beta$   $c_\alpha/c_\beta$   $c_\alpha/c_\beta$   $\cot\beta$   $\tan\beta$    $\tan\beta$
  Type-X     $c_\alpha/s_\beta$   $c_\alpha/s_\beta$    $-s_\alpha/c_\beta$   $s_\alpha/s_\beta$   $s_\alpha/s_\beta$   $c_\alpha/c_\beta$   $\cot\beta$   $-\cot\beta$   $\tan\beta$
  Type-Y     $c_\alpha/s_\beta$   $-s_\alpha/c_\beta$   $c_\alpha/s_\beta$    $s_\alpha/s_\beta$   $c_\alpha/c_\beta$   $s_\alpha/s_\beta$   $\cot\beta$   $\tan\beta$    $-\cot\beta$

  : The scaling factors for the four types of Yukawa interactions in the THDM [@Ref:AKTY].[]{data-label="Tab:sf"}

Coupling constants of the $\Phi VV$ interactions, where $\Phi=H$ or $h$, are given as $$\begin{aligned} g^{\rm THDM}_{hVV}=g^{\rm SM}_{hVV}\cdot\sin(\beta-\alpha),\quad g^{\rm THDM}_{HVV}=g^{\rm SM}_{hVV}\cdot\cos(\beta-\alpha),\end{aligned}$$ where $g^{\rm SM}_{hVV}$ is the corresponding coupling constant for the SM Higgs boson. When $\sin(\beta-\alpha)=1$, which is called the “SM-like limit” [@Gunion:2002zf], $h$ has the same coupling constants with the gauge bosons as those of the SM Higgs boson. In general, $\sin(\beta-\alpha)$ is a free parameter of the model. However, a large deviation of $\sin(\beta-\alpha)$ from unity is restricted by theoretical constraints which are derived by using the argument of perturbative unitarity [@Ref:Uni-2hdm]. Experimental constraints have also been obtained at the LHC [@Aad:2013wqa; @CMS:yva; @ATLAS:2013zla; @CMS:2013eua]. The Yukawa couplings for each type of the THDM are characterized by a scaling factor, $\xi_\phi^f$, defined as the coupling constant in the model divided by the corresponding coupling constant in the SM. For example, $$\begin{aligned} \xi_h^f=\sin(\beta-\alpha)+\cot\beta\cdot\cos(\beta-\alpha),\\ \xi_H^f=\cos(\beta-\alpha)-\cot\beta\cdot\sin(\beta-\alpha),\end{aligned}$$ for $f=u$ in Type-II and $f=u,d$ in Type-X, while $$\begin{aligned} \xi_h^f=\sin(\beta-\alpha)-\tan\beta\cdot\cos(\beta-\alpha),\\ \xi_H^f=\cos(\beta-\alpha)+\tan\beta\cdot\sin(\beta-\alpha),\end{aligned}$$ for $f=d,\ell$ in Type-II and $f=\ell$ in Type-X. Thus, in the SM-like limit, the Yukawa couplings of $h$ become the same as those in the SM, and their $\tan\beta$ dependence disappears. On the other hand, the $\tan\beta$ dependence of the Yukawa couplings of $H$, as well as of $A$ and $H^\pm$, remains in this limit. The scaling factors are summarized in Table \[Tab:sf\] for the four types of Yukawa interaction in the THDM. In Fig. \[Fig:bb\], we show the branching ratios of the neutral Higgs bosons in the $b\bar{b}$ and $\tau^+\tau^-$ decay modes for $m_h=125$ GeV and $m_H=m_A=200$ GeV in the Type-II and Type-X THDM. From the left, ${\mathcal B}(b\bar{b})$ for Type-II with $\sin^2(\beta-\alpha)=1$, that with $\sin^2(\beta-\alpha)=0.99$, ${\mathcal B}(\tau^+\tau^-)$ for Type-X with $\sin^2(\beta-\alpha)=1$, and that with $\sin^2(\beta-\alpha)=0.99$ are plotted as a function of $\tan\beta$, respectively. Solid (dashed) lines are for $\cos(\beta-\alpha)<0$ ($\cos(\beta-\alpha)>0$). We see that, when $\sin^2(\beta-\alpha)=1$, the branching ratios of $h$ are independent of $\tan\beta$.
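This behaviour can be made concrete by evaluating the scaling factors of Eqs. (2)-(5) numerically. The short script below is a minimal sketch; the benchmark $\sin^2(\beta-\alpha)=0.99$ with $\cos(\beta-\alpha)<0$ and the sample $\tan\beta$ values are illustrative choices only.

```python
import numpy as np

def scaling_factors(tan_beta, sin2_ba, sign_cos=-1.0):
    """Return (xi_up, xi_down) of Eqs. (2)-(5): xi_up applies to f=u (Type-II)
    and f=u,d (Type-X); xi_down to f=d,l (Type-II) and f=l (Type-X)."""
    sin_ba = np.sqrt(sin2_ba)
    cos_ba = sign_cos * np.sqrt(1.0 - sin2_ba)
    xi_up = sin_ba + cos_ba / tan_beta      # sin(b-a) + cot(beta) cos(b-a)
    xi_down = sin_ba - cos_ba * tan_beta    # sin(b-a) - tan(beta) cos(b-a)
    return xi_up, xi_down

for sin2 in (1.0, 0.99):
    print(f"sin^2(beta-alpha) = {sin2}")
    for tb in (1.0, 3.0, 10.0, 30.0):
        up, down = scaling_factors(tb, sin2)
        print(f"  tan(beta) = {tb:4.1f}:  xi_h^u = {up:6.3f}   xi_h^(d or l) = {down:6.3f}")
```

For $\sin^2(\beta-\alpha)=1$ both factors are exactly unity for any $\tan\beta$, while for 0.99 the down-type (or lepton) factor grows roughly linearly with $\tan\beta$, which is what drives the branching-ratio variation discussed next.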
However, once $\sin^2(\beta-\alpha)$ deviates from unity, a substantial $\tan\beta$ dependence appears, and the branching ratios vary over a wide range. The $\tan\beta$ dependence of the branching ratios of $H$ and $A$ is also large, and it remains even in the SM-like limit. ![Left two panels: the decay branching ratios for $h\to b\bar b$ (black curves), $H\to b\bar b$ (red curves), and $A\to b\bar b$ (blue curves) decays as a function of $\tan\beta$ in the Type-II THDM with $\sin^2(\beta-\alpha)=1$ and 0.99, respectively. The solid (dashed) curves denote the case with $\cos(\beta-\alpha) \le 0$ ($\cos(\beta-\alpha) \ge 0$). Right two panels: the same as the left two panels, but for the $\tau^+\tau^-$ decays in the Type-X THDM. []{data-label="Fig:bb"}](TypeII_Bb_100.eps "fig:"){width="25.00000%"} ![](TypeII_Bb_99.eps "fig:"){width="25.00000%"} ![](TypeX_Btau_100.eps "fig:"){width="24.00000%"} ![](TypeX_Btau_99.eps "fig:"){width="24.00000%"} $\tan\beta$ measurement ======================= In this section, we discuss methods for the determination of $\tan\beta$ at future lepton colliders. We consider three methods, which utilize the measurements of the following observables, respectively:[^2] (i) the branching ratios of $H$ and $A$, ${\mathcal B}_{H,A}$, (ii) the total decay widths of $H$ and $A$, $\Gamma_{H,A}$, (iii) the branching ratios of $h$, ${\mathcal B}_h$. The first two observables can be studied in the direct production of $H$ and $A$, i.e., the $e^+e^-\to HA$ process. Thus, they are available if the sum of the masses of $H$ and $A$ is less than the collider energy. Since the production cross section is independent of the model parameters, and the branching ratios of $H$ and $A$ in the $b\bar{b}$ and $\tau^+\tau^-$ decay modes depend significantly on $\tan\beta$, $\tan\beta$ can be determined by counting the number of $4b$ ($4\tau$) events; ${\mathcal N}\propto\sigma_{HA}\cdot{\mathcal B}_H\cdot{\mathcal B}_A$. Thus, the observation of the branching ratios gives a $\tan\beta$ determination by comparison with the theoretical prediction.
We note that the masses of $H$ and $A$ can be easily measured from the peak in the invariant mass distribution. The last method utilizes the precision measurement of the branching ratios of $h$. In the THDM, as we see in Eqs. (2)-(5), when $\sin(\beta-\alpha)<1$, the Yukawa couplings of $h$ can deviate from those in the SM. It is known that the pattern of the deviations for up-type quarks, down-type quarks and charged leptons depends on the type of Yukawa interaction; therefore, by observing this pattern we could distinguish the type of Yukawa interaction in the THDM [@KTYY; @Kanemura:2014dja]. Furthermore, the magnitude of the deviation depends on the value of $\tan\beta$, so that we can determine $\tan\beta$ by observing it. The accuracy of the $\tan\beta$ determination depends on how accurately the branching ratio can be measured experimentally and also on how steeply the branching ratio depends on $\tan\beta$. Results ======= In this section, we study the accuracies of the $\tan\beta$ measurement for the above three methods at the ILC. For the method (i), the sensitivity is estimated as follows. We utilize the $b\bar{b}$ decay mode for Type-II and the $\tau^+\tau^-$ decay mode for Type-X, which are large and also have a large $\tan\beta$ dependence. The expected number of $4b$ and $4\tau$ events can be obtained as $N=\sigma_{HA}\cdot{\mathcal B}_H\cdot{\mathcal B}_A\cdot{\mathcal L}\cdot{\mathcal \epsilon}$, where $\epsilon$ is the acceptance for observing the $4b$ and $4\tau$ signals. We take $m_H=m_A=200$ GeV and $\sqrt{s}=500$ GeV with ${\mathcal L}=250$ fb$^{-1}$. $\epsilon_{4b}$ and $\epsilon_{4\tau}$ are both estimated to be 50% by our simulation [@Kanemura:2013eja]. The $1\sigma$ sensitivity to $\tan\beta$ is obtained by solving $N(\tan\beta\pm\Delta\tan\beta)=N_{\rm obs}\pm\Delta N_{\rm obs}$, where $\Delta N_{\rm obs}=\sqrt{N_{\rm obs}}$ is the statistical error. For the method (ii), the $\tan\beta$ sensitivity from the width measurement is estimated as follows. The detector resolutions for the Breit-Wigner width in the $b\bar{b}$ and $\tau^+\tau^-$ invariant mass distributions are estimated to be $\Gamma^{\rm res}_{b\bar{b}}=11$ GeV and $\Gamma^{\rm res}_{\tau^+\tau^-}=7$ GeV, respectively [@Kanemura:2013eja]. The width to be observed is $$\begin{aligned} \Gamma^R_{H/A}=\frac{1}{2}\left[\sqrt{\left(\Gamma_{H}^{\rm tot}\right)^2+\left(\Gamma^{\rm res}\right)^2}+ \sqrt{\left(\Gamma_{A}^{\rm tot}\right)^2+\left(\Gamma^{\rm res}\right)^2}\right], \end{aligned}$$ and the $1\sigma$ uncertainty is given by [@Ref:TanB] $$\begin{aligned} \Delta\Gamma^R_{H/A}=\sqrt{\left(\Gamma^R_{H/A}/\sqrt{2N_{\rm obs}}\right)^2+\left(\Delta\Gamma^{\rm res}_{\rm sys}\right)^2},\end{aligned}$$ where $\Delta\Gamma^{\rm res}_{\rm sys}$ is taken as 10% of $\Gamma^{\rm res}$ for each decay mode. Then, the $1\sigma$ sensitivity of the $\tan\beta$ determination is obtained by solving $\Gamma_{H/A}(\tan\beta\pm\Delta\tan\beta)= \Gamma^{R}_{H/A}\pm\Delta\Gamma^{R}_{H/A}$, where $\Gamma_{H/A}=\frac{1}{2}(\Gamma_H+\Gamma_A)$.
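Methods (i) and (ii) thus both reduce to a one-parameter inversion of an observable against its $\tan\beta$-dependent prediction. The following is a minimal sketch of that inversion for the counting observable of method (i); the function `n_expected` below is a toy, monotonically decreasing stand-in for $\sigma_{HA}\cdot{\mathcal B}_H\cdot{\mathcal B}_A\cdot{\mathcal L}\cdot\epsilon$, not the THDM prediction used in the actual analysis.

```python
from scipy.optimize import brentq

def n_expected(tan_beta):
    # Toy parametrization of the expected number of 4-tau events vs tan(beta);
    # in the real analysis this is sigma_HA * B_H * B_A * L * efficiency.
    return 2.0e4 / (1.0 + tan_beta**2)

tb_true = 10.0
n_obs = n_expected(tb_true)
dn = n_obs**0.5                 # statistical error sqrt(N_obs)

# Solve N(tan_beta) = N_obs -/+ dN; since n_expected is decreasing, the interval flips.
tb_lo = brentq(lambda tb: n_expected(tb) - (n_obs + dn), 0.5, 60.0)
tb_hi = brentq(lambda tb: n_expected(tb) - (n_obs - dn), 0.5, 60.0)
print(f"true tan(beta) = {tb_true}, 1-sigma interval = [{tb_lo:.2f}, {tb_hi:.2f}]")
```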
For the method (iii), the $\tan\beta$ sensitivity is evaluated by solving ${\mathcal B}_h(\tan\beta\pm\Delta\tan\beta)={\mathcal B}_h^{\rm obs}\pm\Delta{\mathcal B}_h^{\rm obs}$, where the accuracy of the ${\mathcal B}_h$ measurement is evaluated from the reference value by rescaling the statistical factor, taking into account the change in the expected number of events. The reference values for the $1\sigma$ accuracy of determining the branching ratios in the $b\bar{b}$ and $\tau^+\tau^-$ decay modes at the ILC with $\sqrt{s}=250$ GeV and ${\mathcal L}=250$ fb$^{-1}$ are taken as 1.3% and 2%, respectively, from the recent reports [@Asner:2013psa; @Dawson:2013bba][^3]. ![Sensitivities to the $\tan\beta$ measurement in the Type-II THDM. From the left, $\sin^2(\beta-\alpha)=1$, $\sin^2(\beta-\alpha)=0.99$ with $\cos(\beta-\alpha)<0$, and $\sin^2(\beta-\alpha)=0.99$ with $\cos(\beta-\alpha)>0$ are taken, respectively. ](TypeII_dTanB_100.eps "fig:"){height="24.50000%"} ![](TypeII_dTanB_99_Negative.eps "fig:"){height="24.50000%"} ![](TypeII_dTanB_99_Positive.eps "fig:"){height="24.50000%"}  \[Fig:II\] In Fig. \[Fig:II\], our numerical results for the three methods are shown for the Type-II THDM. The left panel is for $\sin^2(\beta-\alpha)=1$, the middle panel for $\sin^2(\beta-\alpha)=0.99$ with $\cos(\beta-\alpha)<0$, and the right panel for $\sin^2(\beta-\alpha)=0.99$ with $\cos(\beta-\alpha)>0$. The $1\sigma$ (solid) and $2\sigma$ (dashed) sensitivities are drawn as a function of $\tan\beta$ for each method. In the left panel, the method (iii) does not work, since there is no $\tan\beta$ sensitivity in the SM-like limit. The method (i) has good sensitivity in the smaller $\tan\beta$ region, since the $\tan\beta$ dependence of ${\mathcal B}_{H/A}$ exists only there. The method (ii) has good sensitivity in the larger $\tan\beta$ region, where the widths can be directly measured. In the middle and right panels, where $\sin^2(\beta-\alpha)<1$, the method (iii) works very well over a wide range of $\tan\beta$. In Fig. \[Fig:X\], the numerical results for the three methods are shown for the Type-X THDM in the same manner as in Fig. \[Fig:II\]. The features of the three methods are similar to those for Type-II. Summary ======= We have studied the sensitivities of the $\tan\beta$ measurement using three complementary methods at the ILC: (i) the branching ratios of $H$ and $A$, (ii) the total decay widths of $H$ and $A$, and (iii) the branching ratios of $h$. The first two methods utilize the direct observation of the additional Higgs bosons, $H$ and $A$. Therefore, these methods are available if the production process $e^+e^-\to HA$ is kinematically accessible. The last method utilizes the precision measurement of the branching ratios of $h$ at the ILC. Although this method is available only for the case with $\sin(\beta-\alpha)<1$, where a $\tan\beta$ dependence appears in the branching ratios of $h$, it has better sensitivity for determining $\tan\beta$ than the other methods over a wide range of $\tan\beta$. The author would like to thank Shinya Kanemura, Koji Tsumura, and also Kei Yagyu for fruitful discussions and collaborations. The work was supported in part by Grant-in-Aid for Scientific Research, No. 24340036, and the Sasakawa Scientific Research Grant from The Japan Science Society.
![The same as Fig. \[Fig:II\], but for the Type-X THDM.](TypeX_dTanB_100.eps "fig:"){height="24.50000%"} ![](TypeX_dTanB_99_Negative.eps "fig:"){height="24.50000%"} ![](TypeX_dTanB_99_Positive.eps "fig:"){height="24.50000%"}  \[Fig:X\] [99]{} S. Kanemura, K. Tsumura and H. Yokoya, Phys. Rev. D [**88**]{}, 055010 (2013). ATLAS Collaboration, Phys. Lett. B [**716**]{}, 1 (2012). CMS Collaboration, Phys. Lett. B [**716**]{}, 30 (2012). G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**726**]{}, 88 (2013). G. Aad [*et al.*]{} \[ATLAS Collaboration\], Phys. Lett. B [**726**]{}, 120 (2013). S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], JHEP [**1401**]{}, 096 (2014). S. Chatrchyan [*et al.*]{} \[CMS Collaboration\], arXiv:1312.5353 \[hep-ex\]. D. M. Asner [*et al.*]{}, arXiv:1310.0763 \[hep-ph\]. S. Dawson [*et al.*]{}, arXiv:1310.8361 \[hep-ex\]. S. Kanemura, H. Yokoya and Y. -J. Zheng, arXiv:1404.5835 \[hep-ph\]. S. Kanemura, K. Tsumura, K. Yagyu and H. Yokoya, in preparation. V. D. Barger, T. Han and J. Jiang, Phys. Rev. D [**63**]{}, 075002 (2001); J. F. Gunion, T. Han, J. Jiang and A. Sopczak, Phys. Lett. B [**565**]{}, 42 (2003). J. Beringer [*et al.*]{} \[Particle Data Group Collaboration\], Phys. Rev. D [**86**]{}, 010001 (2012). S. L. Glashow and S. Weinberg, Phys. Rev. D [**15**]{}, 1958 (1977). V. D. Barger, J. L. Hewett and R. J. N. Phillips, Phys. Rev. D [**41**]{}, 3421 (1990). Y. Grossman, Nucl. Phys. B [**426**]{}, 355 (1994). M. Aoki, S. Kanemura, K. Tsumura and K. Yagyu, Phys. Rev. D [**80**]{}, 015017 (2009). J. F. Gunion and H. E. Haber, Phys. Rev. D [**67**]{}, 075019 (2003). S. Kanemura, T. Kubota and E. Takasugi, Phys. Lett. B [**313**]{}, 155 (1993); A. G. Akeroyd, A. Arhrib and E. -M. Naimi, Phys. Lett. B [**490**]{}, 119 (2000). CMS Collaboration, CMS-PAS-HIG-13-005. ATLAS Collaboration, ATLAS-CONF-2013-027. CMS Collaboration, CMS-PAS-HIG-13-025. S. Kanemura, M. Kikuchi and K. Yagyu, Phys. Lett. B [**731**]{}, 27 (2014). [^1]: The talk is based on Ref. [@Kanemura:2013eja]. [^2]: Another method, using the cross-section measurement of $b\bar{b}H+b\bar{b}A$ production, has also been proposed in Ref. [@Ref:TanB]. This method may become useful for the case of heavier $H$ and $A$, where $H$ and $A$ cannot be produced in pairs but only singly due to the kinematical limitation. [^3]: We note that an indication of $\sin(\beta-\alpha)\neq1$ in the THDM can be obtained by measuring the absolute value of the $hVV$ couplings at the ILC. At the ILC with $\sqrt{s}=250$ GeV and ${\mathcal L}=250$ fb$^{-1}$, the best accuracy of the measurements can be expected for the $hZZ$ coupling, at 0.7% [@Asner:2013psa; @Dawson:2013bba].
--- abstract: 'We examine the role of thermal fluctuations in two-species Bose-Einstein condensates confined in quasi-two-dimensional (quasi-2D) optical lattices using the Hartree-Fock-Bogoliubov theory with the Popov approximation. The method, in particular, is ideal to probe the evolution of the quasiparticle modes at finite temperatures. Our studies show that the quasiparticle spectrum in the phase-separated domain of the two-species Bose-Einstein condensate has a discontinuity at a critical value of the temperature. Furthermore, the low-lying modes like the slosh mode become degenerate at this critical temperature, and this is associated with the transition from the immiscible side-by-side density profile to the miscible phase. Hence, the rotational symmetry of the condensate density profiles is restored, and so is the degeneracy of the quasiparticle modes.' author: - 'K. Suthar' - 'D. Angom' bibliography: - 'tbec\_2d\_temp.bib' title: Thermal fluctuations enhanced miscibility of binary condensates in optical lattices --- Introduction ============ Ultracold atoms in an optical lattice offer fascinating prospects to study phenomena in many-body physics associated with strongly correlated systems in a highly controllable environment [@jaksh_98; @orzel_01; @greiner_02; @bloch_12]. These systems are recognized as ideal tools to explore new quantum phases [@demler_02; @kuklov_03; @kuklov_04], complex phase transitions [@pal_10; @sungsoo_11; @lin_15; @jurgensen_15], quantum magnetism [@trotzky_08; @simon_11], and quantum information [@bloch1_08], and to simulate transport and magnetic properties of condensed-matter systems [@lewenstein_07; @bloch_08]. Moreover, the effects of phase separation [@mishra_07; @zhan_14], quantum emulsions and coherence properties [@greiner_01; @roscilde_07; @buonsante_08], and the multicritical behaviour [@ceccarelli_15; @ceccarelli_16] of the mixtures have been explored in the past decade. Among the various observations made in the two-species Bose-Einstein condensates (TBECs) of ultracold atomic gases, the most remarkable is the phenomenon of phase separation, which has been a long-standing topic of interest in chemistry and physics. For repulsive on-site interactions, the transition to the phase-separated domain or immiscibility is characterized by the parameter $\Delta = U_{11} U_{22}/U^{2}_{12} - 1$, where $U_{11}$ and $U_{22}$ are the intraspecies on-site interactions and $U_{12}$ is the interspecies on-site interaction. When $\Delta < 0$, an immiscible phase occurs, in which the atoms of species $1$ and $2$ have a relatively strong mutual repulsion, whereas $\Delta\geqslant 0$ implies a miscible phase [@ho_96; @timmermans_98; @esry_99]. The presence of an external trapping potential, however, modifies this condition, as the trap introduces an additional energy cost for the species to spatially separate [@wen_12]. In experiments, the unique feature of phase separation has been successfully observed in TBECs with a harmonic trapping potential [@papp_08; @tojo_10; @mccarron_11]. Previously, in the context of superfluid helium at zero temperature, the phase separation of bosonic mixtures of isotopes of different masses had also been predicted in Refs. [@chester_55; @miller_78]. The recent experimental realizations of TBECs in optical lattices, either of two different atomic species [@catani_08] or of two different hyperfine states of the same atomic species [@gadway_10; @soltan_11], provide the motivation to study these systems in detail.
In recent works, we have examined the miscible-immiscible transition and the quasiparticle spectra of TBECs at zero temperature [@suthar_15; @suthar_16]. In other theoretical studies, the finite temperature properties of TBECs have been explored [@ohberg_99; @shi_00; @kwangsik_07]. In the continuum, that is, for TBECs with harmonic confining potentials alone, we have explored the suppression of phase separation due to the presence of thermal fluctuations [@arko_15]. However, a theoretical understanding of the finite temperature effects on the topology and the collective excitations of TBECs in optical lattices is yet to be developed. The Bose-Einstein condensation, and hence the coherence, in a system of bosons depends on the interplay between various parameters, such as temperature, interaction strength, confinement, and dimensionality [@proukakis_06]. In particular, in low-dimensional Bose gases, the coherence can be maintained across the entire spatial extent only at temperatures much below the critical temperature. The coherence property has already been observed experimentally [@dettmer_01; @hellweg_03; @richard_03; @esteve_06; @plisson_11]. With attention towards this unexplored physics, we study the finite temperature effects in quasi-2D trapped TBECs in optical lattices. In the present work, we address the topological phase transition in the TBECs of two different isotopes of Rb with temperature as a control parameter in the domain $T<T_c$, where $T_c$ is the critical temperature of either of the species of the mixture. We study the evolution of the quasiparticle spectra of the TBEC in quasi-2D optical lattices with temperature. For this work, we use the Hartree-Fock-Bogoliubov (HFB) formalism with the Popov approximation, and, starting from the phase-separated domain at zero temperature, we vary the temperature. We observe a topological transition of the TBEC at a critical value of the temperature. This transition is accompanied by a discontinuity in the quasiparticle excitation spectrum, and in addition, the slosh modes corresponding to the two species become degenerate. Furthermore, we compute the equal-time first-order spatial correlation functions, which are a measure of the coherence and phase fluctuations present in the system. The correlation function describes the off-diagonal long range order which is the defining characteristic of BEC [@penrose_56]. It is an important theoretical tool to study many-body effects in atomic physics experiments [@burt_97; @tolra_04]. At finite temperature, the decay in the coherence of the TBECs is examined using the first-order correlation function. This paper is organized as follows. In Sec. \[theory\_2s2d\] we describe the HFB formalism and the numerical techniques used in the present work. The evolution of the quasiparticle modes and the density distributions with temperature is shown in Sec. \[results\]. Finally, our main results are summarized in Sec. \[conc\]. Theory and methods {#theory_2s2d} ================== HFB-Popov approximation for quasi-2D TBEC ----------------------------------------- We consider a binary BEC confined in an optical lattice with a pancake-shaped configuration of the background harmonic trapping potential. Thus, the trapping frequencies satisfy the condition $\omega_{\perp} \ll \omega_z$ with $\omega_x = \omega_y = \omega_{\perp}$. In this system, the excitation energies along the axial direction are high, and the degrees of freedom in this direction are frozen.
The excitations, both the quantum and thermal fluctuations, are considered only along the radial direction. In the tight-binding approximation (TBA) [@chiofalo_00; @smerzi_03], the Bose-Hubbard (BH) Hamiltonian [@fisher_89; @lundh_12; @hofer_12] describing this system is $$\begin{aligned} \hat{H} = && \sum_{k=1}^2 \bigg[- J_k \sum_{\langle \xi\xi'\rangle} \hat{a}^{\dagger}_{k\xi}\hat{a}_{k\xi'} + \sum_\xi(\epsilon^{(k)}_{\xi} - \mu_k) \hat{a}^{\dagger}_{k\xi}\hat{a}_{k\xi}\bigg] \nonumber\\ &+& \frac{1}{2}\!\!\sum_{k=1, \xi}^{2}\!\! U_{kk}\hat{a}^{\dagger}_{k\xi} \hat{a}^{\dagger}_{k\xi}\hat{a}_{k\xi}\hat{a}_{k\xi} + U_{12}\!\!\sum_\xi \hat{a}^{\dagger}_{1\xi}\hat{a}_{1\xi} \hat{a}^{\dagger}_{2\xi}\hat{a}_{2\xi}, \label{bh2d} \end{aligned}$$ where $k = 1,2$ is the species index, $\mu_k$ is the chemical potential of the $k$th species, and $\hat{a}_{k\xi}$ ($\hat{a}^\dagger_{k\xi}$) are the annihilation (creation) operators of the two species at the $\xi$th lattice site. The index is such that $\xi \equiv (i,j)$, with $i$ and $j$ the lattice site indices along the $x$ and $y$ directions, respectively. The summation index $\langle \xi\xi'\rangle$ represents the sum over the nearest neighbours of the $\xi$th site. The TBA is valid when the depth of the lattice potential is much larger than the chemical potential, $V_0 \gg \mu_k$; the BH Hamiltonian then describes the system when the bosonic atoms occupy the lowest energy band. A detailed derivation of the BH Hamiltonian is given in our previous works [@suthar_15; @suthar_16]. In the BH Hamiltonian, $J_k$ are the tunneling matrix elements, $\epsilon^{(k)}_{\xi}$ is the offset energy arising from the background harmonic potential, and $U_{kk}$ ($U_{12}$) are the intraspecies (interspecies) interaction strengths. In the present work all the interaction strengths are considered to be repulsive, that is, $U_{kk},U_{12}>0$. In the weakly interacting regime, under the Bogoliubov approximation [@griffin_96; @amrey_04], the annihilation operators at each lattice site can be decomposed as $\hat{a}_{1\xi} = (c_{\xi} + \hat{\varphi}_{1\xi})e^{-i \mu_1 t/\hbar}$, $\hat{a}_{2\xi} = (d_{\xi} + \hat{\varphi}_{2\xi})e^{-i \mu_2 t/\hbar}$, where $c_{\xi}$ and $d_{\xi}$ are the complex amplitudes describing the condensate phase of each of the species. The operators $\hat{\varphi}_{1\xi}$ and $\hat{\varphi}_{2\xi}$ represent the quantum or thermal fluctuation part of the field operators. From the equation of motion of the field operators within the Bogoliubov approximation, the equilibrium properties of a TBEC are governed by the coupled generalized discrete nonlinear Schrödinger equations (DNLSEs) $$\begin{aligned} \mu_1 c_\xi = &-& J_1 \sum_{\xi'} c_{\xi'} + \left [\epsilon^{(1)}_\xi + U_{11} (n^{c}_{1\xi} + 2 \tilde{n}_{1\xi}) + U_{12} n_{2\xi} \right ] c_\xi, \nonumber \\~\\ \mu_2 d_\xi = &-& J_2 \sum_{\xi'} d_{\xi'} + \left [\epsilon^{(2)}_\xi + U_{22} (n^{c}_{2\xi} + 2 \tilde{n}_{2\xi}) + U_{12} n_{1\xi} \right ] d_\xi, \nonumber \\ \end{aligned}$$ \[dnls2d\] where $n^{c}_{1\xi} = |c_\xi|^2$ and $n^{c}_{2\xi} = |d_\xi|^2$ are the condensate densities, $\tilde{n}_{k\xi} = \langle {\hat{\varphi}}^{\dagger}_{k\xi}\hat{\varphi}_{k\xi} \rangle$ are the noncondensate densities, and $n_{k\xi} = n^{c}_{k\xi} + \tilde{n}_{k\xi}$ are the total densities of the species.
Using Bogoliubov transformation $$\hat\varphi_{k\xi} = \sum_l\left[u^l_{k\xi}\hat{\alpha}_l e^{-i \omega_l t} - v^{*l}_{k\xi}\hat{\alpha}^{\dagger}_l e^{i \omega_l t}\right], \label{bog_trans_2d}$$ where $\hat{\alpha}_l (\hat{\alpha}^{\dagger}_l)$ are the quasiparticle annihilation (creation) operators, which satisfy the Bose commutation relations, $l$ is the quasiparticle mode index, $u^l_{k\xi}$ and $v^l_{k\xi}$ are the quasiparticle amplitudes for the $k$th species, and $\omega_l = E_l/\hbar$ is the frequency of the $l$th quasiparticle mode with $E_l$ is the mode excitation energy. Using the Bogoliubov transformation, we obtain the following HFB-Popov equations [@suthar_16]: $$\begin{aligned} E_l u^l_{1,\xi} = &-& J_1(u^l_{1,\xi-1} + u^l_{1,\xi+1}) + \mathcal{U}_1 u^l_{1,\xi} - U_{11} c^2_\xi v^l_{1,\xi} \nonumber\\ &+& U_{12} c_\xi(d^{*}_\xi u^l_{2,\xi} - d_\xi v^l_{2,\xi}),\\ E_l v^l_{1,\xi} = &~& J_1(v^l_{1,\xi-1} + v^l_{1,\xi+1}) + \underline{\mathcal{U}}_1 v^l_{1,\xi} + U_{11} c^{*2}_\xi u^l_{1,\xi} \nonumber\\ &-& U_{12} c^{*}_\xi(d_\xi v^l_{2,\xi} - d^{*}_\xi u^l_{2,\xi}),\\ E_l u^l_{2,\xi} = &-& J_2(u^l_{2,\xi-1} + u^l_{2,\xi+1}) + \mathcal{U}_2 u^l_{2,\xi} - U_{22} d^2_\xi v^l_{2,\xi} \nonumber\\ &+& U_{12} d_\xi(c^{*}_\xi u^l_{1,\xi} - c_\xi v^l_{1,\xi}),\\ E_l v^l_{2,\xi} = &~& J_2(v^l_{2,\xi-1} + v^l_{2,\xi+1}) + \underline{\mathcal{U}}_2 v^l_{2,\xi} + U_{22} d^{*2}_\xi u^l_{2,\xi} \nonumber\\ &-& U_{12} d^{*}_\xi(c_\xi v^l_{1,\xi} - c^{*}_\xi u^l_{1,\xi}), \end{aligned}$$ \[hfb\_eq\_2sp\] where $\mathcal{U}_1 = 2 U_{11} (n^{c}_{1\xi} + \tilde{n}_{1\xi}) + U_{12} (n^{c}_{2\xi} + \tilde{n}_{2\xi}) + (\epsilon^{(1)}_\xi - \mu_1)$, $\mathcal{U}_2 = 2 U_{22} (n^{c}_{2\xi} + \tilde{n}_{2\xi}) + U_{12} (n^{c}_{1\xi} + \tilde{n}_{1\xi}) + (\epsilon^{(2)}_\xi - \mu_2)$ with $\underline{\mathcal{U}}_k = -\mathcal{U}_k$. To solve the above eigenvalue equations, we use a basis set of on-site Gaussian wave functions, and define the quasiparticle amplitude as linear combination of the basis functions. The condensate and noncondensate densities are then computed through the self-consistent solution of Eqs. (\[dnls2d\]) and (\[hfb\_eq\_2sp\]). The noncondensate atomic density at the $\xi$th lattice site is $$\tilde{n}_{k\xi} = \sum_l \left[ (|u^l_{k\xi}|^2 + |v^l_{k\xi}|^2)N_0(E_l) + |v^l_{k\xi}|^2 \right],$$ where $N_0(E_l) = (e^{\beta E_l} - 1)^{-1}$ with $\beta = (k_{B}T)^{-1}$ is the Bose-Einstein distribution factor of the $l$th quasiparticle mode with energy $E_l$ at temperature $T$. The last term in the $\tilde{n}_{k\xi}$ is independent of the temperature, and hence, represents the quantum fluctuations of the system. To examine the role of temperature we measure the miscibility of the condensates in terms of the overlap integral $$\Lambda = \frac{\left[\int n_1(\mathbf r) n_2(\mathbf r) d\mathbf{r}\right]^2} {\left[\int n^2_1(\mathbf r) d\mathbf{r} \right] \left[\int n^2_2(\mathbf r) d\mathbf{r} \right]}.$$ Here $n_k (\mathbf r)$ is the total density of $k$th condensate at position $\mathbf r \equiv (x,y)$. If the two condensate of the TBEC complete overlap to each other then the system is in miscible phase with $\Lambda=1$, whereas for the completely phase-separated case $\Lambda=0$. 
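As a quick illustration of how this miscibility measure can be evaluated on the lattice, the sketch below (our own example, with synthetic Gaussian profiles standing in for the computed densities) approximates the integrals by discrete sums over sites; it returns 1 for completely overlapping clouds and tends to 0 for fully separated ones.

```python
import numpy as np

# Minimal sketch (our own example, not the authors' code): the overlap
# integral Lambda evaluated as a discrete sum over lattice sites, using
# synthetic side-by-side Gaussian profiles as placeholder densities.
def overlap_integral(n1, n2):
    """Lambda = [sum(n1*n2)]^2 / [sum(n1^2) sum(n2^2)] on the lattice."""
    return np.sum(n1 * n2)**2 / (np.sum(n1**2) * np.sum(n2**2))

M = 40
i = np.arange(M) - M / 2.0
X, Y = np.meshgrid(i, i, indexing="ij")
n1 = np.exp(-(X**2 + (Y + 4)**2) / 40.0)   # species 1, shifted down
n2 = np.exp(-(X**2 + (Y - 4)**2) / 40.0)   # species 2, shifted up

print("Lambda (side-by-side):", overlap_integral(n1, n2))
print("Lambda (identical):   ", overlap_integral(n1, n1))   # -> 1.0
```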
Field-field correlation function -------------------------------- To define a measure of the coherence in the condensate we introduce the first-order correlation function $g^{(1)}_{k} (\mathbf r, \mathbf r')$, which can be expressed in terms of expectation values of products of field operators at different positions and times [@glauber_63; @naraschewski_99; @bezett_08; @bezett_12]. These are normalized to attain unit modulus in the case of perfect coherence. Here, we restrict ourselves to spatial correlation functions at a fixed time. In terms of the quantum Bose field operator $\hat{\Psi}_{k}$ the first-order spatial correlation function is $$g^{(1)}_{k} (\mathbf r, \mathbf r') = \frac{\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r)\hat{\Psi}_{k}(\mathbf r')\rangle} {\sqrt{{\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r) \hat{\Psi}_{k}(\mathbf r) \rangle} {\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r')\hat{\Psi}_{k}(\mathbf r') \rangle}}},$$ where $\langle\cdots\rangle$ represents the thermal average. It is important to note that the local value of the unnormalized first-order correlation function is equal to the density, [i.e.]{} $\langle \hat{\Psi}^{\dagger}_{k}(\mathbf r)\hat{\Psi}_{k}(\mathbf r)\rangle = n_k(\mathbf r)$. The expression for $g^{(1)}_{k} (\mathbf r, \mathbf r')$ can also be written in terms of the condensate and noncondensate correlations as $$g^{(1)}_{k} (\mathbf r, \mathbf r') = \frac{n^{c}_k(\mathbf r, \mathbf r') + \tilde{n}_k(\mathbf r, \mathbf r')} {\sqrt{n_k(\mathbf r) n_k(\mathbf r')}}, \label{corr_eq}$$ where $$\begin{aligned} n^{c}_k(\mathbf r,\mathbf r') &=& \psi^{*}_k(\mathbf r) \psi_k(\mathbf r'), \\ \tilde{n}_k (\mathbf r, \mathbf r') &=& \sum_l \big[\left\{ u^{*l}_{k}(\mathbf r) u^{l}_{k}(\mathbf r') + v^{*l}_{k}(\mathbf r) v^{l}_{k}(\mathbf r') \right\}N_0(E_l) \\ &&+ v^{*l}_{k}(\mathbf r) v^{l}_{k}(\mathbf r') \big], \\ n_{k}(\mathbf r) &=& n^{c}_k(\mathbf r) + \tilde{n}_k (\mathbf r) \end{aligned}$$ are the condensate density correlation, the noncondensate correlation, and the total density of the $k$th species, respectively. In the above expressions $n^{c}_k(\mathbf r,\mathbf r')$ and $\tilde{n}_k (\mathbf r, \mathbf r')$ are obtained by expanding the complex amplitudes ($c_{\xi},d_{\xi}$) and the quasiparticle amplitudes ($u^l_{k,\xi},v^l_{k,\xi}$) in the localized Gaussian basis. At $T=0$ K, the entire condensate cloud has complete coherence, and therefore $g^{(1)}=1$ within the condensate region. In TBECs, the transition from the phase-separated to the miscible domain at $T \neq 0$ has a characteristic signature in the spatial structure of $g^{(1)}_{k} (\mathbf r, \mathbf r')$. Numerical methods ----------------- To solve the coupled DNLSEs, Eqs. (\[dnls2d\]), we scale and rewrite the equations in dimensionless form. For this we choose the characteristic length scale as the lattice constant $a=\lambda_L/2$, with $\lambda_{L}$ the wavelength of the laser which creates the lattice potential. Similarly, the recoil energy $E_R = \hbar^2k_L^2/2m$, with $k_L=2\pi/\lambda_L$, is chosen as the energy scale of the system. We use the fourth-order Runge-Kutta method to solve these equations for zero as well as finite temperatures. To start the iterative solution, appropriate initial guess values of $c_{\xi}$ and $d_{\xi}$ are chosen. For the present work we chose the values corresponding to the side-by-side profile, as it gives quasiparticle excitation energies which are real, and not complex. This is important as it shows that the solution we obtain is a stable one, and not metastable. 
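For concreteness, a minimal sketch of such a side-by-side initial guess is given below; the offsets, widths, and atom number used here are illustrative placeholders rather than the values actually used in the computations.

```python
import numpy as np

# Minimal sketch of a side-by-side initial guess for c_xi and d_xi used to
# seed the DNLSE iterations.  The offsets, widths, and atom number are
# illustrative placeholders, not the values used in the actual computations.
def side_by_side_guess(M, n_atoms, offset=4.0, width=40.0):
    i = np.arange(M) - M / 2.0
    X, Y = np.meshgrid(i, i, indexing="ij")
    c = np.exp(-(X**2 + (Y + offset)**2) / width)   # species 1, lower half
    d = np.exp(-(X**2 + (Y - offset)**2) / width)   # species 2, upper half
    # Normalize so that sum_xi |c_xi|^2 equals the target atom number.
    c *= np.sqrt(n_atoms / np.sum(np.abs(c)**2))
    d *= np.sqrt(n_atoms / np.sum(np.abs(d)**2))
    return c, d

c0, d0 = side_by_side_guess(M=40, n_atoms=100)
print(np.sum(np.abs(c0)**2), np.sum(np.abs(d0)**2))   # both ~100
```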
The stationary ground-state wave-function of the TBEC is obtained through imaginary time propagation. In the tight-binding limit, the width of the orthonormalized Gaussian basis functions localized at each lattice site is $0.3a$. Furthermore, to study the quasiparticle excitation spectrum, we cast Eqs. (\[hfb\_eq\_2sp\]) as matrix eigenvalue equations and diagonalize the matrix using the routine ZGEEV from the LAPACK library [@anderson_99]. For finite temperature computations, to take into account the thermal fluctuations, we solve the coupled equations Eqs. (\[dnls2d\]) and Eqs. (\[hfb\_eq\_2sp\]) self-consistently. The solution of the DNLSEs is iterated until it satisfies the convergence criteria in terms of the number of condensate and noncondensate atoms. In general, the convergence is not smooth, and most of the time we encounter severe oscillations in the number of atoms. To damp these oscillations, we use a successive over- (under-) relaxation technique while updating the condensate (noncondensate) atoms. The new solutions after an iteration cycle (IC) are $$\begin{aligned} c^{\rm new}_{\xi,\rm IC} = r^{\rm ov} c_{\xi,\rm IC} + (1 - r^{\rm ov}) c_{\xi,\rm IC-1}, \\ d^{\rm new}_{\xi,\rm IC} = r^{\rm ov} d_{\xi,\rm IC} + (1 - r^{\rm ov}) d_{\xi,\rm IC-1}, \\ \tilde{n}^{\rm new}_{k\xi,\rm IC} = r^{\rm un} \tilde{n}_{k\xi,\rm IC} + (1 - r^{\rm un}) \tilde{n}_{k\xi,\rm IC-1}, \end{aligned}$$ where $r^{\rm ov} > 1$ ($r^{\rm un} < 1$) is the over- (under-) relaxation parameter. [![The condensate density profiles at different temperatures in the phase-separated domain of the $^{87}$Rb-$^{85}$Rb TBEC. The condensate density distributions of the first species (upper panel) and the second species (lower panel) are shown at $T/T_c = 0, 0.08, 0.17$, and $0.2$, which correspond to $T = 0, 30, 60$, and $70$ nK. Here $x$ and $y$ are measured in units of the lattice constant $a$.[]{data-label="den_cond_rb"}](cond.jpg "fig:"){width="8.5cm"}]{} [![The noncondensate density profiles at different temperatures in the phase-separated domain of the $^{87}$Rb-$^{85}$Rb TBEC. The density distributions of the noncondensate atoms of the first species (upper panel) and the second species (lower panel) are shown at $T/T_c = 0, 0.08, 0.17$, and $0.2$, which correspond to $T = 0, 30, 60$, and $70$ nK. Here $x$ and $y$ are measured in units of the lattice constant $a$.[]{data-label="den_nc_rb"}](noncond.jpg "fig:"){width="8.5cm"}]{} Results and discussions {#results} ======================= To examine the effects of thermal fluctuations on the quasiparticle spectra we consider the $^{87}$Rb-$^{85}$Rb TBEC with $^{87}$Rb labeled as species $1$ and $^{85}$Rb labeled as species $2$. The radial trapping frequencies of the harmonic potential are $\omega_x = \omega_y = 2\pi\times 50$ Hz with the anisotropy parameter $\omega_z/\omega_{\perp} = 20.33$; these parameters are chosen based on the experimental work of Gadway and collaborators [@gadway_10]. It is important to note that we consider equal background trapping potentials for species $1$ and $2$. The laser wavelength used to create the 2D lattice potential and the lattice depth are $\lambda_L = 1064$ nm and $V_0 = 5E_R$, respectively. We then take the total number of atoms $N_1 = N_2 = 100$ confined in a ($40\times40$) quasi-2D lattice system. It must be mentioned that the number of lattice sites considered is much larger than the spatial extent of the condensate cloud. 
Although the computations are more time consuming with the larger lattice size, we chose it to ensure that the spatial extent of the thermal component is confined well within the lattice considered. The tunneling matrix elements are $J_1 = 0.66 E_R$ and $J_2 = 0.71 E_R$, which correspond to an optical lattice potential with a depth of $5 E_R$. The intraspecies and interspecies on-site interactions are set as $U_{11} = 0.07 E_R$, $U_{22} = 0.02 E_R$ and $U_{12} = 0.15 E_R$, respectively. For this set of parameters the ground state density distribution of the $^{87}$Rb-$^{85}$Rb TBEC is phase-separated with side-by-side geometry. This is a symmetry-broken profile where one species is placed to the left and the other to the right of the trap center along the $y$-axis. The evolution of the ground state from the miscible to the side-by-side density profile due to a decrease in $U_{22}$ is reported in our previous work [@suthar_16]. In the present work, we demonstrate the role of temperature in the phase-separated domain of the binary condensate. [![The quasiparticle energies of the low-lying modes as a function of the temperature in the phase-separated domain of the $^{87}$Rb-$^{85}$Rb TBEC. At $T/T_c = 0.185$ the energies of the Kohn and higher modes become degenerate and the system is transformed from the side-by-side to the miscible density profile. In the figure, the slosh mode (SM), Kohn mode (KM), breathing mode (BM), and quadrupole mode (QM) are marked by the black arrows. Here the excitation energy $E_l$ and the temperature $T$ are scaled with respect to the recoil energy $E_R$ and the critical temperature $T_c$ of $^{87}$Rb.[]{data-label="mode_rb"}](mode_rb.pdf "fig:"){width="8.5cm"}]{} Zero temperature ---------------- At $T = 0$ K, in the phase-separated domain, the energetically preferable ground state of the TBEC is the side-by-side geometry, as reported in our previous work [@suthar_16]. Unlike in the one-dimensional system [@suthar_15], in the quasi-2D system the presence of the quantum fluctuations does not alter the ground state. We start with the phase-separated $^{87}$Rb-$^{85}$Rb TBEC, which has the overlap integral $\Lambda = 0.10$. The density distributions of the condensate and noncondensate atoms of the two species at $T = 0$ K are shown in Fig. \[den\_cond\_rb\] and Fig. \[den\_nc\_rb\]. This is a symmetry-broken side-by-side geometry with the noncondensate atoms more localized at the edges of the condensate along the $y$-axis. [![The mode function of the first excitation mode (slosh mode) as a function of the temperature in the phase-separated domain of the $^{87}$Rb-$^{85}$Rb TBEC. The slosh mode is an out-of-phase mode, where the density flow of the first species (upper panel) is in the opposite direction to the flow of the second species (lower panel). The value of $T/T_c$ is shown at the upper left corner of each plot in the upper panel. These values correspond to $T = 0, 30, 64$, and $66$ nK. Here $x$ and $y$ are in units of the lattice constant $a$.[]{data-label="mode_fn1"}](mode1.jpg "fig:"){width="8.5cm"}]{} [![The mode function of the second excitation mode (slosh mode), which at $T/T_c > 0.185$ becomes degenerate with the mode shown in Fig. \[mode\_fn1\], for the $^{87}$Rb-$^{85}$Rb TBEC. Here the density flow of the first species (upper panel) is out of phase with the flow of the second species (lower panel). The value of $T/T_c$ is shown at the upper left corner of each plot in the upper panel. These values correspond to $T = 0, 30, 64$, and $66$ nK. 
Here $x$ and $y$ are in units of the lattice constant $a$.[]{data-label="mode_fn2"}](mode2.jpg "fig:"){width="8.5cm"}]{} [![The evolution of the interface mode in the phase-separated domain of the $^{87}$Rb-$^{85}$Rb TBEC with temperature. At $T/T_c > 0.185$, this mode is transformed into the breathing mode as the system acquires rotational symmetry. These are out-of-phase modes, as the density perturbation of the first species (upper panel) is in the opposite direction to that of the second species (lower panel). The value of $T/T_c$ is shown at the upper left corner of each plot in the upper panel. Here $x$ and $y$ are in units of the lattice constant $a$.[]{data-label="mode_fn14"}](mode14.jpg "fig:"){width="8.5cm"}]{} Finite temperatures ------------------- At $T\neq0$, in addition to the quantum fluctuations, which are present at zero temperature, the thermal cloud also contributes to the noncondensate density. As shown in Figs. \[den\_cond\_rb\] and \[den\_nc\_rb\], at $T = 30$ nK, the condensate density profiles of both the species begin to overlap, or in other words, the two species are partly miscible. This is also evident from the value of $\Lambda=0.16$, which shows a marginal increase compared to the value of 0.10 at zero temperature. Upon a further increase in temperature, at $T = 60$ nK, $\Lambda = 0.36$, which indicates an increase in the miscibility of the two species. Another important feature at $30$ and $60$ nK is the localization of the noncondensate atoms at the interface. This is due to the repulsion from the condensate atoms and the lower thermal energy, which is insufficient to overcome this repulsion. At higher temperatures, the extent of overlap between the condensate density profiles increases, and the TBEC is completely miscible at $T = 70$ nK. This is reflected in the value of $\Lambda = 0.95$, and the condensate as well as the noncondensate densities acquire rotational symmetry. The transition from the phase-separated to the miscible domain can further be examined from the evolution of the quasiparticle modes as a function of the temperature. The evolution of a few low-lying mode energies with temperature is shown in Fig. \[mode\_rb\], with the temperature given in units of the critical temperature $T_c$ of the $^{87}$Rb atoms. It is evident from the figure that there are mode energy bifurcations with the increase in temperature. These are associated with the restoration of rotational symmetry when the TBEC is rendered miscible through an increase in temperature. As is to be expected, the two lowest energy modes are the zero-energy or Goldstone modes, which are the result of the spontaneous symmetry breaking associated with the condensation. In the phase-separated domain, there is one such mode for each of the species. The first two excited modes are the non-degenerate Kohn or slosh modes of the two species, and these remain non-degenerate in the domain $T/T_c \leq 0.185$. The structure of these modes is shown in Figs. \[mode\_fn1\] and \[mode\_fn2\]. When $T/T_c > 0.185$ the TBEC acquires rotational symmetry and the slosh modes become degenerate, their structures differing only by a $\pi/2$ rotation. A key feature of the quasiparticle mode evolution is that the energies of all the out-of-phase modes increase for $T/T_c > 0.185$, whereas the in-phase modes remain steady. Here, out-of-phase and in-phase mean that the amplitudes $u_1$ and $u_2$ of a quasiparticle have different and the same phase, respectively. 
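A simple way to automate this classification is to inspect the sign of the overlap between the two species' $u$ amplitudes of a given mode; the sketch below is our own illustrative diagnostic (the dipole-like test profile is synthetic), not a procedure taken from the paper.

```python
import numpy as np

# Minimal sketch (our own diagnostic, not from the paper): classify a
# quasiparticle mode as in-phase or out-of-phase from the relative phase of
# its two species amplitudes u1 and u2 on the lattice.
def mode_character(u1, u2):
    """Return 'in-phase' if Re<u1|u2> > 0, otherwise 'out-of-phase'."""
    overlap = np.sum(np.conj(u1) * u2)
    return "in-phase" if overlap.real > 0 else "out-of-phase"

# Synthetic example: identical dipole-like (slosh) profiles with equal and
# opposite signs for the two species.
x = np.linspace(-5.0, 5.0, 41)
X, Y = np.meshgrid(x, x, indexing="ij")
profile = Y * np.exp(-(X**2 + Y**2) / 4.0)
print(mode_character(profile, profile))    # -> in-phase
print(mode_character(profile, -profile))   # -> out-of-phase
```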
Among the low-energy modes, the Kohn mode is in-phase, whereas the breathing and quadrupole modes are out-of-phase in nature. One unique feature of the TBEC in the immiscible phase is the presence of interface modes, which have prominent amplitudes around the interface region. The existence of these modes was reported in our previous work [@suthar_16], and they were investigated in other works [@ticknor_13; @ticknor_14] for TBECs confined in a harmonic potential alone at zero temperature. One of the low-energy interface modes is shown in Fig. \[mode\_fn14\]. It is evident from the figure that the mode is out-of-phase in nature, and it is transformed into the breathing mode of the miscible domain when $T/T_c > 0.185$. In the miscible domain, the breathing mode becomes degenerate with the quadrupole mode and gains energy. The quasiparticles of the miscible domain have a well-defined azimuthal quantum number, and the modes undergo rotations as $T$ is further increased. [![The first-order off-diagonal correlation function $g^{(1)}_{k} (0,\mathbf r)$ of $^{87}$Rb (upper panel) and $^{85}$Rb (lower panel) at $T/T_c = 0, 0.08, 0.17,$ and $0.2$, which correspond to $T = 0, 30, 60$, and $70$ nK. Here $x$ and $y$ are measured in units of the lattice constant $a$.[]{data-label="corr_fn"}](corr.png "fig:"){width="8.5cm"}]{} To investigate the spatial coherence of the TBEC at equilibrium, we examine the trends in $g^{(1)}_{k} (0, \mathbf r)$, defined earlier in Eq. (\[corr\_eq\]) and shown in Fig. \[corr\_fn\] for various temperatures. As mentioned earlier, at $T=0$ K, $n_k(\mathbf r) \approx n^{c}_{k}(\mathbf r)$ and the condensates have complete phase coherence; therefore $g^{(1)}_{k} = 1$ within the extent of the condensates, as shown in Fig. \[corr\_fn\]. At zero temperature, or in the limit $\tilde{n}_k \equiv 0$, the correlation function, Eq. (\[corr\_eq\]), resembles a Heaviside function, and the negligible contribution from the quantum fluctuations smooths out the sharp edges as $g^{(1)}_{k}$ drops to zero. More importantly, in the numerical computations this causes a loss of numerical accuracy, as it involves the division of two small numbers in Eq. (\[corr\_eq\]) [@gies_04]. However, at finite temperature the presence of the noncondensate atoms modifies the nature of the spatial coherence present in the system. The decay rate of the correlation function increases with the temperature, and this is evident from Fig. \[corr\_fn\], which shows $g^{(1)}_{k} (0, \mathbf r)$ at $T=30,60$, and $70$ nK. In addition to this, the transition from the phase-separated to the miscible TBEC is also reflected in the decay trends of $g^{(1)}_{k} (0, \mathbf r)$. Conclusions {#conc} =========== We have examined the finite temperature effects on the phenomenon of phase separation in TBECs confined in quasi-2D optical lattices. As the temperature is increased, the phase-separated side-by-side ground state geometry is transformed into the miscible phase. For the case of the TBEC comprising $^{87}$Rb and $^{85}$Rb, the transformation occurs at $T/T_c\approx 0.185$. This demonstrates the importance of thermal fluctuations, which can make TBECs miscible even though the interaction parameters satisfy the criterion for phase separation. The other key observation is that the transition from the phase-separated to the miscible domain is associated with a change in the nature of the quasiparticle energies. The low-lying out-of-phase modes, in particular the slosh modes, become degenerate and increase in energy. 
On the other hand, the in-phase modes, such as the Kohn mode, remain steady as the temperature ($T<T_c$) is increased. The interface modes, which are unique to the phase-separated domain, in addition to a change in energy, are geometrically transformed into rotationally symmetric breathing modes in the miscible domain. The temperature-driven immiscible-to-miscible transition is also evident in the profile of the correlation functions. We thank Arko Roy, S. Gautam, S. Bandyopadhyay and R. Bai for useful discussions. The results presented in the paper are based on the computations using Vikram-100, the 100TFLOP HPC Cluster at Physical Research Laboratory, Ahmedabad, India.
--- abstract: 'The compactification on a torus in $SU(\infty)$ Yang-Mills theory is considered. A special form of the configuration of a gauge field on a torus is examined. The vacuum energy and free energy in the presence of fermions coupled with this background in the theory are derived and possible symmetry breaking is investigated.' author: - | Kiyoshi Shiraishi\ Institute for Nuclear Study, University of Tokyo,\ Midori-cho, Tanashi, Tokyo 188, Japan date: 'Classical and Quantum Gravity [**6**]{} (1989) pp. 2029–2034 ' title: 'Compactification of spacetime in $SU(\infty)$ Yang-Mills theory ' --- Recently Floratos et al. offered $SU(\infty)$ Yang-Mills (YM) theories [@1] which came from the study on membrane theories [@2]. We consider, in this paper, the compactification on torus in the $SU(\infty)$ YM theory. A special form of the configuration of gauge field on torus is examined. The vacuum energy and thermodynamic potential in the presence of fermions coupled with the YM theory in this situation are derived and possible symmetry breaking is investigated. In order that our discussion should be self-contained, we start with a brief review of $SU(\infty)$ YM theory [@1]. We denote the dimension of space-time as $D$. The gauge fields are given by the functions which depend on the $D$-dimensional coordinates $x^M$ as well as the coordinates of ‘sphere’, $\theta$ and $\phi$; $$A_M(x,\theta,\phi)=\sum_{l=1}^\infty\sum_{m=-l}^l A_M^{lm}(x)\, Y_{lm}(\theta,\phi)$$ where $Y$ are the spherical harmonics on $S^2$. Note that the sum over $l$ starts with $l=1$. The field strength is defined as $$F_{MN}=\partial_MA_N-\partial_NA_M+\{A_M, A_N\}$$ where the bracket of two functions $f$ and $g$ is defined as $$\{f, g\}=\frac{\partial f}{\partial\cos\theta} \frac{\partial g}{\partial\phi}- \frac{\partial f}{\partial\phi} \frac{\partial g}{\partial\cos\theta}\,.$$ The sequential operation of the bracket satisfies the Jacobi identity: $$\{\{f, g\}, h\}+\{\{h, f\}, g\}+\{\{g, h\}, f\}=0$$ where $f$, $g$ and $h$ are functions of $\theta$ and $\phi$. The gauge transformation of a gauge field is given by $$\delta A_M=\partial_M\omega+\{A_M, \omega\}\,.$$ At the same time the transformation of the field strength follows $$\delta F_{MN}=\{F_{MN}, \omega\}\,.$$ The YM field equation is $$D_MF^{MN}\equiv\partial_MF^{MN}+\{A_M, F^{MN}\}=0\,. \label{eq7}$$ For later use, we introduce the matter field $\psi(x,\theta,\phi)$ in the ‘adjoint representation’. This field transforms as $$\delta \psi=\{\psi, \omega\} \label{eq8}$$ and obeys the field equation $$D_MD^M\psi-m^2\psi= 0$$ where $m$ is the mass of the $\psi$ field. $\psi$ is assumed to have coupling with the gauge field only. In the analysis here a set of spherical harmonics is chosen as a basis of generators. One can write the bracket relation as $$\{ Y_{lm}, Y_{l'm'}\}=\sum_{l''m''}f_{lm}{}^{l''m''}{}_{l'm'}Y_{l''m''}$$ where $f$ is the ‘structure constant’. The bracket corresponds to the commutation for generators of the usual groups. We can find the Cartan subalgebra in this basis: if we pick up the spherical harmonics with $m=0$, then the following are trivially led $$\{Y_{l0}, Y_{l'0}\}=0\,.$$ Next we consider spacetime compactification. We consider $M^{D-1}\times S^1$ ($(D-1)$-dimensional Minkowski spacetime$\times$circle) as the background space-time. The periodicity with respect to the coordinate on the circle gives rise to the ‘Kaluza-Klein’ excited states [@3]. 
Furthermore, since $S^1$ is a non-simply connected manifold, non-trivial Wilson loops can be defined on it [@4]. In other words, there are vacuum expactation values of the YM field on a torus ($S^1$) (modulo gauge transformation). They can bring about symmetry breakdown of gauge groups in ordinary YM gauge theory [@5; @6; @7; @8]. Thus the similar mechanisms are extensively studied in the context of multidimensional unification theory [@9]. In our model, we first write out the field equation. Setting the coordinates $x^M=(x^m,y)$, $m=0, 1, 2, \dots , D-2$, the equation (\[eq7\]) decomposed to: $$\begin{aligned} D_MF^{Mn}&=&D_mF^{mn}+ D_yF^{yn}\nonumber \\ &=&\partial_mF^{mn}+\{A_m, F^{mn}\}+\partial_yF^{yn}+\{A_y, F^{yn}\}=0\,.\end{aligned}$$ To obtain the equation of motion for $A_n$, we impose a gauge condition $\partial_MA^M=0$. Further, if we neglect the self-coupling of YM fields $A_n$, or consider the coupling only to the ‘background gauge field’ $\langle A_y\rangle$ so as to get a free field equation of motion, we obtain $$\partial_m^2A^n+\partial_y^2A^n+2\{\langle A_y\rangle, \partial_yA^n\}+\{\langle A_y\rangle,\{\langle A_y\rangle, A^n\}\}=0\,.$$ We consider $\langle A_y\rangle=$ constant as a usual case for arguments for Wilson loops [@4], and then the background field strength $\langle F_{ym}\rangle=0$ satisfies the equation of motion $D_MF^{MN}=0$ automatically. Now, we consider how many degrees of freedom $\langle A_y\rangle$ possesses. For an ordinary gauge group such as $SU(N)$ the degree of freedom is as many as the rank of the group, i.e. the dimension of Cartan subalgebra. This is true for an arbitrary dimensional torus. In other words: suppose $T^{a'}$ belongs to the Cartan subalgebra. Then we can expand $\langle A_y\rangle$ as $$\langle A_y\rangle=\sum_{a'}\langle A_y^{a'}\rangle T^{a'}\,.$$ This form guarantees vanishing field strength automatically especially on a higher-dimensional torus. We assume $\langle A_y\rangle$ can be expanded in terms of the basis of the Cartan subalgebra even in the $SU(\infty)$ YM theory. That is to say, by using components of the field, it follows $$\langle A_y\rangle=\sum_{l=1}^\infty\langle A_y^{l0}\rangle Y_{l0}(\theta,\phi)\,.$$ In a generic case, the field equation for a component field $A_n^{lm}$ becomes a set of simultaneous infinite number of equations: $$\begin{aligned} & &(\partial_m^2+\partial_y^2)A_n^{lm}+2f_{l_10}{}^{lm}{}_{l_2m}\langle A_y^{l_10}\rangle\partial_yA_n^{l_2m}\nonumber \\ & &\qquad\qquad+f_{l_10}{}^{lm}{}_{l'm} f_{l_20}{}^{l'm}{}_{l_3m}\langle A_y^{l_10}\rangle\langle A_y^{l_20}\rangle A_n^{l_3m}=0\end{aligned}$$ where the summations over $l_1$, $l_2$ and $l_3$ are implicit, while the sum over $m$ is unnecessary because of the ‘selection rule’ for the quantum number. 
To simplify the equations, we can take a new basis for $\langle A_y\rangle$ as $$\begin{aligned} \langle A_y\rangle&=&\sqrt{\frac{1}{3}}\langle A_y^{(1)}\rangle Y_{10}+\sqrt{\frac{1}{15}}\langle A_y^{(2)}\rangle Y_{20}\nonumber \\ & &+\sum_{l=3}^\infty\langle A_y^{(l)}\rangle\frac{1}{\sqrt{2l-1}}\left(\frac{1}{\sqrt{2l+1}}Y_{l0}- \frac{1}{\sqrt{2l-3}}Y_{l-2,0}\right)\,.\end{aligned}$$ In this basis, the bracket operation between $\langle A_y\rangle$ and $A_n$ can be rewritten as $$\{\langle A_y\rangle, A_n\}=\frac{\partial\langle A_y\rangle}{\partial\cos\theta}\frac{\partial A_n}{\partial\phi} =\sum_{l'}\sum_{lm} i m \langle A_y^{(l'+1)}\rangle A_n^{lm}Y_{l'0}Y_{lm}\,.$$ Thus, the use of the well-known formula for the multiplication of $Y$ [@10] $$\begin{aligned} & &Y_{l_1m_1}Y_{l_2m_2}\nonumber \\ & &=\sum_{l_3m_3}\left\{\frac{(2l_1+1)(2l_2+1)}{4\pi (2l_3+1)}\right\}^{1/2}(l_1m_1l_2m_2|l_3m_3)(l_10l_20|l_30)Y_{l_3m_3}\end{aligned}$$ makes the component equations simpler. In the above expression $(l_1m_1l_2m_2|l_3m_3)$ denotes the Clebsch-Gordan coefficient in the standard notation. However, for a general set of $\langle A_y^{(l)}\rangle$ we also need a diagonalization of an infinite-dimensional (mass) matrix. In this paper, rather than giving general discussions, we investigate the case of a specific form of $\langle A_y\rangle$ in detail. We consider the following case: $$\langle A_y^{(1)}\rangle=\frac{\theta}{L}\sqrt{4\pi}\qquad \mbox{and}\qquad\langle A_y^{(2)}\rangle=\langle A_y^{(3)}\rangle =\cdots=\langle A_y^{(l)}\rangle=\cdots=0$$ where $\theta$ is a constant and $L$ is the length of the circumference of the extra space $S^1$. This is the only case in which the mass matrix is (already) diagonal. Since $A_n$ can be expanded in a Fourier series with respect to the $S^1$ coordinate, i.e. $$A_n^{lm}=\sum_{k=-\infty}^\infty A_n^{lmk}e^{i2\pi k y/L}\qquad(0\le y < L)\,,$$ we can write down the field equation for each excited mode: $$\left(\partial_m^2-\frac{(2\pi)^2k^2}{L^2}\right)A_n^{lmk} -2\frac{2\pi}{L^2} km\theta A_n^{lmk}-\frac{1}{L^2}m^2\theta^2 A_n^{lmk}=0\,.$$ Therefore the mass squared of $A_n^{lmk}$ in $(D-1)$ dimensions is given by $$\frac{1}{L^2}(2\pi k+m\theta)^2 \label{eq23}$$ where $k$ and $m$ are integers. Based on this mass spectrum, we can evaluate the 1-loop vacuum energy. The vacuum energy in the $SU(\infty)$ YM theory is naively expected to diverge because of the infinite number of ‘component fields’. As for our particular model, we can first restrict ourselves to the component fields which have the label $l\le N-1$, for a finite integer $N$. In this situation, the number of corresponding generators is $$\sum_{l=1}^{N-1}(2l+1)=N^2-1$$ and the number of generators which belong to the Cartan subalgebra is $N-1$. These numbers precisely coincide with those of the $SU(N)$ group. According to the usual prescription [@4; @5; @6; @8], the 1-loop vacuum energy is given formally as $$\begin{aligned} E_{vac}&=&-\frac{(D-2)V_{D-1}}{2(4\pi)^{(D-1)/2}}\int_0^\infty dt\, t^{-(D-1)/2-1}\nonumber \\ & &\times\sum_{l=1}^{N-1}\sum_{m=-l}^l\sum_{k=-\infty}^\infty\exp \left\{-t\left(\frac{2\pi}{L}\right)^2\left(k+\frac{m\theta}{2\pi} \right)^2\right\}\end{aligned}$$ where $V_{D-1}$ is the $(D-1)$-dimensional volume of the system. 
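The sums entering $E_{vac}$ run over precisely this tower of masses. As a small numerical aid (our own illustration, with the cutoffs and $L=1$ chosen arbitrarily, and with $m$ ranging freely instead of being tied to $|m|\le l\le N-1$), the sketch below enumerates the spectrum of Eq. (\[eq23\]) and lists the massless combinations $(k,m)$ at a rational Wilson-line angle $\theta=2\pi p/N$, which will be relevant for the symmetry-breaking pattern discussed below.

```python
import numpy as np

# Minimal sketch (illustrative values only; m here ranges freely instead of
# being restricted to |m| <= l <= N-1): enumerate the mass spectrum
# m^2 = (2*pi*k + m*theta)^2 / L^2 of Eq. (23) and list the massless
# combinations at a rational Wilson-line angle theta = 2*pi*p/N.
def masses_squared(theta, L=1.0, k_max=3, m_max=5):
    ks = np.arange(-k_max, k_max + 1)
    ms = np.arange(-m_max, m_max + 1)
    K, M = np.meshgrid(ks, ms, indexing="ij")
    return (2.0 * np.pi * K + M * theta)**2 / L**2, K, M

N, p = 4, 2
msq, K, M = masses_squared(theta=2.0 * np.pi * p / N)
for idx in np.argwhere(np.isclose(msq, 0.0)):
    print("massless mode at k =", K[tuple(idx)], " m =", M[tuple(idx)])
```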
Using Jacobi’s imaginary transformation [@11] and regularising $E_{vac}$ by discarding an infinity, this reduces to $$E_{vac}=-\frac{(D-2)V_{D-1}L}{\pi^{D/2}L^D}\Gamma(D/2) \sum_{k=1}^\infty\frac{1}{k^D}\left[ \frac{\sin^2(Nk\theta/2)}{\sin^2(k\theta/2)}-1\right]\,.$$ Here finite summations have been performed. In the limit $N\rightarrow\infty$, $E_{vac}$ diverges only at $\theta=0$ modulo $2\pi$. This fact can be easily seen from taking a limit $D\rightarrow\infty$. In the limit the only term with $k=1$ in the sum remains. If we assume a vacuum with minimum energy, the expectation value of $\theta$ is zero (mod $2\pi$). (The periodicity of $2\pi$ in $\theta$ is explained with respect to a proper gauge transformation [@4; @7].) Consequently, in the pure YM theory under the assumption of this particular $\langle A_y\rangle$, gauge symmetry is not broken because $\langle\theta\rangle=0$ and there appear $(N^2-1)$ massless gauge bosons. Here we should note that there exist many local minima in the potential, and the number of the local minima is $N-2$ in the range $0<\theta< 2\pi$. Next we consider the matter field coupled to the background gauge field $\langle A_y\rangle$. For a typical example, we examine a massless Dirac fermion field in the ‘adjoint representation’ (recall (\[eq8\])). For matter fields, we can take a ‘twisted boundary condition’ in the circle direction. Then we obtain the Fourier expansion of the field in the following form: $$\psi_n^{lm}=\sum_{k=-\infty}^\infty \psi_n^{lmk}e^{i2\pi k y/L+i\delta y/L}\qquad(0\le y < L)$$ where $\delta$ is a constant which represents the ‘twist’. The mass spectrum is modified as $$\frac{1}{L^2}(2\pi k+m\theta+\delta)^2 \label{eq28}$$ where $k$ and $m$ are integers. The 1-loop vacuum energy is expressed as $$\begin{aligned} E_{vac}(\mbox{fermion})&=&\frac{N_F 2^{[D/2]}V_{D-1}}{2(4\pi)^{(D-1)/2}}\int_0^\infty dt\, t^{-(D-1)/2-1}\nonumber \\ \times& &\sum_{l=1}^{N-1}\sum_{m=-l}^l\sum_{k=-\infty}^\infty\exp \left\{-t\left(\frac{2\pi}{L}\right)^2\left(k+\frac{m\theta+\delta}{2\pi} \right)^2\right\}\end{aligned}$$ where $N_F$ is the number of fermions, and after regularisation we obtain $$\begin{aligned} E_{vac}(\mbox{fermion})&=&\frac{N_F 2^{[D/2]}V_{D-1}L}{\pi^{D/2}L^D}\Gamma(D/2) \nonumber \\ & &\cdot\sum_{k=1}^\infty\frac{\cos(k\delta)}{k^D}\left[ \frac{\sin^2(Nk\theta/2)}{\sin^2(k\theta/2)}-1\right]\,.\end{aligned}$$ Note the overall sign of $E_{vac}(\mbox{fermion})$. In the case with $\delta=0$, provided that $N_F$ is enough large to overcome the contribution from YM fields, it is possible to get the non-vanishing vacuum gauge field expactation value at finite $N$, even after taking the limit $N\rightarrow\infty$. The minima of $E_{vac}(\mbox{fermion})$ are located at $\theta =2\pi p/N$, $p=1,\ldots, N-1$. The lowest energy of (degenerate) vacua is then $$-\frac{N_F 2^{[D/2]}V_{D-1}L}{\pi^{D/2}L^D}\Gamma(D/2)\,, \zeta(D)$$ where $\zeta(z)$ is the zeta function. The vacuum energy, or the effective potential for $\theta$, has an infinite number of degenerate minima in the limit $N\rightarrow\infty$. Many massive fermions appear when $\theta$ is located at any minima according to the spectrum (\[eq28\]). On the other hand, symmetry-breaking pattern is rather complicated in the case of finite $N$. When $\theta=2\pi/N$, there remains only $(N-1)$ massless vector bosons associated with the generators of the Cartan subalgebra. Thus a symmetry breakdown such as $SU(N)\rightarrow[U(1)]^N$ is expected. 
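As a quick numerical check of where these minima lie, the following sketch evaluates the $\theta$ dependence of the regularised one-loop sums for finite $N$, keeping the $k$ sum up to a finite cutoff and dropping the common prefactor $V_{D-1}L\,\Gamma(D/2)/(\pi^{D/2}L^D)$; the values of $D$, $N$, $N_F$ and $\delta$ are illustrative choices of our own, not ones used in the text.

```python
import numpy as np

# Minimal numerical check (our own illustration): the theta dependence of the
# regularised one-loop energies for finite N, with the k sum truncated and
# the common prefactor V_{D-1} L Gamma(D/2) / (pi^{D/2} L^D) dropped.
# D, N, N_F and delta are illustrative choices only.
def ratio(N, x):
    """sin^2(Nx)/sin^2(x), computed as |sum_{j=0}^{N-1} exp(2ijx)|^2 so that
    the removable singularities at x = n*pi are handled automatically."""
    j = np.arange(N)
    return np.abs(np.exp(2.0j * np.outer(x, j)).sum(axis=1))**2

def gauge_energy(theta, N, D=4, kmax=400):
    k = np.arange(1, kmax + 1)
    return -(D - 2) * np.sum((ratio(N, k * theta / 2.0) - 1.0) / k**D)

def fermion_energy(theta, N, NF, delta=0.0, D=4, kmax=400):
    k = np.arange(1, kmax + 1)
    return NF * 2**(D // 2) * np.sum(
        np.cos(k * delta) * (ratio(N, k * theta / 2.0) - 1.0) / k**D)

N, NF = 4, 2   # placeholder values
thetas = np.linspace(0.0, 2.0 * np.pi, 2001)
total = np.array([gauge_energy(t, N) + fermion_energy(t, N, NF) for t in thetas])
print("global minimum at theta =", thetas[np.argmin(total)],
      "  (2*pi/N =", 2.0 * np.pi / N, ")")
```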
However, for general finite $N$ and for general minima of $\theta=2\pi p/N$, we see more massless gauge bosons. For example, suppose $N=4$ and the vacuum with $p=2$. The state with $k=1$ and $m=2$ ($l=2$ or $3$) in the spectrum (\[eq23\]) becomes massless. Then the resulting symmetry can be larger than $[U(1)]^N$. If $N$ is a prime number, this ‘accidental’ symmetry does not emerge in any vacuum associated with $\theta=(2\pi/N)\times(\mbox{integer})$. If we take $N\rightarrow\infty$, we can say that the minima of the vacuum energy as a function of $\theta$ are located at every point of $2\pi Q$, where $Q$ is a rational number, $0<Q<2\pi$ The free energy can be calculated in a similar way to obtain $E_{vac}$ [@5]. The technique is the same as the one in [@12], which takes the imaginary time direction as a circle. One finds the following expression for the free energy $F(\mbox{fermion})$ with the fermion fields considered above: $$\begin{aligned} F(\mbox{fermion})&=&\frac{N_F 2^{[D/2]}V_{D-1}L}{\pi^{D/2}L^D}\Gamma(D/2) \nonumber \\ &\cdot&\left\{\sum_{k=1}^\infty\frac{\cos(k\delta)}{k^D}\left[ \frac{\sin^2(Nk\theta/2)}{\sin^2(k\theta/2)}-1\right]\right. \nonumber \\ & &~-\frac{N^2-1}{\beta^D}\left(1-\frac{1}{2^{D-1}}\right)\zeta(D) \nonumber \\ & &~+\left.2\sum_{k=1}^\infty\sum_{n=1}^\infty \frac{\cos(k\delta)}{(L^2k^2+\beta^2n^2)^{D/2}}\left[ \frac{\sin^2(Nk\theta/2)}{\sin^2(k\theta/2)}-1\right]\right\}\end{aligned}$$ where $\beta$ is the inverse of temperature. For $\delta=0$, or $\delta$ at near $0$, and sufficiently large $D$, no phase transition is expected to occur as long as the form of $\langle A_y\rangle$ is constrained to our ansatz. That is because what determines the shape of the ‘potential’ for $\theta$ is the term with $k=1$ in the sum. For the case with $\delta$ takes the value near $\pi/2$ and in low dimensions, the $k=1$ term does not necessarily dominate in the summation, and then the shape of the potential for $\theta$ is modified even at zero temperature; in addition, the phase transition can take place [@8]. In conclusion, we see that gauge symmetry breaking in $SU(\infty)$ YM theory is feasible under the assumption with a special form of the configuration of the gauge field on the extra $S^1$ and in the presence of fermion fields. We did not persue the possibility of phase transition in the case of matter fields with a special twisted boundary condition on $S^1$. We want to report the effect of general twist and the dimensionality of spacetime in an effective potential for a simpler group such as $SU(3)$ elsewhere. For $SU(\infty)$ YM theory, we must consider the general form of $\langle A_y\rangle$ by executing a diagonalization of the infinite-dimensional mass matrix from the beginning. Otherwise, we might miss the existence of other minima or vacua with lower energy, as in the problem of Higgs potentials [@13]. The construction of other ‘representation’ than the ‘adjoint representation’ is also an interesting task. We hope to investigate the above subjects in relation to the vacuum energy and spontaneous symmetry breaking. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks S. Hirenzaki for useful comments. He also thanks A. Nakamula for discussion and Y. Hirata for reading this manuscript. This work is supported in part by a Grant-in-Aid for Encouragement of Young Scientist from the Ministry of Education, Science and Culture (\# 63790150). The author is grateful to the Japan Society for the Promotion of Science for the fellowship. 
He also thanks Iwanami Fūjukai for financial aid. [99]{} E. G. Floratos, J. Iliopoulos and G. Tiktopoulos, Phys. Lett. [**B217**]{} (1989) 285. B. de Wit, J. Hoppe and H. Nicolai, Nucl. Phys. [**B305**]{} \[FS23\] (1988) 545, and references therein. T. Appelquist, A. Chodos and P. G. O. Freund, [*Modern Kaluza-Kleln Theories*]{} (Benjamin-Cummings, New York, 1987) Y. Hosotani, Phys. Lett. [**B126**]{} (1983) 309. D. J. Toms, Phys. Lett. [**B126**]{} (1983) 445. N. Weiss, Phys. Rev. [**D24**]{} (1981) 475; [**D25**]{} (1982) 2667. K. Shiraishi, Z. Phys [**C35**]{} (1987) 37. V. B. Svetovoǐ and N. G. Khariton, Sov. J. Nucl. Phys. [**43**]{} (1986) 280. A. T. Davies and A. McLachlan, Phys. Lett. [**B200**]{} (1988) 305; Nucl. Phys. [**B317**]{} (1989) 237. A. Higuchi and L. Parker, Phys. Rev. [**D37**]{} (1988) 2853. Y. Hosotani, Ann. Phys. (NY) [**190**]{} (1989) 233. K. Shiraishi, Prog. Theor. Phys. [**80**]{} (1988) 601. C.-L. Ho and Y. Hosotani, preprint IASSNS-HEP-88/48(October 1988). M. Evans and B. A. Ovrut, Phys. Lett. [**B174**]{} (1986) 63. K. Shiraishi, Prog. Theor. Phys. [**78**]{} (1986) 535; ibid. [**81**]{} (1989) 248 (E). A. Nakamula and K. Shiraishi, Phys. Lett. [**B215**]{} (1988) 551; ibid. [**B218**]{} (1989) 508 (E). J. S. Dowker and S. Jadhav, Phys. Rev. [**D39**]{} (1989) 1196; ibid. [**D39**]{} (1989) 2368. K. Lee, R. Holman and E. Kolb, Phys. Rev. Lett. [**59**]{} (1987) 1069. B. H. Lee, S. H. Lee, E. J. Weinberg and K. Lee, Phys. Rev. Lett. [**60**]{} (1988) 2231. I. E. McCarthy, [*Introduction to Nuclear Theory*]{}, (John Wiley & Sons, New York, 1968). A. Erdelyi at al., [*Higher Transcendental Functions*]{} (McGraw-Hill, New York, 1953). L. Dolan and R. Jackiw, Phys. Rev. [**D9**]{} (1974) 3320. J. Breit, S. Gupta and A. Zaks, Phys. Rev. Lett. [**51**]{} (1983) 1007.
--- abstract: 'We report hole-doping dependence of the in-plane resistivity $\rho_{ab}$ in a cuprate superconductor La$_{2-x}$Sr$_x$CuO$_4$, carefully examined using a series of high-quality single crystals. Our detailed measurements find a tendency towards charge ordering at particular rational hole doping fractions of 1/16, 3/32, 1/8, and 3/16. This observation appears to suggest a specific form of charge order and is most consistent with the recent theoretical prediction of the checkerboard-type ordering of the Cooper pairs at rational doping fractions $x=(2m+1)/2^n$, with integers $m$ and $n$.' author: - Seiki Komiya - 'Han-Dong Chen' - 'Shou-Cheng Zhang' - Yoichi Ando bibliography: - 'MagicNumber.bib' title: 'Magic Doping Fractions in High-Temperature Superconductors' --- All high-$T_c$ cuprates contain three robust phases — the insulating antiferromagnetic (AF) phase, the superconducting (SC) phase, and the metallic phase — depending on the density of charge carriers introduced by doping. However, in some cuprate materials, there are also other electronic phases which compete with superconductivity [@SACHDEV2002; @KIVELSON2003]. Determining the nature of these competing phases is a key focus of the current research in high-temperature superconductivity. One particularly important type of competing phase is a charge-ordered phase in underdoped cuprates, where the carrier density is smaller than the optimum level for superconductivity. In the underdoped regime, the mean kinetic energy of the carriers is low because of the small carrier density, and the Coulomb interaction plays an important role. The Coulomb interaction generally prefers some form of charge order, whose detailed form could be affected by the local antiferromagnetic exchange energy as well. One possibility is that the charges form one-dimensional (1D) stripes [@KIVELSON2003; @ZAANEN1989; @KATO1990; @WHITE1998; @VOJTA1999]. Experimentally, magnetic and lattice neutron scattering on La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) and its family compounds has been interpreted in terms of the 1D stripe picture [@KIVELSON2003; @TRANQUADA1995], although the two-dimensional (2D) nature of the spin system has recently been emphasized in Refs. [@CHRISTENSEN2004; @HAYDEN2004]. Considering the presence of strong pairing interactions in this material, Chen [*et al.*]{} [@CHEN2002; @CHEN2003; @CHEN2004] have proposed a 2D checkerboard-type ordering of the hole pairs. It offers a natural explanation of the scanning tunneling microscopy (STM) results on Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (BSCCO) and Ca$_{2-x}$Na$_x$CuO$_2$Cl$_2$ (NCCOC) compounds, which show rotationally symmetric $4a\times 4a$ charge ordering patterns [@HOFFMAN2002; @HOWALD2003; @VERSHININ2004; @McElroy2004; @HANAGURI2004]. The checkerboard state of the Cooper pairs has also been discussed in other frameworks in the recent literature [@ALTMAN2002; @VOJTA2002; @TESANOVIC2004; @ANDERSON2004]. Furthermore, the possibility of a Wigner crystal of single holes has also been proposed as a competing charge ordered state at low doping [@FU2004; @KIM2001; @ZHOU2003]. In view of the contrasting experimental results and theoretical proposals, more systematic studies of the nature of the charge order in the cuprates are clearly called for. The charge ordering tendency is expected to be particularly pronounced near certain “magic" doping levels, where the charge modulation is commensurate with the underlying lattice [@CHEN2003; @TESANOVIC2004; @ANDERSON2004; @FU2004]. 
Motivated by the recent discussions on the stripe versus the checkerboard order, we carry out a systematic study of the doping dependence of the resistivity, in order to uncover the possible commensurability effects. Thanks to the greatly improved crystal-growth technique for LSCO using floating-zone furnaces, single crystals of LSCO of unprecedented quality have recently become available [@KOMIYA2002] for a very wide doping range. The cleanliness of the new-generation crystals has allowed, for example, to produce 100%-untwinned single crystals [@Lavrov2001], which in turn led to finding of novel physics in this system [@ANDO2002; @DUMM2003; @LAVROV2002]. In this work, we systematically measure the temperature dependence of the in-plane resistivity $\rho_{ab}$ in a series of high-quality LSCO crystals for $x$ = 0.009 – 0.216. The raw data of $\rho_{ab}(T)$ are shown in Fig. 1 for all the superconducting samples; note that the hole doping is changed in very small increments (typically 1%) here, which is necessary for analyzing how exactly the mobility of the holes changes with their density. Figure 2(a) shows the $x$ dependence of the inverse mobility $\mu^{-1}$, which is equal to $ne\rho_{ab}$, for representative temperatures (the hole density $n$ is given by $x/V$, where $V$ is the unit volume per Cu). Also, since the absolute values of $\mu^{-1}$ are subject to possible geometrical-factor errors (which can be up to 5% in the case of our measurements), in Fig. 2(b) we show $\rho_{ab}(T)/\rho_{ab}(300{\rm K})$, which factors out such geometrical-factor errors. One can easily see that at high temperature the $x$ dependencies of these variables are rather smooth and featureless, but with lowering temperature they start to “oscillate"; a peak at $x \simeq$ 0.13 is particularly evident. In addition, there are three more peaks and/or shoulders, if weaker, at $x \simeq$ 0.06, 0.09, and 0.18. This observation suggests that there are particular carrier densities where the hole motion tends to be hindered, which weakly enhances the resistivity at low temperature. Most naturally, such a behavior is indicative of a “commensurability" effect associated with some sort of charge ordering [@CHEN2003; @TESANOVIC2004; @ANDERSON2004; @FU2004]. Remember, in usual charge ordered systems where the Peierls transition is responsible, a sharp increase in resistivity is observed upon charge ordering [@GRUNER1988]; in the present case, where the Coulomb interaction is likely to be responsible, the effect appears to be milder. The observed decimal numbers (0.06, 0.09, 0.13 and 0.18) suggest that the commensurability effect is possibly taking place at rational doping levels 1/16, 3/32, 1/8, and 3/16. (We note that there has been some preliminary evidence for a charge ordering tendency at $x$ = 1/16 [@KIM2001; @ZHOU2003]). Given that the inverse mobility shows an $x$-dependence that is indicative of charge ordering, one may wonder how the superconducting transition temperature $T_c$ changes with $x$; the inset of Fig. 2(a) shows the $x$-dependence of the zero-resistivity $T_c$ in our series of samples. Besides a plateau-like feature for $x$ = 0.08 – 0.12, the $T_c$ changes rather smoothly without showing clear dips that can be associated with the magic fractions observed in the resistivity data. 
Hence, the putative charge order appears to be [*not*]{} particularly destructive to superconductivity; this is rather surprising, but is probably related to the STM observation [@VERSHININ2004] that the checkerboard order shows up as a [*precursor*]{} to superconductivity. In this regard, it is useful to note that most of the other experiments concerning the checkerboard state were done below $T_c$ [@HOFFMAN2002; @HOWALD2003; @McElroy2004; @HANAGURI2004], while the existence of the magic doping fractions is suggested in the resistivity data above $T_c$; if the charge ordering phenomena in cuprates involve the Cooper pairs [@CHEN2003; @ALTMAN2002; @VOJTA2002; @TESANOVIC2004; @ANDERSON2004] and those pairs are formed at $T > T_c$, it is possible that the charge ordering, observable when the superconductivity is weakened, is essentially of the same nature across $T_c$. Now let us discuss the theoretical implications of our data. The 1D stripe model predicts a particular set of “magic" doping fractions. The stripe model most often discussed in the literature involves site-centered, horizontal or vertical charge stripes separated by the AF domains at a commensurate distance of $d=pa$, where $p$ is an integer and $a$ is the lattice constant. (For the case of $p=4$, see, for example, Fig. 1 of Ref. [@TRANQUADA1995]). The holes fill alternating sites on the charge stripe. In the stripe literature, it is commonly assumed that the hole doping on the stripe stays fixed, while the inter-stripe separation varies to accommodate different values of the doping level. This simple picture predicts magic doping fractions of $x=1/2p$, with a charge unit-cell of $2a \times pa$. On the other hand, the 2D checkerboard-type order generally leads to a different set of magic doping fractions. Stimulated by the checkerboard orders observed by STM [@HOFFMAN2002; @HOWALD2003; @VERSHININ2004; @McElroy2004; @HANAGURI2004], a global phase diagram of cuprate superconductors has been theoretically proposed and numerically analyzed [@CHEN2002; @CHEN2003; @CHEN2004], where the zero-temperature phase diagram was examined in the two-dimensional parameter space of chemical potential versus the ratio of the hole kinetic energy over the Coulomb interaction, within the framework of the $SO(5)$ theory. Most intriguingly, this theory predicts, besides the AF and SC states, checkerboard-type ordering of the Cooper pairs at magic rational doping fractions $(2m+1)/2^n$, where $m$ and $n$ are integers [@CHEN2003]. A hierarchy construction of the checkerboard states is shown in Fig. 3. The energetics of the checkerboard state has been studied extensively in Ref. [@CHEN2004], both numerically and analytically. In general, at the magic doping fraction $x=(2m+1)/2^n$, the charge unit-cell is $2^{(n+1)/2}a \times 2^{(n+1)/2}a$, pointing along the original Cu-O bond direction when $n$ is odd, and along the diagonal direction when $n$ is even. This theory relies on mapping the original fermionic degrees of freedom into effective bosonic degrees of freedom [@CHEN2003]; such mapping may be justified in the underdoped and optimally doped regimes, but fails in heavily doped samples. Therefore, while the bosonic theory predicts all magic doping fractions at $x=(2m+1)/2^n$, one can only expect the effective theory to be valid for $x<1/4=25\%$. Also, it is generally expected that the charge ordering tendencies are stronger at higher levels of the hierarchy, with smaller $n$. 
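The hierarchy just described is compact enough to enumerate directly; the following short sketch is our own bookkeeping of the quoted rules (magic fractions $x=(2m+1)/2^n$ up to $x=1/4$, with the unit-cell size $2^{(n+1)/2}a$ and its orientation), with the cutoff $n\le 6$ chosen arbitrarily, and is not code from the original analysis.

```python
from fractions import Fraction

# Minimal sketch (our own enumeration of the rules quoted above): magic
# doping fractions x = (2m+1)/2^n up to x = 1/4, with the charge unit-cell
# size 2^((n+1)/2) a and its orientation (Cu-O bond direction for odd n,
# diagonal for even n).  The cutoff n <= 6 is arbitrary.
def magic_fractions(n_max=6, x_max=Fraction(1, 4)):
    out = []
    for n in range(2, n_max + 1):
        for m in range(0, 2**(n - 2)):
            x = Fraction(2 * m + 1, 2**n)
            if x <= x_max:
                cell = 2**((n + 1) / 2)          # in units of a
                orientation = "Cu-O bond" if n % 2 == 1 else "diagonal"
                out.append((x, n, cell, orientation))
    return sorted(out)

for x, n, cell, orientation in magic_fractions():
    print(f"x = {x}  (n = {n}),  cell ~ {cell:.2f} a,  {orientation}")
```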
![Hierarchical construction of the checkerboard-type ordering of the hole pairs at “magic" doping fractions $(2m+1)/2^n$, where $m$ and $n$ are integers. Following the construction of Ref. [@CHEN2003], the original CuO$_2$ lattice is grouped into non-overlapping plaquettes, which can be represented by the squares on a checkerboard. A checkerboard can be alternately colored black and white; in our case, each black square contains four sites and no holes, while each white square contains four sites and two holes in the form of a Cooper pair. Such a state has hole doping density $x=1/4$ (a), as represented at the highest level of the hierarchy (e). (Electrons are denoted by black dots, and holes are denoted by open dots; since we only address the issue of charge ordering here, the spin of the electron is not explicitly indicated). At the next level of the hierarchy, consider the lattice of white squares only, and alternately color half of them black. Such a state has hole doping density $x=1/8$ (b). At one further level down in the hierarchy, one can either consider the lattice of the white squares, and alternately color half of them black, thus obtaining a state with $x=1/16$ (c), or one can consider the lattice of the newly colored black squares and alternately color half of them white, thus obtaining a state with $x=3/16$ (d). This hierarchy construction can be obviously iterated [*ad infinitum*]{}, generating a binary tree of magic doping fractions as shown here in (e).](Fig3.eps){width="8.5cm"} Both the 1D stripe model and the 2D checkerboard model adequately explain the dominant magic doping fraction at $x=1/8$, but they predict different sets of magic doping fractions, at which the system is expected to develop charge ordering tendencies. In this regard, our extensive data set on the doping dependence, if it indeed reflects a charge ordering tendency, contains sufficient accuracy to distinguish between the simple 1D stripe model and the 2D checkerboard model discussed above. The simple stripe model predicts commensurate effect at magic doping fractions 1/4, 1/6, 1/8, 1/10, 1/12, 1/14, 1/16, which should be either equally strong or vary monotonically in strength; therefore, the absence of any commensurability effects at $x=1/6$ and $x=1/10$ in our data or any other previous experiments is puzzling in the simple stripe model. Although the stripe structure can yield a complex “devil’s staircase" of commensurate dopings in nickelates [@WOCHNER1998], the magic fractions suggested here in a cuprate superconductor would be a challenge to the stripe picture. On the other hand, the suggested series (1/16, 3/32, 1/8, and 3/16) agrees surprisingly well with the magic doping fractions predicted from the checkerboard model discussed above, up to the level $n=5$. At this level, the absence of the $1/32$ fraction is understandable, since the hole-pair lattice at this doping fraction would be very dilute and therefore disordered. More notable is the absence of the $5/32$ fraction, for which we do not have an adequate explanation at this moment. In passing, let us briefly mention the Wigner crystal of single holes, which has the charge unit-cell $2^{n/2}a \times 2^{n/2}a$ at $x=1/2^n$. The main difference between the pair checkerboard and the hole checkerboard is not only the size of the charge unit-cell but also their orientations, which are $45^{\circ}$ with respect to each other; thus, the two should be easily discernible in experiments at a given doping. 
Neutron scattering on the LSCO-based materials have provided an extensive set of data on the [*spin*]{} order. We note, however, that the nature of the spin order may not be directly related to the nature of the charge order reflected in the transport properties, particularly in the superconducting doping regime of LSCO where the spin order is mostly dynamic [@YAMADA1998]. If the magnetic incommensurability observed by the neutron scattering is part of some dynamic dispersing mode [@CHRISTENSEN2004; @NORMAN2000; @HAYDEN2004], it is natural that the incommensurability does not represent the unit-cell of the incipient charge order. Furthermore, at $x$ = 0.02, the magnetic neutron scattering found static and unidirectional spin stripes that are no less than 30 unit-cell apart (magnetic incommensurability $\delta$ is 0.016) [@MATSUDA2000], which would cause a large resistivity anisotropy if the charges are conforming to the 1D spin stripes. However, transport measurements on untwinned single crystals have found only a factor of 1.5 resistivity anisotropy between the “longitudinal" and “transverse" directions [@ANDO2002]; this is rather difficult to understand without invoking some 2D character (which can, for example, be coming from a nematic stripe order [@Kivelson1998]) in the charge system. In passing, we note that our transport measurements of LSCO have found evidence for charge self-organization [@ANDO2001; @ANDO2003] and modest one-dimensionality [@ANDO2002; @DUMM2003] only in the lightly doped regime ($x \le 0.05$) of this compound, while the present study suggests the existence of the checkerboard order only in the superconducting doping regime ($x \ge 0.06$). We also note that the measurements of the in-plane resistivity anisotropy become impractical for $x \ge 0.06$, because the structural transition temperature (below which the system becomes orthorhombic) comes close to or below the room temperature at these dopings, making it difficult to prepare untwinned samples. There are only a few direct experimental observations of charge order in the LSCO-based materials. Neutron scattering on the La$_{1.48}$Nd$_{0.4}$Sr$_{0.12}$CuO$_4$ (LNSCO) compound at $x=1/8$ reveals elastic charge order peaks [@TRANQUADA1996] which can be interpreted either as orthogonally intersecting stripes on alternating planes or in the same plane. The latter case would also be consistent with the 2D pair checkerboard pattern with the charge unit-cell $4a \times 4a$. While it is fair to mention that the details of the charge order in LNSCO are more consistent with the 1D stripe picture [@ZIMMERMANN1998], such a structure may well be a result of the 1D modulation arising from the explicit symmetry breaking in the low-temperature tetragonal (LTT) phase of this particular material, and therefore may not be representative of the charge order in Nd-free LSCO. In view of the intriguing agreement of the present transport data with the hole pair checkerboard model, it would be desirable to systematically perform direct measurements of the charge order in the LSCO-based materials by some means. If the proposed checkerboard states are indeed realized in LSCO, the orientation of the charge unit cell should be along the Cu-O bond direction at $x=1/8$ (as is the case in BSCCO or NCCOC), while it should be along the diagonal direction near $x=1/16$; it would be definitive if this rotation of the charge unit cell upon changing $x$ is confirmed by a direct means. 
Also, since some of the fractions suggested by the present measurements are relatively weak, it would be highly desirable to carry out these systematic transport measurements under high magnetic fields or under high pressure, where one would expect the competing order to be enhanced and the magic doping fractions to be more pronounced. We would like to thank P. W. Anderson, S. A. Kivelson, J. M. Tranquada, A. Yazdani and Z. X. Zhao for helpful discussions. This work is supported by the Grant-in-Aid for Science provided by the Japan Society for the Promotion of Science, the NSF under grant number DMR-0342832, and the US Department of Energy, Office of Basic Energy Sciences, under contract DE-AC03-76SF00515.
--- abstract: 'In this paper we consider a simple two-patch model in which a population affected by a disease can freely move. We assume that the capacity of the interconnecting paths is limited, thereby influencing the migration rates. Possible habitat disruptions due to human activities or natural events are accounted for. The demographic assumptions prevent the ecosystem from being wiped out, and the disease remains endemic in both populated patches at a stable equilibrium, but possibly also with an oscillatory behavior in the case of unidirectional migrations. Interestingly, if the infected cannot migrate, it is possible that one patch becomes disease-free. This fact could be exploited to keep at least part of the population disease-free.' author: - | Silvia Motto, Ezio Venturino\ Dipartimento di Matematica “Giuseppe Peano”,\ Università di Torino,\ via Carlo Alberto 10, 10123 Torino, Italy title: 'Migration paths saturations in meta-epidemic systems.' --- Introduction ============ Natural landscapes can become fragmented due to landslides, for instance, or human constructions. Wild populations can be affected by these events. To understand these phenomena and possibly alleviate their negative consequences for the environment, scientists have developed the concepts of population assembling, [@LM], and metapopulations, [@HG; @W97], which showed that global survival is possible even if the populations in some patches go extinct, [@W96]. Diseases represent a common occurrence in nature for individuals and communities. Ecoepidemiology merges the demographic and epidemiological features of interacting populations into a single model; see Chapter 7 of [@MPV] for an introduction. In this context metaecoepidemic models can also be considered, [@V]. Epidemics affecting populations living in patchy habitats have been investigated for quite some time, also in the context of fighting newly emerging diseases, both deterministically, [@AvdD; @SvdD; @W; @WR; @WZ], and stochastically, [@AP02]. We consider a very simple two-patch model with migrations, in which a single population is affected by a disease. In [@BIV; @BCGV] other models of this kind have been introduced. A specific feature of this contribution lies in the fact that an upper bound on the migration rates is assumed, as in [@ABMV], but the restrictive assumption of no vital dynamics used for that similar model is here removed. Of interest is the assessment of the consequences that possible path disturbances have on the whole ecosystem. The disease is assumed to be recoverable, but both the disease transmission and recovery rates are environment-dependent. The disease transmission is modeled via mass action, assuming homogeneous mixing for the population in both patches. Instead, in [@ABLN] $n$ patches are assumed, where SIS models with standard incidence are present in each one. No vital dynamics is however considered there. In [@GR] the model is similar to [@ABLN], but contains susceptible recruitment and a different disease incidence. The effect of population diffusion on the disease spread is studied in [@WM], investigating what happens if the disease gets eradicated in neighboring patches. Also, diffusion may or may not help the epidemic to spread, [@WR]. The main reference model is [@JW], where the environment consists of several fragments and the stability conditions for the endemic and disease-free equilibria are established. In contrast to [@JW], we propose here a saturation effect on the migration corridors.
The analysis of course shows that the basic equilibria are the same, i.e. system disappearance and the endemic equilibrium with both patches populated. Further, we also try to answer the question of what happens to the ecosystem as a whole when some disruptions in the interpatch communications occur. This could happen because migrations are not possible in one direction, or if they require a strenuous effort, which infected individuals cannot exert. The Model {#chapt:secondo_modello} ========= A similar model with restricted migrations has been introduced in [@ABMV]. But in contrast to its assumptions stating that the migrations are restricted by the size of the population of the patch into which the migration occurs, we rather consider here the case in which migrations are restricted by the size of the available canals. Thus, even if the populations in each patch grow, only a maximal fixed migration rate can be attained. Mathematically, this is obtained by using a Holling type II function for modelling the migration rates. For $k=1,2$, let us denote by $S_k$ the susceptibles and by $I_k$ the infected in each patch. $$\begin{aligned} \label{model_g} \dot{S_1}=r_1S_1-\gamma_{1}S_1I_1+\delta_1I_1-m_{21}\frac {S_1}{A+I_1+S_1}+m_{12}\frac {S_2}{A+I_2+S_2},\\ \nonumber \dot{I_1}=\gamma_{1}S_1I_1-(\delta_1+\mu_1)I_1-n_{21}\frac {I_1}{B+I_1+S_1}+n_{12}\frac {I_2}{B+I_2+S_2},\\ \nonumber \dot{S_2}=r_2S_2-\gamma_{2}S_2I_2+\delta_2I_2+m_{21}\frac {S_1}{A+I_1+S_1}-m_{12}\frac {S_2}{A+I_2+S_2},\\ \nonumber \dot{I_2}=\gamma_{2}S_2I_2-(\delta_2+\mu_2)I_2+n_{21}\frac {I_1}{B+I_1+S_1}-n_{12}\frac {I_2}{B+I_2+S_2}.\end{aligned}$$ The parameters have the following meanings. By $r_k$ we denote the net reproduction rate of the susceptibles in patch $k$. Note that we make the strong demographic assumption that only susceptibles give birth, the disease preventing the infected to reproduce. Further, $\gamma_k$ denotes the disease contact rate, $\delta_k$ is the disease recovery rate, $\mu_k$ is the infected mortality rate in each patch, $A$ is the half saturation constant for the susceptibles, and $B$ the one for the infected; finally the migration rates from patch $j$ into patch $i$ are $m_{ij}$ for the susceptibles and $n_{ij}$ for the infected. In fact, e.g. the parameter $m_{12}$ represents the maximum migration rate possible for the susceptibles through the canal leading from patch 2 into patch 1. The last term in the first equation states thus that the higher the population in patch 2, the smaller the migration rate becomes in view of the saturation of the communication path. Similarly for the corresponding terms in this and the other equations. The first equation states that the susceptibles reproduce, and possibly become infected by contagion, new recruits come into this class also via disease recovery, and then the emigrations and immigrations occur. Similar considerations hold true for the remaining equations. ![The system in consideration.](generale_1.eps "fig:") \[fig:Illustraz\_caso\_gen2\] Equilibria ---------- The general model admits only two possible equilibria, the trivial state in which the ecosystem vanishes, and the coexistence state. 
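Before analyzing these equilibria, it may help to see the model in action. The following sketch, assuming Python with NumPy and SciPy (not part of the original analysis), integrates system (\[model\_g\]) numerically with the parameter set that is used later on to verify the stability of the endemic equilibrium; it is meant purely as an illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameter set used later in the stability discussion of the endemic equilibrium
r1 = r2 = 1.0; g1 = g2 = 1.0; d1 = d2 = 0.5; mu1 = mu2 = 1.0
m12 = m21 = n12 = n21 = 1.0; A = 1.0; B = 10.0

def rhs(t, y):
    S1, I1, S2, I2 = y
    mS1 = S1 / (A + I1 + S1); mS2 = S2 / (A + I2 + S2)   # saturated susceptible migration
    mI1 = I1 / (B + I1 + S1); mI2 = I2 / (B + I2 + S2)   # saturated infected migration
    dS1 = r1*S1 - g1*S1*I1 + d1*I1 - m21*mS1 + m12*mS2
    dI1 = g1*S1*I1 - (d1 + mu1)*I1 - n21*mI1 + n12*mI2
    dS2 = r2*S2 - g2*S2*I2 + d2*I2 + m21*mS1 - m12*mS2
    dI2 = g2*S2*I2 - (d2 + mu2)*I2 + n21*mI1 - n12*mI2
    return [dS1, dI1, dS2, dI2]

sol = solve_ivp(rhs, (0, 200), [2.0, 0.5, 1.0, 0.2])
# For these symmetric values one endemic equilibrium is (1.5, 1.5, 1.5, 1.5);
# the trajectory is expected to approach it, consistently with the stability
# claim made later for this parameter set.
print(sol.y[:, -1])
```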
To study the latter, we can eliminate the migration rates by summing the first and third equations of (\[model\_g\]), as well as the second and fourth one, to obtain respectively $$\begin{aligned} \nonumber r_1\tilde{S_1}-\gamma_{1}\tilde{S_1}\tilde{I_1}+\delta_1\tilde{I_1}+r_2\tilde{S_2}-\gamma_{2}\tilde{S_2}\tilde{I_2}+\delta_2\tilde{I_2}=0,\\ \label{eq2piu4} \gamma_{1}\tilde{S_1}\tilde{I_1}-(\delta_1+\mu_1)\tilde{I_1}+\gamma_{2}\tilde{S_2}\tilde{I_2}-(\delta_2+\mu_2)\tilde{I_2}=0.\end{aligned}$$ These equations can also be summed, to produce $$\tilde{S_1}=\frac{-r_2\tilde{S_2}+\mu_1\tilde{I_1}+\mu_2\tilde{I_2}}{r_1}$$ which upon substitution into (\[eq2piu4\]) gives $$\tilde{S_2}=\frac{r_1((\delta_1+\mu_1)\tilde{I_1}+(\delta_2+\mu_2)\tilde{I_2})-\gamma_1\mu_1{\tilde{I_1}}^2-\gamma_1\mu_1\tilde{I_1}\tilde{I_2}}{r_1\gamma_2\tilde{I_2}-r_2\gamma_1\tilde{I_1}}.$$ We need nonnegative populations, therefore some necessary conditions for the feasibility of the equilibrium with endemic disease and both patches populated follow: $$\tilde{I_1}>\frac{r_1\gamma_2\tilde{I_2}}{r_2\gamma_1}, \quad \tilde{S_2} \le \frac{\mu_1\tilde{I_1}+\mu_2\tilde{I_2}}{r_2}, \quad \tilde{I_2} \ge \frac{\gamma_1\mu_1{\tilde{I_1}}^2-r_1(\delta_1+\mu_1)\tilde{I_1}}{r_1(\delta_2+\mu_2)-\gamma_1\mu_2\tilde{I_1}}$$ or the opposite inequalities. The last one, however, leads to an upper bound that must be explicitly imposed not to be negative. In conclusion, we have the second set of necessary conditions $$\tilde{I_1} < \frac{r_1\gamma_2\tilde{I_2}}{r_2\gamma_1}, \quad \tilde{S_2} \le \frac{\mu_1\tilde{I_1}+\mu_2\tilde{I_2}}{r_2}, \quad \tilde{I_2} \le \frac{\gamma_1\mu_1{\tilde{I_1}}^2-r_1(\delta_1+\mu_1)\tilde{I_1}}{r_1(\delta_2+\mu_2)-\gamma_1\mu_2\tilde{I_1}},$$ supplemented by either one of the two sets of conditions, $$\frac {\delta_2 + \mu_2}{\gamma_1 \mu_2} < I_1 \le \frac {\delta_1 + \mu_1}{\gamma_2 \mu_1}, \quad \frac {\delta_1 + \mu_1}{\gamma_2 \mu_1} \le I_1 < \frac {\delta_2 + \mu_2}{\gamma_1 \mu_2}$$ Stability --------- The Jacobian of (\[model\_g\]) is $$\label{Jac} J=\left[ \begin{array}{cccc} J_{11} &-\gamma_1S_1+\delta_1+\hat{\eta_2}S_1&\hat{\theta_1}-\hat{\theta_2}S_2&-\hat{\theta_2}S_2\\ \\ \gamma_1I_1+\hat{\rho_2}I_1& J_{22} &-\hat{\sigma_2}I_2&\hat{\sigma_1}-\hat{\sigma_2}I_2\\ \\ \hat{\eta_1}-\hat{\eta_2}S_1&-\hat{\eta_2}S_1&J_{33} &-\gamma_2S_2+\delta_2+\hat{\theta_2}S_2\\ \\ -\hat{\rho_2}I_1&\hat{\rho_1}-\hat{\rho_2}I_1&\gamma_2I_2+\hat{\sigma_2}I_2&J_{44} \end{array} \vspace{3pt} \right]$$ with $$\begin{aligned} J_{11}=-\gamma_1I_1-\hat{\eta_1}+\hat{\eta_2}S_1+r_1, \quad J_{22}=\gamma_1S_1-\delta_1-\hat{\rho_1}+\hat{\rho_2}I_1-\mu_1\\ J_{33}=-\gamma_2I_2-\hat{\theta_1}+\hat{\theta_2}S_2+r_2, \quad J_{44}=\gamma_2S_2-\delta_2-\hat{\sigma_1}+\hat{\sigma_2}I_2-\mu_2,\\ \hat{\eta_1}=\frac{m_{21}}{A+S_1+I_1}, \quad \hat{\eta_2}=\frac{m_{21}}{{(A+S_1+I_1)}^2},\quad \hat{\theta_1}=\frac{m_{12}}{A+S_2+I_2}, \\ \hat{\theta_2}=\frac{m_{12}}{{(A+S_2+I_2)}^2},\quad \hat{\rho_1}=\frac{n_{21}}{B+S_1+I_1}, \quad \hat{\rho_2}=\frac{n_{21}}{{(B+S_1+I_1)}^2}\\ \hat{\sigma_1}=\frac{n_{12}}{B+S_2+I_2}, \quad \hat{\sigma_2}=\frac{n_{12}}{{(B+S_2+I_2)}^2}.\end{aligned}$$ The origin represents the only case in which the stability study can be performed analytically. 
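The entries above, and the factorization of the characteristic polynomial at the origin derived in the next paragraph, can be double-checked symbolically. The following sketch assumes SymPy and is only a verification aid; it builds the right-hand side of (\[model\_g\]), evaluates its Jacobian at the origin, and attempts to factor the characteristic polynomial.

```python
import sympy as sp

S1, I1, S2, I2, lam = sp.symbols('S1 I1 S2 I2 lambda')
r1, r2, g1, g2 = sp.symbols('r_1 r_2 gamma_1 gamma_2', positive=True)
d1, d2, mu1, mu2 = sp.symbols('delta_1 delta_2 mu_1 mu_2', positive=True)
m12, m21, n12, n21, A, B = sp.symbols('m_12 m_21 n_12 n_21 A B', positive=True)

F = sp.Matrix([
    r1*S1 - g1*S1*I1 + d1*I1 - m21*S1/(A + I1 + S1) + m12*S2/(A + I2 + S2),
    g1*S1*I1 - (d1 + mu1)*I1 - n21*I1/(B + I1 + S1) + n12*I2/(B + I2 + S2),
    r2*S2 - g2*S2*I2 + d2*I2 + m21*S1/(A + I1 + S1) - m12*S2/(A + I2 + S2),
    g2*S2*I2 - (d2 + mu2)*I2 + n21*I1/(B + I1 + S1) - n12*I2/(B + I2 + S2),
])
J0 = F.jacobian([S1, I1, S2, I2]).subs({S1: 0, I1: 0, S2: 0, I2: 0})
p = sp.factor(J0.charpoly(lam).as_expr())
# Expected to split into the product of the two quadratics H(lambda)*K(lambda)
# given in the text below (the Jacobian at the origin decouples the S- and
# I-directions).
print(p)
```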
The characteristic equation factorizes, to give $H(\lambda)K(\lambda)=0$, with $$K(\lambda)=\lambda^2+(\delta_1+\delta_2+\mu_1+\mu_2+\frac{n_{12}}{B} +\frac{n_{21}}{B})\lambda+(\delta_2+\mu_2)(\delta_1+\mu_1+ \frac{n_{21}}{B})+(\delta_1+\mu_1)\frac{n_{12}}{B}$$ and $$\label{H} H(\lambda)=\lambda^2+(\frac{m_{12}}{A}+\frac{m_{21}}{A}-r_1-r_2)\lambda-\frac{m_{12}r_1}{A}-\frac{m_{21}r_2}{A}+r_1r_2.$$ For $K(\lambda)$ all coefficients are positive, so that its roots have both negative real parts. If we consider the Routh-Hurwitz conditions for $H(\lambda)=0$, we find $$\frac{m_{12}}{A}+\frac{m_{21}}{A}-r_1-r_2>0 \quad -\frac{m_{12}r_1}{A}-\frac{m_{21}r_2}{A}+r_1r_2>0 .$$ The stability conditions are then $$A(r_2+r_1)-m_{21}<m_{12}<Ar_2-\frac{m_{21}r_2}{r_1}.$$ Eliminating $Ar_2$ from the first and last terms, and observing that $m_{12}>0$ in the last inequality, we get $$Ar_1-m_{21}<-\frac{m_{21}r_2}{r_1} \quad A-\frac{m_{21}}{r_1}>0 ,$$ from which $-m_{21}r_2 r_1^{-1}>0$ follows, thus showing that the origin can never be stable. Through numerical simuations, it can be verified that indeed the endemic equilibrium can be stably achieved. This can be accomplished for instance using the following set of parameter values $$\begin{aligned} r_1=1, \quad r_2=1, \quad \gamma_1=1, \quad \gamma_2=1, \quad \delta_1=0.5, \quad \delta_2=0.5, \quad \mu_1=1,\\ \mu_2=1, \quad m_{12}=1, \quad m_{21}=1, \quad n_{12}=1, \quad n_{21}=1, \quad A=1, \quad B=10.\end{aligned}$$ Bifurcations {#subsec:bif_gen2} ------------ We now show that no Hopf bifurcations can arise at the origin. Since $K(\lambda)$ has roots with negative real parts, we consider only $H(\lambda)=0$. To have a Hopf bifurcation we need $$\frac{m_{12}}{A}+\frac{m_{21}}{A}-r_1-r_2=0, \quad -\frac{m_{12}r_1}{A}-\frac{m_{21}r_2}{A}+r_1r_2>0 .$$ Solving for $r_2$ in the first equation, and substituting into the second one, we have $$\Psi(r_1)= -{r_1}^2+2\frac{m_{21}r_1}{A}-\frac{m_{21}}{A}\left(\frac{m_{12}}{A}+\frac{m_{21}}{A}\right)>0 .$$ But this condition can never be satisfied, as the concave parabola $\Psi(r_1)=0$ has the vertex $\left(m_{21}A^{-1}, -m_{21}m_{12}A^{-2}\right)$, lying in the fourth quadrant. Unidirectional Migrations {#subsec:no_12_mod2} ========================= We now consider the case in which the joining path between the two patches can be traversed only in one direction. This is by no means restrictive as for instance fish can swim much more easily downstream in rivers, and sometimes dams and waterfalls prevent them from returning upstream. The Figure \[fig:graf\_interr\_migr\_2\_12\] describes the situation. ![The schematic model of unidirectional migrations.[]{data-label="fig:graf_interr_migr_2_12"}](no_migrazione_da_2_a_1_1.eps) The system (\[model\_g\]) contains now $m_{12}=0$ and $n_{12}=0$. The system’s Jacobian (\[Jac\]) simplifies accordingly. Equilibria {#subsec:equilibri_no12_mod2} ---------- In this case we have again the origin, and possibly coexistence. But in addition, we find the point $E_1=(0,0,\tilde{S_2},\tilde{I_2})$ with $$\tilde{S_2}=\frac{\delta_2+\mu_2}{\gamma_{2}}, \quad \tilde{I_2}=\frac{r_2(\delta_2+\mu_2)}{\gamma_{2}\mu_2},$$ which is always feasible. 
We also find the point $E_2=(\tilde{S_1},0,\tilde{S_2},\tilde{I_2})$ with population values $$\tilde{S_1}=\frac{m_{21}-r_1A}{r_1}, \quad \tilde{S_2}=\frac{\delta_2+\mu_2}{\gamma_2}, \quad \tilde{I_2}=\frac{\gamma_2(m_{21}-r_1A)+r_2(\delta_2+\mu_2)}{\gamma_2\mu_2}.$$ It has the following feasibility condition, $$\label{feas_E2} m_{21}\ge r_1A.$$ Stability --------- At the origin, we find the following eigenvalues, $$\lambda_1=r_2 \quad, \lambda_2=-\delta_2-\mu_2, \quad \lambda_3=\frac{r_1A-m_{21}}{A} , \quad \lambda_4=-\frac{\delta_1B+n_{21}+\mu_1B}{B},$$ from which its unconditional instability is immediate. Since all the eigenvalues are real, no Hopf bifurcation can arise. At $E_1$ again it is possible to obtain directly the eigenvalues, $$\lambda_{1,2}=\frac{-r_2\delta_2\pm\sqrt{{r_2}^2{\delta_2}^2-4{\mu_2}^2r_2(\mu_2+\delta_2)}}{2\mu_2}, \quad \lambda_3=\frac{-m_{21}+r_1A}{A}$$ and $\lambda_4=-[(\delta_1+\mu_1)B+n_{21}]B^{-1}<0$. Since also $\lambda_{1,2}<0$ easily, stability is regulated by the third eigenvalue, giving $$\label{stab21_caso8_mod2} r_1A<m_{21}.$$ Again, no Hopf bifurcations arise, since the eigenvalues $\lambda_1$ and $\lambda_2$ can never be purely imaginary, as the parameters are all positive: $r_2\delta_2 \ne 0$, since $r_2>0$ and $\delta_2>0$. At $E_2$ once more the eigenvalues are explicitly found, $$\begin{aligned} \lambda_1=\gamma_1\tilde{S_1}-\delta_1-\mu_1-\frac{n_{21}}{B+\tilde{S_1}} , \quad \lambda_2=-\frac{m_{21}}{A+\tilde{S_1}}+\frac{m_{21}\tilde{S_1}}{{(A+\tilde{S_1})}^2}+r_1 , \\ \lambda_{3,4}=\frac{r_2-\gamma_2\tilde{I_2} \pm \sqrt{(r_2-\gamma_2\tilde{I_2})^2-4\mu_2\gamma_2\tilde{I_2}}}{2}.\end{aligned}$$ Using the expression for $\tilde{S_1}$ the second eigenvalue becomes $\lambda_2= r_1(m_{21}-r_1A) m_{21}^{-1}$, so that its negativity in this case entails $m_{21}-r_1A<0$, which contradicts the feasibility condition (\[feas\_E2\]). In conclusion, $E_2$ is unconditionally unstable. Also here no Hopf bifurcations arise. Imposing the real part of $\lambda_{3,4}$ to be zero, we find $r_2-\gamma_2\tilde{I_2}=0$ which explicitly becomes $$-\frac{\gamma_2(m_{21}-r_1A)+r_2\delta_2}{\mu_2}=0,$$ which cannot be satisfied in view of the feasibility condition (\[feas\_E2\]). In this model we can study the coexistence equilibrium, because the Jacobian becomes a lower triangular matrix. The characteristic equation then factorizes accordingly, to give the quadratic equation $${\lambda}^2+a_1\lambda+a_0=0$$ with $$\begin{array}{l} a_1=\gamma_1\tilde{I_1}+\tilde{\eta_1}-\tilde{\eta_2}\tilde{S_1}-r_1-\gamma_1\tilde{S_1}+\delta_1+\mu_1+\tilde{\rho_1}-\tilde{\rho_2}\tilde{I_1} \\ a_0=(-\gamma_1\tilde{I_1}-\tilde{\eta_1}+r_1)(-\delta_1-\mu_1-\tilde{\rho_1}+\tilde{\rho_2}\tilde{I_1}) +\gamma_1\tilde{S_1}\tilde{I_1}(\tilde{\rho_2}-\tilde{\eta_2})\\ +\tilde{\eta_2}\tilde{S_1}(\gamma_1\tilde{S_1}-\delta_1-\mu_1 -\tilde{\rho_1})-\delta_1\tilde{I_1}(\gamma_1+\tilde{\rho_2}). \end{array}$$ To have roots with negative real parts we then need both these coefficients positive, $$a_1>0, \quad a_0>0.$$ The remaining eigenvalues are evaluated explicitly, $$\lambda_{3,4}=\frac{k \pm \sqrt{k^2+4(r_2\delta_2+r_2\mu_2-r_2\gamma_2\tilde{S_2}-\mu_2\gamma_2\tilde{I_2})}}{2},$$ where $k=-\gamma_2\tilde{I_2}-\delta_2-\mu_2+\gamma_2\tilde{S_2}+r_2$. They are both real and negative if $k<0$ and $r_2\delta_2+r_2\mu_2-r_2\gamma_2\tilde{S_2}-\mu_2\gamma_2\tilde{I_2}<0$. 
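The explicit eigenvalues found above are easily evaluated numerically. The sketch below, in Python with NumPy, uses an illustrative parameter set chosen here (not taken from the text) to exhibit a case in which the stability criterion (\[stab21\_caso8\_mod2\]) for $E_1$ holds.

```python
import numpy as np

# Illustrative values chosen here; m12 = n12 = 0 in this section.
r1, r2, g2 = 1.0, 1.0, 1.0
d1, d2, mu1, mu2 = 0.5, 0.5, 1.0, 1.0
m21, n21, A, B = 2.0, 1.0, 1.0, 10.0

# Explicit eigenvalues at E1 = (0, 0, S2, I2), as derived above
disc = r2**2 * d2**2 - 4 * mu2**2 * r2 * (mu2 + d2)
lam1 = (-r2*d2 + np.sqrt(disc + 0j)) / (2*mu2)
lam2 = (-r2*d2 - np.sqrt(disc + 0j)) / (2*mu2)
lam3 = (-m21 + r1*A) / A
lam4 = -((d1 + mu1)*B + n21) / B

print(lam1, lam2, lam3, lam4)
print("E1 stable?", r1*A < m21)   # criterion (stab21_caso8_mod2)
```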
In summary, the stability conditions for the coexistence equilibrium are $$\label{stab16} a_1>0, \quad a_0>0, \quad \gamma_2\tilde{S_2}+r_2<\gamma_2\tilde{I_2}+\delta_2+\mu_2, \quad r_2\delta_2+r_2\mu_2<r_2\gamma_2\tilde{S_2}+\mu_2\gamma_2\tilde{I_2}.$$ In principle Hopf bifurcations could arise in this situation, whenever either one of the two sets of conditions holds, $$\label{condbif1} a_1=0, \quad a_0>0, \quad k<0, \quad h<0,$$ or $$\label{condbif2} a_1>0, \quad a_0>0, \quad k=0, \quad h<0.$$ Infected do not Migrate {#sec:terzo_modello} ======================= In this case we assume that migrations entail an effort, which is too strenuous for infected to exert, so that they are prevented from changing the patch in which they live. We need to set $n_{21}=n_{12}=0$ and dropping also the populations $I_1$ and $I_2$ from the migration terms. Pictorially, the system is illustrated in Figure \[fig:infetti\_non\_migrano\]. ![The system where the infected are prevented from migrating.[]{data-label="fig:infetti_non_migrano"}](infetti_non_migrano_1.eps) Equilibria {#subsec:equilibri_gen3} ---------- In addition to the origin, we find also the following two pairs of equilibria, $Z_1^{\pm}=(\tilde{S_1},\tilde{I_1},\tilde{S_2}^{\pm},0)$, $Z_2^{\pm}=(\tilde{S_1}^{\pm},0,\tilde{S_2},\tilde{I_2})$. At $Z_1$ the population values can be explicitly calculated, $$\tilde{S_1}=\frac{\delta_1+\mu_1}{\gamma_1},\quad \tilde{I_1}=\tilde{S_1}\frac{r_1}{\mu_1}+\frac{r_2\tilde{S_2}}{\mu_1},\quad \tilde{S_2}^{\pm}=\frac{\ell \pm \sqrt{\ell^2-4r_2m_{21}A\tilde{S_1}(A+\tilde{S_1})}}{2r_2(A+\tilde{S_1})}$$ where $\ell=m_{12}A+m_{12}\tilde{S_1}-r_2A^2-r_2A\tilde{S_1}-m_{21}\tilde{S_1}$. These points are feasible if and only if $$\label{amm_12_3bis} m_{21}\tilde{S_1} +r_2A^2+r_2A\tilde{S_1} < m_{12}A+m_{12}\tilde{S_1}.$$ The inequality is strict since for $\ell =0$ the corresponding quadratic equation has purely imaginary solutions. We also find at $Z_2$ the populations $$\begin{aligned} \tilde{S_2}=\frac{\delta_2+\mu_2}{\gamma_2}, \quad \tilde{I_2}=r_2\frac{\delta_2+\mu_2}{\gamma_2\mu_2}+\frac{r_1\tilde{S_1}}{\mu_2},\quad \tilde S_1^{\pm}=\frac{h \pm \sqrt{h^2-4r_1m_{12}A\tilde{S_2}(A+\tilde{S_2})}}{2r_1(A+\tilde{S_2})}\end{aligned}$$ with $h=m_{21}A+m_{21}\tilde{S_2}-m_{12}\tilde{S_2}-r_1A^2-r_1A\tilde{S_2}$. Once again, noting the strict inequality, these equilibria are feasible if and only if $$\label{amm_14_3bis} m_{12}\tilde{S_2}+r_1A^2+r_1A\tilde{S_2} < m_{21}A+m_{21}\tilde{S_2} .$$ Also the endemic coexistence equilibrium can be analytically evaluated, $$\begin{aligned} \tilde{S_1}=\frac{\delta_1+\mu_1}{\gamma_{1}}, \quad \tilde{I_2}=\frac{r_1\gamma_2(\delta_1+\mu_1)+r_2\gamma_1(\delta_2+\mu_2) -\tilde{I_1}\gamma_1\gamma_2\mu_1 }{\gamma_1\gamma_2\mu_2}, \\ \tilde{S_2}=\frac{\delta_2+\mu_2}{\gamma_{2}},\quad \tilde{I_1} = \frac 1{\gamma_{1}\tilde{S_1}-\delta_1} \left[r_1\tilde{S_1}-m_{21}\frac{\tilde{S_1}}{A+\tilde{S_1}}+m_{12}\frac{\tilde S_{2}}{A+\tilde{S_2}}\right] %=\frac 1{\mu_{1}} %\left[r_1\tilde{S_1}-m_{21}\frac{\tilde{S_1}}{A+\tilde{S_1}}+m_{12}\frac{\tilde S_{2}}{A+\tilde{S_2}}\right].\end{aligned}$$ For feasibility, note that the denominator in the expression for $\tilde I_1$ reduces to $\mu_1$. Then $I_1 \ge 0$ gives $$\label{feas_coex1} r_1\tilde{S_1}+ m_{12}\frac{\tilde S_{2}}{A+\tilde{S_2}} \ge m_{21}\frac{\tilde{S_1}}{A+\tilde{S_1}}.$$ We need also $I_2 \ge 0$ i.e. 
$$\label{amm_coex} \tilde{I_1}\le \frac{r_1\gamma_2(\delta_1+\mu_1)+r_2\gamma_1(\delta_2+\mu_2)}{\gamma_1\gamma_2\mu_1}$$ which can be recast in the following form $$\label{feas_coex2} r_2\tilde{S_2}+m_{21}\frac{\tilde{S_1}}{A+\tilde{S_1}} \ge m_{12}\frac{\tilde S_{2}}{A+\tilde{S_2}}.$$ Stability --------- At the origin, the Jacobian has the explicit eigenvalues $\lambda_1=-\delta_1-\mu_1$, $\lambda_2=-\delta_2-\mu_2$ and the roots of the quadratic (\[H\]) so that the same analysis carries out also in this case, showing the inconditionate instability of this equilibrium point. At $Z_1$ one eigenvalue is $\lambda_1=\gamma_2\tilde{S_2}-\delta_2-\mu_2$. The remaining ones are the roots of the cubic equation $\lambda^3+\hat{p_2}\lambda^2+\hat{p_1}\lambda+\hat{p_0}=0$ with $$\begin{aligned} \hat{p_2}=\gamma_1\tilde{I_1}-r_2-r_1+\tilde{\eta_1}-\tilde{\eta_2}\tilde{S_1}+\tilde{\theta_1}-\tilde{\theta_2}\tilde{S_2}-\gamma_1\tilde{S_1}+\delta_1+\mu_1, \\ \hat{p_1}=\gamma_1\tilde{I_1}(\gamma_1\tilde{S_1}-\delta_1)+(\tilde{\theta_1}-\tilde{\theta_2}\tilde{S_2})(\gamma_1\tilde{I_1}-r_1-\gamma_1\tilde{S_1}+\delta_1 +\mu_1) \\ +(\tilde{\eta_1}\tilde{S_1}+r_1-\gamma_1\tilde{I_1}-\tilde{\eta_1})(r_2+\gamma_1\tilde{S_1}-\delta_1-\mu_1) +r_2(\gamma_1\tilde{S_1}-\delta_1-\mu_1), \\ \hat{p_0}=\gamma_1\tilde{I_1}(\gamma_1\tilde{S_1}-\delta_1)(\tilde{\theta_1}-\tilde{\theta_2}\tilde{S_2}-r_2)-(\gamma_1\tilde{S_1}-\delta_1-\mu_1)[r_2(\tilde{\eta_2}\tilde{S_1} \\ +r_1-\gamma_1\tilde{I_1}-\tilde{\eta_1})+(\gamma_1\tilde{I_1}-r_1)(\tilde{\theta_1}-\tilde{\theta_2}\tilde{S_2})].\end{aligned}$$ The Routh-Hurwitz conditions combined with negativity of the explicit eigenvalue ensure then stability: $$\label{stab_gen3_caso12} \gamma_2\tilde{S_2}<\delta_2+\mu_2, \quad \hat{p_0}>0, \quad \hat{p_2}>0, \quad \hat{p_2}\hat{p_1}>\hat{p_0}.$$ A similar situation arises for $Z_2$, one eigenvalue is found analytically, $\lambda_1=\gamma_1\tilde{S_1}-\delta_1-\mu_1$ and the cubic equation $\lambda^3+\hat{q_2}\lambda^2+\hat{q_1}\lambda+\hat{q_0}=0$ with coefficients $$\begin{aligned} \hat{q_2}=-r_2-r_1+\gamma_2\tilde{I_2}+\tilde{\theta_1}-\tilde{\theta_2}\tilde{S_2}+\tilde{\eta_1}-\tilde{\eta_2}\tilde{S_1}-\gamma_2\tilde{S_2}+\delta_2+\mu_2 \\ \hat{q_1}=-\gamma_2\tilde{I_2}(-\gamma_2\tilde{S_2}+\delta_2)+(-\tilde{\eta_1}+\tilde{\eta_2}\tilde{S_1})(-\gamma_2\tilde{I_2}+r_2+\gamma_2\tilde{S_2}-\delta_2 -\mu_2) \\ +(-\gamma_2\tilde{I_2}-\tilde{\theta_1}+\tilde{\theta_1}\tilde{S_2}+r_2)(r_1+\gamma_2\tilde{S_2}-\delta_2-\mu_2)+ +r_1(\gamma_2\tilde{S_2}-\delta_2-\mu_2) \\ \hat{q_0}=\gamma_2\tilde{I_2}(-\gamma_2\tilde{S_2}+\delta_2)(-\tilde{\eta_1}+\tilde{\eta_2}\tilde{S_1}+r_1)-(\gamma_2\tilde{S_2}-\delta_2-\mu_2)[r_1(-\gamma_2\tilde{I_2}+ \\ -\tilde{\theta_1}+\tilde{\theta_2}\tilde{S_2}+r_2)+(-\gamma_2\tilde{I_2}+r_2)(-\tilde{\eta_1}+\tilde{\eta_2}\tilde{S_1})]\end{aligned}$$ for which the stability criterion becomes $$\label{stab_gen3_caso14} \gamma_1\tilde{S_1}<\delta_1+\mu_1, \quad \hat{q_0}>0 , \quad \hat{q_2}>0 , \quad \hat{q_2}\hat{q_1}>\hat{q_0}$$ #### Remark 1. No bifurcations can arise here near the origin. The proof of this statement is exactly the same as the one carried out in Section \[subsec:bif\_gen2\]. 
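As a numerical illustration (not part of the proof), the sketch below, in Python with NumPy, evaluates the Jacobian of the model of this section (with $n_{12}=n_{21}=0$ and the infected dropped from the migration terms) by central differences at the two equilibria $Z_1^{\pm}$, using the first parameter set reported in the next paragraph; the equilibrium expressions are the ones derived above.

```python
import numpy as np

# First parameter set reported below, for which Z1 is claimed stable.
r1, r2, g1, g2 = 1.0, 0.8, 0.5, 1.0
d1, d2, mu1, mu2 = 1.0, 4.0, 1.0, 2.0
m21, m12, A = 2.0, 10.0, 5.0

def f(y):
    """Right-hand side with n12 = n21 = 0 and susceptible-only migration saturation."""
    S1, I1, S2, I2 = y
    return np.array([
        r1*S1 - g1*S1*I1 + d1*I1 - m21*S1/(A + S1) + m12*S2/(A + S2),
        g1*S1*I1 - (d1 + mu1)*I1,
        r2*S2 - g2*S2*I2 + d2*I2 + m21*S1/(A + S1) - m12*S2/(A + S2),
        g2*S2*I2 - (d2 + mu2)*I2,
    ])

S1e = (d1 + mu1)/g1
ell = m12*A + m12*S1e - r2*A**2 - r2*A*S1e - m21*S1e
for sign in (+1, -1):   # the two roots S2^+/- of Z1^+/-
    S2e = (ell + sign*np.sqrt(ell**2 - 4*r2*m21*A*S1e*(A + S1e)))/(2*r2*(A + S1e))
    I1e = (r1*S1e + r2*S2e)/mu1
    ye = np.array([S1e, I1e, S2e, 0.0])
    h = 1e-6
    J = np.column_stack([(f(ye + h*e) - f(ye - h*e))/(2*h) for e in np.eye(4)])
    print(sign, "residual:", np.max(np.abs(f(ye))), "eigenvalues:", np.linalg.eigvals(J))
# At least one of the two Z1 equilibria is expected to have all eigenvalues with
# negative real parts for this parameter set, consistently with the text.
```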
The stability conditions for both equilibria $Z_1$ and $Z_2$ are nonempty, as can be easily shown numerically using respectively the following sets of parameters $$\begin{aligned} r_1=1, \quad r_2=0.8, \quad \gamma_1=0.5, \quad \gamma_2=1, \quad \delta_1=1, \quad \delta_2=4,\\ \mu_1=1, \quad \mu_2=2, \quad m_{21}=2, \quad m_{12}=10, \quad A=5.\end{aligned}$$ and $$\begin{aligned} r_1=0.8, \quad r_2=1, \quad \gamma_1=1, \quad \gamma_2=0.5, \quad \delta_1=4, \quad \delta_2=1,\\ \mu_1=2, \quad \mu_2=1, \quad m_{21}=9, \quad m_{12}=2, \quad A=5.\end{aligned}$$ The endemic equilibrium with all patches populated can numerically be shown to be attained for instance for the parameter values $$\begin{aligned} r_1=1, \quad r_2=1, \quad \gamma_1=1, \quad \gamma_2=1, \quad \delta_1=0.5, \quad \delta_2=0.5,\\ \mu_1=1, \quad \mu_2=1, \quad m_{12}=1, \quad m_{21}=1, \quad A=1.\end{aligned}$$ Biological Interpretation ========================= For the general model, the system can never be wiped out, since the origin is unconditionally unstable. The ecosystem thrives with a nonvanishing population and an endemic state of the disease in both patches at stable levels, for certain parameter ranges. These results hold true also for the particular case in which migrations back into patch 1 are forbidden. But in such a case new possible equilibria arise, in which patch 1 is depleted, or in which only the susceptible population survives there. The latter equilibrium, however, is never stable. The equilibrium with patch 1 empty is stable if the reproductive rate in that patch is low enough, or, more precisely, if the emigration rate is sufficiently high, compare (\[stab21\_caso8\_mod2\]). For the equilibrium with endemic disease and both patches populated, stability conditions have been derived, and the presence of a regime of possible oscillatory behavior has been highlighted. If the infected do not migrate, once again the ecosystem is guaranteed to survive, as the origin is always unstable. The equilibria $Z_1$ and $Z_2$ are interesting, as in them one patch becomes disease-free. This result could potentially be exploited by the managers of wildlife parks to preserve at least part of a population from an epidemic. In order to control the disease at least in part of the environment, it therefore appears to be better to preserve population movements in both directions while preventing the infected from migrating, than to impose unidirectional migrations for both classes of individuals. ### Acknowledgments. {#acknowledgments. .unnumbered} The authors have been partially supported by the project “Metodi numerici in teoria delle popolazioni” of the Dipartimento di Matematica “Giuseppe Peano”. [4]{} Aimar, V., Borlengo, S., Motto, S., Venturino, E., A meta-epidemic model with steady state demographics and migrations saturation, AIP Conf. Proc. 1479, ICNAAM 2012, T. Simos, G. Psihoylos, Ch. Tsitouras, Z. Anastassi (Editors), 1311–1314 (2012); doi: 10.1063/1.4756396 Allen, L. J. S., Bolker, B. M., Lou, Y., Nevai, A. L., Asymptotic Profiles of the Steady States for an SIS Epidemic Patch Model, SIAM J. Appl. Math. 67, 1283–1309 (2007) Arino, J., van den Driessche, P., Disease spread in metapopulations, Nonlinear Dynamics and Evolution Equations, Fields Inst. Commun. 48, H. Brunner, X. O. Zhao, and X. Zou, eds., AMS, Providence, RI, 1–13 (2006). Arrigoni, F., Pugliese, A., Limits of a multi-patch SIS epidemic model, J. Math. Biol.
45, 419–440 (2002) Barengo, M., Iennaco, I., Venturino, E., A simple meta-epidemic model, Proceedings of the 12th International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2012, J. Vigo-Aguiar, A.P. Buslaev, A. Cordero, M. Demiralp, I.P. Hamilton, E. Jeannot, V.V. Kozlov, M.T. Monteiro, J.J. Moreno, J.C. Reboredo, P. Schwerdtfeger, N. Stollenwerk, J.R. Torregrosa, E. Venturino, J. Whiteman (Editors) La Manga, Spain, July 2nd-5th, 2012, v. 1, p. 122–133 (2012) Bianco, F., Cagliero, E., Gastelurrutia, M., Venturino, E., Metaecoepidemics with migration of and disease in the predators, Proceedings of the 11th International Conference on Computational and Mathematical Methods in Science and Engineering, CMMSE 2011, J. Vigo Aguiar, R. Cortina, S. Gray, J.M. Ferradiz, A. Fernandez, I. Hamilton, J.A. Lopez-Ramos, F. de Oliveira, R. Steinwandt, E. Venturino, J. Whiteman, B. Wade (Editors), Benidorm, Spain, June 26th-30th, v. 1, 204–223 (2011) Gao, D., Ruan, S., An SIS patch model with variable transmission coefficients, Mathematical Biosciences 232, 110–115 (2011) Hanski, I., Gilpin, M., Metapopulation biology: ecology, genetics and evolution, London: Academic Press (1997) Jin, Y., Wang, W., The effect of population dispersal on the spread of a disease, J. Math. Anal. Appl. 308, 343–364 (2005) Lloyd, A., May, R. M., Spatial heterogeneity in epidemic models, J. Theoret. Biol. 179, 1–11 (1996) Malchow, H., Petrovskii, S., Venturino, E., Spatiotemporal patterns in Ecology and Epidemiology. CRC, Boca Raton (2008) Salmani, M., van den Driessche, P., A model for disease transmission in a patchy environment, Discrete Contin. Dynam. Systems Ser. B 6, 185–202 (2006) Venturino, E., Simple metaecoepidemic models, Bull. Math. Biol. 73, 917–950 (2011) Wang, W., Population dispersal and disease spread, Discrete Contin. Dynam. Systems Ser. B 4, 797–804 (2004) Wang, W., Mulone, G., Threshold of disease transmission in a patch environment, J. Math. Anal. Appl. 285, 321–335 (2003) Wang, W.. Ruan, S., Simulating the SARS outbreak in Beijing with limited data, J. Theoret. Biol. 227, 369–379 (2004) Wang, W., Zhao, X.-Q., An epidemic model in a patchy environment, Math. Biosci. 190, 97–112 (2004) Wiens, J. A., Wildlife in patchy environments: metapopulations, mosaics, and management, in D. R. McCullough (Ed.) Metapopulations and Wildlife Conservation. Island Press, Washington 53–84 (1996) Wiens, J. A., Metapopulation dynamics and landscape ecology, in I. A. Hanski, M. E. Gilpin (Ed.s), 43–62. Academic Press, San Diego, (1997)
--- abstract: 'This article proves a conjecture by S.-C. Liu and C.-C. Yeh about Catalan numbers, which states that odd Catalan numbers can take exactly $k-1$ distinct values modulo $2^k$, namely the values $C_{2^1-1},\ldots,\allowbreak C_{2^{k-1}-1}$.' address: École Normale Supérieure de Lyon author: - 'Hsueh-Yung Lin' bibliography: - 'article.bib' title: Odd Catalan numbers modulo $2^k$ --- Notation ======== In this article we denote $C_n := \frac{(2n)!}{(n+1)!n!}$ the $n$-th Catalan number. We also define $(2n+1)!! := 1\times 3\times\cdots\times(2n+1)$. For $x$ an integer, $\nu_2(x)$ stands for the $2$-adic valuation of $x$, i.e. $\nu_2(x)$ is the largest integer $a$ such that $2^a$ divides $x$. Introduction ============ The main result of this article is Theorem \[thmp\], which proves a conjecture by S.-C. Liu and C.-C. Yeh about odd Catalan numbers [@key-9]. To begin with, let us recall the characterization of odd Catalan numbers: A Catalan number $C_n$ is odd if and only if $n=2^a-1$ for some integer $a$. That result is easy, see e.g. [@key-2]. The main theorem we are going to prove is the following: \[thmp\] For all $k\geq2$, the numbers $C_{2^1-1}, C_{2^2-1},\ldots, C_{2^{k-1}-1} $ all are distinct modulo $2^k$, and modulo $2^k$ the sequence $(C_{2^n-1})_{n\geq 1}$ is constant from rank ${k-1}$ on. Here are a few historical references about the values of the $C_n$ modulo $2^k$. Deutsch and Sagan [@key-1] first computed the $2$-adic valuations of the Catalan numbers. Next S.-P. Eu, S.-C. Liu and Y.-N. Yeh [@key-5] determined the modulo $8$ values of the $C_n$. Then S.-C. Liu et C.-C. Yeh determined the modulo $64$ values of the $C_n$ by extending the method of Eu, Liu and Yeh in [@key-9], in which they also stated Theorem \[thmp\] as a conjecture. Our proof of Theorem \[thmp\] will be divided into three parts. In Section \[S2\] we will begin with the case $k=2$, which is the initialization step for a proof of Theorem \[thmp\] by induction. In Section \[S3\] we will prove that the numbers $C_{2^1-1}, C_{2^2-1},\ldots, C_{2^{k-1}-1}$ all are distinct modulo $2^k$. Finally in Section \[S4\] we will prove that $C_{2^n-1} \equiv C_{2^{k-1}-1} \pmod{2^k}$ for all ${n\geq k-1}$. Odd Catalan numbers modulo $4$ {#S2} ============================== In this section we prove that any odd Catalan number is congruent to $1$ modulo $4$, which is Theorem \[thmp\] for ${k=2}$. Though this result can be found in [@key-5], I give a more “elementary” proof, in which I shall also make some computations which will be used again in the sequel. Before starting, we state two identities: \[lemeg\] For any $a\geq 3$, the following identities hold: $$\label{1} (2^{a}-1)!!\equiv 1\pmod{2^{a}};$$ $$\label{2} (2^a-3)!!\equiv -1 \pmod{2^{a+1}}.$$ We are proving the two identities separately. In both cases we reason by induction on $a$, both equalities being trivial for ${a=3}$. So, let $a \geq 4$ and suppose the result stands true for ${a-1}$. First we have $$\begin{aligned} (2^{a}-1)!! & = & 1\times 3\times\cdots\times(2^{{a-1}}-1)\times(2^{{a-1}}+1)\times\cdots\times(2^{a}-1)\\ & \equiv & 1\times3\times\cdots\times(2^{{a-1}}-1)\times(-(2^{{a-1}}-1))\times\cdots\times(-1)\\ & = & (1\times3\times\cdots\times(2^{{a-1}}-1))^{2}\times (-1)^{2^{a-2}} \pmod{2^{a}}. 
\end{aligned}$$ Since, by the induction hypothesis, $1\times3\times\cdots\times(2^{a-1}-1)$ is equal to $1$ or ${2^{a-1}+1}$ modulo $2^a$, we have $(1\times3\times\cdots\times(2^{{a-1}}-1))^2\equiv1\pmod{2^a}$ in both cases, from which the first identity follows. For the second identity, $$(2^{a}-3)!! = \prod_{k=1}^{2^{a-2}-1}(2k+1) \cdot \prod_{k=2^{a-2}}^{2^{a-1}-2}(2k+1).$$ Reversing the order of the indexes in the first product and translating the indexes in the second one, we get $$\begin{aligned} (2^{a}-3)!! & = & \prod_{k=0}^{2^{a-2}-2}(2^{a-1}-(2k+1)) \cdot \prod_{k=0}^{2^{a-2}-2}(2^{a-1}+(2k+1))\\ & = & \prod_{k=0}^{2^{a-2}-2}[2^{2(a-1)}-(2k+1)^2]\\ & \equiv & \prod_{k=0}^{2^{a-2}-2}[-(2k+1)^2] = -(2^{{a-1}}-3)!!^{2} \pmod{2^{{a+1}}}. \end{aligned}$$ By the induction hypothesis, $(2^{{a-1}}-3)!!$ is equal to ${-1}$ or ${2^{a}-1}$ modulo $2^{a+1}$, and in either case the result follows. Now comes the main proposition of this section: \[thmk=2\] Fore all integer $a$, $C_{2^a-1}\equiv 1 \pmod{4}$. Put $n := 2^a-1$. We want to prove that $4\mid \frac{(2n)!}{n!(n+1)!}-1=\frac{(2n)!-n!(n+1)!}{n!(n+1)!}$. Let us denote $\omega := \nu_2[(2n)!]$. Since $C_n = \frac{(2n)!}{n!(n+1)!}$ is odd, one also has $\omega = \nu_2[n!(n+1)!]$. Then, proving that $4\mid \frac{(2n)!-n!(n+1)!}{n!(n+1)!}$ is equivalent to proving that $4\mid\frac{(2n)!}{2^{\omega}}-\frac{n!(n+1)!}{2^{\omega}}$. To do that, it suffices to show that $\frac{n!(n+1)!}{2^{\omega}} \equiv 1 \pmod4$ and $\frac{(2n)!}{2^{\omega}} \equiv 1 \pmod4$. As $\omega = \nu_2[n!(n+1)!] = \nu_2[(n!)^22^a] = a+2\nu_2(n!)$, one has $\nu_2(n!)=(\omega - a)/2$, thus $n!/2^{(\omega-a)/2}$ is an odd number by the very definition of valuation. That yields the first equality: $$\frac{n!(n+1)!}{2^{\omega}} = \frac{(n!)^{2}(n+1)}{2^{\omega}} = \frac{(n!)^{2}2^{a}}{2^{\omega}} = \left(\frac{n!}{2^{(\omega-a)/2}}\right)^{2} \equiv 1 \pmod{4}.$$ Concerning the equality $ \frac{(2n)!}{2^{\omega}}\equiv 1 \pmod4 $, it is easy to check for ${a\leq2}$; now we consider the case ${a\geq3}$, to which we can apply Lemma \[lemeg\]. For all $i\leq a$, put $\omega_i :=\nu_2[(2^{a-i+1}-1)!]$. For $i<a$, one has $$\frac{(2^{a-i+1}-1)!}{2^{\omega_i}} = \frac{(2^{a-i+1}-1)!! (\prod_{p=1}^{2^{a-i}-1}2p)}{2^{\omega_i}}=(2^{a-i+1}-1)!!\frac{(2^{a-i}-1)!}{2^{\omega_i+2^{a-i}-1}}.$$ As the left-hand side of this equality is odd, so is its right-hand side, so that ${\omega_i+2^{a-i}-1}$ is actually the $2$-adic valuation of ${2^{a-i}-1}$. In the end, we have shown that $$\frac{(2^{a-i+1}-1)!}{2^{\omega_i}} = (2^{a-i+1}-1)!!\frac{(2^{a-i}-1)!}{2^{\omega_{i+1}}}.$$ Morevoer, for $i=a$ it is immediate that $(2^{a-i+1}-1)!/2^{\omega_i} = 1$, whence $$\begin{aligned} \frac{(2n)!}{2^{\omega}}&=& \frac{(2^{a+1}-2)!}{2^{\omega}} = \frac{1}{2^{a+1}-1}\cdot\frac{(2^{a+1}-1)!}{2^{\omega}}\\ &=& \frac{1}{2^{a+1}-1}\cdot(2^{a+1}-1)!!\cdot \frac{(2^{a}-1)!}{2^{\omega_{1}}}\\ &=& \frac{1}{2^{a+1}-1}\cdot(2^{a+1}-1)!!\cdot (2^{a}-1)!!\cdot \frac{(2^{a-1}-1)!}{2^{\omega_{2}}}\\ &=& \cdots \rule{0pt}{4ex}\\ &=& \frac{1}{2^{a+1}-1}\cdot \prod_{k=1}^{a+1}(2^{k}-1)!!\\ &=& \frac{1}{2^{a+1}-1}\cdot(2^{a+1}-1)!!\cdot \prod_{k=1}^{a}(2^{k}-1)!!\\ &=& (2^{a+1}-3)!!\cdot \prod_{k=1}^{a}(2^{k}-1)!!. \end{aligned}$$ But, modulo $4$, one has $(2^{a+1}-3)!! \equiv -1$ by  in Lemma \[lemeg\], $(2^1 -1)!! \equiv 1$, $(2^2 -1)!! \equiv -1$ and $(2^k -1)!! \equiv 1$ for ${k\geq3}$ by  in Lemma \[lemeg\], whence $(2n)!/2^{\omega} {\equiv 1}$. 
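Both identities of Lemma \[lemeg\] and Proposition \[thmk=2\] are easy to confirm by brute force for small $a$; the following Python snippet is a check, not a proof.

```python
from math import comb, prod

def catalan(n):
    return comb(2*n, n) // (n + 1)

def odd_double_factorial(m):        # (2m+1)!! = 1*3*...*(2m+1)
    return prod(range(1, 2*m + 2, 2))

for a in range(3, 12):
    # (2^a - 1)!! == 1 (mod 2^a)
    assert odd_double_factorial(2**(a - 1) - 1) % 2**a == 1
    # (2^a - 3)!! == -1 (mod 2^(a+1))
    assert odd_double_factorial(2**(a - 1) - 2) % 2**(a + 1) == 2**(a + 1) - 1

for a in range(0, 15):
    # Proposition: C_{2^a - 1} == 1 (mod 4)
    assert catalan(2**a - 1) % 4 == 1
print("all checks passed")
```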
Before ending this section, I highlight an intermediate result of the previous proof by stating it as a lemma: \[lemcalc\] For $a \geq 0$, putting $\omega := \nu_2[(2^{a+1}-2)!]$, $$\frac{(2^{a+1}-2)!}{2^{\omega}} = (2^{a+1}-3)!!\cdot \prod_{k=1}^{a}(2^{k}-1)!! .$$ Distinctness modulo $2^k$ of the $C_{2^1-1},\ldots,C_{2^{k-1}-1}$ {#S3} ================================================================= In this section we prove that for all $k\geq2$, the numbers $C_{2^1-1},\ldots,\allowbreak C_{2^{k-1}-1} $ are distinct modulo $2^k$. To begin with, we state a lemma which gives an equivalent formulation to the equality “$C_{2^{m}-1}\equiv p \pmod{2^k}$”. This lemma will be used in Sections \[S3\] and \[S4\]. \[lemmod\] Let $k\geq 2$ and $m\geq 1$, then $C_{2^{m}-1}\equiv p \pmod {2^k}$ if and only if $$(2^{m+1}-3)!!\equiv p\prod_{n=1}^{m}(2^{n}-1)!! \pmod{2^{k}}.$$ Denote $\omega := \nu_2[(2^{m+1}-2)!] = \nu_2[(2^{m})!(2^{m}-1)!]$ (recall that $C_{2^{m}-1}=\frac{(2^{m+1}-2)!}{(2^{m})!(2^{m}-1)!}$ is odd). Applying Lemma \[lemcalc\], $$\begin{aligned} && C_{2^m-1}\equiv p \pmod {2^k}\\ &\Leftrightarrow& 2^k\mid \frac{(2^{m+1}-2)!}{(2^m)!(2^m-1)!}-p\\ &\Leftrightarrow& 2^k\mid\frac{(2^{m+1}-2)!}{2^{\omega}}-\frac{p(2^m)!(2^m-1)!}{2^{\omega}} \\ &\Leftrightarrow& 2^{k}\mid (2^{m+1}-3)!!\prod_{n=1}^{m}(2^{n}-1)!!-p\left(\prod_{n=1}^{m}(2^{n}-1)!!\right)^2\\ &\Leftrightarrow& 2^{k}\mid \left(\prod_{n=1}^{m}(2^{n}-1)!!\right)\, \left((2^{m+1}-3)!!-p\prod_{n=1}^m(2^n-1)!!\right).\end{aligned}$$ But $\prod_{n=1}^{m}(2^{n}-1)!!$ is odd, so $C_{2^m-1}\equiv p \pmod {2^k}$ if and only if $2^k$ divides $({(2^{m+1}-3)!!}-p\prod_{n=1}^m(2^n-1)!!)$, which is our lemma. \[propdist\] Let $k\geq2$ be an integer. For all $j \in \{1,\ldots,k-1\}$, $C_{2^j-1}\not\equiv C_{2^k-1} \pmod{2^{k+1}}$. We prove this proposition by contradiction. Suppose there exists a $j\in\{1,\ldots,{k-1}\}$ such that $C_{2^j-1}\equiv C_{2^k-1} =: p\pmod{2^{k+1}}$. By Lemma \[lemmod\], one would have $$p\prod_{n=1}^{j} (2^{n}-1)!! \equiv (2^{j+1}-3)!! \pmod{2^{k+1}},$$ and by Lemma \[lemmod\] and Fomula  in Lemma \[lemeg\], $$p\prod_{n=1}^{k} (2^{n}-1)!! \equiv (2^{k+1}-3)!! \equiv -1 \pmod{2^{k+1}}.$$ As $j+2\leq k+1$, both equalities would remain true modulo $2^{j+2}$. Thus one would have $$\begin{aligned} -1 &\equiv &p\prod_{n=1}^{k} (2^{n}-1)!! \pmod{2^{j+2}}\\ &=&p \prod_{n=1}^{j} (2^{n}-1)!! \times \prod_{n=j+1}^{k} (2^{n}-1)!!\\ &\equiv &(2^{j+1}-3)!! \times \prod_{n=j+1}^k (2^n-1)!!\\ &=& (2^{j+1}-3)!!\cdot (2^{j+1}-1)!! \times \prod_{n=j+2}^{k} (2^{n}-1)!!\\ &\equiv& (2^{j+1}-3)!!\cdot (2^{j+1}-1)!! \quad \textrm{(by~\eqref{1} in Lemma~\ref{lemeg})} \rule{0pt}{3ex}\\ &=&(2^{j+1}-3)!!^2 \cdot (2^{j+1}-1) \rule{0pt}{3ex}\\ &\equiv &2^{j+1}-1\pmod{2^{j+2}} \quad \textrm{(by~\eqref{2} in Lemma~\ref{lemeg})}, \rule{0pt}{3ex}\end{aligned}$$ which is absurd. Thanks to the previous proposition, we prove the first claim of Theorem \[thmp\]: \[coro\] For $k\geq2$, the numbers $C_{2^1-1}, C_{2^2-1}, \ldots, C_{2^{k-1}-1}$ all are distinct modulo $2^k$. The case $k=2$ is trivial. Let $k\geq2$ and suppose that, modulo $2^k$, the numbers $C_{2^1-1}, C_{2^2-1},\ldots,C_{2^{k-1}-1}$ all are distinct, so that they are also distinct modulo $2^{k+1}$. By Proposition \[propdist\], $C_{2^{j}-1}\not\equiv C_{2^{k}-1}$ (mod $2^{k+1}$) for all $j\in\{1,\ldots,{k-1}\}$, so the numbers $C_{2^1-1}, C_{2^2-1},\ldots, C_{2^k-1} $ all are distinct modulo $2^{k+1}$. The claim follows by induction. 
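Corollary \[coro\] can likewise be confirmed by direct computation for small $k$; the following Python snippet (illustrative only) checks the distinctness of the residues.

```python
from math import comb

def catalan(n):
    return comb(2*n, n) // (n + 1)

for k in range(2, 13):
    residues = [catalan(2**j - 1) % 2**k for j in range(1, k)]
    # C_{2^1-1}, ..., C_{2^(k-1)-1} should all be distinct modulo 2^k
    assert len(set(residues)) == k - 1, (k, residues)
print("distinctness confirmed for k = 2, ..., 12")
```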
Ultimate constancy of the sequence of the $C_{2^n-1}$ modulo $2^k$ {#S4} ================================================================== To complete the proof of Theorem \[thmp\], it remains to prove that the $C_{2^n-1}$ all are equal modulo $2^k$ for $n\geq{k-1}$. Let $k\geq 2$, then for all $m\geq {k-1}$, $C_{2^m-1}\equiv C_{2^{k-1}-1}\pmod {2^k}$. Denote $C_{2^{k-1}-1} =: p \pmod{2^k}$. We will show that $C_{2^m-1}\equiv p \pmod{2^k}$ for all $m\geq {k-1}$ by induction. Let $m\geq k$ be such that the previous equality stands true for ${m-1}$. By Lemma \[lemmod\], it suffices to show that $(2^{m+1}-3)!!\equiv p \prod_{n=1}^m(2^n-1)!! \pmod{2^k}$. To do this, we are going to show that $(2^{m+1}-3)!! \equiv (2^m-3)!! \pmod{2^k}$ and that $p\prod_{n=1}^m(2^n-1)!! \equiv (2^m-3)!! \pmod{2^k}$. The first equality follows from the following computation: $$\begin{aligned} && (2^{m+1}-3)!!\\ &=& (2^{m}-3)!! \times (2^{m}-1)\times (2^{m}+1) \times\cdots\times (2\cdot 2^{m}-3)\\ &\equiv& (2^{m}-3)!! \cdot \left(1\times 3\times\cdots\times (2^k-1)\right)^{2^{m-k}} \pmod{2^k}\\ &\equiv& (2^{m}-3)!! \quad \textrm{(by~\eqref{1} in Lemma~\ref{lemeg})}.\end{aligned}$$ To get the other equality, using again  in Lemma \[lemeg\], one has $$p\prod_{n=1}^{m}(2^{n}-1)!! = (2^{m}-1)!!\cdot p \prod_{n=1}^{m-1}(2^{n}-1)!! \equiv p\prod_{n=1}^{m-1}(2^{n}-1)!! \pmod{2^k}.$$ But by Lemma \[lemmod\], the induction hypothesis means that $p\prod_{n=1}^{m-1}(2^{n}-1)!! \equiv {(2^m-3)!!} \pmod {2^k}$, whence the result. Going further ============= After the series of works on odd Catalan numbers modulo $2^k$ this article belongs to, a natural question would be how many distinct *even* Catalan numbers there are modulo $2^k$ and how these numbers behave. An idea to do this would be to study the $C_n$ having some fixed $2$-adic valuation. More generally, one could also wonder what happens for Catalan numbers modulo $p^k$ for prime $p$, which is a question that mathematicians studying the arithmetic properties of Catalan numbers have been asking for a long time. Acknowledgements {#acknowledgements .unnumbered} ================ The author thanks Pr P. Shuie and Pr S.-C. Liu for their mathematical advice, and R. Peyre for helping to improve the writing of this article.
--- abstract: | Most computer algebra systems incorrectly simplify$$\dfrac{z-z}{\dfrac{\sqrt{w^{2}}}{w^{3}}-\dfrac{1}{w\sqrt{w^{2}}}}$$ to 0 rather than to 0/0. The reasons for this are: 1\. The default simplification doesn’t succeed in simplifying the denominator to 0. 2\. There is a rule that 0 is the result of 0 divided by anything that doesn’t simplify to either 0 or 0/0. Many of these systems have more powerful optional transformation and general purpose simplification functions. However that is unlikely to help this example even if one of those functions can simplify the denominator to 0, because the input to those functions is the result of *default* simplification, which has already incorrectly simplified the overall ratio to 0. Try it on your computer algebra systems! This article describes how to simplify products of the form $w^{\alpha}\left(w^{\beta_{1}}\right)^{\gamma_{1}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}}$ correctly and well, where $w$ is any real or complex expression and the exponents are rational numbers. It might seem that correct good simplification of such a restrictive expression class must already be published and/or built into at least one widely used computer-algebra system, but apparently this issue has been overlooked. Default and relevant optional simplification was tested with 86 examples on 5 systems with $n=1$. Using a spectrum from the most serious flaw being a result that is not equivalent to the input somewhere to the least serious being not rationalizing a denominator when that doesn’t cause a more serious flaw, the overall percentage of most flaw types is alarming:   flaw: $\not\equiv$ 0-recognition ${\mathrm{cancelable}\atop \mathrm{singularity}}$ ${\mathrm{extra}\atop \mathrm{factor}}$ excessive $\left|\gamma_{k}\right|$ $\neg$canonical $\neg$idempotent $\frac{\cdots}{\sqrt{\cdots}}$ ------- -------------- --------------- --------------------------------------------------- ----------------------------------------- ------------------------------------- ----------------- ------------------ -------------------------------- %: 11 50 25 16 32 39 0.4 6 author: - 'David R. Stoutemyer[^1]' title: Simplifying products of fractional powers of powers --- Introduction\[sec:Introduction\] ================================ *“When you are right you cannot be too radical*;”\ – Martin Luther King Jr. First, a few crucial definitions: **Default simplification** is what a computer-algebra system does to a standard mathematical expression when the user presses or , using factory-default mode settings without enclosing the expression in an optional transformational function such as $\mbox{expand}(\ldots)$, $\mbox{factor}(\ldots)$, or $\mbox{simplify}(\ldots)$. Default simplification is the minimal set of transformations that a system does routinely. Default simplification is called *evaluation* in *Mathematica*^®^ and in some other systems. Any fixed set of default transformations is likely to omit ones that are wanted in some situations and to include ones that are unwanted in other situations. Therefore: - Most systems also provide optional transformations done by a function such as $\mathrm{expand}\left(\ldots\right)$ or by assigning a certain value to a control variable such as $\mathrm{trigExpand}\leftarrow\mathrm{true}$. - Some systems provide a way to disable default transformations. 
For example the Maxima assignment $\mathtt{simp:false}$ suppresses most simplification, whereas the Maxima $\mathrm{box}(\ldots)$, *Mathematica* $\mathrm{Hold}[\ldots]$ and Maple $\mathrm{freeze}(\ldots)$ functions suppress most or all transformations on their argument. Simplification is **idempotent** for a class of input expressions if simplification of the result (by the same default or optional transformations) yields the same result. A **conveniently cancelable singularity** is a removable singularity that can be removed exactly by functional identities such as $\sin(2w)\equiv2\sin(w)\cos(w)$ together with transformations such as a common denominator followed by factoring out the gcd of any resulting numerator and denominator, then using the law of exponents $w^{\mu}w^{\nu}\rightarrow w^{\mu+\nu}$. For example, $z^{3}z^{-2}\rightarrow z$, $\sin(2z)/\sin(z)\rightarrow2\cos(z)$, and $$\frac{1}{c\,\left(cx-1\right)}+\frac{1}{c}\rightarrow\frac{x}{cx-1},$$ which cancels the removable singularity at $c=0$, leaving the non-removable singularity along the hyperbola $cx=1$. However the removable singularity in $\sin(z)/z$ is not *conveniently* cancelable because it can’t be canceled exactly except inconveniently by means such as introducing the piecewise function$$\dfrac{\sin(z)}{z}\rightarrow\begin{cases} 1, & \mathrm{if}\; z=0,\\ \dfrac{\sin(z)}{z}, & \mathrm{otherwise,}\end{cases}$$ or the infinite series$$\dfrac{\sin(z)}{z}\rightarrow\sum_{k=0}^{\infty}\dfrac{(-1)^{k}z^{2k}}{(2k+1)!}.$$ A **nested power produc**t is an expression or a sub-expression of the form$$w^{\alpha}\left(w^{\beta_{1}}\right)^{\gamma_{1}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}},\label{eq:DefinitionOfNestedPowerProduct}$$ with $n\geq1$, rational exponents, and $\alpha$ possibly 0 or 1. This article describes simple algorithms that can be used in default and/or optional transformations to simplify nested power products correctly and well. The abstract presents one example of why this is important. Default and relevant optional transformations for *Derive*^®^ 6.00, TI-CAS version 3.10[^2], Maxima 5.24.0, Maple^tm^ 15.00 and *Mathematica* 8.0.4.0 were tested on 86 examples for the simplest case where $n=1$. The table in the Abstract shows the overall percentages for each of eight different decreasingly serious flaw types described in Section \[sec:ListOfGoals\]. Those large percentages for the six most serious kinds of flaws are alarming, and so are many corresponding percentages for each of the five systems.[^3] Wikipedia currently lists 29 other computer algebra systems, and I strongly suspect that most or all of them also have substantial room for improvement in this regard. Here is an outline of the rest of the article: Section \[sec:More-important-definitions\] defines three more crucial terms. Section \[sec:ListOfGoals\] describes eight prioritized goals for results that are nested power products, why they are important, and the reasons for their priorities. Section \[sec:Experimental-results\] describes the tables of results at the end of this article and how the listed result flaws were measured. Section \[sec:Four-alternative-forms\] describes four good forms for nested power products and how to obtain them: 1. Form 1 merely standardizes the outer fractional exponents to the interval $(-1,1)$ in a way that doesn’t introduce removable singularities, but instead tends to reduce their magnitude – perhaps completely. 2. 
Form 2 further reduces many outer fractional exponents to $[-1/2,1/2]$ in a way that cancels as much of any removable singularity as can be done without resorting to form 4. Form 2 is an improvement on form 1 at the expense of more computation. 3. Form 3 absorbs $w^{\alpha}$ into one of the nested powers just prior to display if $w^{\alpha}$ can thus be totally absorbed, giving a result with one less factor. Form 3 is an aesthetic improvement on form 2 at the expense of more computation. 4. Form 4 completely cancels any cancelable singularity and nicely collapses all of the exponents into a single unnested exponent. However, this form often entails a complicated unit magnitude piecewise constant factor that is -1 raised to a complicated exponent. Unsophisticated users might be baffled by this factor, and even sophisticated users might abhor the mess. However, this form must be addressed because it can occur in input, it is valuable for some purposes, and some computer algebra systems generate this form for some inputs. Section \[sec:Unimplemented-extensions\] suggests how to extend the algorithms to recognize syntactically different but equivalent instances of $w$ in nested power products and how to extend the algorithms to some kinds of non-numeric exponents. Section \[sec:Summary\] is an overall summary. The Appendix lists about one page of *Mathematica* rewrite rules that implement most of the third result form. Tables of results and their flaw numbers for the five systems and for the rewrite rules are at the end of the article. \[sec:More-important-definitions\]More key definitions ====================================================== In this article: - Unless stated otherwise, an **indeterminate** is a variable that has no assigned value, rather than a result such as 0/0. - Any finite or infinite-magnitude complex value can be substituted for indeterminates in expressions. - Fractional powers and square roots denote the principal branch.[^4] A **canonical form** for a class of expressions is one for which all equivalent expressions in the class are represented uniquely. Canonical forms help make cancellations of equivalent sub-expressions automatic. For example, if a computer algebra system always makes arguments of functional forms such as $\sin(\ldots)$ canonical, then an input sub-expression such as $\sin\left((x+1)^{2}\right)-\sin\left(x^{2}+2x+1\right)$ automatically simplifies to 0 rather than remaining unchanged as a bulky land mine that might make a subsequent result incorrect. Without canonical arguments, recognition of cryptically similar factors and terms requires costly tests such as determining if the difference in corresponding arguments can be simplified to 0. This might happen every time the same two functional forms meet during processes such as expansion of an integer power of a sum containing two sines, which can be often. In contrast, canonical arguments permit a much faster mere syntactic comparison of functional forms. As discussed in [@Brown; @MosesSimplification; @Stoutemyer10Commandments], canonical forms are unnecessarily costly and rigid for the entire class of expressions addressed by general-purpose computer algebra systems. However, canonical forms are acceptable and good for default simplification of some simple classes of *irrational* *sub-expressions* such as nested power products. **Zero-recognizing simplification** for a class of expressions is simplification for which all expressions in the class equivalent to 0 are transformed to 0. 
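As a concrete illustration of how easily zero recognition fails in practice, the following snippet feeds the Abstract's example to SymPy, an open-source system that is not among the five tested later; the comments describe the typically observed behavior, which may vary between versions.

```python
import sympy as sp

w, z = sp.symbols('w z')
denom = sp.sqrt(w**2)/w**3 - 1/(w*sp.sqrt(w**2))
expr = (z - z)/denom

# Automatic evaluation cancels the zero numerator before examining the
# denominator, so the ratio typically comes back as 0 even though the
# denominator is itself equivalent to 0 wherever it is defined, i.e. the
# input is really 0/0.
print(expr)
# Whether the denominator alone is recognized as 0 depends on the system
# and version:
print(sp.simplify(denom))
```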
As illustrated by the example in the Abstract, a failure to recognize that a sub-expression is equivalent to 0 can lead to dramatically incorrect results. Therefore it is desirable for default simplification to have at least a zero-recognition property. It has been proven impossible to guarantee this even for some rather simple classes of irrational expressions, but a strong effort should be made to achieve at least zero recognition for as broad a class of expressions as is practical. **Candid simplification** produces results that are not equivalent to an expression that visibly manifests a simpler expression class. For example, in a candid result there are no superfluous variables, degree magnitudes are not larger than necessary, there are no unnecessary irrational sub-expressions, and irrationalities are nested no more deeply than necessary. Thus without being as rigidly constrained as canonical forms, candid simplification yields more desirable properties than mere zero-recognizing simplification. In this article **undefined** means an unknown point in the entire infinite complex plane, such as the result of 0/0.[^5] A **conveniently representable** subset of the infinite complex plane is one that is reasonably representable using constant expressions extended by sets, intervals and the symbol $\infty$. Conveniently representable proper subsets of the infinite complex plane are regarded here as defined. Particular computer algebra systems might not be able to represent the full range of possibilities, but this article is suggesting what *should* be done as well as reporting the current situation. These ideas are discussed in more detail in [@StoutemyerUsefulNumbers], but for this article the major defined subset of interest that isn’t a single point is the result of $u/0$ for any particular non-zero complex constant $u$. This result should be some representation of complex infinity. Among many other benefits it permits the correct computation$$\begin{aligned} \dfrac{1}{1+\dfrac{1}{0}} & \rightarrow & 0.\end{aligned}$$ Does your computer algebra system do this? - For *Mathematica*, $1/0\rightarrow\mathtt{ComplexInfinity}$. - For *Derive*, with its default real domain, $1/0\rightarrow\pm\infty$. - For TI-CAS, regrettably $1/0\rightarrow\mathrm{undef}$. - Maxima and Maple inconveniently throw an error. When a proper subset of the infinite complex plane isn’t conveniently representable, then the next best thing is to degrade it to 0/0. However, that shouldn’t be done for subsets that are as easily represented as complex infinity. If finite or infinite magnitude complex numbers are substituted for all of the indeterminates in an *unsimplified input* expression, then that *input* expression is undefined at that point if and only if the result is 0/0. A **generalized limit** is the set of uni-directional limits of an input expression from all possible directions in the complex plane. When the generalized limit of an input expression at a conveniently cancelable singularity is a conveniently representable proper subset of the entire infinite complex plane, then this article regards it as not only acceptable but *commendable* to cancel the singularity and thereby produce a *result* expression whose substitutional value is that conveniently representable subset at that point. 
Reasons for this attitude about mathematics software include: - Otherwise the results tend to be unacceptably complicated.[^6] - There is a high likelihood that the physical problem is actually continuous there too – Nature abhors a removable singularity. Removable singularities are often an artifact of the modeling such as using a polar or spherical coordinate system. - Cancelable singularities are often a result of an *unnecessary* previous transformation unavoidably done by a system (such as inappropriate rationalization of a denominator) or a result of a previous transformation such as monic normalization, a tangent half angle substitution, or expansion into partial fractions deemed necessary to obtain an anti-derivative. - Cancellation to simplify nested power products is consistent with quiet transformations such as $w/w\rightarrow1$ that are currently *unavoidable* in most computer algebra systems; - Symbolic cancelation tends to reduce rounding errors near removable singularities for subsequent substitution of floating-point numbers. However, this transformation of expressions has the composability consequence that substitution of numeric values doesn’t necessarily commute with simplification. To accommodate either treatment of expressions, computer algebra systems could and should build in *provisos* such as “$|\: w\neq0$” that are optionally attached automatically to intermediate and final results containing canceled removable singularities, as suggested in [@CorlessAndJeffreyProvisos; @Stoutemyer10Commandments]. Meanwhile, implementers who do not want to completely cancel cancelable singularities for simplifying nested power products can adapt the algorithms presented here to merely reduce the magnitude of cancelable singularities, such as $\left(z^{2}\right)^{5/2}/z^{3}\rightarrow\left(z^{2}\right)^{3/2}/z$ rather than transforming all the way to $z\sqrt{z^{2}}$. \[sec:ListOfGoals\]A list of goals for simplifying nested power products ======================================================================== The most important concern is correctness, followed by candidness, then aesthetics and compliance with custom. More specifically, here is a list of desirable but partially conflicting goals for simplifying nested power product and their differences, in decreasing order of importance: 1. The result should be equivalent to the input *wherever the input is defined*. (It is acceptable for the result to be a generalized limit of the input where the input is 0/0.) 2. A linear combination of two or more equivalent nested power products should simplify to a multiple of a single power product – or to 0 if the linear combination is equivalent to 0. 3. Let the **net exponent** of $\left(w^{\beta_{k}}\right)^{\gamma_{k}}$ be$$\triangle_{k}:=\beta_{k}\gamma_{k},$$ and for a product of nested powers of $w$ let the **total positive nested exponent** and the **total negative nested exponen**t be$$\begin{aligned} \triangle_{+} & := & \sum_{k=1}^{n}\max\left(\triangle_{k},0\right),\\ \triangle_{-} & := & \sum_{k=1}^{n}\min\left(\triangle_{k},0\right).\end{aligned}$$ When possible, use the transformation$$\left(w^{\beta_{k}}\right)^{\gamma_{k}}\rightarrow w^{m_{k}\beta_{k}}\left(w^{\beta_{k}}\right)^{\gamma_{k}-m_{k}}\label{eq:ShiftingIntegersInOneNestedPower}$$ with appropriate integers $m_{k}$ to minimize $\min\left(\max\left(\alpha,0\right)+\triangle_{+},\:-\min\left(\alpha,0\right)-\triangle_{-}\right)$, thus canceling as much of any removable singularity as is possible by this means. 4. 
When possible, *fully* absorb the $w^{\alpha}$ into the nested powers of $\left(w^{\beta_{k}}\right)^{\gamma_{k}}$ to have fewer factors. 5. Otherwise use transformation (\[eq:ShiftingIntegersInOneNestedPower\]) to minimize $\triangle_{+}-\triangle_{-}$ to minimize the contributions of the troublesome nested powers. 6. Inputs that are equivalent where both are defined should produce the same (canonical) result. 7. Results should be idempotent: Reapplying the same default or optional simplification to the result should leave it unchanged. 8. To help achieve goal 6, rationalize a denominator in a nested power product when this doesn’t introduce a removable singularity or increase its magnitude. A larger numbered goal should not be fulfilled if the only way to fulfill it is to violate a smaller numbered goal. For example, fulfillment of goals 5 or 8 can often violate goals 1, 2 and/or 3. The reasons for this ranking of the goals are: 1. A violation of goal 1 is most unsatisfactory because it is a result that is not equivalent to the input everywhere the input is defined. For example, if expression $w$ can be 0, then rationalizing the denominator of $1/\sqrt{w}$ to give $\sqrt{w}/w$ makes an input that is a well-defined complex infinity at $w=0$ become 0/0 there. A more serious example is the mal-transformation $(w^{-2})^{-1/2}\rightarrow(w^{2})^{1/2}$, because the two sides differ along the entire positive and negative imaginary axis. For example $(i^{-2})^{-1/2}=-i$, whereas $(i^{2})^{1/2}=i$. 2. The example in the Abstract shows the importance of zero recognition. For example if default simplification of one nested power product produces $\sqrt{z}/z$ and default simplification of another nested power product produces the equivalent expression $z/\sqrt{z}$, then the latter violates goal 8 and together they violate goal 6. These violations are minor; but if default simplification doesn’t simplify their difference to 0, then that is a violation of goal 2, which is serious. 3. A violation of goal 3 is next most serious because it is a squandered opportunity to improve the result by canceling a conveniently cancelable singularity and thereby making the result have the limiting value at $w=0$ rather than be undefined there. For example,$$\begin{aligned} \dfrac{\left(w^{2}\right)^{5/2}}{w}\;\rightarrow & \dfrac{w^{2}\left(w^{2}\right)^{3/2}}{w} & \rightarrow\; w\left(w^{2}\right)^{3/2},\label{eq:FirstEgOfGoal3}\\ w\left(\dfrac{1}{w^{2}}\right)^{5/2}\rightarrow & \dfrac{w}{w^{2}}\left(\dfrac{1}{w^{2}}\right)^{3/2} & \rightarrow\;\dfrac{1}{w}\left(\dfrac{1}{w^{2}}\right)^{3/2}.\label{eq:SecondEgOfGoal3}\end{aligned}$$ 4. A violation of goal 4 is more complicated than need be. For example, most people would agree that $w^{4}\left(w^{2}\right)^{1/2}$ is more complicated than $\left(w^{2}\right)^{5/2}$, which has one less factor. 5. Goal 5 is important because when there is more than one factor of the form $\left(w^{\beta}\right)^{\gamma_{k}}$, there might be more than one way to distribute only some of $w^{\alpha}$ into the nested powers. In contrast if $\gamma_{k}$ is a half-integer power then there are only two ways to minimize $\left|\gamma_{k}\right|$ by factoring an integer power of $w^{\alpha}$ out of $(w^{\alpha})^{\gamma}$, or only one way for other fractional powers. Moreover, unnested exponents are less specialized and can therefore interact more freely with other factors in a product. 
For example, for intermediate results (\[eq:FirstEgOfGoal3\]) and (\[eq:SecondEgOfGoal3\]),$$\begin{aligned} w\left(w^{2}\right)^{3/2} & \rightarrow & w^{2}\left(w^{2}\right)^{1/2},\\ \dfrac{1}{w}\left(\dfrac{1}{w^{2}}\right)^{3/2} & \rightarrow & \dfrac{1}{w^{2}}\left(\dfrac{1}{w^{2}}\right)^{1/2}.\end{aligned}$$ 6. For a given $w$, the above goals tend to yield the most concise possible nested power product in terms of $w$. Therefore it is better to have the consistency of having two inputs that are equivalent where they are defined return the same most concise form. More importantly a canonical form for nested power product sub-expressions greatly facilitates achieving goal 2 – a major benefit for very little effort. 7. Without idempotency, an unaware user could obtain inconsistent results, and a cautious aware user would have to re-enter such results as inputs until they cycle or stop changing. 8. Goal 8 complies with the custom of rationalizing denominators and helps achieve canonicality goal 6. For example,$$\begin{aligned} \dfrac{w}{\sqrt{w^{2}}}\;\rightarrow & \dfrac{w\sqrt{w^{2}}}{w^{2}} & \rightarrow\;\dfrac{\sqrt{w^{2}}}{w}.\end{aligned}$$ But rationalization should not be done at the expense of lower-numbered goals. For example,$$\begin{aligned} \dfrac{1}{w\left(w^{2}\right)^{2/3}} & \not\rightarrow & \dfrac{\left(w^{2}\right)^{1/3}}{w^{3}},\end{aligned}$$ because although it reduces the absolute value of the outer exponent (goal 5), it violates goal 1 by making an input that is complex infinity at $w=0$ become a result that is 0/0 there. \[sec:Experimental-results\]Important information about the tables ================================================================== Tables at the end of this article show the results that occurred for each example with each system and with the Appendix rewrite rules. In all of the tables the goal numbers in the Section \[sec:ListOfGoals\] goals that aren’t satisfied but could be satisfied without violating a lower-numbered goal are listed beside each result. Unmet goal numbers 1 and 2 are boldface to emphasize their extreme seriousness. Examples, test protocol and table interpretation ------------------------------------------------ Tables \[Flo:MathematicaDefaultTable\] through \[Flo:MaximaFullratsimpTableAndMapleSimplifyTable\] report default and relevant optional transformation results for **test family 1**: multiplying $w^{m}$ by $\left(w^{2}\right)^{n+2/3}$ for successive integer $m=-3$ through 3 in combination with successive integer $n=-3$ through 2. For comparison with results that meet all of the goals, Table \[Flo:AlgorithmAtable\] has corresponding results for form 3 described in Section \[sec:Four-alternative-forms\], as produced by the one page of *Mathematica* rewrite rules listed in the Appendix. Tables \[Flo:MathematicaDefaultSqrtReciprocalTable\] through \[Flo:MapleSimplifySqrtReciprocal\] report default and relevant optional transformation results for **test family 2**: multiplying $w^{m}$ by $\left(w^{\boldsymbol{-}2}\right)^{n+1/2}$ for the same combinations of $m$ and $n$. For comparison with results that meet all of the goals, Table \[Flo:AlgorithmASqrtReciprocalTable\] has corresponding form 3 results produced by the Appendix rewrite rules. To help assess compliance with goal 8, Table \[Flo:WonSqrtWSquaredTable\] compares results for all of the systems with the Appendix rewrite rules on the particularly simple input $w/\sqrt{w^{2}}$ . Some of the examples in test families 1 and 2 also test this goal. 
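For concreteness, here is a minimal *Mathematica* sketch of how test family 1 and the equivalent-pair differences used for the goal assessments below can be generated; the names `testFamily1` and `goal2Diff` are illustrative and are not part of the Appendix rules:

    (* the 7 x 6 grid of inputs w^m (w^2)^(n+2/3) for m = -3..3 and n = -3..2 *)
    testFamily1 = Table[w^m (w^2)^(n + 2/3), {m, -3, 3}, {n, -3, 2}];

    (* difference of two equivalent family members (two rows down, one column left);
       a zero-recognizing simplifier should reduce it to 0 *)
    goal2Diff[m_, n_] := w^m (w^2)^(n + 2/3) - w^(m + 2) (w^2)^(n - 1 + 2/3)

Whatever each system returns for these inputs under default or relevant optional simplification is what the corresponding tables record.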
Table \[Flo:Form1MinusForm4Table\] tests only whether or not the expression$$\dfrac{\sqrt{w^{2}}}{w}-\left(-1\right)^{\frac{1}{2}\left(\arg(w^{2})-2\arg(w)\right)/\pi}$$ simplifies to 0. This is the difference between equivalent expressions in form 3 and form 4. This is a more difficult but not impossible problem. All systems fail – including the Appendix rewrite rules, which do not address this issue.

Here is how compliance with the goals was assessed:

1. Most of the results that violated goal 1 did so only at $w=0$. However, the goal 1 violations in Tables \[Flo:MaximaDefaultSqrtReciprocal\] and \[Flo:MaximaFullratsimpSqrtRecip\] are also (or instead) not equivalent to the input along the entire positive and negative imaginary semi-axes, where the input is defined. This is caused by an outlaw transformation of exponents: transforming $\left(w^{-\lambda}\right)^{\mu}$ to $\left(w^{\lambda}\right)^{-\mu}$ for fractional $\mu$, which is not valid along these semi-axes.

2. For test family 1, each input is equivalent to the input two rows down and one column left wherever both are defined, and their omnidirectional limits are identical wherever one of the inputs is 0/0. Therefore to assess compliance with goal 2 (zero-recognition), for every entry in the table I computed the difference $w^{m}\left(w^{2}\right)^{n+2/3}-w^{m+2}\left(w^{2}\right)^{n-1+2/3}$, or the optional transformation thereof, and the difference $w^{m}\left(w^{2}\right)^{n+2/3}-w^{m-2}\left(w^{2}\right)^{n+1+2/3}$, then considered it a flaw for the entry if either of these two differences was non-zero. Thus compliance with this goal is *not* discernible from merely inspecting the result entries. Compliance is a property of the default simplification or optional transformation when given the difference of two non-identical but equivalent nested power products. For test family 2, each input is equivalent to the input two rows down and one column *right* wherever both are defined, so I did an analogous test for that. It is of course possible for an entry to pass these limited tests but fail for more widely separated equivalent inputs.[^7] For Table \[Flo:WonSqrtWSquaredTable\] the result is equivalent to $\sqrt{w^{2}}/w$ and $w^{3}/(w^{2})^{3/2}$, so I tested whether or not the corresponding two differences, or the optional transformation thereof, simplified to 0. Table \[Flo:Form1MinusForm4Table\] tests zero recognition directly, and only that – but with only *one* particular difference in equivalent forms rather than two.

3. To comply with goal 3 without violating lower-numbered goals, a result $w^{\hat{\alpha}}\left(w^{\beta}\right)^{\hat{\gamma}}$ should have$$\begin{aligned} & \hat{\alpha}=0\:\vee\:\\ & \left(\mathrm{sign}\left(\hat{\alpha}\right)=\mathrm{sign}\left(\beta\hat{\gamma}\right)\:\wedge\:\left|\hat{\gamma}\right|<1\right)\:\vee\:\\ & (\hat{\alpha}\neq0\:\wedge\:\mathrm{sign}\left(\hat{\alpha}\right)\neq\mathrm{sign}\left(\beta\hat{\gamma}\right)\:\wedge\:\left|\hat{\gamma}\right|<1\:\wedge\:\\ & \min(\left|\beta\hat{\gamma}\right|,\left|\hat{\alpha}\right|)\:\leq\:\min(\left|\beta\left(\hat{\gamma}-\mathrm{sign}\left(\hat{\gamma}\right)\right)\right|,\,\left|\hat{\alpha}+\beta\,\mathrm{sign}\left(\hat{\gamma}\right)\right|)).\end{aligned}$$

4. To comply with goal 4, $\hat{\alpha}$ should not be an integer multiple of $\beta$.

5.
To comply with goal 5 without violating lower-numbered goals,$$\hat{\alpha}=0\;\vee\;\left(\mathrm{sign}\!\left(\hat{\alpha}\right)\!=\!\mathrm{sign}\!\left(\beta\hat{\gamma}\right)\:\wedge\:\left|\hat{\gamma}\right|\!<\!1\right)\;\vee\;\left(\hat{\alpha}\!\neq\!0\:\wedge\:\mathrm{sign}\!\left(\hat{\alpha}\right)\!\neq\!\mathrm{sign}\!\left(\beta\hat{\gamma}\right)\:\wedge\:\left|\hat{\gamma}\right|\!\leq\!\frac{1}{2}\right).$$

6. For compliance with goal 6, every result entry for test family 1 should be *identical* to the entry 2 rows down and one column left, whereas every result entry for test family 2 should be identical to the entry 2 rows down and one column right. To equally assess the top two rows, the bottom two rows, the leftmost column and the rightmost column, I computed extra neighbors bordering those shown. When there were differences, I did not penalize the best displayed results for equivalent entries unless there was a better displayed result one column and one or two rows outside the table. However, I did penalize *all* of the not-best members for equivalent entries. I similarly tested the result in Table \[Flo:WonSqrtWSquaredTable\] against the equivalent expressions $\sqrt{w^{2}}/w$ and $w^{3}/(w^{2})^{3/2}$. It is of course possible for an entry to pass these tests but fail for more widely separated equivalent inputs.

7. To test compliance with goal 7, I resimplified each result with either default simplification or the optional transformation used for the original input, then checked for identical results. Compliance with this goal is *not* discernible from merely inspecting the result entries.

8. To test compliance with goal 8, I manually rationalized the results having a fractional power in the denominator and $\alpha\neq0$ by multiplying the numerator and denominator by $(w^{-2})^{1/3}$ for test family 1 or $\sqrt{w^{2}}$ for test family 2. It was counted as a flaw if and only if that forced rationalization did not introduce a removable singularity or increase its magnitude.

Remarks about particular results.
---------------------------------

Maxima also has a relevant $\mathrm{rat}(\ldots)$ function. For these examples, it generally produces the same result as default simplification, except that fractional powers are represented as an integer power of a reciprocal power – or an integer power of $\sqrt{\ldots}$ for half-integer powers. Thus a default result $w^{3}\left(w^{2}\right)^{5/3}$ would instead be $w^{3}((w^{2})^{1/3})^{5}$, and a default result $w^{3}\left(w^{2}\right)^{5/2}$ would instead be $w^{3}\sqrt{w^{2}}^{5}$. The standard definition of $u^{m/n}$ for reduced integers $m$ and $n$ *is* $\left(u^{1/n}\right)^{m}$, which is consistent with the alternate definition $e^{\ln\left(u\right)m/n}$.[^8] Consequently, the Maxima $\mathrm{rat}\left(\ldots\right)$ function makes the standard interpretation of the result more explicit at the expense of clutter. Nonetheless, it might be helpful as a precursor to semantically substituting a new expression for $\left(w^{2}\right)^{1/3}$ via syntactic substitution in an expression containing $\left(w^{2}\right)^{m/3}$ for several different integer $m$. For the sake of brevity, results are not included for the $\mathrm{rat}(\ldots)$ function because its flaws were very nearly identical to default simplification, regarding $((w^{2})^{1/n})^{m}$ as $(w^{2})^{m/n}$.

*Mathematica*, Maxima and Maple also respectively have relevant PowerExpand\[...\], radcan(...) and simplify(..., symbolic) functions.
However, they always transform $(w^{\beta})^{\gamma}$ to $w^{\beta\gamma}$, which is not equivalent along entire rays from $w=0$. I didn’t test these functions because their purpose is presumably partly to allow these risky unconditional transformations for consenting adults. However, these three systems, *Derive* and TI computer algebra also have *safe* ways to enable such desired transformations, when justified, by declaring, for example, that certain variables are real or positive.

“*… a man who thought he could somehow pull up the root without affecting the power.*”\
–adapted from Gilbert K. Chesterton

To make sure that $w$ is regarded as a complex variable and the principal branch is used rather than the real branch:

- All of the *Derive* results follow a prior declaration $w:\in\mathtt{Complex}$.

- All of the TI-CAS results necessarily used $w\_$ rather than $w$ to manifestly declare it as a complex indeterminate. However for consistency $w\_$ is displayed in all of the tables as $w$ because Table \[Flo:TIandMapleDefaultSqrtReciprocal\] is shared with Maple for brevity.

- All of the Maxima results followed a prior assignment $\mathtt{domain:complex}$ and a prior declaration $\mathtt{declare(w,complex)}$.

As illustrated by Tables \[Flo:MapleSimplifySqrtReciprocal\] through \[Flo:Form1MinusForm4Table\], the Maple $\mathrm{simplify}(\ldots)$ function expresses half-integer powers of squares or of reciprocals of squares using the Maple $\mathrm{csgn}(\ldots)$ function defined by$$\mathrm{csgn}(w):=\begin{cases} 1, & \mathrm{if}\;\Re(w)>0\:\vee\;\Re(w)=0\;\wedge\;\Im(w)\geq0,\\ -1, & \mathrm{otherwise}.\end{cases}\label{eq:DefinitionOfCsgn}$$ The right side of this definition is a simplified special instance of form 4, for which $\mathrm{csgn}(w)$ is a convenient abbreviation for those familiar with it.[^9]

Regarding Table \[Flo:Form1MinusForm4Table\]:

- *Mathematica* (hence also the Appendix rewrite rules) did the automatic transformation$$(-1)^{\left(1/2\right)\left(\mathrm{Arg}\left[w^{2}\right]-2\mathrm{Arg}\left[w\right]\right)/\pi}\rightarrow i^{\left(\mathrm{Arg}\left[w^{2}\right]-2\mathrm{Arg}\left[w\right]\right)/\pi}.$$ Although it eliminates the $1/2$ factor from the exponent in this case, it does so at the expense of candidness by introducing $i$ into an expression that is real for all $w$.

- For TI-CAS the $\arg$ function is spelled “angle” and regrettably angle(0) is returned unchanged rather than transforming to 0. Therefore the input was$$\dfrac{\sqrt{w\_^{2}}}{w\_}-\left(-1\right)^{\mathrm{when}\left(w\_=0,\:0,\:(1/2)(\mathrm{angle}(w\_^{2})-2\mathrm{angle}(w\_))/\pi\right)}.$$ As indicated in Table \[Flo:Form1MinusForm4Table\], the real power of $-1$ in the input was changed to an imaginary power of $e$ in the result. This has the candidness disadvantage of introducing $i$ into an expression that is real for all real $w\_$.

- For *Derive* the arg function is spelled “phase” and regrettably phase(0) returns $\pi/2\pm\pi/2$, which denotes an unknown element of $\left\{ 0,\pi\right\} $, which are the only two possibilities for *real* arguments. Therefore the input was$$\dfrac{\sqrt{w^{2}}}{w}-\left(-1\right)^{\mathrm{IF}\left(w=0,\;0,\;(1/2)(\mathrm{PHASE}(w{}^{2})-2\mathrm{PHASE}(w))/\pi\right)}.$$

- For Maple, the $\arg$ function is spelled “argument”, and for Maxima it is spelled “carg”.
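For *Mathematica*, where Arg\[0\] evaluates to 0, the difference can be entered essentially as printed in the table; a minimal sketch (it is not part of the Appendix rules):

    Sqrt[w^2]/w - (-1)^((1/2) (Arg[w^2] - 2 Arg[w])/Pi)

As the first bullet above notes, *Mathematica* immediately rewrites the second term as a power of $i$ with a real exponent.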
If you are interested in results for some other systems, then try a few of the examples that are heavily flawed for most of the five tested systems.[^10] First do whatever is necessary so that fractional powers use the principal branch and $w$ is regarded as a complex indeterminate. \[sec:Four-alternative-forms\]Four alternative forms ==================================================== Section \[sec:Introduction\] explains the reasons for four separate forms. Form 1: Reduction of outer fractional exponents to (-1, 1)\[subWidth2Interval\] ------------------------------------------------------------------------------- For $x\in\mathbb{R}$ the **integer part function**$$\mathrm{Ip}\left(x\right):=\begin{cases} \left\lfloor x\right\rfloor , & \mathrm{if}\; x\geq0,\\ \left\lceil x\right\rceil , & \mathrm{otherwise}.\end{cases}$$   For $x\in\mathbb{R}$ the **fractional part function** $\mathrm{Fp}(x):=x-\mathrm{Ip}(x)$. For $\beta\in\mathbb{Q}$, $\gamma\in\mathbb{Q}-\mathbb{Z},$ and arbitrary expression $w\in\mathbb{C}$, $$\left(w^{\beta}\right)^{\gamma}\equiv w^{\mathrm{\beta\, Ip}\left(\gamma\right)}\left(w^{\beta}\right)^{\mathrm{Fp}\left(\gamma\right)}.\label{eq:TransformationToMinus1One}$$ We have$$\left(w^{\beta}\right)^{\gamma}\equiv\left(w^{\beta}\right)^{\mathrm{Ip}\left(\gamma\right)}\left(w^{\beta}\right)^{\mathrm{Fp}\left(\gamma\right)}\label{eq:IpPlusFP}$$ because: 1\. With $\gamma\in\mathbb{Q}-\mathbb{Z}$, $\mathrm{Ip}(\gamma)=0\;\vee\;\mathrm{sign}\left(\mathrm{Ip}(\gamma)\right)=\mathrm{sign}\left(\mathrm{Fp}(\gamma)\right)$. 2\. For any expression $u\in\mathbb{C}$ and $r_{1},r_{2}\in\mathbb{Q}\;|\; r_{1}=0\;\vee\;\textrm{sign}\, r_{1}=\textrm{sign}\, r_{2}$,$$u^{r_{1}+r_{2}}\equiv u^{r_{1}}u^{r_{2}},\label{eq:uR1PlusR2Equal}$$ even at $u=0$ with $r_{1}$ and $r_{2}$ both negative, making both sides of (\[eq:uR1PlusR2Equal\]) be complex infinity. 3\. By Proposition \[pro:IntegerExponent\] we also have $\left(w^{\beta}\right)^{\mathrm{Ip}\left(\gamma\right)}\equiv w^{\mathrm{\beta\, Ip}\left(\gamma\right)}$ because $\mathrm{Ip}\left(\gamma\right)\in\mathbb{Z}$. Therefore Form 1 is simply to transform $w^{\alpha}\left(w^{\beta_{1}}\right)^{\gamma_{1}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}}$ toward canonicality by transforming every positive fraction $\gamma_{k}$ to the interval $(0,1)$ and every negative fraction $\gamma_{k}$ to the interval $(-1,0)$. The various $w^{\mathrm{Ip}\left(\gamma_{k}\right)\,\beta_{k}}$ are combined with the original $w^{\alpha}$, giving a transformed expression$$\widehat{W}:=w^{\hat{\alpha}}\left(w^{\beta_{1}}\right)^{\hat{\gamma}_{1}}\cdots\left(w^{\beta_{n}}\right)^{\hat{\gamma}_{n}}\label{eq:DefinitionOfWHat}$$ \[sub:DefinitionOfWHat\]where $\hat{\alpha}$ might be 0. This form 1 satisfies goals 1 and 7, while possibly contributing progress toward goals 2, 3, 5 and 6. This form also has the advantage that if $\left(w^{\beta}\right)^{\hat{\gamma}}$ is subsequently raised to any power $\lambda$, then we can simplify it to the simplified value of $\left(w^{\beta}\right)^{\gamma\lambda}$ by Proposition \[pro:ExponentInMinus1To1\] because $-1<\hat{\gamma}<1$. 
For example,$$\begin{aligned} \left(\left(w^{2}\right)^{3/4}\right)^{7/6} & \rightarrow & \left(w^{2}\right)^{7/8}.\end{aligned}$$ Although there is no such thing as a free radical in computer algebra, this transformation of each nested power is fast and easy to implement because it occurs only for certain fractional powers of powers, which are relatively rare, and very little work is done even when it does occur. There is no good reason why default simplification shouldn’t do at least this much.

However, default simplification for many systems unavoidably collects similar factors, resulting in a partial reversal of this transformation whenever a resulting unnested exponent $\hat{\alpha}$ is identical to one of the inner nested exponents. This happens for *Derive*, TI-CAS and *Mathematica*, but not for Maple or Maxima. With unavoidable collection, unconditional magnitude reduction of fractional outer exponents can lead to an infinite recursion such as$$\left(w^{2}\right)^{5/3}\rightarrow w^{2}\left(w^{2}\right)^{2/3}\rightarrow\left(w^{2}\right)^{5/3}\rightarrow\cdots.$$ Therefore in the Appendix *Mathematica* rewrite rules:

- Transformation $\left(w^{\beta}\right)^{\gamma}\rightarrow w^{\mathrm{Ip}\left(\gamma\right)\,\beta}\left(w^{\beta}\right)^{\mathrm{Fp}\left(\gamma\right)}$ is used unconditionally only *prior* to default simplification.

- The rewrite rules that are active *during* default and optional transformations do not reduce the magnitude of $\gamma$ if doing so would give an unnested exponent $\hat{\alpha}$ identical to $\beta$.

- This transformation is used *after* default simplification only if there is an unnested factor $w^{\alpha}$ and the transformation would not be reversed by unavoidable collection of similar powers.

Implementations for other systems might have to overcome this difficulty in some other way, or compromise and not always produce a form with outer fractional exponents in the interval $(-1,1)$ when $w^{\alpha}$ can’t be fully absorbed into some nested power.

Form 2: Further reducing some outer exponents to (-1/2, 1/2\]\[sub:UnitWidthInterval\]
------------------------------------------------------------------------------------------

Form 2 is form 1 supplemented by an additional transformation. Expression $\widehat{W}$ given by definition (\[eq:DefinitionOfWHat\]) is equivalent to expression $W$ everywhere that $W$ is defined, because at the only questionable point $w=0$:

1. Expressions $W$ and $\widehat{W}$ are both 0 if $\alpha\geq0$ and all of the $\beta_{k}\gamma_{k}$ are positive.

2. Otherwise expressions $W$ and $\widehat{W}$ are both complex infinity if $\alpha\leq0$ and all of the $\beta_{k}\gamma_{k}$ are negative.

3. Otherwise if $\hat{\alpha}\geq0$ and all $\beta_{k}\hat{\gamma}_{k}>0$, then $W$ is 0/0 but $\widehat{W}$ has improved to 0.

4. Otherwise if $\hat{\alpha}\leq0$ and all $\beta_{k}\hat{\gamma}_{k}<0$, then $W$ is 0/0 but $\widehat{W}$ has improved to complex infinity.

5. Otherwise both $W$ and $\widehat{W}$ are 0/0. However, the magnitude of the multiplicity of the removable singularity is less for $\widehat{W}$ if for any $\gamma_{k}$, $\left|\gamma_{k}\right|\geq1$.

Expression $\widehat{W}$ is canonical in cases 1 through 4, but not necessarily for case 5. For example,

1. The different equivalent expressions $z^{-1}\left(z^{2}\right)^{2/3}$ and $z\left(z^{2}\right)^{-1/3}$ both have outer exponents in (-1, 1).
Of these two alternatives, the latter is preferable for most purposes because $\left|-2/3\right|<\left|4/3\right|$, making the multiplicity of the uncanceled portion of the removable singularity smaller in magnitude. Thus a rationalized numerator is sometimes preferable to a rationalized denominator.

2. The different expressions $z^{-3}\left(z^{2}\right)^{1/2}$ and $z^{-1}\left(z^{2}\right)^{-1/2}$ both have outer exponents in (-1, 1), and they are equivalent wherever the first alternative is defined. However, the latter unrationalized denominator is preferable because the former is 0/0 at $z=0$ where the latter is defined and equal to the complex infinity limit of the former.

3. The different equivalent expressions $z\left(z^{2}\right)^{-1/2}$ and $z^{-1}\left(z^{2}\right)^{1/2}$ both have outer exponents in (-1, 1), and the multiplicities of the uncanceled portion of their removable singularity at $z=0$ are both 1. Of these two alternatives, the latter is slightly preferable because it has a traditionally rationalized denominator rather than a rationalized numerator.

Thus after producing form 1 we can sometimes add 1 to a negative $\hat{\gamma}_{k}$ or subtract 1 from a positive $\hat{\gamma}_{k}$, then adjust $\alpha$ accordingly to reduce the magnitude of the overall removable singularity – perhaps entirely. If not, perhaps we can at least contribute toward goals 2, 6 and 8 by rationalizing a square root in the denominator. Let$$\begin{aligned} \Delta_{k} & := & \beta_{k}\hat{\gamma}_{k},\\ \Delta & := & \alpha+\Delta_{1}+\cdots+\Delta_{n}.\end{aligned}$$ Transforming any of the $\left(w^{\beta_{k}}\right)^{\gamma_{k}}$ to $w^{m_{k}\beta_{k}}\left(w^{\beta_{k}}\right)^{\gamma_{k}-m_{k}}$ for any integer $m_{k}$ leaves $\Delta$ unchanged. Our primary goal is, whenever possible, to make all of the $\Delta_{k}$ have the same sign and for $\alpha$ to have either the same sign or be 0. A secondary goal is to prefer $-1/2<\hat{\gamma}_{k}\leq1/2$. Therefore, the algorithm to convert form 1 to form 2 is:

1. If $\Delta>0$, then for each $\Delta_{k}<0$, add $\mathrm{sign}\left(\beta_{k}\right)$ to $\hat{\gamma}_{k}$ and subtract $\left|\beta_{k}\right|$ from $\alpha$, then return the result.

2. If $\Delta<0$, then for each $\Delta_{k}>0$, subtract $\mathrm{sign}\left(\beta_{k}\right)$ from $\hat{\gamma}_{k}$ and add $\left|\beta_{k}\right|$ to $\alpha$, then return the result.

3. For each $\hat{\gamma}_{k}>1/2$, subtract 1 from $\hat{\gamma}_{k}$ and add $\beta_{k}$ to $\alpha$.

4. For each $\hat{\gamma}_{k}\leq-1/2$, add 1 to $\hat{\gamma}_{k}$ and subtract $\beta_{k}$ from $\alpha$.

5. Return the result.

This canonical form 2 satisfies all of the goals except for the aesthetic goal 4. For brevity the Appendix rewrite rules consider only one $\Delta_{k}$ at a time. This is sufficient for all of the test cases, which have only one nested power. For an industrial-strength implementation, each time we multiply a fractional power of a power by a product of one or more factors, we should inspect those factors for identical expressions $w$ and apply the above algorithm if that subset is non-empty. The cost is $O\left(n_{c}\right)$ where $n_{c}$ is the number of cofactors. The opportunity occurs only when multiplying a fractional power of a power, which is rare; and the number of factors in a product is typically quite small. Therefore it is also quite reasonable to do this in default simplification.
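For the common case of a single nested power, the choice just described can also be expressed by comparing the only two form-1 candidates, $w^{\hat{\alpha}}\left(w^{\beta}\right)^{\hat{\gamma}}$ and $w^{\hat{\alpha}+\beta\,\mathrm{sign}(\hat{\gamma})}\left(w^{\beta}\right)^{\hat{\gamma}-\mathrm{sign}(\hat{\gamma})}$, using the goal 3 singularity measure and then the secondary preferences as tie-breakers. The following *Mathematica* fragment is only an illustrative sketch of that selection; it is not the Appendix code, and the names `sing` and `form2Choice` are hypothetical:

    (* goal 3 measure of the uncanceled removable singularity at w = 0
       for w^a (w^b)^g, where d = b*g *)
    sing[a_, d_] := Min[Max[a, 0] + Max[d, 0], -Min[a, 0] - Min[d, 0]];

    (* pick between the two candidates {a, g} and {a + b Sign[g], g - Sign[g]}:
       smallest singularity measure, then smallest |g|, then a rationalized denominator *)
    form2Choice[a_, b_, g_] := First @ SortBy[
       {{a, g}, {a + b Sign[g], g - Sign[g]}},
       {sing[#[[1]], b #[[2]]], Abs[#[[2]]], -Sign[b #[[2]]]} &]

Applied to the three numbered examples above, `form2Choice[-1, 2, 2/3]`, `form2Choice[-3, 2, 1/2]` and `form2Choice[1, 2, -1/2]` return `{1, -1/3}`, `{-1, -1/2}` and `{-1, 1/2}` respectively, that is $z\left(z^{2}\right)^{-1/3}$, $z^{-1}\left(z^{2}\right)^{-1/2}$ and $z^{-1}\left(z^{2}\right)^{1/2}$.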
**Form 3: Finally, fully absorb $w^{\alpha}$ into a fractional power if possible -------------------------------------------------------------------------------- Form 3 is form 2 followed by an additional transformation. Form 2 can result in an expression such as $z^{4}\left(z^{2}\right)^{1/2}$, for which many users would regard $\left(z^{2}\right)^{5/2}$ as a simpler result because it has one less factor. We can often absorb at least some of $z^{\alpha}$ into one of the $\left(z^{\beta_{k}}\right)^{\gamma_{k}}$ by the transformation$$z^{\alpha}\left(z^{\beta_{k}}\right)^{\gamma_{k}}\rightarrow z^{\beta_{k}\,\mathrm{Fp}\left(\alpha/\beta_{k}\right)}\left(z^{\mathbf{\beta_{k}}}\right)^{\gamma_{k}+\mathrm{Ip}\left(\alpha/\beta_{k}\right)},$$ which doesn’t change the domain of definition. However, this transformation seems inadvisable unless $\mathrm{Fp}\left(\alpha/\beta_{k}\right)=0$, because otherwise it increases the contribution of a troublesome nested power without reducing the number of factors. Also, this transformation is problematic *during* intermediate computations even if $\mathrm{Fp}\left(\alpha/\beta_{k}\right)=0$, because when there is more than one nested power, then more than one might be eligible, making it awkward to maintain canonicality achieved by form 2. Moreover, absorption conflicts with transformations done to obtain form 1 or 2, thus risking infinite recursion. A solution to this dilemma is to fully absorb $w^{\alpha}$ only just before display – after all other default and optional simplification. This does have the minor disadvantage that what the user sees doesn’t faithfully represent the internal representation. However, that bridge has already been crossed by most systems, which for speed and implementation simplicity internally use, for example, $(\ldots)^{1/2}$ to represent a displayed $\sqrt{\ldots}$ and $a+-1*b$ to represent a displayed $a-b$. When there is more than one nested power of $w$, then there might be more than one way to absorb $\alpha$ completely into those nested powers. For example,$$\begin{aligned} w^{6}\left(w^{2}\right)^{1/2}\left(w^{3}\right)^{1/2}\left(w^{4}\right)^{1/2} & \equiv & \left(w^{2}\right)^{\boldsymbol{7/2}}\left(w^{3}\right)^{1/2}\left(w^{4}\right)^{1/2}\nonumber \\ & \equiv & \left(w^{2}\right)^{1/2}\left(w^{3}\right)^{\boldsymbol{5/2}}\left(w^{4}\right)^{1/2}\label{eq:FirstExampleOfAbsorption}\\ & \equiv & \left(w^{2}\right)^{\boldsymbol{3/2}}\left(w^{3}\right)^{1/2}\left(w^{4}\right)^{\boldsymbol{3/2}}.\nonumber \end{aligned}$$ In general, the possible resulting expressions are given by$$\left(w^{\beta_{1}}\right)^{\gamma_{1}+m_{1}}\left(w^{\beta_{2}}\right)^{\gamma_{2}+m_{2}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}+m_{n}},$$ where the tuple of integers $\left\langle m_{1},m_{2},\ldots,m_{n}\right\rangle $ is a solution to the linear Diophantine equation$$m_{1}\beta_{1}+m_{2}\beta_{2}+\cdots+m_{n}\beta_{n}=\alpha.$$ Solutions exist if and only if $\alpha$ is an integer multiple of $\gcd\left(\beta_{1},\beta_{2},\ldots\beta_{n}\right)$, in which case there might be a countably infinite number of tuples. However, to avoid introducing removable singularities or increasing the magnitude of their multiplicity, we are only interested in solutions for which $\mathrm{sign}\left(m_{j}\beta_{j}\right)\equiv\mathrm{sign}\left(\alpha\right)$ for $j=1,2,\ldots,n$. 
Papp and Vizvari [@PappAndVizvari] describe an algorithm for solving such sign-constrained linear Diophantine equations, and the *Mathematica* $\mathtt{Reduce}\left[\ldots\right]$ function can solve such equations. For example, suppose our canonical form 2 result is$$z^{14}\left(z^{6/7}\right)^{1/2}\left(z^{10/7}\right)^{1/3}.\label{eq:nEqual2Example}$$ In *Mathematica*, we can determine the family of integers $m_{1}\geq0$ to add to $1/2$ and $m_{2}\geq0$ to add to 1/3 that together absorb $z^{14}$ as follows: $$\begin{aligned} \mathsf{In}[1]: & = & \mathtt{Reduce\,}\left[\dfrac{6}{7}m_{1}\!+\!\dfrac{10}{7}m_{2}==14\:\;\&\&\:\;\dfrac{6}{7}m_{1}\geq0\:\;\&\&\:\;\dfrac{10}{7}m_{2}\geq0,\mathtt{\,\left\{ m_{1},m_{2}\right\} ,\, Integers}\right]\\ & & \quad//\mathtt{TraditionalForm}\end{aligned}$$ $$\mathsf{Out}[1]//\mathrm{TraditionalForm}=\left(m_{1}=3\wedge m_{2}=8\right)\,\vee\,\left(m_{1}=8\wedge m_{2}=5\right)\,\vee\,\left(m_{1}=13\wedge m_{2}=2\right)$$

Regarding the choice between alternative absorptions, canonicality is not as important for a final displayed result as it is during intermediate calculations where it facilitates important cancellations. However, with more than one solution, we could choose one in a canonical way as follows: Order the $\beta_{j}$ in some canonical way, such as the way they order in $\left(w^{\beta_{1}}\right)^{\gamma_{1}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}}$, then choose the solution for which $m_{1}$ is smallest, with ties broken according to which $m_{2}$ is smallest, etc.

Solution of sign-constrained linear Diophantine equations can be costly – probably too costly for default simplification. Consequently, the rewrite rules in the Appendix simply absorb $w^{\alpha}$ if and only if it can be completely absorbed into a single power of a power, in which case the particular one is the first one encountered by the pattern matcher. This is canonical, but it doesn’t absorb $w^{\alpha}$ for examples such as (\[eq:nEqual2Example\]). However, this transformation is inexpensive because it is done only once in one pass over the expression just prior to display, and the transformation requires comparing a power of a power with its cofactors only in products where powers of powers occur.

All but this absorption rule are automatically applied *before* default simplification so that, for example, the input$$\dfrac{w-w}{\dfrac{\sqrt{z^{2}}}{z^{3}}-\dfrac{1}{z\sqrt{z^{2}}}}$$ correctly simplifies to `indeterminate`, meaning 0/0, rather than to 0. The rewrite rules in the Appendix are not much more than the minimal amount necessary to generate the form 3 results in Tables \[Flo:AlgorithmAtable\] and \[Flo:AlgorithmASqrtReciprocalTable\], together with the relevant rows in Tables \[Flo:WonSqrtWSquaredTable\] and \[Flo:Form1MinusForm4Table\].

Form 4\[sub:UnitMagnitudeFactor\]: One unnested power times a unit-magnitude factor
-----------------------------------------------------------------------------------

Form 4 is quite different from forms 1 through 3.
A universal principal-branch formula for transforming a nested power to an unnested power is$$\left(w^{\beta}\right)^{\gamma}\rightarrow\left(-1\right)^{\tau}w^{\beta\gamma},\label{eq:PowerOfPowerTransformation}$$ where$$\tau:=\begin{cases} 0, & \mathrm{if}\; w=0,\\ \dfrac{\gamma\left(\arg\left(w^{\beta}\right)-\beta\arg\left(w\right)\right)}{\pi}, & \mathrm{otherwise},\end{cases}\label{eq:CorrectionExponentForPowerOfPower}$$ with short-circuit evaluation so that the “otherwise” result expression is not evaluated when the “if” test is true. The transformation given by formulas (\[eq:PowerOfPowerTransformation\]) and (\[eq:CorrectionExponentForPowerOfPower\]) can be derived from the identities$$\begin{aligned} \left|p\right| & \equiv & (-1)^{-\arg\left(p\right)/\pi}p\qquad\mathrm{for}\; p\neq0,\\ \left|q^{\alpha}\right|^{\beta} & \equiv & \left|q\right|^{\alpha\beta}.\end{aligned}$$

Notice that the **unit-polar** factor $(-1)^{\tau}$ is unit magnitude because $\arg(\ldots)$ is always real, as are the rational numbers $\gamma$ and $\beta$. Moreover, $(-1)^{\tau}$ is piecewise constant with pie-shaped pieces emanating from $w=0$ because $\arg\left(w^{\beta}\right)$ and $\beta\arg\left(w\right)$ have the same derivative with respect to $w$ everywhere they are both continuous, and each of them has a finite number of discontinuities. An imaginary exponential $e^{i\pi\tau}$ is an alternative to $(-1)^{\tau}$, but it has the candidness disadvantage of introducing $i$ into a factor that can be real and always is for the common case where the outer exponent $\gamma$ is a half-integer.

If $\arg(0)$ is defined as 0, as it is in *Mathematica*, Maple, and Maxima, then we can define $\tau$ more concisely and unconditionally as$$\tau:=\dfrac{\gamma\left(\arg\left(w^{\beta}\right)-\beta\arg\left(w\right)\right)}{\pi}.\label{eq:UnconditionalDefinitionOfTau}$$

\[pro:PositiveRadicand\]If $w\geq0$, then $\tau=0$. When $w=0$, $\tau=0$ follows immediately from expression (\[eq:CorrectionExponentForPowerOfPower\]), and\
$w>0\:\Rightarrow\:\arg\left(w^{\beta}\right)=0\wedge\arg\left(w\right)=0\:\Rightarrow\:\gamma\left(\arg\left(w^{\beta}\right)-\beta\arg\left(w\right)\right)/\pi=0\:\Rightarrow\:\tau=0.$

\[pro:ExponentInMinus1To1\]If $-1<\beta\leq1$, then $\tau=0$. $-1<\beta\leq1\:\Rightarrow\:\arg\left(w^{\beta}\right)=\beta\arg\left(w\right)\:\Rightarrow\:\gamma\left(\arg\left(w^{\beta}\right)-\beta\arg\left(w\right)\right)/\pi=0\:\Rightarrow\:\tau=0.$

\[pro:IntegerExponent\]If $\gamma$ is integer, then $(-1)^{\tau}=1$. $\arg\left(w^{\beta}\right)$ is $\beta\arg\left(w\right)$ plus an integer multiple of $2\pi$. Thus when $\gamma$ is an integer, then $\gamma\left(\arg\left(w^{\beta}\right)-\beta\arg\left(w\right)\right)/\pi$ is an even integer, making $\tau$ be an even integer, making $(-1)^{\tau}=1$.

The simplification afforded by these three propositions should have already been exploited with bottom-up default simplification, in which case $\left(w^{\beta}\right)^{\gamma}$ will have already been simplified to $w^{\beta\gamma}$. If it isn’t, then that is another opportunity to improve the system for very little effort.[^11] Thus, because $\beta$ and $\gamma$ are explicit non-zero rational numbers, without loss of generality this article assumes that $w$ isn’t known to be nonnegative, and that $\beta\leq-1$ or $\beta>1$, and that $\gamma$ is non-integer.
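Because Arg\[0\] is 0 in *Mathematica*, the unconditional definition (\[eq:UnconditionalDefinitionOfTau\]) can be used there directly. A minimal sketch of transformation (\[eq:PowerOfPowerTransformation\]) as a rewrite rule, with the illustrative name `toForm4` (this is not one of the Appendix rules):

    (* unnest (w^b)^g into a unit-polar factor times w^(b g), per the formula above *)
    toForm4[(w_^b_)^g_] := (-1)^(g (Arg[w^b] - b Arg[w])/Pi) w^(b g)

For example, `toForm4[(z^2)^(1/2)]` returns $(-1)^{\left(\mathrm{Arg}\left[z^{2}\right]-2\,\mathrm{Arg}\left[z\right]\right)/(2\pi)}\, z$, and substituting $z\rightarrow-1$ into that result gives 1, in agreement with $\sqrt{(-1)^{2}}=1$, whereas the naive replacement $\left(z^{2}\right)^{1/2}\rightarrow z$ would give $-1$.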
Using transformation (\[eq:PowerOfPowerTransformation\]) on every $\left(w^{\beta_{k}}\right)^{\gamma_{k}}$ in $W$ defined by (\[eq:DefinitionOfNestedPowerProduct\]) then collecting powers of $-1$ gives$$\overline{W}=\left(-1\right)^{\sigma}w^{\alpha+\beta_{1}\gamma_{1}+\cdots+\beta_{n}\gamma_{n}},\label{eq:Form1}$$ where $\sigma$ is a simplified sum of terms of the form (\[eq:CorrectionExponentForPowerOfPower\]) or (\[eq:UnconditionalDefinitionOfTau\]). The factor $\left(-1\right)^{\sigma}$ is also unit magnitude with pie-shaped piecewise constant pieces because it is the product of such factors. This form has two great advantages over the other three forms: - All of the exponents have been combined into a *single unnested exponent*. - Cancelable singularities are always *completely* canceled. Unfortunately this comes at the expense of a form that is usually bulkier than the other forms Simplification of individual piecewise expressions and combinations of such expressions is currently rather weak in most systems, but Carette [@Carette] describes a canonical form for such expressions, so we can hope for improvement. In our case the piecewise expressions all have the same tests. Therefore we can add all of the 0s together and add all of the expressions involving $\arg(\ldots)$ together into a single piecewise function. For example,$$\begin{gathered} \dfrac{\left(z^{2}\right)^{3/2}\left(z^{3}\right)^{4/3}}{z^{6}}\\ \rightarrow\left(\!\left(-1\right)^{\!\begin{cases} 0, & \!\mathrm{\!\! if}\:\arg z\!=\!0,\\ \frac{\frac{3}{2}\left(\arg\!\left(z^{2}\right)-2\arg z\right)}{\pi}, & \!\mathrm{\!\! otherwise}\end{cases}}\!\right)\!\negthinspace\left(\!\left(-1\right)^{\!\begin{cases} 0, & \!\mathrm{\!\! if}\:\arg z\!=\!0,\\ \frac{\frac{4}{3}\left(\arg\left(z^{3}\right)-3\arg z\right)}{\pi}, & \!\mathrm{\!\! otherwise}\end{cases}}\!\right)\! z^{\frac{3}{2}2+\frac{4}{3}3-6}\\ \rightarrow\left(\left(-1\right)^{\begin{cases} 0, & \mathrm{\!\! if}\:\arg z=0,\\ \frac{\frac{3}{2}\arg\left(z^{2}\right)+\frac{4}{3}\arg(z^{3})-7\arg z}{\pi}, & \mathrm{\!\! otherwise}\end{cases}}\right)z.\label{eq:ExampleOfForm4}\end{gathered}$$ If $\arg(0)\rightarrow0$, then simplification of piecewise expressions isn’t an issue here and the resulting exponent of $-1$ is simply $\left(\frac{3}{2}\arg\left(z^{2}\right)+\frac{4}{3}\arg(z^{3})-7\arg z\right)/\pi$. However, the result is not canonical either way, because starting with the equivalent canonical form 2,$$\begin{aligned} \dfrac{\sqrt{z^{2}}\left(z^{3}\right)^{1/3}}{z} & \rightarrow & \left(\left(-1\right)^{\begin{cases} 0, & \mathrm{\!\! if}\;\arg z=0,\\ \frac{\frac{1}{2}\arg\left(z^{2}\right)+\frac{1}{3}\arg(z^{3})-2\arg z}{\pi} & \mathrm{\!\! otherwise}\end{cases}}\right)z,\label{eq:FullySimplifiedForm4Example}\end{aligned}$$ which has smaller magnitude coefficients. Thus for canonicality we could precede this transformation with a transformation to form 2. Equivalently we can adjust the coefficients of the $\arg\left(w^{\beta_{k}}\right)$ and $\arg(w)$ analogous to how we adjusted exponents to arrive at form 2. This is preferable because it also canonicalizes expressions of form 4 that are entered directly or generated by the system. 
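A quick numerical spot check of result (\[eq:ExampleOfForm4\]) can be made in *Mathematica*, whose Arg\[0\] convention matches the assumption $\arg(0)\rightarrow0$; this is only an illustrative sketch:

    lhs = (z^2)^(3/2) (z^3)^(4/3)/z^6;
    rhs = (-1)^((3/2 Arg[z^2] + 4/3 Arg[z^3] - 7 Arg[z])/Pi) z;
    Chop[N[lhs - rhs /. z -> -2 + 3 I]]    (* -> 0 *)

The same check applied to the smaller-coefficient result (\[eq:FullySimplifiedForm4Example\]) also gives 0, since the two unit-polar factors are equivalent.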
With pie-shaped pieces, $(-1)^{\sigma}$ can always be expressed in the more candid canonical form$$\begin{cases} c_{1}, & \mathrm{if}\;-\pi<\arg w\:::\:\theta_{1},\\ c_{2}, & \mathrm{if}\;\theta_{1}\:::\:\arg w\:::\:\theta_{2},\\ \ldots & \ldots\\ c_{m}, & \mathrm{otherwise},\end{cases}$$ where $c_{1}$ through $c_{m}$ are unit-magnitude complex constants, $\theta_{1}$ through $\theta_{m-1}$ are real constants in $(-\pi,\pi)$, and each instance of “::” is either “$<$” or “$\leq$”.[^12] Moreover:

1. When $w$ is real, then the positive and negative real axes are each entirely within one pie slice, enabling us to simplify $\left(-1\right)^{\sigma}$ to one unconditional constant or piecewise expression of the form$$\begin{cases} c_{1}, & \mathrm{if}\: w::0,\\ c_{2} & \mathrm{otherwise},\end{cases}$$ where “::” is one of the comparison operators “$>$”, “$\geq$”, “$=$”, “$\leq$”, “$<$”, or “$\neq$”.

2. For half-integer or quarter-integer fractional powers, $\left(-1\right)^{\tau}$ can be expressed as a piecewise expression depending on the real and imaginary parts of $w$ rather than $\arg(w)$. For example,$$\begin{aligned} \dfrac{\left(w^{2}\right)^{1/2}}{w} & \rightarrow & \begin{cases} \:1 & \mathrm{if}\:\Re\left(w\right)>0\vee\Re\left(w\right)\geq0\wedge\Im\left(w\right)\geq0,\\ -1 & \mathrm{otherwise};\end{cases}\label{eq:GoesToCsgn}\\ \dfrac{\left(w^{4}\right)^{1/4}}{w} & \rightarrow & \begin{cases} 1 & \mathrm{if}\:-\Re\left(w\right)<\Im\left(w\right)\leq\Re\left(w\right),\\ -i & \mathrm{if}\:-\Im\left(w\right)<\Re\left(w\right)\leq\Im\left(w\right),\\ -1 & \mathrm{if}\:\Re\left(w\right)<\Im\left(w\right)\leq-\Re\left(w\right),\\ i & \mathrm{otherwise}.\end{cases}\end{aligned}$$ Notice that the right side of result (\[eq:GoesToCsgn\]) is the definition of the Maple csgn function.

Without an abbreviation such as $\mathrm{csgn}(\ldots)$, most implementers will probably want to avoid form 4 as a *default* even when $\arg\left(0\right)\rightarrow0$, because $(-1)^{\sigma}$ is likely to be rather complicated nonetheless:

1. It will probably contain complicated square roots and arctangents if the real and imaginary parts of $w$ are given as exact numbers.

2. It will probably also contain piecewise sign tests if given real and imaginary parts that are non-numeric, such as for $w=x+iy$ with non-numeric real indeterminates $x$ and $y$.

3. It will probably contain radicals nested at least one deep if $\arg\left(w\right)$ is a simple enough rational multiple of $\pi$.

4. Otherwise it will contain perhaps bulky sub-expressions $\arg\left(w\right)$ and $\arg\left(w^{\beta}\right)$ – or, worse yet, expressions involving square roots, arctangents, piecewise sign tests, and sub-expressions of the form $\Re\left(w\right)$ and $\Im\left(w\right)$.

As espoused by Corless and Jeffrey [@CorlessAndJeffrey], expression $\tau$ can alternatively be defined in terms of the unwinding function $\kappa$ as:$$\tau:=2\gamma\kappa\left(\beta\ln w\right).\label{eq:TauViaUnwindingNumber}$$ This is more concise than definition (\[eq:CorrectionExponentForPowerOfPower\]), but a function that computes unwinding numbers isn’t currently available externally in most computer algebra systems.
Also, unless the system automatically transforms $\ln0$ to $-\infty$, as is done in *Mathematica* and *Derive*, then definition (\[eq:TauViaUnwindingNumber\]) has the same disadvantages as using $\arg\left(\ldots\right)$.[^13] Simplifying mixtures of form 4 with form 1, 2 or 3 -------------------------------------------------- If an expression contains a mixture of forms, then we should unify the forms to facilitate collection and cancelation. For example with $\arg(0)\rightarrow0$, the three expressions$$\begin{aligned} & \dfrac{\left(z^{2}\right)^{1/2}}{z},\label{eq:form4Alternative1}\\ & (-1)^{\left(\arg\left(z^{2}\right)/2-\arg z\right)/\pi},\label{eq:form4Alternative2}\\ & \begin{cases} \:1 & \mathrm{if}\:\Re\left(w\right)>0\vee\Re\left(w\right)\geq0\wedge\Im\left(w\right)\geq0,\\ -1 & \mathrm{otherwise}\end{cases}\label{eq:form4Alternative3}\end{aligned}$$ are equivalent. Therefore the result of any linear combination of them should transform either to 0 or a multiple of one of them. The rewrite rules in the Appendix don’t address this issue. In general it is easy to transform form (\[eq:form4Alternative1\]) to form (\[eq:form4Alternative2\]), which is only slightly more difficult to transform to either form (\[eq:form4Alternative1\]) or form (\[eq:form4Alternative3\]). \[sec:Unimplemented-extensions\]Unimplemented extensions ======================================================== More semantic pattern matching for $w$ -------------------------------------- The *Mathematica* pattern matcher is mostly syntactic rather than semantic, and the rules in the Appendix do almost no transformation of the radicand expressions $w$ or any cofactors thereof. Thus recognition of opportunities relies mostly on the default transformations together with any optional transformations done by the user. Consequently, opportunities for the rules to simplify nested power products might not be recognized for radicands that aren’t indeterminates. The rules work for most functional forms that have syntactically identical forms for the different instances of $w$, such as$$\begin{aligned} \dfrac{\left(\mathrm{Log}\left[x^{2}\left(x+y\right)\right]^{2}\right)^{5/3}}{\mathrm{Log}\left[x^{2}\left(x+y\right)\right]} & \rightarrow & \mathrm{Log}\left[x^{2}\left(x+y\right)\right]\left(\mathrm{Log}\left[x^{2}\left(x+y\right)\right]^{2}\right)^{2/3}.\end{aligned}$$ However the rules don’t apply to *all* such functional form opportunities. For example,$$\dfrac{\left(\mathrm{Cos}[\theta]^{2}\right)^{5/3}}{\mathrm{Cos}[\theta]}\rightarrow\left(\mathrm{Cos}[\theta]^{2}\right)^{5/3}\mathrm{Sec}[\theta]$$ because default simplification transforms $\mathrm{Cos}[\theta]^{-1}$ to $\mathrm{Sec}[\theta]$. Even more opportunities are unrecognized when $w$ is a sum. As an example of how to overcome this, the Appendix includes one extra rule that square-free factors radicands that are sums so that, for example,$$\dfrac{\left(z^{2}+2z+1\right)^{5/3}}{z+1}\rightarrow\dfrac{\left(\left(z+1\right)^{2}\right)^{5/3}}{z+1}\rightarrow\left(z+1\right)\left(\left(z+1\right)^{2}\right)^{2/3}.$$ Factored over the integers or square-free factored form is a good choice for radicands for other reasons too, and these forms are canonical when the radicand is a rational expression. 
However,$$\left(z^{2}+2z+1\right)\left(\left(z+1\right)^{2}\right)^{5/3}\rightarrow\left(z+1\right)^{2}\left(\left(z+1\right)^{2}\right)^{5/3}\rightarrow\left(\left(z+1\right)^{2}\right)^{8/3},$$ would require another rule that factors the *cofactor* of a power of a power of a sum. Then, perhaps we would want another rule to factor *sums containing* such radicands so that$$z^{2}\left(\left(z+1\right)^{2}\right)^{5/3}+2z\left(\left(z+1\right)^{2}\right)^{5/3}+\left(\left(z+1\right)^{2}\right)^{5/3}\rightarrow\left(\left(z+1\right)^{2}\right)^{8/3}.$$ It is impossible to implement equivalence recognition for all possible expressions $w$ representable in general purpose systems, but it is worth expending a modest amount of execution time for default simplification and more time for optional transformations. The Appendix leaves most such opportunities unimplemented because the simplifications described here are so fundamental and low level that they should be part of the built-in transformations. Good simplification of nested power products is more appropriately built into a system rather than provided as an optionally loaded package that most users are unlikely to know about and load into every session. So rather than implementing a comprehensive package for one system, the intent of this article is to inspire implementers of all systems to improve some very fundamental transformations – at least to the extent that it can be done economically. Non numeric exponents --------------------- Although not implemented in the rules of the Appendix, more generally the exponents for forms 1 through 4 can be Gaussian fractions or even symbolic, in which case we can still apply these transformations to the rational numeric parts of the exponents. For example,$$w^{3\xi+\rho}\left(w^{\xi}\right)^{3/2+\omega\pi i}\rightarrow\left(w^{\xi}\right)^{3}w^{\rho}\left(w^{\xi}\right)^{1+1/2+\omega\pi i}\rightarrow\left(w^{\xi}\right)^{4}w^{\rho}\left(w^{\xi}\right)^{1/2+\omega\pi i}\rightarrow w^{4\xi+\rho}\left(w^{\xi}\right)^{1/2+\omega\pi i}.$$ As another example, if a user has declared the variable $n$ to be integer, then$$w^{-n}\left(w^{2}\right)^{n+1/2}\rightarrow w^{n}\left(w^{2}\right)^{1/2}.$$ To some extent, the methods can also be extended to handle floating-point and symbolic real expressions for exponents $\alpha$ and $\beta_{k}$. For example,$$\begin{aligned} w^{4.321}\left(w^{1.234}\right)^{3/2} & \rightarrow & w^{5.555}\sqrt{w^{1.234}},\\ w^{2-\pi}\left(w^{\pi}\right)^{3/2} & \rightarrow & w^{2}\sqrt{w^{\pi}}.\end{aligned}$$ \[sec:Summary\]Summary ====================== This article: 1. shows that many widely-used computer algebra systems have significant room for improvement at simplifying sub-expressions of the form $w^{\alpha}\left(w^{\beta_{1}}\right)^{\gamma_{1}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}}$; 2. defines four different simplified forms with good properties; 3. explains how to compute these forms; 4. includes a demonstration implementation of form 3 via *Mathematica* rewrite rules. Acknowledgment {#acknowledgment .unnumbered} ============== I thank Sam Blake for his helpful assistance with *Mathematica*, Daniel Lichtblau for information about the algorithm in $\mathtt{Reduce}\left[\ldots\right]$, and a referee for many fine suggestions. [10]{} Brown, W.S: On computing with factored rational expressions. *Proceedings of EUROSAM ’74, ACM SIGSAM Bulletin* 8 (3), pp. 26-34, 1974. Carette, J., A canonical form for piecewise defined functions, *Proceedings of ISSAC 2007*, pp. 77-84. 
Corless, R.M., Jeffrey, D.J., Well … It isn’t quite that simple. *ACM SIGSAM Bulletin* 26 (3), pp. 2-6, 1992. Corless, R.M. and Jeffrey, D.J., Editor’s corner: The unwinding number, *ACM Communications in Computer Algebra* 30 (2), pp. 28-35, 1996. Jeffrey, D.J., Branching out with inverse functions, 2009,\ <http://www.activemath.org/workshops/MathUI/09/proc/> Moses, J: Algebraic simplification, a guide for the perplexed. *Proceedings of the second ACM symposium on symbolic and algebraic manipulation*, pp. 282-304, 1971 Papp, D. and Vizvari, B: Effective solution of linear Diophantine equation systems with an application to chemistry, *Journal of Mathematical Chemistry* 39 (1), pp. 15-31, 2006. Rich, A.D. and Jeffrey, D.J., Function evaluation on branch cuts, *Communications in Computer Algebra* 30 (2), pp. 25-27, 1996. Stoutemyer, D.R., Useful computations need useful numbers, *ACM Communications in Computer Algebra* 41 (3), pp. 75-99, 2007. Stoutemyer, D.R., Ten commandments for good default expression simplification, *Journal of Symbolic Computation*, 46 (7), pp. 859-887, 2011. Appendix: rewrite rules for $w^{\alpha}\left(w^{\beta_{1}}\right)^{\gamma_{1}}\cdots\left(w^{\beta_{n}}\right)^{\gamma_{n}}$ {#appendix-rewrite-rules-for-walphaleftwbeta_1rightgamma_1cdotsleftwbeta_nrightgamma_n .unnumbered} ============================================================================================================================ (* EXTRA SIMPLIFICATION DONE BEFORE ORDINARY EVALUATION: *) PreProductOfPowersOfPowers [(w_Plus)^(g_Rational /; !IntegerQ[g])] := Block[{squareFree = FactorSquareFree[w]}, squareFree^g /; Head[squareFree] =!= Plus]; PreProductOfPowersOfPowers [(w_^b_)^(g_Rational /; g <= -1 || g >= 1)] := w^(IntegerPart[g]*b) * (w^b)^FractionalPart[g]; PreProductOfPowersOfPowers [(w_^b_)^(g_Rational /; g<=-1 || g>=1) * w_^a_. * u_]:= PreProductOfPowersOfPowers[w^(a+IntegerPart[g]*b) * (w^b)^FractionalPart[g] * u]; PreProductOfPowersOfPowers [(w_^b_)^g_ * w_^a_. * u_. /; Sign[a] != Sign[b*g] && (Sign [a+b*Sign[g]] == Sign [b*(g-Sign[g])] || Min [Abs[a], Abs[b*g]] > Min [Abs [a+b*Sign[g]], Abs [b*(g-Sign[g])]] || g == -1/2 && Min [Abs[a], Abs[b/2]] == Min [Abs[a-b], Abs[b/2]])] := w^(a+b*Sign[g]) * (w^b)^(g-Sign[g]) * u; PreProductOfPowersOfPowers [f_[args__]] := Apply [f, Map [PreProductOfPowersOfPowers, {args}]]; PreProductOfPowersOfPowers [anythingElse_] := anythingElse; (* EXTRA SIMPLIFICATION DURING ORDINARY EVALUATION: *) Unprotect [Times]; (w_^b_)^g_ * w_^a_. * u_. /; Sign[a] != Sign[b*g] && (Sign [a+b*Sign[g]] == Sign [b*(g-Sign[g])] || Min [Abs[a], Abs[b*g]] > Min [Abs [a+b*Sign[g]], Abs [b*(g-Sign[g])]] || Min [Abs[a], Abs[b*g]] == Min [Abs [a+b*Sign[g]], Abs [b*(g-Sign[g])]] && Abs[g] > Abs [g-Sign[g]] || g == -1/2 && Min [Abs[a], Abs[b/2]] == Min [Abs[a-b], Abs[b/2]]) := w^(a+b*Sign[g]) * (w^b)^(g-Sign[g]) * u; (w_^b1_)^g1_ * (w_^b2_)^g2_ * u_. /; Sign[b1*g1] != Sign[b2*g2] && Abs[b2] > Abs[b1] && Sign [b2*(g2-Sign[g2])] == Sign [b1*g1 + b2*Sign[g2]] := (w^b2)^(g2-Sign[g2]) * ((w^(b2*Sign[g2]) * (w^b1)^g1) * u); Protect [Times]; (* EXTRA SIMPLIFICATION DONE AFTER ORDINARY EVALUATION: *) PostProductOfPowersOfPowers [w_^a_ * (w_^b_)^g_ * u_. /; IntegerQ [a/b]] := PostProductOfPowersOfPowers [(w^b)^(g+a/b) * u]; PostProductOfPowersOfPowers [w_^a_.*(w_^b_)^(g_Rational /; g<=-1 || g>=1)*u_. 
/; !IntegerQ [(a + b*IntegerPart[g])/b]] := PostProductOfPowersOfPowers[(u*w^(a+b*IntegerPart[b]))*(w^b)^FractionalPart[g]]; PostProductOfPowersOfPowers [f_[args__]] := Apply [f, Map [PostProductOfPowersOfPowers, {args}]]; PostProductOfPowersOfPowers [anythingElse_] := anythingElse; $Post = PostProductOfPowersOfPowers; $Pre = PreProductOfPowersOfPowers; Tables \[Flo:AlgorithmAtable\] through \[Flo:Form1MinusForm4Table\] {#tables-floalgorithmatable-through-floform1minusform4table .unnumbered} =================================================================== \[Flo:AlgorithmAtable\] $\downarrow\!\!\!\overrightarrow{\times}$ ------------------------------------------- ------------------------------------------- ------------------------------------------- ------------------------------------------- --------------------------------------- -------------------------------------- --------------------------------- $w^{-3}$ $\frac{1}{w^{7}\left(w^{2}\right)^{1/3}}$ $\frac{1}{w^{5}\left(w^{2}\right)^{1/3}}$ $\frac{1}{w^{3}\left(w^{2}\right)^{1/3}}$ $\frac{1}{w\left(w^{2}\right)^{1/3}}$ $\frac{w}{\left(w^{2}\right)^{1/3}}$ $w\left(w^{2}\right)^{2/3}$ $w^{-2}$ $\frac{1}{\left(w^{2}\right)^{10/3}}$ $\frac{1}{\left(w^{2}\right)^{7/3}}$ $\frac{1}{\left(w^{2}\right)^{4/3}}$ $\frac{1}{\left(w^{2}\right)^{1/3}}$ $\left(w^{2}\right)^{2/3}$ $\left(w^{2}\right)^{5/3}$ $w^{-1}$ $\frac{1}{w^{5}\left(w^{2}\right)^{1/3}}$ $\frac{1}{w^{3}\left(w^{2}\right)^{1/3}}$ $\frac{1}{w\left(w^{2}\right)^{1/3}}$ $\frac{w}{\left(w^{2}\right)^{1/3}}$ $w\left(w^{2}\right)^{2/3}$ $w^{3}\left(w^{2}\right)^{2/3}$ $w^{0}$ $\frac{1}{\left(w^{2}\right)^{7/3}}$ $\frac{1}{\left(w^{2}\right)^{4/3}}$ $\frac{1}{\left(w^{2}\right)^{1/3}}$ $\left(w^{2}\right)^{2/3}$ $\left(w^{2}\right)^{5/3}$ $\left(w^{2}\right)^{8/3}$ $w^{1}$ $\frac{1}{w^{3}\left(w^{2}\right)^{1/3}}$ $\frac{1}{w\left(w^{2}\right)^{1/3}}$ $\frac{w}{\left(w^{2}\right)^{1/3}}$ $w\left(w^{2}\right)^{2/3}$ $w^{3}\left(w^{2}\right)^{2/3}$ $w^{5}\left(w^{2}\right)^{2/3}$ $w^{2}$ $\frac{1}{\left(w^{2}\right)^{4/3}}$ $\frac{1}{\left(w^{2}\right)^{1/3}}$ $\left(w^{2}\right)^{2/3}$ $\left(w^{2}\right)^{5/3}$ $\left(w^{2}\right)^{8/3}$ $\left(w^{2}\right)^{11/3}$ $w^{3}$ $\frac{1}{w\left(w^{2}\right)^{1/3}}$ $\frac{w}{\left(w^{2}\right)^{1/3}}$ $w\left(w^{2}\right)^{2/3}$ $w^{3}\left(w^{2}\right)^{2/3}$ $w^{5}\left(w^{2}\right)^{2/3}$ $w^{7}\left(w^{2}\right)^{2/3}$ : **Unflawed** results **of Appendix rewrite rules** for 1st row $\times$ 1st column.\ Compare with Tables \[Flo:MathematicaDefaultTable\] through \[Flo:MaximaFullratsimpTableAndMapleSimplifyTable\] ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- $\downarrow\!\!\!\overrightarrow{\times}$ ------------------------------------------- ------------------------------------------- --------------------------- ------------------------------------------- --------------------------- ------------------------------------------- --------------------------- ------------------------------------------ --------------------------- ------------------------------------------ --------------------------- 
: ***Mathematica*** **8 default** simplification for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmAtable\].[]{data-label="Flo:MathematicaDefaultTable"}

: ***Mathematica*** **8 FullSimplify\[...\]** for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmAtable\].[]{data-label="Flo:MathematicaFullSimplifyTable"}

: ***Derive*** **6 default** simplify for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmAtable\].[]{data-label="Flo:DeriveDefaultTable"}

: **TI-CAS 3.1 default** simplify for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmAtable\].[]{data-label="Flo:TICASDefaultTable"}

: **Maple 15 and Maxima 5.24 default** simplification for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmAtable\].[]{data-label="Flo:MaximaAndMapleDefaultTable"}
: **Maxima 5.24 fullratsimp(...)** and **Maple 15 simplify(...)** for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmAtable\].[]{data-label="Flo:MaximaFullratsimpTableAndMapleSimplifyTable"}

| $\downarrow\!\!\!\overrightarrow{\times}$ | $\left(\frac{1}{w^{2}}\right)^{-5/2}$ | $\left(\frac{1}{w^{2}}\right)^{-3/2}$ | $\left(\frac{1}{w^{2}}\right)^{-1/2}$ | $\left(\frac{1}{w^{2}}\right)^{1/2}$ | $\left(\frac{1}{w^{2}}\right)^{3/2}$ | $\left(\frac{1}{w^{2}}\right)^{5/2}$ |
|---|---|---|---|---|---|---|
| $w^{-3}$ | $\frac{w}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}\,w$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w}$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w^{3}}$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w^{5}}$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w^{7}}$ |
| $w^{-2}$ | $\frac{1}{\left(\frac{1}{w^{2}}\right)^{3/2}}$ | $\frac{1}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}$ | $\left(\frac{1}{w^{2}}\right)^{3/2}$ | $\left(\frac{1}{w^{2}}\right)^{5/2}$ | $\left(\frac{1}{w^{2}}\right)^{7/2}$ |
| $w^{-1}$ | $\frac{w^{3}}{\sqrt{\frac{1}{w^{2}}}}$ | $\frac{w}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}\,w$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w}$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w^{3}}$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w^{5}}$ |
| $w^{0}$ | $\frac{1}{\left(\frac{1}{w^{2}}\right)^{5/2}}$ | $\frac{1}{\left(\frac{1}{w^{2}}\right)^{3/2}}$ | $\frac{1}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}$ | $\left(\frac{1}{w^{2}}\right)^{3/2}$ | $\left(\frac{1}{w^{2}}\right)^{5/2}$ |
| $w^{1}$ | $\frac{w^{5}}{\sqrt{\frac{1}{w^{2}}}}$ | $\frac{w^{3}}{\sqrt{\frac{1}{w^{2}}}}$ | $\frac{w}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}\,w$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w}$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w^{3}}$ |
| $w^{2}$ | $\frac{1}{\left(\frac{1}{w^{2}}\right)^{7/2}}$ | $\frac{1}{\left(\frac{1}{w^{2}}\right)^{5/2}}$ | $\frac{1}{\left(\frac{1}{w^{2}}\right)^{3/2}}$ | $\frac{1}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}$ | $\left(\frac{1}{w^{2}}\right)^{3/2}$ |
| $w^{3}$ | $\frac{w^{7}}{\sqrt{\frac{1}{w^{2}}}}$ | $\frac{w^{5}}{\sqrt{\frac{1}{w^{2}}}}$ | $\frac{w^{3}}{\sqrt{\frac{1}{w^{2}}}}$ | $\frac{w}{\sqrt{\frac{1}{w^{2}}}}$ | $\sqrt{\frac{1}{w^{2}}}\,w$ | $\frac{\sqrt{\frac{1}{w^{2}}}}{w}$ |

: **Unflawed** results of **Appendix rewrite rules** for 1st row $\times$ 1st column. Compare with Tables \[Flo:MathematicaDefaultSqrtReciprocalTable\] through \[Flo:MapleSimplifySqrtReciprocal\].[]{data-label="Flo:AlgorithmASqrtReciprocalTable"}

: ***Mathematica*** **8 default** simplify for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:MathematicaDefaultSqrtReciprocalTable"}

: ***Mathematica*** **8 FullSimplify\[...\]** for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:MathematicaFullSimplifySqrtReciprocalTable"}

: ***Derive*** **6 default** simplify for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:DeriveSqrtReciprocalTable"}
: **TI-CAS and Maple default simplification** for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:TIandMapleDefaultSqrtReciprocal"}

: **Maxima 5.24 default** simplify for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:MaximaDefaultSqrtReciprocal"}

: **Maxima 5.24 fullratsimp(...)** for 1st row $\times$ 1st column, with flaw numbers. Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:MaximaFullratsimpSqrtRecip"}
$w^{3}\mathrm{csgn}\left(\frac{1}{w}\right)$ $w\,\mathrm{csgn}\left(\frac{1}{w}\right)$ $\frac{\mathrm{csgn}\left(\frac{1}{w}\right)}{w}$ $\frac{\mathrm{csgn}\left(\frac{1}{w}\right)}{w^{3}}$ $w^{3}$ $w^{8}\mathrm{csgn}\left(\frac{1}{w}\right)$ $w^{6}\mathrm{csgn}\left(\frac{1}{w}\right)$ $w^{4}\mathrm{csgn}\left(\frac{1}{w}\right)$ $w^{2}\mathrm{csgn}\left(\frac{1}{w}\right)$ $\mathrm{csgn}\left(\frac{1}{w}\right)$ $\frac{\mathrm{csgn}\left(\frac{1}{w}\right)}{w^{2}}$ : **Unflawed** results of **Maple simplify(...)** for 1st row 1st column – a variant of form 4.\ Compare with Table \[Flo:AlgorithmASqrtReciprocalTable\].[]{data-label="Flo:MapleSimplifySqrtReciprocal"} system transformations --------------- ------------------------ ------------------------------- ------------- Appendix rewrite rules $\frac{\sqrt{w^{2}}}{w}$ *Mathematica* default *$\frac{w}{\sqrt{w^{2}}}$* **2**, 6, 8 *Mathematica* FullSimplify(...) *$\frac{w}{\sqrt{w^{2}}}$* 6, 8 *Derive* default $\frac{\sqrt{w^{2}}}{w}$ TI-CAS default *$\frac{w}{\sqrt{w^{2}}}$* **2**, 6, 8 Maple default *$\frac{w}{\sqrt{w^{2}}}$* **2**, 6, 8 Maxima default *$\frac{w}{\sqrt{w^{2}}}$* **2**, 6, 8 Maxima fullratsimp(...) *$\frac{w}{\sqrt{w^{2}}}$* 6, 8 Maple simplify(...) $\mathrm{csgn}\left(w\right)$ : Simplification of $w/\sqrt{w^{2}}$, with flaw numbers[]{data-label="Flo:WonSqrtWSquaredTable"} ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- system and transformation --------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------- Appendix rewrite rules $-i^{\left(-2\,\mathrm{Arg}\left[w\right]+\mathrm{Arg}\left[w^{2}\right]\right)/\pi}+\frac{\sqrt{w^{2}}}{w}$ **2** *Mathematica* default $-i^{\left(-2\,\mathrm{Arg}\left[w\right]+\mathrm{Arg}\left[w^{2}\right]\right)/\pi}+\frac{\sqrt{w^{2}}}{w}$ **2** *Mathematica* FullSimplify(...) $-i^{\left(-2\mathrm{\, Arg}\left[w\right]+\mathrm{Arg}\left[w^{2}\right]\right)/\pi}+\frac{\sqrt{w^{2}}}{w}$ **2** *Derive* default $\frac{\sqrt{w^{2}}}{w}-\mathrm{IF}\left(w=0,\:0,\:\left(-1\right)^{\left(\mathrm{PHASE}\left(w^{2}\right)-2\mathrm{\, PHASE}\left(w\right)\right)/(2\pi)}\right)$ **2** TI-CAS, default *$\frac{\sqrt{w{}^{2}}}{w}-e^{\pi i\begin{cases} **2** 0, & w=0\\ \frac{\left(\mathrm{angle}\left(w^{2}\right)-2\,\mathrm{angle}(w)\right)1/2}{\pi} & \mathrm{else}\end{cases}}$* Maple default *$\frac{w}{\sqrt{w^{2}}}-\left(-1\right)^{\frac{1}{2}\frac{\mathrm{argument}\left(w^{2}\right)-\mathrm{2\, argument}\left(z\right)}{\pi}}$* **2** Maxima default *$\frac{w}{\sqrt{w^{2}}}-\left(-1\right)^{\frac{\mathrm{atan2}\left(\sin\left(2\,\mathrm{carg}\left(w\right)\right),\cos\left(2\,\mathrm{carg}\left(w\right)\right)\right)-2\,\mathrm{carg}\left(w\right)}{2\pi}}$* **2** Maxima fullratsimp(...) 
*$\frac{\sqrt{w^{2}}\left(-1\right)^{\frac{\mathrm{carg}\left(w\right)}{\pi}}-w\left(-1\right)^{\frac{\mathrm{atan2}\left(\sin\left(2\mathrm{\, carg}\left(w\right)\right),\cos\left(2\,\mathrm{carg}\left(w\right)\right)\right)}{2\pi}}}{w\left(-1\right)^{\frac{\mathrm{carg}\left(w\right)}{\pi}}}$* **2** Maple simplify(...) *$\mathrm{csgn}\left(w\right)-\left(-1\right)^{\frac{1}{2}\frac{\mathrm{argument}\left(w^{2}\right)-2\,\mathrm{argument}\left(z\right)}{\pi}}$* **2** ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- : Simplification of $\sqrt{w^{2}}/w-(-1)^{\left(1/2\right)\left(\arg(w^{2})-2\arg(w)\right)/\pi}$, with flaw numbers.[]{data-label="Flo:Form1MinusForm4Table"} [^1]: dstout at hawaii dot edu [^2]: The computer algebra embedded in a succession of TI handheld calculators, Windows and Macintosh computers has no name independent of the product names, the most recent of which is TI-Nspire^tm^. [^3]: I am guilty as a coauthor of *Derive* and TI-computer algebra. [^4]: By default some systems assume that indeterminates represent *real* values and/or use the *real* branch wherein for reduced integers $m$ and $n$, $(-1)^{m/n}\rightarrow1$ for $m$ even, and $(-1)^{m/n}\rightarrow-1$ for $m$ and $n$ odd. However, most computer algebra systems provide a way to force the principal branch if it isn’t the default – and to declare that an indeterminate is complex if that isn’t the default. [^5]: It is of course audacious to define undefined. Although unnecessary for this article, systems could usefully also - display 0/0 as 0/0 rather than a vague controversial word such as “undefined”, and - contract functions of 0/0 to strict subsets of the complex plane wherever possible, such as $\arg(0/0)\rightarrow(-\pi,\pi]$. Having $\arg(0/0)\rightarrow0/0$ snatches defeat from the jaws of compromise. Try this on your systems! Many systems throw an error, which is worse because it requires even amateur authors of functions to know about all the potential throws, catch them or vet to prevent them, and respond appropriately to make their functions robust. [^6]: Canceling a gcd occasionally increases bulk significantly, such as $(x^{99}-1)/(x-1)\rightarrow x^{98}+x^{97}+\cdots+x+1,$ but the algorithms described here consider only *syntactic* cancellation, which always decreases bulk. [^7]: This happens for Tables \[Flo:MaximaDefaultSqrtReciprocal\] and \[Flo:MaximaFullratsimpSqrtRecip\]: *All* of the columns would exhibit flaw 2 if one of the two equivalent expressions was always taken from the correct results in columns 3 or 4. The results are not equivalent to the inputs for columns 1, 2, 5 and 6, so the only reason the difference simplified to 0 for columns 1 and 6 was the subtraction of incorrect but identical results – an instance where two wrongs make a right. [^8]: This is *not* generally equivalent to $\left(u^{m}\right)^{1/n}$: “be faithful to your roots” – Mason Cooley. [^9]: Jeffrey [@Jeffrey] uses the unwinding function to generalize csgn to a $C_{n}$ that works *for all* fractional powers. If and when implemented in Maple, that will avoid unwelcome mixtures of $\mathrm{csgn}(\ldots)$ with other form 4 notations for results containing both half-integer and other nested powers. 
[^10]: If you are familiar enough with those systems, then most of them probably have a quick way to generate all of the results for test families 1 and 2 by entries analogous to the following one for *Mathematica*:$$\mathtt{Table\,}[\mathtt{Table\,}[w^{j}(w^{2})^{k},\left\{ k,\,-7/3,\,8/3\right\}], \left\{ j,\,-3,\,3\right\}]\;//\mathtt{TableForm}$$ I am interested in knowing your results.

[^11]: Do your computer algebra system’s default and optional transformations de-nest $\left(w^{\beta}\right)^{\gamma}$ for such $\beta$, $\gamma$, and $w$ declared non-negative?

[^12]: In a degenerate case, one or more of the pieces of pie might be a ray – very dietetic.

[^13]: For TI-CAS, $\ln(0)\rightarrow\mathrm{undef}$. An error is inconveniently thrown by Maple for $\ln(0)$ and by Maxima for log(0).
[**Quasinormal Modes of Charged Scalars around Dilaton Black Holes in 2+1 Dimensions: Exact Frequencies**]{}

Sharmanthie Fernando [^1]\
[*Department of Physics & Geology*]{}\
[*Northern Kentucky University*]{}\
[*Highland Heights*]{}\
[*Kentucky 41099*]{}\
[*U.S.A.*]{}\

[**Abstract**]{}

We have studied the charged scalar perturbation around a dilaton black hole in 2+1 dimensions. The wave equation of a massless charged scalar field is shown to be exactly solvable in terms of hypergeometric functions. The quasinormal frequencies are computed exactly. The relation between the quasinormal frequencies and the charge of the black hole, the charge of the scalar and the temperature of the black hole is analyzed. The asymptotic form of the real part of the quasinormal frequencies is evaluated exactly.

[*Key words*]{}: Static, Charged, Dilaton, Black Holes, Quasinormal modes

Introduction
============

When a black hole is perturbed by an external field, the dynamics of the scattered waves can be described in three stages [@frolov]. The first corresponds to the initial wave, which will depend on the source of the disturbance. The second corresponds to the quasinormal modes with complex frequencies. Such modes are called quasinormal, in contrast to normal modes, since they are damped oscillations. The values of the quasinormal modes are independent of the initial disturbance and only depend on the parameters of the black hole. The focus of this paper is to analyze quasinormal modes of a charged scalar around a dilaton black hole in 2+1 dimensions. The last stage of perturbations is described by a power-law tail behavior of the corresponding field in some cases.

In recent times, there has been extensive work done to compute quasinormal modes (QNM) and to analyze them in various black hole backgrounds. A good review is Kokkotas et al. [@kokko]. One of the reasons for the attention on QNM’s is the conjecture relating anti-de-Sitter space (AdS) and conformal field theory (CFT) [@aha]. It is conjectured that the imaginary part of the QNM’s, which gives the time scale for the decay of black hole perturbations, also corresponds to the time scale for the conformal field theory (CFT) on the boundary to reach thermal equilibrium. There are many works on AdS black holes on this subject [@horo] [@car1] [@moss] [@wang]. Also, if signals due to QNM’s are detected by the gravitational wave detectors, one may be able to identify the charges of black holes and obtain a deeper understanding of the inner structure of the black holes in nature. A recent review on QNM’s and gravitational wave astronomy written by Ferrari and Gualtieri discusses such possibilities [@ferr].

There are many papers on the study of perturbations of black holes by neutral scalars. However, when a charged black hole is formed by the gravitational collapse of charged matter, one expects perturbations by charged fields to develop outside the black hole. Hence, it is worthwhile to study charged scalar field perturbations. The late time evolution of a charged scalar in the gravitational collapse of charged matter to form Reissner-Nordstrom black holes was analyzed by Hod and Pirani [@hod1] [@hod2] [@hod3]. QNM’s of a massive charged scalar field around the Reissner-Nordstrom black hole were studied by Konoplya [@kono1]. The decay of the charged scalar and the Dirac field around the Kerr-Newman-de-Sitter black hole was studied by Konoplya and Zhidenko in [@kono2]. In [@kono3], the decay of massless charged scalars around a variety of black holes in four dimensions was studied by Konoplya.
To the author’s knowledge, most of the work on QNM’s of black holes in four and higher dimensions is numerical, except for a few cases. The few we are aware of are the massless topological black hole calculation done by Aros et al. [@aros], the exact frequencies computed for gravitational perturbations of topological black holes in [@birs], and the QNM computations for de Sitter space in [@ort1] [@ort2]. However, in 2+1 dimensions, QNM’s can be computed exactly due to the nature of the wave equations. In particular, the well known BTZ black hole [@banados] has been studied with exact results [@bir1] [@bir2] [@car2] [@abd]. The QNM’s of the neutral scalars around the dilaton black hole were computed exactly in [@fer1]. The Dirac QNM’s for the dilaton black hole were computed in [@ort3]. In this paper we take a step further by studying QNM’s of a charged scalar around dilaton black holes in 2+1 dimensions, which leads to exact results. To the author’s knowledge, all work related to QNM’s of charged scalars has been done numerically.

Extensions of the BTZ black hole with charge have led to much interesting work. The first investigation was done by Banados et al. [@banados]. Due to the logarithmic nature of the electromagnetic potential, these solutions give rise to unphysical properties [@chan1]. The horizonless static solution with magnetic charge was studied by Hirshmann et al. [@hirsh] and the persistence of these unphysical properties was highlighted by Chan [@chan1]. Kamata et al. [@kamata] presented a rotating charged black hole with self (anti-self) duality imposed on the electromagnetic fields. The resulting solutions were asymptotic to an extreme BTZ black hole solution but had diverging mass and angular momentum [@chan1]. Clement [@clem], and Fernando and Mansouri [@fer3], introduced a Chern-Simons term as a regulator to screen the electromagnetic potential and obtained horizonless charged particle-like solutions.

In this paper we consider an interesting class of black hole solutions obtained by Chan and Mann [@chan2]. The solutions represent static charged black holes with a dilaton field. They are solutions to the low-energy string action. Furthermore, they have finite mass, unlike some of the charged black holes described above.

We have organized the paper as follows: In section 2 an introduction to the geometry of the black hole is given. The charged scalar perturbation of the black hole is given in section 3. The general solution to the wave equation is given in section 4. The solution with boundary conditions is given in section 5. The QNM frequencies of the black hole are computed and analyzed in detail in section 6. Finally, the conclusion is given in section 7.

Geometry of the static charged dilaton black hole
==================================================

In this section we will present the geometry and important details of the static charged black hole. The Einstein-Maxwell-dilaton action which leads to these black holes, considered by Chan and Mann [@chan2], is given as follows: $$S = \int d^3x \sqrt{-g} \left[ R - 4 (\bigtriangledown \phi )^2 - e^{-4 \phi} F_{\mu \nu} F^{\mu \nu} + 2 e^{4 \phi} \Lambda \right]$$ Here, $\Lambda$ is treated as the cosmological constant. In [@chan2], it was discussed that black hole solutions exist only for $ \Lambda > 0$. Hence, throughout this paper we will treat $\Lambda > 0$. The parameter $\phi$ is the dilaton field, $R$ is the scalar curvature and $F_{\mu \nu}$ is the Maxwell field strength in the action.
This action is conformally related to the low-energy string action in 2+1 dimensions. The static circularly symmetric solution to the above action is given by, $$ds^2= - f(r)dt^2 + \frac{4 r^2 dr^2}{f(r)} + r^2 d \theta^2$$ $$f(r) =\left( -2Mr + 8 \Lambda r^2 + 8 Q^2 \right); \hspace{0.1cm} \phi = \frac{1}{4} ln (\frac{r}{\beta}) ; \hspace{1.0cm}F_{rt} = \frac{Q}{r^2}$$ For $M \geq 8 Q \sqrt{\Lambda}$, the space-time represents a black hole. It has two horizons given by the zeros of $g_{tt}$: $$r_+ = \frac{M + \sqrt{ M^2 - 64 Q^2 \Lambda}}{8 \Lambda}; \hspace{1.0cm} r_- = \frac{M - \sqrt{ M^2 - 64 Q^2 \Lambda}}{8 \Lambda}$$ There is a singularity at $r=0$ and it is time-like. Note that in the presence of a non-trivial dilaton, the space-time geometry of the black hole does not behave as either de-Sitter ($\Lambda < 0$) or anti-de-Sitter ($\Lambda > 0$) [@chan2].

An important thermodynamical quantity corresponding to a black hole is the Hawking temperature $T_H$. It is given by, $$T_H= \frac{1}{4 \pi} |\frac{dg_{tt}}{dr}| \sqrt{-g^{tt} g^{rr}} |_{r=r_+} = \frac{M}{4 \pi r_+} \sqrt{ 1 - \frac{64 Q^2 \Lambda}{M^2}}$$ The temperature $T_H=0$ for the extreme black hole with $M= 8 Q \sqrt{\Lambda}$. For the uncharged black hole, $T_H = \frac{\Lambda}{ \pi}$.

This black hole is also a solution to the low-energy string action by the conformal transformation, $$g^{String} = e^{4 \phi} g^{Einstein}$$ In string theory, it is possible to create charged solutions from uncharged ones by duality transformations. For a review of such transformations see Horowitz [@horo1]. It is possible to apply such transformations to the uncharged black hole with charge $Q =0$ in the metric in eq.(2) to obtain the charged black hole with $Q \neq 0$. Such a duality was discussed in detail in the paper by Fernando [@fer1].

Charged scalar perturbation of dilaton black holes
===================================================

We will develop the equations for a charged scalar field in the background of the static charged dilaton black hole in this section. The general equation for a massless charged scalar field in curved space-time can be written as, $$\bigtriangledown ^{\mu} \bigtriangledown_{\mu} \Phi + (i e)^2 A^{\mu} A_{\mu} \Phi - 2 i e A^{\mu} \partial_{\mu} \Phi - i e \Phi \bigtriangledown^{\mu} A_{\mu} =0$$ Using the ansatz, $$\Phi = e^{ i m \theta} \frac{\eta(t,r)} { \sqrt{r}}$$ eq.(6) simplifies to, $$\frac{ \partial^2 \eta(t,r) }{ \partial t^2} - \frac{ \partial^2 \eta(t,r) }{ \partial r_{*}^2} + \frac{ 2 i e Q}{r} \frac{\partial \eta(t,r)} { \partial t} + V(r) \eta(t,r) =0$$ Here, $V(r)$ is given by, $$V(r) = \frac{f(r) } { 2 r^{3/2} } \frac{d}{dr} \left( \frac{ f(r) } { 4 r^{3/2} } \right) + \frac{ m^2 f(r)}{r^2} - \frac{ e^2 Q^2 } { r^2}$$ and $r_{*}$ is the tortoise coordinate computed as, $$dr_{*} = \frac{2 r dr}{f(r)} \Rightarrow r_* = \frac{ 1}{ 4 \Lambda (r_+ - r_-)} \left( r_+ ln( r - r_+) - r_- ln( r - r_-) \right)$$ Note that when $r \rightarrow r_+$, $r_* \rightarrow - \infty$ and for $r \rightarrow \infty$, $ r_* \rightarrow \infty $. The function $f(r)$ is given by eq.(2) in section 2.
By substituting the function $f(r)$ into eq.(9), one can obtain a simplified version of the potential $V(r)$ as, $$V(r) = -\frac{12 Q^4}{r^4} + \frac{ 4 M Q^2}{r^{3}} + \frac{1}{r^2} \left( - \frac{M^2}{4} + 8 m^2 Q^2 - 8 Q^2 \Lambda - e^2 Q^2 \right) - \frac{ 2 m^2 M}{ r} + ( 8 m^2 \Lambda + 4 \Lambda^2)$$ Note that if the function $\eta(t,r)$ is redefined as, $$\eta(t,r) = e^{-i \omega t} \xi( r_{*} )$$ the wave equation simplifies to, $$\left(\frac{d^2 }{dr_*^2} + \omega^2 + \frac{ 2 e Q \omega } { r} - V(r) \right) \xi(r_*) = 0$$ It is clear that if $e = 0$, eq.(13) becomes a Schrödinger-type equation with a potential $V_{e=0}(r)$ given by, $$V_{e=0}(r) = -\frac{12 Q^4}{r^4} + \frac{ 4 M Q^2}{r^{3}} + \frac{1}{r^2} \left( - \frac{M^2}{4} + 8 m^2 Q^2 - 8 Q^2 \Lambda \right) - \frac{ 2 m^2 M}{ r} + ( 8 m^2 \Lambda + 4 \Lambda^2)$$

[Figure 1. The behavior of the potentials $V(r)$ and $V_{e=0}(r)$ with $r$ for $\Lambda=2$, $M=120$, $Q=3$, $m=2$ and $e = 6$. The dark curve represents $V(r)$ and the light curve represents $V_{e=0}(r)$]{}\

The potentials are plotted in Fig. 1. The greater the value of the charge $e$ of the scalar, the smaller the peak of the potential.

General solution to the charged scalar wave equation
=====================================================

In order to find exact solutions to the wave equation for the charged scalar, we will revisit eq.(6) in section 3. Using the ansatz, $$\Phi = e^{- i \omega t} e^{i m \theta} R(r)$$ eq.(6) leads to the radial equation, $$\frac{d}{dr} \left( \frac{f(r)}{2} \frac{dR(r)}{dr} \right) + 2r^2 \left( \frac{\omega^2}{f(r)} - \frac{m^2}{r^2} \right) R(r) - \frac{ 4 e Q \omega r R(r) } { f(r) } + \frac{ 2 e^2 Q^2 R(r) }{ f(r) }=0$$ In order to solve the wave equation exactly, one can redefine the $r$ coordinate of eq.(16) with a new variable $z$ given by, $$z = \left( \frac{ r - r_+}{ r - r_-} \right)$$ Note that in the new coordinate system, $z = 0$ corresponds to the horizon $r_+$ and $z = 1$ corresponds to infinity. With the new coordinate, eq.(16) becomes, $$z(1-z) \frac{d^2 R}{dz^2} + (1-z) \frac{d R}{dz} + P(z) R =0$$ Here, $$P(z) = \frac{A}{z} + \frac{B}{-1+z} + C$$ where, $$A= \frac{(r_+ \omega -e Q )^2}{ 16 (r_+- r_-)^2 \Lambda^2}; \hspace{1.0cm} B = \frac{ 8m^2 \Lambda - \omega^2} { 16 \Lambda^2}; \hspace{1.0cm} C = - \frac{(r_- \omega - e Q )^2}{16 (r_+- r_-)^2 \Lambda^2}$$ Now, if $R(z)$ is redefined as, $$R(z) = z^{\alpha} (1-z)^{\beta} F(z)$$ the radial equation given in eq.(18) becomes, $$z(1-z) \frac{d^2 F}{dz^2} + \left(1 + 2 \alpha - (1+ 2 \alpha + 2\beta )z \right) \frac{d F}{dz} + \left(\frac{\bar{A}}{z} + \frac{\bar{B}}{-1+z} + \bar{C} \right) F =0$$ where, $$\bar{A} = A + \alpha^2$$ $$\bar{B} = B + \beta - \beta^2$$ $$\bar{C} = C -(\alpha + \beta)^2$$ The above equation resembles the hypergeometric differential equation, which is of the form [@math], $$z(1-z) \frac{d^2 F}{dz^2} + (c - (1+a + b )z) \frac{d F}{dz} -ab F =0$$ By comparing the coefficients of eq.(22) and eq.
(24), one can obtain the following identities, $$c = 1+ 2 \alpha$$ $$a+b = 2 \alpha + 2 \beta$$ $$\bar{A}=A + \alpha^2 =0; \Rightarrow \alpha= \pm \frac{ i (r_+ \omega - e Q)}{ 4 \Lambda ( r_+ - r_-)}$$ $$\bar{B} = B + \beta - \beta^2=0; \Rightarrow \beta = \frac{1 + i \sqrt{ \frac{ \omega^2 - 8 m^2 \Lambda}{4 \Lambda^2} - 1 } }{2}$$ $$ab = -\bar{C} = (\alpha + \beta)^2 - C$$ From eq.(26) and eq.(29), $$a= \alpha + \beta + \gamma$$ $$b= \alpha + \beta -\gamma$$ Here, $$\gamma= \sqrt{C} = \pm \frac{ i (r_- \omega - eQ )}{ 4 \Lambda ( r_+ - r_-)}$$ With the above values for $a$, $b$, and $c$, the hypergeometric function $F(z)$ is given by [@math], $$F(a,b,c;z) = \frac{\Gamma(c)} {\Gamma(a) \Gamma(b)} \sum_{n=0}^{\infty} \frac{ \Gamma(a+n) \Gamma( b+n)}{ \Gamma(c+n)} \frac{z^n}{n!}$$ with the radius of convergence given by the unit circle $|z| =1$. Hence the general solution to the radial part of the charged scalar wave equation is given by, $$R(z) = z^{\alpha} (1-z)^{\beta} F(a,b,c;z)$$ with $a$, $b$, and $c$ given in the above equations. The general solution to the charged scalar wave equation is, $$\Phi( z, t, \theta) = z^{\alpha} (1-z)^{\beta} F(a,b,c;z) e^{ i m \theta} e^ { - i \omega t}$$

Solution with boundary conditions
=================================

In this section we will obtain solutions for the charged scalar with the boundary condition that the wave is purely ingoing at the horizon. The solutions are analyzed near the horizon and at infinity to obtain exact results for the wave function.

Solution at the near-horizon region
------------------------------------

First, the solution of the wave equation close to the horizon is analyzed. For the charged black hole, $$z = \frac{(r- r_+)}{ ( r - r_-)}$$ and as the radial coordinate $r$ approaches the horizon, $z$ approaches $0$. In the neighborhood of $z=0$, the hypergeometric equation has two linearly independent solutions given by [@math] $$F(a,b;c;z) \hspace{1.0cm} and \hspace{1.0cm} z^{(1-c)} F(a-c+1,b-c+1;2-c;z)$$ Substituting the values of $a,b,c$ in terms of $\alpha$, $\beta$, and $\gamma$, the general solution for $R(z)$ can be written as, $$R(z) = C_1 z^{\alpha} (1-z)^{\beta} F(\alpha + \beta + \gamma, \alpha+\beta - \gamma, 1+ 2 \alpha, z)$$ $$+ C_2 z^{-\alpha}(1-z)^{\beta} F( -\alpha + \beta + \gamma,-\alpha+\beta - \gamma,1-2 \alpha, z)$$ Here, $C_1$ and $C_2$ are constants to be determined. Before proceeding any further, we want to point out that the above equation is symmetric under $ \alpha \leftrightarrow - \alpha$. Note that in eq.(27), $\alpha$ could have both $\pm$ signs. Due to the above symmetry in eq.(37), we will choose the “+" sign for $\alpha$ for the rest of the paper. Since $z \rightarrow 0$ close to the horizon, the above solution in eq.(37) approaches, $$R(z \rightarrow 0) = C_1 z^{\alpha} + C_2 z^{-\alpha}$$ Close to the horizon, $r \rightarrow r_+$. Hence, $z$ can be approximated by $$z \approx \frac{ r - r_+}{r_+ - r_-}$$ The “tortoise” coordinate for the charged black hole is given in eq.(10).
Near the horizon $r \rightarrow r_+$, the “tortoise” coordinate can be approximated to be $$r_* \approx \frac{r_+}{4 \Lambda ( r_+ - r_-)} ln( r - r_+)$$ Hence, $$r - r_+ = e^{ \frac{ 4 \Lambda ( r_+ - r_-)}{r_+} r_* }$$ leading to, $$z \approx \frac{ r - r_+}{ r_+ - r_-} = \frac{ 1 }{ (r_+ - r_-)} e^{ \frac{ 4 \Lambda ( r_+ - r_-)}{r_+} r_* }$$ Hence eq.(38) can be re-written in terms of $r_*$ as, $$R(r \rightarrow r_+) = C_1 \left(\frac{ 1} { r_+ - r_-} \right)^{\alpha} e ^{ i \hat{\omega} r_*} + C_2 \left(\frac{1}{ r_+ - r_-} \right)^{ - \alpha} e^{ -i \hat{\omega} r_*}$$ To obtain the above expression, $\alpha$ is substituted from eq.(27) and $$\hat{\omega} = \omega - \frac{ e Q} { r_+}$$ The first and the second term in eq.(43) corresponds to the outgoing and the ingoing wave respectively. Now, one can impose the condition that the wave is purely ingoing at the horizon. Hence we pick $C_1 = 0$ and $C_2 \neq 0$. Therefore the solution closer to the horizon is, $$R(z \rightarrow 0 ) = C_2 z^{-\alpha} (1-z)^{\beta} F(-\alpha + \beta + \gamma, -\alpha+\beta - \gamma, 1 - 2 \alpha, z)$$ Solution at asymptotic region ----------------------------- Now the question is what the wave equation is when $r \rightarrow \infty$. For large $r$, the function $f(r) \rightarrow 8 \Lambda r^2$. When $f(r)$ is replaced with this approximated function in the wave equation given by eq.(16), it simplifies to, $$\frac{d}{dr} \left( 4 \Lambda r^2 \frac{dR(r)}{dr} \right) + 2r^2 \left( \frac{\omega^2}{ 8 \Lambda r^2} - \frac{m^2}{r^2} \right) R(r) - \frac{ e Q \omega R(r) } { 2 \Lambda r } + \frac{ e^2 Q^2 R(r) }{ 4 \Lambda r^2}=0$$ For large $r$, one can neglect the last two terms in the above equation. Hence finally, the wave equation at large $r$ can be expanded to be, $$r^2 R'' + 2 r R' + p R =0$$ where, $$p = \frac{\omega^2} { 16 \Lambda^2} - \frac{m^2}{2 \Lambda}$$ One can observe that $ p = -B$ from eq.(20). Also eq.(47) is the well known Euler equation with the solution, $$R(r) = D_1 \left( \frac{r_+ - r_-}{r} \right)^{a_1} + D_2 \left( \frac{r_+ - r_- }{r} \right)^{a_2}$$ with, $$a_1= \frac{ 1 + \sqrt{ 1 - 4 p} }{2} = \beta; \hspace{1.0 cm} a_2 = \frac{ 1 - \sqrt{ 1 - 4 p} }{2} = (1- \beta)$$ The expression for $\beta$ is given in eq.(28). Note that the form in eq.(49) is chosen to facilitate to compare it with the matching solutions in section 5.3. Matching the solutions at the near horizon and the asymptotic region -------------------------------------------------------------------- In this section we match the asymptotic solution given in eq.(49) to the large $r$ limit (or the $z \rightarrow 1$ ) of the near-horizon solution given in eq.(45) to obtain an exact expression for $D_1$ and $D_2$. To obtain the $z \rightarrow 1$ behavior of eq. 
(45), one can perform a well known transformation on hypergeometric function given as follows [@math] $$F(a,b,c,z) = \frac{ \Gamma(c) \Gamma(c-a-b)}{\Gamma(c-a) \Gamma(c-b)} F(a,b;a+b-c+1;1-z)$$ $$+(1-z)^{c-a-b}\frac{ \Gamma(c) \Gamma(a+b-c)}{\Gamma(a) \Gamma(b)} F(c-a,c-b;c-a-b+1;1-z)$$ Applying this transformation to eq.(45) and substituting for the values of $a,b,c$, one can obtain the solution to the wave equation in the asymptotic region as follows; $$R(z) = C_2 z^{-\alpha} (1-z)^{\beta} \frac{ \Gamma(1 - 2 \alpha) \Gamma(1 - 2 \beta)}{\Gamma(1 - \alpha - \beta - \gamma) \Gamma( 1 - \alpha - \beta + \gamma)} F( -\alpha + \beta + \gamma, -\alpha + \beta -\gamma; 2 \beta ;1-z)$$ $$+ C_2 z^{-\alpha} (1-z)^{1 - \beta} \frac{ \Gamma( 1 - 2 \alpha ) \Gamma( -1 + 2 \beta )}{\Gamma( -\alpha + \beta + \gamma) \Gamma( -\alpha + \beta - \gamma)} F( 1 - \alpha - \beta -\gamma, 1 - \alpha - \beta + \gamma ; 2 - 2 \beta;1-z)$$ Now we can take the limit of $R(z)$ as $ z \rightarrow 1$ ( or $r \rightarrow \infty$) which will lead to, $$R(z \rightarrow 1) = C_2 (1-z)^{\beta} \frac{ \Gamma(1 - 2 \alpha) \Gamma(1 - 2 \beta)}{\Gamma(1 - \alpha - \beta - \gamma) \Gamma( 1 - \alpha - \beta + \gamma)}$$ $$+ C_2 (1-z)^{1 - \beta} \frac{ \Gamma( 1 - 2 \alpha ) \Gamma( -1 + 2 \beta )}{\Gamma( -\alpha + \beta + \gamma) \Gamma( -\alpha + \beta - \gamma) }$$ Note that we have replaced $F(a,b,c,1- z)$ and $z^{\alpha}$ with 1 when $z$ approaches 1. Since, $$1 - z = \frac{r_+ - r_-}{r- r_-},$$ for large $r$, the above can be approximated with, $$1 - z \approx \frac{r_+ - r_-}{r }$$ By replacing $1 - z$ with the above expression in eq.(55), $R(r)$ for large $r$ can be written as, $$R(r \rightarrow \infty) = C_2 \left(\frac{r_+ - r_-}{r }\right)^{\beta} \frac{ \Gamma(1 - 2 \alpha) \Gamma(1 - 2 \beta)}{\Gamma(1 - \alpha - \beta - \gamma) \Gamma( 1 - \alpha - \beta + \gamma}$$ $$+ C_2 \left(\frac{r_+ - r_-}{r }\right)^{1 - \beta} \frac{ \Gamma( 1 - 2 \alpha ) \Gamma( -1 + 2 \beta )}{\Gamma( -\alpha + \beta + \gamma) \Gamma( -\alpha + \beta - \gamma) }$$ By comparing eq.(49) and eq.(56), the coefficients $D_1$ and $D_2$ can be written as, $$D_1 = C_2 \frac{ \Gamma(1 - 2 \alpha) \Gamma(1 - 2 \beta)}{\Gamma(1 - \alpha - \beta + \gamma) \Gamma( 1 - \alpha - \beta - \gamma) }$$ $$D_2 = C_2 \frac{ \Gamma( 1 - 2 \alpha ) \Gamma( -1 + 2 \beta )}{\Gamma( -\alpha + \beta + \gamma) \Gamma( -\alpha + \beta - \gamma )}$$ To determine which part of the solution in eq.(49) corresponds to the “ingoing” and “outgoing” respectively, we will first find the tortoise coordinate $r_{*}$ in terms of $r$ at large r. Note that for large $r$, $f(r) \rightarrow 8 \Lambda r^2 $. Hence the equation relating the tortoise coordinate $r_*$ and $r$ in eq.(10) simplifies to, $$dr_{*} = \frac{ dr}{ 4 \Lambda r}$$ The above can be integrated to obtain, $$r_{*} \approx \frac{1}{4 \Lambda} ln( \frac{ r}{r_+} )$$ Hence, $$r \approx r_+ e^{ 4 \Lambda r_*}$$ Substituting $r$ from eq.(61) and $\beta$ from eq.(28) into the eq.(49), $R(r \rightarrow \infty)$ is rewritten as, $$R(r \rightarrow \infty ) \rightarrow D_1 \left( \frac{r_+ - r_-}{r_+}\right)^{\beta} e ^{ -i \omega r_* \sqrt{1 - \frac{ 4 \Lambda^2}{ \omega^2} ( \frac{2 m^2}{\Lambda} +1) } - 2 \Lambda r_*}$$ $$+ D_2 \left( \frac{r_+ - r_-}{r_+}\right)^{1 - \beta} e ^{ i \omega r_* \sqrt{1 - \frac{ 4 \Lambda^2}{ \omega^2} ( \frac{2 m^2}{\Lambda} +1) } - 2 \Lambda r_*}$$ From the above it is clear that the first term and the second term represents the ingoing and outgoing waves respectively. 
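For readers who wish to evaluate the matched solution numerically, the quantities assembled above are straightforward to code. The following is a minimal sketch, assuming Python with the `mpmath` library; the black hole parameters are those quoted for Figure 1, while the trial frequency and the evaluation point $z$ are purely illustrative. It builds $\alpha$, $\beta$ and $\gamma$ from eqs.(27), (28) and (31) and evaluates the ingoing near-horizon solution of eq.(45):

```python
# Minimal numerical sketch (not the author's code): evaluate the ingoing
# near-horizon solution R(z) of eq.(45) using the Gauss hypergeometric 2F1.
from mpmath import mp, mpc, sqrt, hyp2f1

mp.dps = 30                              # working precision

Lam, M, Q, m_ang, e = 2.0, 120.0, 3.0, 2, 6.0   # values quoted for Figure 1
omega = mpc(10.0, -1.0)                  # illustrative trial (complex) frequency

disc = sqrt(M**2 - 64*Q**2*Lam)
rp, rm = (M + disc)/(8*Lam), (M - disc)/(8*Lam)        # horizons r_+, r_-

alpha = 1j*(rp*omega - e*Q)/(4*Lam*(rp - rm))          # eq.(27), "+" sign
beta  = (1 + 1j*sqrt((omega**2 - 8*m_ang**2*Lam)/(4*Lam**2) - 1))/2   # eq.(28)
gamma = 1j*(rm*omega - e*Q)/(4*Lam*(rp - rm))          # eq.(31)

def R_in(z, C2=1.0):
    """Purely ingoing solution of eq.(45), up to the constant C2."""
    return C2 * z**(-alpha) * (1 - z)**beta * hyp2f1(-alpha + beta + gamma,
                                                     -alpha + beta - gamma,
                                                     1 - 2*alpha, z)

print(R_in(0.5))
```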
Quasinormal modes of the dilaton black hole
===========================================

Quasinormal modes of a classical perturbation of black hole space-times are defined as the solutions to the related wave equations with purely ingoing waves at the horizon. In addition, one has to impose boundary conditions on the solutions in the asymptotic region. In asymptotically flat space-times, the second boundary condition is that the solution be purely outgoing at spatial infinity. For non-asymptotically flat space-times, there are two possible boundary conditions to impose at sufficiently large distances from the black hole horizon: one is for the field to vanish at large distances, and the other is for the flux of the field to vanish far from the horizon. Here, we will choose the first. This is the condition imposed in reference [@fer1]. Another example in 2+1 dimensions where the vanishing of the field at large distances is imposed is given in reference [@bir1], where QNM’s of scalar perturbations of BTZ black holes were computed exactly.

Let’s consider the field $R(r)$ at large distances given by eq.(56). Clearly the second term vanishes when $ r \rightarrow \infty$. This also can be seen from eq.(52), where the second term vanishes for $ z \rightarrow 1$. Since $C_2$ is not zero, the first term vanishes only at the poles of the Gamma functions $\Gamma(1 - \alpha - \beta + \gamma)$ or $\Gamma( 1 - \alpha - \beta - \gamma)$. Note that the Gamma function $\Gamma(x)$ has poles at $ x = - n$ for $ n = 0,1,2,\ldots$. Hence, to obtain QNM’s, one of the following relations has to hold, $$1- \alpha - \beta - \gamma = - n$$ or $$1 - \alpha - \beta + \gamma = -n$$ The above two equations lead to two possibilities for $\beta$ as follows, $$\beta = ( 1 + n) - \alpha \pm \gamma$$ We want to recall here that $\gamma$ in eq.(31) could have both signs. Due to the nature of eq.(65), there is no need to choose a specific sign to proceed from here. The two possibilities lead to two equations for $\beta$ given by, $$\beta = ( 1 + n) - \frac{ i \omega } { 4 \Lambda }$$ and $$\beta = ( 1 + n) - i ( \kappa_1 \omega - \kappa_2 e Q )$$ where, $$\kappa_1 = \frac{ 1 }{ 4 \Lambda} \left( \frac{ r_+ + r_-}{ r_+ - r_-} \right); \hspace{1 cm} \kappa_2 = \frac{ 1} { 2 \Lambda ( r_+ - r_- ) }$$ By combining the above equations with eq.(28), given by, $$\frac{m^2}{ 2 \Lambda} - \frac{ \omega^2}{ 16 \Lambda^2} = - \beta + \beta^2$$ one can obtain the quadratic equation for $\omega$ given by, $$\omega^2 \left( \frac{1}{ ( 16 \Lambda^2 \kappa_1^2 - 1)} \right) + \omega \left( i( 2n + 1 ) \kappa_1 - 2 \kappa_1 \kappa_2 e Q \right) + \left( \frac{m^2}{ 2 \Lambda} - n^2 - n - i ( 2n +1) \kappa_1 e Q + (\kappa_2 e Q )^2 \right) = 0$$ Note that the $\beta$ in eq.(66) corresponds to the QNM’s of the neutral scalars for the uncharged black hole with $Q=0$, leading to $r_- = 0$. Hence, by taking $r_-= 0$ in $\kappa_1$, one recovers the quadratic equation for the neutral scalar for $ Q = 0 $. One can solve the above quadratic equation to obtain exact values of the QNM frequencies $\omega$. There are three cases one can consider: QNM’s of neutral scalars (for $Q = 0$ and $ Q \neq 0$) and of charged scalars. The QNM’s of the neutral scalars were analyzed in detail in the paper by Fernando [@fer1]. We will nevertheless state the results in order to compare them with the QNM’s of the charged scalars in the following section.
QNM frequencies of neutral scalars with $e = 0$
-------------------------------------------------

By letting $e=0$ in the quadratic equation given above, one can solve it for $\omega$ as discussed in [@fer1]. First, one can consider the QNM’s for the uncharged black hole with $Q =0$. The solution for $\omega$ is given as, $$\omega = \frac{-2 i}{ 2n +1} \left( 2 \Lambda n (1+n) - m^2 \right)$$ They are purely imaginary. Due to the minus sign in front, these oscillations will be damped, leading to stable perturbations for $2 \Lambda n (1+n) > m^2$. However, for $2 \Lambda n (1+n) < m^2$, the oscillations would lead to unstable modes. This was pointed out in [@ort3]. One can also compute the QNM’s of the neutral scalar for the charged dilaton black hole with $Q \neq 0$ as, $$\omega= \frac{-i}{ ( 16 \Lambda^2 \kappa_1^2 - 1)} \left( 8 \Lambda^2 \kappa_1(1+2 n) + 2 \sqrt{ 2m^2 \Lambda ( 16 \Lambda^2 \kappa_1^2 - 1) + 4 \Lambda^2 (4 \Lambda^2 \kappa_1^2 + n^2 + n )} \right)$$ Note that $16 \Lambda^2 \kappa_1^2 >1$ and $\omega$ will always be purely imaginary. Also, due to the minus sign in front, these oscillations will be damped, leading to stable neutral scalar perturbations.

QNM’s of the charged scalar with $ e \neq 0$
--------------------------------------------

Now, one can solve eq.(70) to obtain the exact results for the QNM frequencies of the charged scalar as, $$\omega= \frac{1}{ ( 16 \Lambda^2 \kappa_1^2 - 1)} \left( -i 8 \Lambda^2 \kappa_1(1+2 n) + 16 e \kappa_1 \kappa_2 Q \Lambda^2 - \right.$$ $$\left. 2 i \sqrt{ 2m^2 \Lambda ( 16 \Lambda^2 \kappa_1^2 - 1) + 4 \Lambda^2 (4 \Lambda^2 \kappa_1^2 + n^2 + n ) - 4 e^2 \kappa_2^2 Q^2 \Lambda^2 + 4 i e \kappa_2 Q \Lambda^2 ( 2 n + 1)} \right)$$ $\omega$ is not purely imaginary in this case: it has a real part which depends on $e$. For $e \rightarrow 0$, the above QNM approaches the values for the neutral scalar in eq.(72). To separate the real part and the imaginary part of $\omega$, the part inside the square root is rewritten as follows. Let the parameters $z_1$, $z_2$, $\rho$ and $Z$ be defined as, $$z_1 = 2m^2 \Lambda ( 16 \Lambda^2 \kappa_1^2 - 1) + 4 \Lambda^2 (4 \Lambda^2 \kappa_1^2 + n^2 + n ) - 4 e^2 \kappa_2^2 Q^2 \Lambda^2$$ $$z_2 = 4 e \kappa_2 Q \Lambda^2 ( 2 n + 1)$$ $$Z = \sqrt{ z_1^2 + z_2^2 }$$ $$\rho = \tan^{-1} \left( \frac{z_2}{z_1} \right)$$ Then, $\omega = \omega_{real} + i \omega_{imaginary}$ can be separated with, $$\omega_{real} = \frac{1}{ ( 16 \Lambda^2 \kappa_1^2 - 1)} \left(16 e \kappa_1 \kappa_2 Q \Lambda^2 + 2 \sqrt{ Z} \sin( \rho/2) \right)$$ $$\omega_{imaginary} = \frac{1}{ ( 16 \Lambda^2 \kappa_1^2 - 1)} \left( -8 \Lambda^2 \kappa_1(1+2 n) - 2 \sqrt{Z} \cos( \rho/2) \right)$$ When $ e \rightarrow 0$, $ \rho \rightarrow 0$, which leads to $\omega_{real} \rightarrow 0$ as expected.
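As a quick numerical illustration, the closed-form expressions above are easy to evaluate. The following Python sketch uses the black hole parameters quoted for Figure 2; the values of $Q$ and $e$ are picked for illustration only, and $\rho$ is computed with `atan2` so that the correct quadrant of $\tan^{-1}(z_2/z_1)$ is taken:

```python
# Sketch: evaluate the exact charged-scalar QNM frequency from the
# expressions for omega_real and omega_imaginary given above.
from math import sqrt, atan2, sin, cos

Lam, M, m_ang, n = 2.0, 120.0, 2, 1      # values quoted for Figure 2
Q, e = 3.0, 4.0                          # illustrative charges

disc = sqrt(M**2 - 64*Q**2*Lam)
rp, rm = (M + disc)/(8*Lam), (M - disc)/(8*Lam)        # horizons r_+, r_-

k1 = (rp + rm)/(4*Lam*(rp - rm))                       # kappa_1
k2 = 1.0/(2*Lam*(rp - rm))                             # kappa_2

z1 = (2*m_ang**2*Lam*(16*Lam**2*k1**2 - 1)
      + 4*Lam**2*(4*Lam**2*k1**2 + n**2 + n)
      - 4*e**2*k2**2*Q**2*Lam**2)
z2 = 4*e*k2*Q*Lam**2*(2*n + 1)
Z = sqrt(z1**2 + z2**2)
rho = atan2(z2, z1)

den = 16*Lam**2*k1**2 - 1
w_re = (16*e*k1*k2*Q*Lam**2 + 2*sqrt(Z)*sin(rho/2))/den
w_im = (-8*Lam**2*k1*(1 + 2*n) - 2*sqrt(Z)*cos(rho/2))/den
print(w_re, w_im)
```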
In Figure 2, $\omega_{imaginary}$ is plotted against the charge $Q$ of the black hole. It is clear that the magnitude of $\omega_{imaginary}$ is larger for the charged scalar in comparison with the neutral scalar. Hence, the neutral scalar decays more slowly than the charged scalar. Similar behavior was observed in the charged scalar decay compared to the neutral scalar in the Reissner-Nordstrom and Reissner-Nordstrom anti-de-Sitter black holes [@kono3].

[Figure 2. The imaginary part of $\omega$ vs. $Q$ for $\Lambda=2$, $M=120$, $m=2$ and $n = 1$. The dark curve represents the curve for $e = 4$ and the light curve represents the one for $ e = 0$]{}\

Next, we observe the behavior of $\omega$ vs. the charge $e$ for two different values of the black hole charge $Q$, as given in Figure 3. The higher the $Q$, the larger the $\omega_{imaginary}$. Similarly, the real part of $\omega$ is larger for large $Q$, as given in Figure 4.

[Figure 3. The imaginary part of $\omega$ vs. $e$ for $\Lambda=2$, $M=120$, $m=2$ and $n = 1$. The dark curve represents the curve for $Q = 5$ and the light curve represents the one for $ Q = 2$]{}

[Figure 4. The real part of $\omega$ vs. $e$ for $\Lambda=2$, $M=120$, $m=2$ and $n = 1$. The dark curve represents the curve for $Q = 5$ and the light curve represents the one for $ Q = 2$]{}\

In Figure 5, $\omega_{imaginary}$ is plotted vs. the temperature of the black hole. For both the neutral scalar and the charged scalar, there is a linear behavior of $\omega_{imaginary}$ vs. $T$.

[Figure 5. The imaginary part of $\omega$ vs. $T$ for $\Lambda=2$, $r_{-}=2$, $m=2$, and $n = 1$. The dark curve represents the curve for fixed $e = 2$ and the light curve represents the one for fixed $ e = 0$]{}\

In Figure 6, the behavior of $\omega_{imaginary}$ is plotted vs. the horizon radius $r_+$. It is concluded that for the same $r_+$, the neutral scalar has a smaller decay rate than the charged scalar.

[Figure 6. The imaginary part of $\omega$ vs. $r_+$ for $\Lambda=2$, $r_- = 2$, $m=2$ and $n = 1$. The dark curve represents the curve for $e = 4$ and the light curve represents the one for fixed $ e = 0$]{}\

As noted in the introduction, there are several papers focused on computing the asymptotic value of the $\omega_{real}$ of black holes with regard to the quantization of black holes. In Figure 7, $\omega_{real}$ is plotted vs. $n$. It is observed that it reaches a constant for large $n$. The asymptotic form of the real part of the QNM frequencies is computed by taking the limit of $\omega_{real}$ as $ n \rightarrow \infty$: the value is simply, $$\omega_{real} ( n \rightarrow \infty ) = \frac{ e \sqrt{ r_+ \Lambda} }{ \sqrt{r_-}}$$

[Figure 7. The real part of $\omega$ vs. $n$ for $\Lambda=2$, $r_+= 4$, $ r_- = 2$ and $m=2$. The dark curve represents the curve for $e = 4$ and the light curve represents the one for fixed $ e = 3.9$]{}\

Conclusion
==========

We have studied the perturbation of the dilaton black hole in 2+1 dimensions by a charged scalar. The wave equations are solved exactly in terms of hypergeometric functions. The QNM frequencies are computed exactly. It is observed that the QNM’s have both a real and an imaginary component. The QNM’s of the neutral scalars were purely imaginary [@fer1]. Also, it is noted that the charged scalars decay faster than the neutral scalars for a given black hole. This observation is in agreement with the behavior observed by Konoplya [@kono3] in Reissner-Nordstrom and Reissner-Nordstrom-anti-de-Sitter black holes in four dimensions. The behavior of $\omega$ with the various parameters is analyzed in detail. We observe a linear relation of $\omega_{imaginary}$ with the temperature of the black hole. Similar observations were reported for QNM frequencies in higher-dimensional AdS space in [@horo]. The asymptotic value of $\omega_{real}$ is computed to be $\frac{ e \sqrt{ r_+ \Lambda} }{ \sqrt{r_-}}$. It would be interesting to compute the greybody factors and particle emission rates for the charged scalars for this black hole. The greybody factors were studied for the neutral scalar in [@fer2].
Since the wave equation is been already solved, it should be a welcome step towards understanding the Hawking radiation from these black holes. There are few works related to such computations of charged particles: the particle emission by charged leptons from non rotating black holes by Page [@page] and emission of charged particles by four and five dimensional blackholes by Gubser and Klebanov [@gub]. Since the asymptotic values of the real part of the QNM frequencies are computed exactly, it would be interesting to study the area spectrum of these black holes along the lines of the work by Setare [@set1] [@set2]. Another interesting avenue to proceed would be to analyze the QNM’s of the extreme dilaton black hole studied in this paper. Some extreme black holes have proven to be supersymmetric. For example, the extreme Reissner-Nordstrom black hole is shown to be supersymmetric since it can be embedded in N=2 supergravity theory [@gib]. Onozawa et.al [@ono] showed that the QNM’s of the extreme Reissner-Nordstrom blackhole for spin 1, 3/2 and 2 are the same. If it is possible to find a suitable supergravity theory to embed the dilaton black hole in this paper, one may be able to observe if extremality plays a role in it. Hence it would be interesting to compute the QNM’s for the extreme dilaton black hole in 2 + 1 dimensions for the charged Dirac fields and vector fields along with the charged scalar to understand such behavior in low dimensions. The dilaton black hole considered in this paper is one of the most favorable charged black holes in 2 +1 dimensions to study many issues discussed above in a simpler setting with exact values for QNM frequencies. [99]{} “Black Hole Physics : Basic concepts and new developments”, V. P. Frolov and I. D. Novikov, Kluwer Academic Publishers, (1998) K. D. Kokkotas, B.G. Schmidt, Living Rev. Relativ. [**2**]{} (1999) 2 O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri & Y. Oz, “Large N Field Theories”, hep-th/9905111. G. T. Horowitz & V. E. Hubeny, Phys. Rev. [**D62**]{} (2000) 024027. V. Cardoso & J. P. S. Lemos, Phys. Rev. [**D64**]{} (2001) 084017 I. G. Moss & J. P. Norman, Class. Quan. Grav. [**19**]{} (2002), 2323 B. Wang, C. Lin & E. Abdalla, Phys. Lett [**B481**]{} (2000) 79 V. Ferrari and L. Gualtrier, gr-qc/0709.0657 S. Hod and T. Pirani, Phys. Rev [**D58**]{} (1998) 024017 S. Hod and T. Pirani, Phys. Rev [**D58**]{} (1998) 024018 S. Hod and T. Pirani, Phys. Rev [**D58**]{} (1998) 024019 R. A. Konoplya, Phys. Lett. [**B 550**]{} (2002) 117 R. A. Konoplya and A. Zhidenko, Phys. Rev [**D76**]{} (2007) 084018 R. A. Konoplya, Phys. Rev [**D66**]{} (2002) 084007 R. Aros, C. Martinez and R. Troncoso & J. Zanelli, Phys. Rev. [**D67**]{} (2002) 044014 D. Birmingham and S. Mokhtari, Phys.Rev. [**D74**]{} (2006) 084026 A. Lopez-Ortega, Gen,Rel. Grav. [**39**]{} (2007) 1011-1029 A. Lopez-Ortega, Gen,Rel. Grav. [**38**]{} (2006) 1565-1591 M. Bañados, C. Teitelboim, J. Zanelli, Phys. Rev. Lett. [**69**]{} (1992) 1849; M. Bañados, M. Henneaux, C. Teitelboim, J. Zanelli, Phys. Rev. D [**48**]{} (1993) 1506. D. Birmingham, Phys. Rev [**D64**]{} (2001) 064024 D. Birmingham, I. Sachs & S.N. Solodukhin, Phys. Rev .Lett. 88 (202) 151301. V. Cardoso & J.P.S. Lemos, Phys. Rev. [**D63**]{} (2001) 124015 E. Abdalla, B. Wang, A. Lima-Santos & W.G. Qiu, Phys. Lett. [**B38**]{} (2002) 435 S. Fernando, Gen. Rel. Grav. [**36** ]{} (2004) 71 A. Lopez-Ortega, Gen,Rel. Grav. [**38**]{} (2005) 167-190 K. C. K. Chan, Phys. Lett. [**B373**]{} (1996) 296 E. W. Hirshmann, D.L. 
Welch, Phys. Rev. [**D53**]{} (1996) 5579. M. Kamata and T. Koikawa, Phys. Lett[ **B 353**]{} (1995) 196 G. Clement, Phys. Lett. [**B 367**]{} (1996) 70. S. Fernando, F. Mansouri, Commun. Math. And Theo. Phys. 1 (1998) 14. K. C. K. Chan, R. B. Mann , Phys. Rev. [**D50**]{} (1994) 6385. G. Horowitz, [ *“The Dark Side of String Theory: Black Holes and Black Strings”*]{}, hep-th/9210119. “Handbook of Mathematical Functions”, M. Abramowitz and A. Stegun, Dover, (1977) G. W. Gibbons and C. M. Hull, Phys. Lett. [**B 109**]{} ( 1982) 190 H. Onozawa, T. Okumura, T. Mishima and H. Ishihara, Phys. Rev. [**D55**]{} (1997) 4529 S. Fernando, Gen. Rel. Grav. [**37** ]{} (2005) 461 D. N. Page, Phys. Rev [**D16**]{} (1977) 2402 S. S. Gubser and I. R. Klebanov, Nucl. Phys. [**b 482**]{} (1996) 173 M. Setare, Class. Quan. Grav. [**21**]{} (2004) 1453 M. Setare, Phys. Rev. [**D69**]{} (2004) 044016 [^1]: [email protected]
---
abstract: 'Research on success factors involved in the agile transformation process is not conclusive and there is still a need for guidelines to help in the transformation process considering the organizational context (culture, values, needs, reality and goals). The usage of success factors as a tool to help agile transformation raises the following research question: What are the success factors for an organization and its teams in preparation for the agile transformation process? This research presents an assessment of the organizational environment, including the company’s goals and the perception of the team members, to provide awareness of how the organization should prepare for the next steps in the agile transformation, and a single case study for the assessment validation. The findings show that a company based in Chicago, USA, succeeded in implementing customer involvement and self-organized teams but faces challenges with measurement models and training. The main contributions of the research are the assessment of agile transformation success factors and the success factors difficulty ranking to be used by other organizations in their agile transformation processes.'
author:
- Amadeu Silveira Campanelli
- Florindo Silote Neto
- Fernando Silva Parreiras
title: Assessing Agile Transformation Success Factors
---

Agile transformation $\cdot$ Success factors $\cdot$ Organizational context.
---
abstract: 'Our objective is to estimate the unknown compositional input from its output response through an unknown system, after estimating the inverse of the original system with a training set. The proposed methods using artificial neural networks (ANNs) can compete with the optimal bounds for linear systems, where convex optimization theory applies, and demonstrate promising results for nonlinear system inversions. We performed extensive experiments by designing numerous different types of nonlinear systems.'
author:
- 'Se Un Park [^1]'
bibliography:
- 'bibl05.bib'
title: Estimation for Compositional Data using Measurements from Nonlinear Systems using Artificial Neural Networks
---

Introduction
============

Compositional data is used in many fields because data in population ratios or fractions is easy to interpret. However, when the compositional data cannot be produced by simple scaling or normalization with the whole population size from the raw data or measurements, the process to produce such compositional outputs may not be straightforward.

Here, we consider noisy outputs as our observations from an unknown linear or nonlinear system with the corresponding compositional variable inputs of interest. The pairs of inputs and outputs will be used as a training set for artificial neural network (ANN) modeling to estimate the inverse of the unknown system. This trained inverse system can predict the unknown compositional input, given the output measurement coming from the original system with the input. As our approach is based on ANNs, we do not directly estimate the forward observation model, as in classical inversion theory, but the inverse of the original system. The measurements, the outputs from the original system with the compositional inputs, are then the input of our estimated inverse system, which will predict the original compositional inputs. We do not apply post-processing or ad hoc approaches, such as truncation of the estimate followed by scaling, so that the final answer is a non-negative vector that sums up to one. Rather, we directly apply non-negativity and scaling layers in the proposed ANNs.

We considered both linear observation models and several types of nonlinear models. For the linear cases, where we can theoretically analyze the optimal performance bounds, we demonstrated with our experiments that the performance of ANNs for the inversion of the linear model outputs can compete with the optimal bounds. For the nonlinear systems, where convex optimization methods are not well suited for these general cases, we could still present promising results compared to the error levels in the linear models, and we leave the comparative analysis with other feasible optimization methods for our future work.

Observation Models {#sec:models}
==================

We first define a compositional vector and then present a general observation model. Then, we will formulate more specific observation models. Examples of compositional data include population ratios, concentrations of chemicals in the air, and numerous survey statistics in percentages. We define the compositional vector ${\ensuremath{{\mathbf{m}}}}$ to be constrained such that its components are nonnegative and sum to unity. These constraints define a simplex set such that any compositional vector [${\mathbf{m}}$]{} is in the simplex set.
An $M$-dimensional simplex, or simply $M$-simplex, is defined by $$\label{eq:def_simplex} S^M = \{ (x_1, \ldots , x_M) \in {\ensuremath{{\mathbb{R}}}}^M \, : \, \sum_{i=1}^M x_i = 1, x_i \geq 0 \mbox{ for } \forall i \} .$$ Let $m_i$ be the $i$th component of a compositional column vector ${\ensuremath{{\mathbf{m}}}}$; then it can be denoted by ${\ensuremath{{\mathbf{m}}}} = {\ensuremath{{[m_1, m_2, \dots, m_M]}^{\mathsf{T}}}}$, where $\mathsf{T}$ is the transpose operator. Further decomposing it leads to ${\ensuremath{{\mathbf{m}}}} = \sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i$ in terms of its components with basis vectors ${\ensuremath{{\mathbf{e}}}}_i$, where ${\ensuremath{{\mathbf{e}}}}_i$ is the $i$th column of the $M \times M$ identity matrix $\mathbb{I}_M$. We now assume the following system $h$, i.e., a forward observation model that generates our observation [${\mathbf{s}}$]{} from the $M$-dimensional compositional input [${\mathbf{m}}$]{} such that ${\ensuremath{{\mathbf{m}}}} \in S^M$: $${\ensuremath{{\mathbf{s}}}} = h ({\ensuremath{{\mathbf{m}}}}) + {\ensuremath{{\mathbf{n}}}},$$ where ${\ensuremath{{\mathbf{s}}}} \in \mathbb{R}^L$, $h$ is a function from $S^M$ to $\mathbb{R}^L$, and [${\mathbf{n}}$]{} is additive noise[^2]. In the rest of this chapter, we define specific forms of a nonlinear system $h$ with more restrictions as we proceed, finally leading to a linear model.

General Systems
---------------

The system response from an input [${\mathbf{m}}$]{}, without noise, is $$\begin{aligned} h({\ensuremath{{\mathbf{m}}}}) &= h\left(\sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i \right) .\end{aligned}$$ The input [${\mathbf{m}}$]{} is decomposed using the trivial basis vectors ${\ensuremath{{\mathbf{e}}}}_i$. If the system behaves nonlinearly or in non-parametric ways without closed forms, then for the characterization of the system and the inversion for the input given the output, mapping or non-parametric estimation methods, such as those based on nearest neighbors of pairs of inputs and outputs, could be working solutions. Training of ANNs is also possible as a candidate mapping solution. For example, $h({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{\mathbf{A}}}}^{p({\ensuremath{{\mathbf{m}}}})} {\ensuremath{{\mathbf{m}}}} \times \exp ( - K \| {\ensuremath{{\mathbf{B}}}} {\ensuremath{{\mathbf{m}}}} \|_2 ) $ where $p({\ensuremath{{\mathbf{m}}}}) = ceil( {\ensuremath{{\mathbf{C}}}} {\ensuremath{{\mathbf{m}}}} ) $, $ceil(\cdot)$ is a ceiling operator that maps to the integer domain, [${\mathbf{A,B,C}}$]{} are dimensionally compatible matrices, and $K$ is a scalar constant.

Systems with additivity
-----------------------

### A System with partial additivity

If the system satisfies partial additivity for several groups $G_k$, each of which is a set of component indices of the input vector [${\mathbf{m}}$]{}, then $$\begin{aligned} h\left(\sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{k} h'_{G_k} \left(\sum_{i \in G_k} m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{k} h'_{G_k} \left( \{ m_i \}_{i \in G_k} \right),\end{aligned}$$ where $h'_{G_k}$ is a function of the same dimension as $h$ but specific to the group $G_k$, and $ \{ m_i \}_{i \in G_k}$ is a tuple of the components of [${\mathbf{m}}$]{} with indices in $G_k$. Note that the $G_k$s do not have to form a partition of the index set: they need not cover all indices, and the intersection of $G_k$ and $G_j$ for $k \neq j$ may be non-empty.
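As a concrete illustration of the observation model above, the following sketch shows how a noisy training set $\{({\ensuremath{{\mathbf{m}}}}_i, {\ensuremath{{\mathbf{s}}}}_i)\}$ could be synthesized. It assumes NumPy; the particular system $h$ (a simplified variant of the examples in the text), the dimensions and the noise level are illustrative choices, not the settings used in our experiments:

```python
# Sketch: draw compositional inputs on the simplex and push them through a
# toy nonlinear forward model to synthesize training pairs (m_i, s_i).
import numpy as np

rng = np.random.default_rng(0)
M_dim, L_dim, N = 5, 20, 1000          # input dim M, output dim L, sample count

A = rng.normal(size=(L_dim, M_dim))    # fixed system matrices (unknown to the estimator)
B = rng.normal(size=(3, M_dim))
K, sigma = 0.5, 0.01                   # nonlinearity strength, noise level

def h(m):
    """Toy nonlinear forward model from S^M to R^L (illustrative only)."""
    return (A @ m) * np.exp(-K * np.linalg.norm(B @ m))

# Dirichlet samples lie on the M-simplex by construction (non-negative, sum to 1).
m_train = rng.dirichlet(np.ones(M_dim), size=N)          # shape (N, M)
s_train = np.stack([h(m) for m in m_train])              # noiseless outputs
s_train += sigma * rng.normal(size=s_train.shape)        # additive noise n

print(m_train.sum(axis=1)[:3])   # each row sums to 1
```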
A special case of the partially additive system above is the multiplicative system with constant vectors ${\ensuremath{{\mathbf{h}}}}_k$ corresponding to the $k$th functions of ${\ensuremath{{\mathbf{m}}}}$, $g_k({\ensuremath{{\mathbf{m}}}})$. This can be seen as a linear system with respect to the $g_k({\ensuremath{{\mathbf{m}}}})$s: $$\begin{aligned} \label{eq:lin_g} h\left(\sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{k} {\ensuremath{{\mathbf{h}}}}_{k} g_k({\ensuremath{{\mathbf{m}}}}),\end{aligned}$$ where ${\ensuremath{{\mathbf{h}}}}_{k} \in {\ensuremath{{\mathbb{R}}}}^L $ is a constant vector independent of [${\mathbf{m}}$]{}, and $g_k$ is a nonlinear scalar function of [${\mathbf{m}}$]{}. Note that $g$ can be either invertible or non-invertible. For the special case of the latter, where $g$ is a thresholding operator, we can minimize the inevitable estimation bias by configuring the optimal (inversion) mapping rule from output to input. Refer to Appendix \[appen:noninvertible\_rule\].

This special case model can be practical because a general system on a simplex $S^M \subset [0,1]^M$ can be well approximated by a Taylor expansion if $h$ is differentiable. Even non-differentiable systems can be approximated by differentiable ones and can be decomposed. For a point ${\ensuremath{{\mathbf{a}}}} \in S^M$, a general system response is $h({\ensuremath{{\mathbf{m}}}}) \approx \sum_{k} {\ensuremath{{\mathbf{h}}}}_{k} g_k({\ensuremath{{\mathbf{m}}}}) $ with ${\ensuremath{{\mathbf{h}}}}_{k} = D^\alpha h({\ensuremath{{\mathbf{a}}}}) , g_k({\ensuremath{{\mathbf{m}}}}) = ({\ensuremath{{\mathbf{m}}}}-{\ensuremath{{\mathbf{a}}}})^\alpha / \alpha !$, for order $\alpha$; note that the notation $g_k$ is ‘loosely’ defined in relation to the order $\alpha$. For example, for ${\ensuremath{{\mathbf{m}}}} \in S^2, {\ensuremath{{\mathbf{a}}}}={\ensuremath{{\mathbf{0}}}}, \alpha=2$, $g$ in the $k$th term can be either $m_1^2/2, m_2^2/2$, or $ m_1 m_2$. For more precisely defined terms, refer to Appendix \[appen:TaylorExpansion\].

Examples of this model include the following:

- $h({\ensuremath{{\mathbf{m}}}}) = \sum_{k} {\ensuremath{{\mathbf{h}}}}_{k} g_k({\ensuremath{{\mathbf{m}}}})$ with $g_k({\ensuremath{{\mathbf{m}}}}) = m_k m_{k+1}$ for $k \in [1,M-1]$ and $g_M = m_M m_1$.

- $h({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{\mathbf{A}}}} {\ensuremath{{\mathbf{m}}}} \times \exp ( - K \| {\ensuremath{{\mathbf{B}}}} {\ensuremath{{\mathbf{m}}}} - \mu\|_2^2/2 ) $

- $h({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{g}}}}({\ensuremath{{\mathbf{m}}}})$ with ${\ensuremath{{\mathbf{g}}}}({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{[m_1 , 0.4 m_2, 0.2 m_1^2 , m_3^2 , 0.7 m_1 m_2 ]}^{\mathsf{T}}}}$

### An additive system with component-wise responses $ {\ensuremath{{\mathbf{h}}}}_i$

If additivity holds for the system and the component-wise system response depends on the composition, then we model this system as follows: $$\begin{aligned} h\left(\sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{i=1}^M h \left( m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{i=1}^M {\ensuremath{{\mathbf{h}}}}_i(m_i), \end{aligned}$$ where ${\ensuremath{{\mathbf{h}}}}_i(m_i) \in \mathbb{R}^L$ is a function of the scalar $m_i$. For the $i$th component, the system response $ {\ensuremath{{\mathbf{h}}}}_i$ depends on the composition $m_i$, for example through a change in the shape of the response.
For example, $${\ensuremath{{\mathbf{h}}}}_i(m_i) = \left( ( a_1 -a_0 ) m_i + a_0 \right) \exp\left( -K \left( {\ensuremath{{\mathbf{x}}}} - \left( (b_1 - b_0) m_i + b_0 \right) \right)^2 \right)$$ for a fixed index vector in observation $ {\ensuremath{{\mathbf{x}}}} ={\ensuremath{{[1,..., L]}^{\mathsf{T}}}}$. The peak location of this function is translated from $b_0$ to $b_1$ and the magnitude of the peak is scaled from $a_0$ to $a_1$, as $m_i$ changes from 0 to 1. ### An additive system with fixed-shape component-wise responses $ {\ensuremath{{\mathbf{h}}}}_i$ and nonlinear scaling factors If additivity holds for the system and the component-wise system response is a scaled version of a fixed shape characterized by the component, then we model this system as the following. $$\begin{aligned} \label{eq:system04} h\left(\sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{i=1}^M h \left( m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{i=1}^M g_i(m_i) h({\ensuremath{{\mathbf{e}}}}_i) = \sum_{i=1}^M g_i(m_i) {\ensuremath{{\mathbf{h}}}}_i, \end{aligned}$$ where $g_i(m_i)$ is an arbitrary scalar function on the specific component of index $i$ and ${\ensuremath{{\mathbf{h}}}}_i = h({\ensuremath{{\mathbf{e}}}}_i) \in \mathbb{R}^L$. For example, $g$ can be quadratic or piecewise continuous: $g_1(t) = t^2, g_2(t) = t^{0.3} $ where $t \in [0.2 , 0.3]$ and zero elsewhere. ### A Linear System When linearity holds for the system response, then $g$ from can be treated as an identity operator, i.e., $g(t)=t$ and $$\begin{aligned} \label{eq:linearModel_noiseless} h\left(\sum_{i=1}^M m_i {\ensuremath{{\mathbf{e}}}}_i \right) = \sum_{i=1}^M m_i {\ensuremath{{\mathbf{h}}}}_i = {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}},\end{aligned}$$ where [${\mathbf{H}}$]{} is a linear system matrix comprising ${\ensuremath{{\mathbf{h}}}}_i$ as its $i$th column. The analysis and inversion under this linearity assumption were performed in our previous work [@Park2019spe]. Systems with missing or noise compositional vector as obfuscating unknowns --------------------------------------------------------------------------- Here, we do not assume a complete knowledge of the dimension $M$ of the unknown compositional vector but we are ignorant of a partial vector in some dimensions or interested in the compositional vector except this partial vector. In other words, we consider that the whole compositional vector ${\ensuremath{{\mathbf{m}}}}^0$ comprises two components ${{\ensuremath{{\mathbf{m}}}}}, {{\ensuremath{{\mathbf{m}}}}^1}$ and the measurement forward model is $$\begin{aligned} \label{eq:sys_obf} {\ensuremath{{\mathbf{s}}}} = h ({\ensuremath{{\mathbf{m}}}}^0) + {\ensuremath{{\mathbf{n}}}} = h \left(\left[ \begin{array}{l} {\ensuremath{{\mathbf{m}}}} \\ {\ensuremath{{\mathbf{m}}}}^1 \end{array} \right] \right) + {\ensuremath{{\mathbf{n}}}} .\end{aligned}$$ We assume that we do not have knowledge of the existence of the obfucsticating unknown vector or compositional noise vector $ {\ensuremath{{\mathbf{m}}}}^1$ and equivalently we are interested in obtaining only [${\mathbf{m}}$]{}. The training set consists of pairs $ ({\ensuremath{{\mathbf{m}}}}_i, {\ensuremath{{\mathbf{s}}}}_i)$ without ${\ensuremath{{\mathbf{m}}}}^1$. In practice, such a compositional noise vector ${\ensuremath{{\mathbf{m}}}}^1$ can be from environmental effects, which are difficult to measure but still affects – even controlled – experiments. 
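A minimal numerical sketch of this forward model with an obfuscating compositional component is given below, taking a linear $h$ for concreteness; the matrices, dimensions, and noise level are illustrative assumptions, not the systems used in our experiments.

```python
import numpy as np

# Forward model with an obfuscating compositional component (Eq. [eq:sys_obf]), linear h
# for illustration: the full vector m0 = [m; m1] generates the measurement, but the
# training pairs expose only (m, s). All matrices and dimensions are illustrative.
np.random.seed(1)
M, M1, L, sigma = 3, 1, 5, 0.005
H_full = np.random.randn(L, M + M1)          # acts on the full vector m0

def observe(m, m1):
    m0 = np.concatenate([m, m1])             # m0 lies on S^{M+M1}
    return H_full @ m0 + sigma * np.random.randn(L)

m  = np.array([0.35, 0.40, 0.20])            # components of interest
m1 = np.array([0.05])                        # obfuscating (e.g., environmental) component
s = observe(m, m1)
training_pair = (m, s)                       # m1 never appears in the training data
```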
Note that this model includes a trivial but practical case where a constant bias is added to $h({\ensuremath{{\mathbf{m}}}})$ and our observation, e.g., spectral offsets from the environment such as the contribution of environmental elements in X-ray based spectroscopy. Baseline Performance Analysis for Inversion ============================================ Considering the models introduced in the last chapter, we provide analyses based on the loss functions, metrics, and obfuscating variables in this chapter. Because the inversion performance of nonlinear systems with the simplex constraint is difficult to analyze compared to linear inversion without the constraint, we provide theoretical analyses or bounds for the linear case as surrogates. Loss functions and performance metrics -------------------------------------- ### Loss function with the compositional target Ideally, we want to directly minimize some distance, as an estimation error, between the estimate and the true composition vector. In other words, the loss function in its ideal form minimizes a distance $d(\cdot,\cdot)$ between the true vector $ {\ensuremath{{\mathbf{m}}}}_{true}$ as the target and the estimated vector $\hat{{\ensuremath{{\mathbf{m}}}}} = f({\ensuremath{{\mathbf{s}}}})$ obtained from an estimator $f$ applied to the corresponding measurement [${\mathbf{s}}$]{}, as seen below. $$\begin{aligned} \label{eq:ideal_loss} L_{ideal} = d ( {\ensuremath{{\mathbf{m}}}}_{true} , \hat{{\ensuremath{{\mathbf{m}}}}} ), \end{aligned}$$ where both ${\ensuremath{{\mathbf{m}}}}_{true}$ and $\hat{{\ensuremath{{\mathbf{m}}}}}$ satisfy the simplex constraints. A system trained by minimizing this loss over a set of samples $\{{\ensuremath{{\mathbf{m}}}}_i,{\ensuremath{{\mathbf{s}}}}_i \}_i$ can produce a compositional estimate for a new measurement, but this estimation is performed by mapping the measurement as an input to the system, not by typical inversion. In this work, we optimize the mapping function $f$ by minimizing the above distance using an ANN on the training set under a given model order or hyper-parameters. The trained model retains estimated parameters such as weights and biases. Considering possible convex optimization approaches, we note that it is difficult to formulate and efficiently solve a convex loss function with an explicit form of $f$ because of the simplex conditions. For example, the typical projection onto a simplex is not a convex function. The simplex constraint is linear, but applying the boundary conditions is not always trivial, especially in high dimensional spaces [@Park2019spe]. To the best of our knowledge, efficient convex optimization algorithms guaranteeing globally optimal solutions are difficult to find. In contrast, ANNs are generally non-convex with nonlinear activation functions, but their training, if performed well, empirically yields good performance when a large number of training samples is available. ### Loss function with the measurement In practice, or when testing the inversion of a measurement [${\mathbf{s}}$]{} with the trained system, we cannot directly minimize the distance of the estimate from the true compositional input because the input is not known but is to be estimated. Therefore, many inversion methods do not use the ideal loss function of Eq. \[eq:ideal\_loss\] with the unknown [${\mathbf{m}}$]{} but adopt loss functions of the measurements and the estimated projections onto the observation domain, called projection errors. 
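The two kinds of loss can be contrasted on a toy linear case as in the sketch below; the system matrix, noise level, and candidate estimate are illustrative assumptions.

```python
import numpy as np

# Ideal loss vs. measurement-domain (projection error) loss for a toy linear system.
np.random.seed(1)
M, L = 3, 5
H = np.random.randn(L, M)
m_true = np.array([0.2, 0.5, 0.3])
s = H @ m_true + 0.005 * np.random.randn(L)

m_hat = np.array([0.25, 0.45, 0.30])            # some candidate estimate on the simplex

ideal_loss = np.linalg.norm(m_true - m_hat)     # d(m_true, m_hat): needs the (unknown) truth
surrogate_loss = np.linalg.norm(s - H @ m_hat)  # projection error: uses only the measurement
print(ideal_loss, surrogate_loss)
```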
For practical optimization using measurements only, we will use the following loss function $$\begin{aligned} \label{eq:loss_surrogate} L = \| {\ensuremath{{\mathbf{s}}}} - T({\ensuremath{{\mathbf{m}}}}) \| ,\end{aligned}$$ with $ {\ensuremath{{\mathbf{m}}}} \in S^M$ and the $\ell_2$ distance $\| \cdot \|$. The simplest case of this type of optimization is a linear system with an unconstrained domain for [${\mathbf{m}}$]{}, i.e., $T({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{\mathbf{A}}}} {\ensuremath{{\mathbf{m}}}} $ for $ {\ensuremath{{\mathbf{m}}}} \in {\ensuremath{{\mathbb{R}}}}^M$. Standard, classical linear regression methods can be used for this unconstrained optimization, minimizing the distance between the linear observation and the projection of the estimate. We note a special case where training samples are used to estimate a linear system under the simplex constraint [@Park2019spe]. That work does not cover nonlinear systems, but it shows how direct inversion is performed effectively after training a linear system whose unknowns are compositional inputs. For this simplest case with a linear system $T$, viewed as an ANN, the minimum structure is a shallow network with a single weight matrix and no bias. Ideally, this weight matrix corresponds to the pseudo-inverse of the linear system matrix [${\mathbf{A}}$]{}, denoted by ${\ensuremath{{\mathbf{A}}}}^\dagger$. However, we empirically confirmed that the ANN with this minimum order converges slowly, whereas higher-order models converge quickly while maintaining the performance. Such higher orders seem redundant at first, but we experimentally observe that they converge and perform better and more consistently throughout our different experiments. In other words, the minimum possible ANN structure may not be practically optimal. We adopted this principle in our work. ### Performance metrics {#sssec:metrics} For fair comparisons of different methods, we use the following metrics: $e$ (average of the $\ell_2$ distances of the errors) and $aad$ (average of the absolute deviations, or errors) in percent (%). $$\begin{aligned} \label{eq:l2error} e &= \frac{1}{N} \sum_i^N \| {\ensuremath{{\mathbf{m}}}}_{i,true} - \hat{{\ensuremath{{\mathbf{m}}}}}_i \|_2 \times 100 , \\ \label{eq:l1error} aad &= \frac{1}{N} \sum_i^N | {\ensuremath{{\mathbf{m}}}}_{i,true} - \hat{{\ensuremath{{\mathbf{m}}}}}_i | \times 100 ,\end{aligned}$$ where $N$ is the sample size and $|{\ensuremath{{\mathbf{x}}}}|$ is the vector of component-wise absolute values, i.e., $[|x_1|, ..., |x_M|]$. Benchmark performance in linear systems --------------------------------------- ### Inversion with the knowledge of the dimension of unknowns Here, we assume a linear system $T={\ensuremath{{\mathbf{H}}}}$ to produce a closed-form metric as a (surrogate) benchmark performance. Also, we assume complete knowledge of the dimension $M$ of the unknown compositional vector. We assume that [${\mathbf{H}}$]{} is full-rank and overdetermined, $M < L$, so ${\ensuremath{{({\ensuremath{{\mathbf{H}}}}}^{\mathsf{T}}}} {\ensuremath{{\mathbf{H}}}}) ^{-1} $ is well defined. Let ${\ensuremath{{\mathbf{H}}}} = {\ensuremath{{\mathbf{U}}}} {\ensuremath{{\mathbf{S}}}} {\ensuremath{{{\ensuremath{{\mathbf{V}}}}}^{\mathsf{T}}}}$ by singular value decomposition and $diag({\ensuremath{{\mathbf{S}}}}) = {\ensuremath{{[ s_1, ..., s_M]}^{\mathsf{T}}}}$, where $diag()$ is an operator that vectorizes a matrix by extracting its diagonal entries. 
Let $ {\ensuremath{{\mathbf{H}}}}^\dagger := ({\ensuremath{{{\ensuremath{{\mathbf{H}}}}}^{\mathsf{T}}}} {\ensuremath{{\mathbf{H}}}}) ^{-1} {\ensuremath{{{\ensuremath{{\mathbf{H}}}}}^{\mathsf{T}}}} $ be the pseudo-inverse of ${\ensuremath{{\mathbf{H}}}} $. The expected error in the $\ell_2$ norm, $d_{oracle,uc}$, on the unconstrained domain for ${\ensuremath{{\mathbf{m}}}}$ is calculated as follows. $$\begin{aligned} \label{eq:oracle_est_error2} d_{oracle,uc}^2 &= {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{m}}}} - {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{s}}}} \|^2 = {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{n}}}} \|^2 = \sigma^2 \, tr\left( ({\ensuremath{{{\ensuremath{{\mathbf{H}}}}}^{\mathsf{T}}}} {\ensuremath{{\mathbf{H}}}})^{-1} \right) = \sigma^2 \sum_k^M s_k^{-2}, \end{aligned}$$ where $tr()$ is the trace operator. Therefore, the equation becomes $$\begin{aligned} \label{eq:oracle_est_error} d_{oracle,uc} = \sigma \sqrt{\sum_k^M s_k^{-2} }, \end{aligned}$$ ### Inversion with missing or noise compositional vector as obfuscating unknowns If we know that there can be obfuscating variables, then the standard simplex constraint for the estimated portion ${\ensuremath{{\mathbf{m}}}}$ should be relaxed; we then have a sum-to-less-than-or-equal-to-one constraint instead of sum-to-one. Without knowing the dimension of the missing or obfuscating variables, or when simply ignoring such variables, we can re-define the estimation error for a composition vector $\hat{{\ensuremath{{\mathbf{m}}}}} \in S^M$ with respect to the partial true vector [${\mathbf{m}}$]{} of interest, without the noise vector $ {\ensuremath{{\mathbf{m}}}}^1$ in Eq. \[eq:sys\_obf\], by normalizing [${\mathbf{m}}$]{} so that it satisfies the simplex constraint. $$\label{eq:ideal_loss_obs} L_{ideal}^2 = \| \frac{{\ensuremath{{\mathbf{m}}}}}{\|{\ensuremath{{\mathbf{m}}}}\|_1} - \hat{{\ensuremath{{\mathbf{m}}}}} \|_2^2$$ We provide an analysis of the impact of an obfuscating vector on inversion for linear systems. The observation model equation can be rewritten as $$\begin{aligned} {\ensuremath{{\mathbf{s}}}} = h ({\ensuremath{{\mathbf{m}}}}^0) + {\ensuremath{{\mathbf{n}}}} = \|{\ensuremath{{\mathbf{m}}}}\|_1 {\ensuremath{{\mathbf{H}}}} \left( \frac{{\ensuremath{{\mathbf{m}}}}}{ \|{\ensuremath{{\mathbf{m}}}}\|_1} \right) + {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 + {\ensuremath{{\mathbf{n}}}} = {\ensuremath{{\mathbf{H}}}}'{\ensuremath{{\mathbf{m}}}}' + {\ensuremath{{\mathbf{n}}}}' , \end{aligned}$$ where $ {\ensuremath{{\mathbf{H}}}}' = c {\ensuremath{{\mathbf{H}}}} , c=\|{\ensuremath{{\mathbf{m}}}}\|_1 \in (0,1], {\ensuremath{{\mathbf{m}}}}' = {\ensuremath{{\mathbf{m}}}} / \|{\ensuremath{{\mathbf{m}}}}\|_1 \in S^M, {\ensuremath{{\mathbf{n}}}}' = {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 + {\ensuremath{{\mathbf{n}}}}$. Therefore, [*in practice, without knowledge of even the existence of an obfuscating vector of missing variables, we seek a solution in a simplex where the linear system matrix is scaled by an additional unknown factor $\|{\ensuremath{{\mathbf{m}}}}\|_1$, from a measurement mixed with the perturbed noise ${\ensuremath{{\mathbf{n}}}}'$*]{}. The effective noise ${\ensuremath{{\mathbf{n}}}}'$ is generally centered at a non-zero vector and may even be correlated, even if ${\ensuremath{{\mathbf{n}}}}$ is zero-mean and uncorrelated, because of the unknown system ${\ensuremath{{\mathbf{H}}}}^1$ and the obfuscating vector ${\ensuremath{{\mathbf{m}}}}^1$. The obfuscating vector can be treated as either a fixed unknown or a stochastic quantity, which leads to correlated effective noise ${\ensuremath{{\mathbf{n}}}}'$. The loss $L$ is defined as the following. 
$$L^2 = \| {\ensuremath{{\mathbf{s}}}} - {\ensuremath{{\mathbf{H}}}}\hat{{\ensuremath{{\mathbf{m}}}}} \|_2^2$$ Without knowing ${\ensuremath{{\mathbf{H}}}}^1$, to obtain $\hat{{\ensuremath{{\mathbf{m}}}}}$, a ‘myopic’ estimator uses only [${\mathbf{H}}$]{}, which is either given or estimated. A simple myopic estimator is $ \hat{{\ensuremath{{\mathbf{m}}}}} = P ( {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{s}}}} ) $, where $P({\ensuremath{{\mathbf{x}}}}) = P_s( P_t({\ensuremath{{\mathbf{x}}}}))$ projects any nonzero vector ${\ensuremath{{\mathbf{x}}}} \in \mathbb{R}^M$ to $S^M$, $P_t({\ensuremath{{\mathbf{x}}}}) = [ .. \max(0,x_i) ..] $ is a thresholding opereator, $P_s({\ensuremath{{\mathbf{x}}}}) = {\ensuremath{{\mathbf{x}}}} / \| {\ensuremath{{\mathbf{x}}}} \|_1$ is a scaling operator. The expected squared loss with an unconstrained pseudo-inverse of ${\ensuremath{{\mathbf{H}}}}$ without projection $P$ is $$\begin{aligned} {\ensuremath{{\mathbb{E}}}}L^2 &= {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{s}}}} - {\ensuremath{{\mathbf{H}}}}\hat{{\ensuremath{{\mathbf{m}}}}} \|_2^2 = {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{P}}}}_\perp {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 + {\ensuremath{{\mathbf{P}}}}_\perp {\ensuremath{{\mathbf{n}}}} \|^2 = \| {\ensuremath{{\mathbf{P}}}}_\perp {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 \|^2 + {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{P}}}}_\perp {\ensuremath{{\mathbf{n}}}} \|^2 \\ &= \| {\ensuremath{{\mathbf{P}}}}_\perp {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 \|^2 + \sigma^2 tr( {\ensuremath{{\mathbf{P}}}}_\perp) = \| {\ensuremath{{{\ensuremath{{\mathbf{U}}}}_1}^{\mathsf{T}}}} {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 \|^2 + (M' - M) \sigma^2 \\ & \leq \|{\ensuremath{{\mathbf{m}}}}^1\|_1 \lambda( {\ensuremath{{{\ensuremath{{\mathbf{U}}}}_1}^{\mathsf{T}}}} {\ensuremath{{\mathbf{H}}}}^1 ) + (M' - M) \sigma^2 , \end{aligned}$$ where ${\ensuremath{{\mathbf{P}}}} = {\ensuremath{{\mathbf{A}}}}{\ensuremath{{\mathbf{A}}}}^\dagger$ is a projection matrix of [${\mathbf{A}}$]{}, ${\ensuremath{{\mathbf{P}}}}_\perp = {\ensuremath{{\mathbf{I}}}} - {\ensuremath{{\mathbf{P}}}} = {\ensuremath{{\mathbf{I}}}} - {\ensuremath{{\mathbf{U}}}} {\ensuremath{{{\ensuremath{{\mathbf{U}}}}}^{\mathsf{T}}}} = {\ensuremath{{\mathbf{U}}}}_1 {\ensuremath{{{\ensuremath{{\mathbf{U}}}}_1}^{\mathsf{T}}}}$ is a orthogonal projection matrix of [${\mathbf{A}}$]{}, ${\ensuremath{{\mathbf{A}}}} = {\ensuremath{{\mathbf{U}}}} {\ensuremath{{\mathbf{S}}}} {\ensuremath{{{\ensuremath{{\mathbf{V}}}}}^{\mathsf{T}}}}$ by SVD, ${\ensuremath{{\mathbf{U}}}}_1$ have orthogonal basis vectors with which [${\mathbf{U}}$]{} span $\mathbb{R}^{M'}$ with $M'$ being the sum of the dimensions of ${\ensuremath{{\mathbf{m}}}}$ and ${\ensuremath{{\mathbf{m}}}}^1$ (${\ensuremath{{\mathbf{m}}}}^0 \in \mathbb{R}^{M'}$), ${\ensuremath{{\mathbf{n}}}}$ follows Gaussian distribution with mean zero and covariance matrix $\sigma^2 {\ensuremath{{\mathbf{I}}}}$, $\lambda({\ensuremath{{\mathbf{A}}}})$ is the largest eigenvalue of [${\mathbf{A}}$]{}, and $tr(\cdot)$ is a trace operator. 
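A small sketch of this myopic estimator and the projection $P = P_s(P_t(\cdot))$ for a toy linear case is given below; the obfuscating system ${\ensuremath{{\mathbf{H}}}}^1$, the vector ${\ensuremath{{\mathbf{m}}}}^1$, and the dimensions are illustrative assumptions.

```python
import numpy as np

# Myopic estimator m_hat = P(H^+ s): ignore the obfuscating system H1, then project onto S^M.
np.random.seed(2)
M, M1, L, sigma = 3, 1, 5, 0.005
H  = np.random.randn(L, M)                         # known (or estimated) system matrix
H1 = np.random.randn(L, M1)                        # unknown obfuscating system (assumed here)
m, m1 = np.array([0.3, 0.4, 0.2]), np.array([0.1]) # m0 = [m; m1] lies on S^{M+M1}
s = H @ m + H1 @ m1 + sigma * np.random.randn(L)

def project_simplex_by_rescaling(x):
    x = np.maximum(x, 0.0)                         # P_t: thresholding at zero
    return x / x.sum()                             # P_s: rescaling to sum to one

m_hat = project_simplex_by_rescaling(np.linalg.pinv(H) @ s)
m_ref = m / np.abs(m).sum()                        # normalized partial truth, Eq. [eq:ideal_loss_obs]
print(np.linalg.norm(m_ref - m_hat) * 100)         # error in percent
```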
The squared estimation error for the unconstrained estimate $\hat{{\ensuremath{{\mathbf{m}}}}} = {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{s}}}}$ is $$\begin{aligned} \label{eq:obfuscated_est_error2} {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{m}}}} - \hat{{\ensuremath{{\mathbf{m}}}}}\|^2 & = {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{n}}}}' \|^2 = \| {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 \|^2 + {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{n}}}} \|^2 = \| {\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{H}}}}^1 {\ensuremath{{\mathbf{m}}}}^1 \|^2 + \sigma^2 \sum_k^M s_k^{-2} \\ & \leq \|{\ensuremath{{\mathbf{m}}}}^1 \|_1 \lambda^2({\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{H}}}}^1) + \sigma^2 \sum_k^M s_k^{-2} . \end{aligned}$$ This error has an additional obfuscating term $\|{\ensuremath{{\mathbf{m}}}}^1 \|_1 \lambda^2({\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{H}}}}^1)$ compared to Eq. \[eq:oracle\_est\_error2\]. We note that this error converges to Eq. \[eq:oracle\_est\_error2\] when the obfuscating variables become negligible (${\ensuremath{{\mathbf{m}}}}^1 \rightarrow {\ensuremath{{\mathbf{0}}}} $) or the system for these variables has a negligible effect (${\ensuremath{{\mathbf{H}}}}^1 \approx {\ensuremath{{\mathbf{0}}}}$). Experiments =========== We perform experiments based on examples of the models described in Section \[sec:models\]. We start from the simple models and proceed to more complex and nonlinear models. Design and implementations -------------------------- We implemented the designed simulations using Python 3.5 and extensively experimented with several objective functions, structures, tuning strategies, and different nonlinear and non-negative activation functions in the ANNs. First, to train ANNs efficiently and to generalize better, we include some redundancy in the structure. Indeed, minimal structures may not guarantee a good convergence rate and sometimes fail to converge due to sensitivity, e.g., linear systems modeled using only weights linking the input and output directly. Further redundancy to avoid overfitting, such as dropout layers, was tried but not used in our experiments because it did not improve the estimation or had little effect. Batchnorm layers are inserted between layers for efficient training. To obtain compositional vectors as outputs of our estimators, we added a simplex projection, which is nonconvex, to the last layer of our ANNs. Here, we apply only a rescaling of the vector, dividing it by the sum of the vector components obtained from the previous layer, because the chosen activation function of that layer already guarantees non-negativity. We note that optimization of ANNs is a generally non-convex procedure, but one with rich empirical guidelines to avoid local minima and achieve satisfactory performance. As the objective function to minimize, we use the mean squared ($\ell_2$) distance between the ANN output $ANN({\ensuremath{{\mathbf{m}}}})$ and [${\mathbf{s}}$]{} in the loss function to optimize the ANNs, after trying different distances such as the mean absolute distance (using the $\ell_1$ distance), mean absolute percentage distance, categorical cross-entropy, soft-max types, etc. We empirically confirmed that the $\ell_2$ distance achieves the best performance in terms of low estimation bias and fast convergence rate. Among many optimizers and packages, we adopted the Adam optimizer for ANN training [@Kingma2014] after experimenting with other optimizers such as SGD, RMSProp, Adagrad, and Nadam in the Keras package [@Keras2015]. 
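As one concrete instance of this design, the sketch below assembles such an inversion network with the Keras API (written here against tf.keras): dense layers with batch normalization and sigmoid activations, a ReLU output layer followed by a rescaling "projection" onto the simplex, a mean squared ($\ell_2$) loss, and the Adam optimizer. The layer widths, learning rate, and toy data are illustrative assumptions; the configurations actually used are given in the following sections.

```python
import numpy as np
import tensorflow as tf

M, L = 3, 5  # illustrative dimensions

def build_inversion_ann(L, M, width):
    s_in = tf.keras.Input(shape=(L,))
    x = tf.keras.layers.Dense(width)(s_in)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation("sigmoid")(x)
    x = tf.keras.layers.Dense(width)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.Activation("sigmoid")(x)
    x = tf.keras.layers.Dense(M, activation="relu")(x)            # non-negative outputs
    m_out = tf.keras.layers.Lambda(
        lambda v: v / (tf.reduce_sum(v, axis=-1, keepdims=True) + 1e-12))(x)  # rescale onto S^M
    model = tf.keras.Model(s_in, m_out)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3), loss="mse")       # l2-type loss
    return model

# Toy training data for a linear system s = Hm + n (illustrative only).
np.random.seed(3)
H = np.random.randn(L, M)
m_train = np.random.dirichlet(np.ones(M), size=10000)             # uniform samples on the simplex
s_train = m_train @ H.T + 0.005 * np.random.randn(10000, L)

ann = build_inversion_ann(L, M, width=4 * M)
ann.fit(s_train, m_train, epochs=5, batch_size=64, verbose=0)
m_hat = ann.predict(s_train[:5])
e = np.mean(np.linalg.norm(m_train[:5] - m_hat, axis=1)) * 100    # metric e of Eq. [eq:l2error]
```

The final Lambda layer implements the rescaling projection described above; the preceding ReLU activation guarantees the non-negativity it relies on.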
Also, we tried many tuning strategies, and the tuned parameters are mostly the default values: $\beta_1=0.9, \beta_2=0.999$, and a decay rate of 0.01. The learning rates and batch sizes depend on the experiment and range from $10^{-6}$ to $10^{-3}$ and from 64 to $N_{training}$, respectively. During training we monitored the validation errors so that overfitted parameters are not used in testing. We evaluated the performance mainly using compositional samples drawn according to the uniform distribution on a simplex, because this is the most scattered distribution, having the highest entropy under the volume measure. However, we added several tests with compositional samples drawn according to a mixture of concentrated distributions and the uniform distribution. Simple linear systems --------------------- We perform experiments on linear systems with low-dimensional spaces of observations and unknowns. We set $L = 5, M = 3, N = 10000$ (the number of training samples), and $N_{test} = 10000$ (the number of testing samples). Thus, even if we do not know the system function, which in this case is the multiplicative system matrix, we know its dimension, and the matrix is estimated using the training data. We simulated the linear system matrix [${\mathbf{H}}$]{} so that each of its entries was generated according to the standard Gaussian distribution. The training and test sets of compositional vectors are generated uniformly on the simplex $S^M$ [@Onn2009]. Let ${\ensuremath{{\mathbf{X}}}}_{train}$ and $ {\ensuremath{{\mathbf{X}}}}_{test}$ be the matrices whose columns are the true label (compositional) vectors in the training and test sets, respectively. The realistic linear model can be described with additive noise as follows: $$\begin{aligned} \label{eq:linearModel} {\ensuremath{{\mathbf{s}}}} = {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} + {\ensuremath{{\mathbf{n}}}} ,\end{aligned}$$ where [${\mathbf{n}}$]{} is a noise vector. The additive noise vector in Eq. \[eq:linearModel\] is generated such that each of its entries follows a zero-mean Gaussian distribution with standard deviation $\sigma = 0.005$. The system responses in the training and test sets, using the compositional inputs ${\ensuremath{{\mathbf{X}}}}_{train}$ and $ {\ensuremath{{\mathbf{X}}}}_{test}$, are collected into the matrices ${\ensuremath{{\mathbf{Y}}}}_{train}$ and $ {\ensuremath{{\mathbf{Y}}}}_{test}$, respectively. The MLE (maximum likelihood estimator) of the system matrix is obtained as follows [@Park2019spe]: $$\begin{aligned} \label{eq:MLE_H_linear} \hat{{\ensuremath{{\mathbf{H}}}}}={\ensuremath{{\mathbf{Y}}}}_{train}{\ensuremath{{{\ensuremath{{\mathbf{X}}}}_{train}}^{\mathsf{T}}}}({\ensuremath{{\mathbf{X}}}}_{train}{\ensuremath{{{\ensuremath{{\mathbf{X}}}}_{train}}^{\mathsf{T}}}})^{-1} .\end{aligned}$$ Using this estimated linear system matrix, we perform inversion to estimate the unknown compositional vector from its system response. For the experiments with ANNs, we try two cases: an ANN with one layer and an ANN with multiple layers. We measure the estimation performance by evaluating the difference between the test-set matrix $ {\ensuremath{{\mathbf{X}}}}_{test}$ and the matrix of the estimated compositional vectors obtained from ${\ensuremath{{\mathbf{Y}}}}_{test}$. The error metric is precisely formulated by the equations in Section \[sssec:metrics\]. We note that the shallowest ANN has nonunique optimal solutions depending on initialization or randomization. This is described in Appendix. 
\[app:nonuniqueness\_shallow\] and we do not experiment on this shallow structure. ### ANN with 1 layer We present several trivial ANN learning cases to demonstrate that our intuitions match the desired behaviors of the learned models. We omit reporting error values of these trivial cases. We first train this shallow ANN to learn the mapping from compositional domain in $S^M$ to the output domain. The learned ANN should have the weight matrix related to the original linear system matrix. We below provide the discussion of this considering an optional bias term in ANNs and both forward and inversion models. - [ Estimation of linear system matrix [${\mathbf{H}}$]{} without a bias term: ]{} We model $ANN({\ensuremath{{\mathbf{m}}}}) \approx {\ensuremath{{\mathbf{s}}}}$. The input ${\ensuremath{{\mathbf{m}}}} \in S^M$ is multiplied by the first ANN weight matrix ${\ensuremath{{\mathbf{W}}}}_1$ and the distance between this vector and the desired system output [${\mathbf{s}}$]{} is minimized. We experimentally observed that the trained mapping result was good, i.e., $ANN({\ensuremath{{\mathbf{m}}}}) \approx {\ensuremath{{\mathbf{s}}}}$ and the weight matrix ${\ensuremath{{\mathbf{W}}}}_1 \approx \hat{{\ensuremath{{\mathbf{H}}}}}$ as expected. - [ Estimation of linear system matrix [${\mathbf{H}}$]{} with a bias term: ]{} The input ${\ensuremath{{\mathbf{m}}}} \in S^M$ is multiplied by the first ANN weight matrix and added with a bias term. We empirically obtained the same good results as above but, the weight matrix ${\ensuremath{{\mathbf{W}}}}_1$ differs the system matrix [${\mathbf{H}}$]{} and the MLE $\hat{{\ensuremath{{\mathbf{H}}}}}$ because of the bias term in the ANN. Theoretically, if the distribution of the training samples cover all the possible domain space and $N$ goes to infinity, the bias terms will converge to zero and ${\ensuremath{{\mathbf{W}}}}_1 \rightarrow \hat{{\ensuremath{{\mathbf{H}}}}}$. The above cases consider learning the forward model whereas the below cases consider learning the inversion so that the ANN can produce the compositional vector [${\mathbf{m}}$]{} from a measurement [${\mathbf{s}}$]{}. - [(Inversion) Estimation of pseudo-compositional vector without a bias term: ]{} Similar to matrix inversion, we used a linear activation function after mulitplying a weight matrix. The trained ANN performs good inversion and the result is comparable to using the inverse matrix of the estimated [${\mathbf{H}}$]{}, i.e. $\hat{{\ensuremath{{\mathbf{H}}}}}^{-1}$. Thresholding and scaling operations are required to project the ANN output onto the simplex domain. - [(Inversion) Estimation of pseudo-compositional vector with a bias term: ]{} Similar to the above case, we used a linear activation function after mulitplying a weight matrix but adding a bias. The trained ANN performs good inversion and the result is comparable to using the inverse matrix of the estimated [${\mathbf{H}}$]{} but with a constant term due to the introduced bias term in the model. Thresholding and scaling operations are required to project the ANN output onto the simplex domain. - [(Inversion) Estimation of compositional vector without a bias term: ]{} We performed a similar experiment as above but added a mapping layer so that the ANN ouput is in a simplex. Then we do not need to apply thresholding and scaling operations to project the ANN output onto the simplex domain as done above. 
Throughout the experiments[^3] we observed that this ANN shows good performance without any need for post-processing to map the output onto a simplex. Since the last case above demonstrates good inversion with a projection layer, we can extend the model further by adding another layer before the projection. ### ANN with multiple layers {#sssection:linear} To investigate the extensibility of ANNs with multiple, possibly deep, layers, we designed a two-layer ANN with the projection layer as the last layer. The first and second layers each have $4 \times M$ nodes, each followed by batch normalization and a sigmoid activation, and the last layer has $M$ nodes with ReLU activation [@Nair2010] followed by the scaling operation as the projection layer, because non-negativity is already guaranteed by the preceding activation function. Note that the generated system matrix can have negative entries, as was the case for the realization used throughout the applicable experiments, whose condition number (the ratio of the largest singular value to the smallest) is 3.23. The errors of Eq. \[eq:l2error\] are $$\begin{aligned} e_{oracle} &= 0.56962239814362881 ,\\ e_{benchmark} &= 0.56958924114641452 ,\\ e_{ann} &= 0.5716387941750467 ,\end{aligned}$$ where the $oracle$ case uses the true system matrix for inversion, so the estimator is $P({\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{s}}}}) \in S^M$, the $benchmark$ case uses the MLE of the system matrix for inversion, so the estimator is $P(\hat{{\ensuremath{{\mathbf{H}}}}}^\dagger {\ensuremath{{\mathbf{s}}}})$, and the $ann$ case indicates results from the trained ANN. The three error values are comparable. The error from the ANN approach is slightly larger than the rest. The difference between ${\ensuremath{{\mathbf{H}}}}$ and $\hat{{\ensuremath{{\mathbf{H}}}}}$ is $$\| {\ensuremath{{\mathbf{H}}}} -\hat{{\ensuremath{{\mathbf{H}}}}} \|_F / \| {\ensuremath{{\mathbf{H}}}} \|_F = 0.00016788000456487503 ,$$ where $\|\cdot\|_F$ indicates the Frobenius norm. This small number implies that the MLE of the system matrix is accurate enough and that the benchmark performance with the MLE should be similar to the oracle case, as shown above. We note that the theoretical bound for the unconstrained estimator is $$d_{oracle,uc} \times 100 = 0.68009831763330508 .$$ This is significantly larger than the error level of 0.57 seen in $e_{oracle},e_{benchmark},e_{ann}$ obtained from the several estimators, primarily because the simplex constraint, applied with a projection operator or scaling, limits the variable ranges, unlike the unconstrained estimator[^4]. Simple nonlinear systems ------------------------ We perform experiments on several different nonlinear systems with low-dimensional spaces of observations and unknowns. Most of these have dimensions of $L = 5, M = 3$, unless explicitly stated, and $N = 10000$ (the number of training samples), $N_{test} = 10000$ (the number of testing samples). ### Nonlinear systems: invertible transformation on simplex variable {#sssection:nonlinear_inv} We designed a nonlinear system whose output is uniquely invertible to the original input in the absence of noise. 
We designed the following particular nonlinear system: $$\begin{aligned} T({\ensuremath{{\mathbf{m}}}}) &= {\ensuremath{{\mathbf{H}}}} \, g({\ensuremath{{\mathbf{m}}}}), \\ \label{eq:g_nonlinear_inv} g({\ensuremath{{\mathbf{m}}}}) &= {\ensuremath{{[ m_1 ^2 , m_2^{0.5} + 0.1 , m_3 ]}^{\mathsf{T}}}},\end{aligned}$$ where [${\mathbf{H}}$]{} has entries generated according to the standard Gaussian distribution. The inverse function of $g$ is as the following: $$\begin{aligned} \label{eq:g_inv_nonlinear_inv} g^{inv}({\ensuremath{{\mathbf{x}}}}) &= {\ensuremath{{[ ( P_t(x_1) ) ^{0.5} , ( P_t(x_2 - 0.1) ) ^{2} , x_3 ]}^{\mathsf{T}}}},\end{aligned}$$ where ${\ensuremath{{\mathbf{x}}}}$ is not necessarily in a simplex and can be negative as an input argument of $g^{inv}$ due to the presence of noise, thus requiring non-negative projection $P_t$ for the square-root operation, and the third variable is by-passed as in $g$. The averaged $\ell_2$ errors in percentage are, again for $N_{test}=10000, \sigma=0.005$, $$\begin{aligned} e_{oracle} &= 0.899552 ,\\ e_{benchmark} &= 0.899289 ,\\ e_{ann} &= 0.493694 ,\end{aligned}$$ where the $oracle$ case uses the true system matrix for inversion so the estimator is $P(g^{inv}({\ensuremath{{\mathbf{H}}}}^\dagger {\ensuremath{{\mathbf{s}}}})) \in S^M$, $benchmark$ case uses the MLE of the system matrix for inversion so the estimator is $P(g^{inv}(\hat{{\ensuremath{{\mathbf{H}}}}}^\dagger {\ensuremath{{\mathbf{s}}}}))$, and $ann$ case indicates results from the trained ANN but without knowledge of $g$. It is surprising to note that ANN significantly beats other two estimators. We may not directly compare the results coming from two different systems of this nonlinear system and the previous linear system. However, it is clear to notice the gap of errors from ANN and the pseudo-inversion methods, compared to the plain linear model in \[sssection:linear\] with the negligible gap in errors from different methods. The only change added to the linear model is the additional nonlinear effects on $m_1, m_2$ by the function $g$. Again, the benchmark case is similar to the oracle case because of close proximity of $\hat{{\ensuremath{{\mathbf{H}}}}}$ to ${\ensuremath{{\mathbf{H}}}}$. It is noteworthy to observe that the performance of these two has relatively degraded due to nonlinear effects of $g$, while the ANN performance relatively improved even without using functional form of $g$. This result also implies that there must be optimal estimator better than the above ‘oracle’ estimator, which should depend on a particular nonlinear function $g$. The cascading inversion operation after the pseudo-inverse with the system matrix may better be combined but the search of the better estimator, although interesting, is not in the scope of this work and we leave it as future work. ### Nonlinear systems: noninvertible transformation on simplex variable {#sssection:nonlinear_noninv} Unlike the previous experiment above, we consider partially noninvertible and nonlinear transformation on simplex variables. Because of partial noninvertibility, the estimation has an unavoidable bias regarding the noninvertible space. In our experiment, we apply $g(\cdot)$ and $g^{inv}(\cdot)$ of equations \[eq:g\_nonlinear\_inv\] and \[eq:g\_inv\_nonlinear\_inv\], respectively, which perform transformations on the first two dimensions of [${\mathbf{m}}$]{}. We added a noninvertible transformation with a thresholding operator on $m_3$ as below. 
$$\begin{aligned} \label{eq:g_3} g_3(x) &= \exp( P_t(x -T) ) -1 , \\ g_3^{inv}(x) &= \begin{cases} x', & \text{if } x' = \log P_{t,\epsilon}(x+1) \geq T\\ T/2, & \text{otherwise} \end{cases}\end{aligned}$$ where $T$ is a threshold level. For numerical stability in the $\log$, we use $P_{t,\epsilon}(x) = \max(\epsilon,x)$ with a small positive number $\epsilon$, and $g^{inv}$ is the optimal inversion function minimizing the expected $\ell_2$ loss (see Appendix \[appen:noninvertible\_rule\]). For example, $g$ with $T=0.4$ is illustrated in Fig. \[fig:g\_nonlinear\_noninv\]. In our experiment, we used $T=0.02$, so any value of $m_3$ less than two percent is ignored, and $\epsilon=10^{-10}$. The averaged $\ell_2$ errors in percent are, again for $N_{test}=10000, \sigma=0.005$, $$\begin{aligned} e_{oracle} &= 0.872863 ,\\ e_{benchmark} &= 0.872274 ,\\ e_{ann} &= 0.403223 .\end{aligned}$$ Again, a direct comparison with the results from the other systems above may not be meaningful because the system functions differ, but the superiority of the ANN approach is evident. The bias introduced by the thresholding effect, derived in Appendix \[appen:noninvertible\_rule\], is $ (0.02^3/12)^{0.5} \times 100\% = 0.082\%$, so the expected increase in error is not large. ### Nonlinear systems: invertible transformation with an obfuscating variable {#sssection:nonlinear_inv_obf} We added an obfuscating variable to the invertible system described in Section \[sssection:nonlinear\_inv\] above. The dimension of the unknowns became $M=4$. We assume that this obfuscating variable is not dominant, in that its weight is not greater than $20\%$. Generally, we can assume that the $\ell_1$ norm of the obfuscating variables is bounded. This is a reasonable assumption in practice, because unknown variables outside our consideration or interest should not significantly determine the observations; if they did, we would either include them in the model or revisit the physics to rebuild the model. The errors from the oracle and benchmark estimators are calculated using Eq. \[eq:ideal\_loss\_obs\], where $\hat{{\ensuremath{{\mathbf{m}}}}} \in S^3$ contains only the rescaled first three components such that $\sum_{i=1}^{M=3} \hat{m}_i = 1$. $$\begin{aligned} e_{oracle} &= 1.066473 ,\\ e_{benchmark} &= 1.065830 ,\\ e_{ann} &= 0.557085 .\end{aligned}$$ In our experiment, we bound the obfuscating variable such that $m_4 \leq 0.2 $, which increases the estimation error by less than a thresholding operation with level 0.2 on one variable would, because $(0.2^3/12)^{0.5} \times 100\% = 2.6\%$, and all the averaged errors above are less than $2\%$. In our simulation with $10000$ samples, the increase in test error for the ANN approach is slightly smaller than for the other approaches relative to Section \[sssection:nonlinear\_inv\], but this requires more investigation because the system functions and input vectors are different. ### Nonlinear systems: noninvertible transformation with an obfuscating variable {#sssection:nonlinear_noninv_obf} We added an obfuscating variable to the noninvertible system described in Section \[sssection:nonlinear\_noninv\]. As in the previous experiment, $m_4 \leq 0.2 $ and the errors are increased compared to those in Section \[sssection:nonlinear\_noninv\]. 
$$\begin{aligned} e_{oracle} &= 1.089608 ,\\ e_{benchmark} &= 1.090022 ,\\ e_{ann} &= 0.534887 .\end{aligned}$$ ### Nonlinear systems: transformation with varying magnitudes {#sssection:nonlinear_H_scaled} We define the following nonlinear system and experimented the ANN approach with $N_{train} =N_{test}=10000, M=3, L= 5, \sigma=0.005$ as in Section \[sssection:linear\]. $$\begin{aligned} \label{eq:nonlinear_vary_mag} {\ensuremath{{\mathbf{s}}}} = \| {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} \|_2^2 \, {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} + {\ensuremath{{\mathbf{n}}}}\end{aligned}$$ This case cannot have oracle nor benchmar inversion results because we cannot estimate the scale factor $\| {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} \|_2^2$ and the unknown variable ${\ensuremath{{\mathbf{m}}}}$ simultaneously without good prior knowledge. This inversion is called generally blind-deconvolution and semi-blind or myopic deconvolution with some prior knowledge of the unknown or the system [@Park2012]. Our approach in this work estimates the inverse system in the ANN and the unknowns. The evaluated error shows the better result than other previous cases. $$\begin{aligned} e_{ann} &= 0.272344 .\end{aligned}$$ This better performance would be due to the effectively increased signal-to-noise ratio (SNR); the minimum of the scaling factors was $0.66$ and $79\%$ of the factors were larger than 1, as seen in Fig. \[fig:nonlinear\_H\_scaled\_hist\]. The averaged $\ell_2$ norm $\| {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} \|_2$ is ${\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} \|_2 = 1.28$ and ${\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{m}}}} \|_2^2=1.74 $ with ${\ensuremath{{\mathbb{E}}}}$ being an empirical averaging operator here. \[h\] ### Nonlinear systems: transformation with added correlations of unknowns {#sssection:nonlinear_corr} We designed another type of nonlinear system with a nonlinear function $g$ mapping from simplex to an auxilary vector [${\mathbf{z}}$]{} below. $$\begin{aligned} {\ensuremath{{\mathbf{z}}}} &= g({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{[m_1 , 0.4 m_2, 0.2 m_1^2 , m_3^2 , m_1 m_2 ]}^{\mathsf{T}}}} , \\ {\ensuremath{{\mathbf{s}}}} &= {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{z}}}} + {\ensuremath{{\mathbf{n}}}} .\end{aligned}$$ In this system response, the information of $m_1$ is abundant also with its original value, while $m_2, m_3$ are transformed and multiplied with others. We have more redundant intermediate variables of 5 dimensions from ${\ensuremath{{\mathbf{m}}}} \in S^3$ and the system matrix is enlarged, from $5\times 3$ to $5\times 5$, having more perturbations or variations in outputs. However, a large training set can accurately estimate the inverse system and the unknowns. Because the number of training samples seems large enough, the performance is similar to the linear case and other nonlinear cases as expected. $$\begin{aligned} e_{ann} &= 0.584665 .\end{aligned}$$ The oracle and benchmark cases are not evaluated because without knowing functional form or the intermediate dimension the estimators cannot be formulated. In contrast, the ANN approach is agnostic to such a knowledge of intermediate transformations and introduced correlations. 
If we assume this knowledge, then we can refer to the errorr levels in the linear system case in Section \[sssection:linear\] and these should be comparable with the above ANN performance. ### Nonlinear systems: transformation with varying peak responses {#sssection:nonlinear_peak_vary} We define the following nonlinear system and experimented the ANN approach with $N_{train} =N_{test}=10000, M=3, L= 5, \sigma=0.005$ as in Section \[sssection:linear\]. $$\begin{aligned} \label{eq:nonlinear_peak_vary} {\ensuremath{{\mathbf{s}}}} &= \sum_{i=1}^M {\ensuremath{{\mathbf{h}}}}_i + {\ensuremath{{\mathbf{n}}}}, \\ \label{eq:nonlinear_peak_vary_h} {\ensuremath{{\mathbf{h}}}}_i &= c_i {\ensuremath{{\mathbf{g}}}}_i ,\\ c_i &= ({\ensuremath{{\mathbf{A}}}}_{1,i} - {\ensuremath{{\mathbf{A}}}}_{2,i} ) m_i + {\ensuremath{{\mathbf{A}}}}_{2,i} ,\\ {\ensuremath{{\mathbf{g}}}}_i &= \exp \left[ \left( {\ensuremath{{\mathbf{v}}}} - \left(({\ensuremath{{\mathbf{B}}}}_{1,i} - {\ensuremath{{\mathbf{B}}}}_{2,i} ) m_i + {\ensuremath{{\mathbf{B}}}}_{2,i} \right) \right)^2 \right] ,\end{aligned}$$ where ${\ensuremath{{\mathbf{v}}}}$ is an index vector ${\ensuremath{{\mathbf{v}}}} = {\ensuremath{{[1,2,\dots, L]}^{\mathsf{T}}}}$ and $$\label{eq:A_B_matrices} {\ensuremath{{\mathbf{A}}}}=\left[ \begin{array}{ccc} 2 & 0.7 & 0.8 \\ 1 & 1.5 & 0.3 \end{array} \right], \, {\ensuremath{{\mathbf{B}}}}=\left[ \begin{array}{ccc} 4 & 2.7 & 0.8 \\ 0 & 3.5 & 4.3 \end{array} \right] .$$ This system response has varying magnitudes dependent on composition weights $m_i$s in $c_i$s and different shapes also dependent on composition weights $m_i$s in ${\ensuremath{{\mathbf{g}}}}_i$s. Therefore, this case is more general than the one presented in Section \[sssection:nonlinear\_H\_scaled\]. Fig. \[fig:nonlinear\_peak\_vary\] shows the varying responses in shape or peak locations of the component-wise system functions as its argument $m_i$ for $i=1,2,3$ changes, sampled at $m_i= [0,0.2,0.4,0.6,0.8,1]$. ${\ensuremath{{\mathbf{h}}}}_1$ has a moving peak centered at the index 1 to 5 and the magnitude slightly increases as $m_1$ increases from 0 to 1, while ${\ensuremath{{\mathbf{h}}}}_3$ shows the opposite behavior in terms of the peak locations and magnitudes. ${\ensuremath{{\mathbf{h}}}}_2$ decreases slightly with shape changes as $m_2$ increases. \[h\] The result shown below, from the ANN approach, is comparable with other cases but direct comparisons do not make much sense because the systems are different. $$\begin{aligned} e_{ann} &= 0.517819 .\end{aligned}$$ Again, the oracle and benchmark cases are not evaluated because it is difficult even with functional forms and parameter values due to complex nonlinearity. Instead, we provide the ratio of intensity, eg., $\ell_2$ norm, in noiseless system output of this system to that in linear system. 
$$\begin{aligned} {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{z}}}} \|_2 / {\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{H}}}}{\ensuremath{{\mathbf{m}}}} \|_2 \approx 2 ,\end{aligned}$$ where ${\ensuremath{{\mathbb{E}}}}$ is an empirical averaging operator here, ${\ensuremath{{\mathbf{z}}}}=\sum_{i=1}^M {\ensuremath{{\mathbf{h}}}}_i$ and [${\mathbf{H}}$]{} is the same as used in Sections \[sssection:nonlinear\_H\_scaled\] and \[sssection:linear\], $ {\ensuremath{{\mathbb{E}}}}\| {{\ensuremath{{\mathbf{H}}}}{\ensuremath{{\mathbf{m}}}}^{(t)}} \|_2 =1.28$ (reported also in Section \[sssection:nonlinear\_H\_scaled\]) and ${\ensuremath{{\mathbb{E}}}}\| {\ensuremath{{\mathbf{z}}}} \|_2= 2.68$. Considering only the amplified signal intensity we expect the better performance but the changing shapes must adversely affect the inversion performance. ### Nonlinear systems: transformation with varying peak responses wiht added correlations of unknowns {#sssection:nonlinear_peak_vary_corr} We define a similar nonlinear system to the previous system with $N_{train} =N_{test}=10000, M=3, L= 5, \sigma=0.005$ but with the added correlated terms. $$\begin{aligned} {\ensuremath{{\mathbf{s}}}} &= \sum_{i=1}^5 {\ensuremath{{\mathbf{h}}}}_i + {\ensuremath{{\mathbf{n}}}}, \\ {\ensuremath{{\mathbf{h}}}}_i &= c_i {\ensuremath{{\mathbf{g}}}}_i ,\\ c_i &= ({\ensuremath{{\mathbf{A}}}}_{1,i} - {\ensuremath{{\mathbf{A}}}}_{2,i} ) \tilde{m}_i + {\ensuremath{{\mathbf{A}}}}_{2,i} ,\\ {\ensuremath{{\mathbf{g}}}}_i &= \exp \left[ \left( {\ensuremath{{\mathbf{v}}}} - \left(({\ensuremath{{\mathbf{B}}}}_{1,i} - {\ensuremath{{\mathbf{B}}}}_{2,i} ) \tilde{m}_i + {\ensuremath{{\mathbf{B}}}}_{2,i} \right) \right)^2 \right] , \\ \tilde{{\ensuremath{{\mathbf{m}}}}} &= {\ensuremath{{[m_1, 0.4 \, m_2, 0.2 \, m_1^2, m_3^2 , m_2 m_3]}^{\mathsf{T}}}},\end{aligned}$$ where ${\ensuremath{{\mathbf{v}}}}$ is an index vector ${\ensuremath{{\mathbf{v}}}} = {\ensuremath{{[1,2,\dots, L]}^{\mathsf{T}}}}$ and $$\label{eq:A_B_matrices} {\ensuremath{{\mathbf{A}}}}=\left[ \begin{array}{ccccc} 2 & 0.7 & 0.8 & 2.2 & 0.5\\ 1 & 1.5 & 0.3 & 0.9 & 0.2 \end{array} \right], \, {\ensuremath{{\mathbf{B}}}}=\left[ \begin{array}{ccccc} 4 & 2.7 & 0.8 & 2.3 & 3.1 \\ 0 & 3.5 & 4.3 & 2.0 & 3.2 \end{array} \right] .$$ Fig. \[fig:nonlinear\_peak\_vary\_corr\] shows the varying responses in shape or peak locations of the component-wise system functions as its argument $\tilde{m}_i$ for $i=1,2,3,4,5$ changes, sampled at $\tilde{m}_i= [0,0.2,0.4,0.6,0.8,1]$. ${\ensuremath{{\mathbf{h}}}}_1,{\ensuremath{{\mathbf{h}}}}_2,{\ensuremath{{\mathbf{h}}}}_3$ are the same as in the previous sytem in Section \[sssection:nonlinear\_peak\_vary\] but with $\tilde{{\ensuremath{{\mathbf{m}}}}}$, a function of the unknown compositional vector ${\ensuremath{{\mathbf{m}}}}$. According to this function and the given system responses, a small quantity in $m_3$ seems difficult to estimate because its information is only in ${\ensuremath{{\mathbf{h}}}}_4, {\ensuremath{{\mathbf{h}}}}_5$ where small quantities of $\tilde{m}_i$ correspond to attenuated system responses. This would cause the degraded performance in inversion. $$\begin{aligned} e_{ann} &= 0.723837 .\end{aligned}$$ Also, comparing the number to the previous system in Section \[sssection:nonlinear\_peak\_vary\], the added correlated terms did not help the inversion performance. 
Note that a direct comparison cannot be made because ${\ensuremath{{\mathbf{h}}}}_2,{\ensuremath{{\mathbf{h}}}}_3$ are now linear and squared functions of $m_2,m_1$, not identity functions of $m_2,m_3$ as in the previous Section \[sssection:nonlinear\_peak\_vary\], respectively. High dimensional linear systems {#ssec:lin_highdim} ------------------------------- We experiment on high dimensional simplex variables. To simulate realistic experiments, we set $M=20,L=1000$ to represent high dimensional spaces for the unknowns and observations. We set $ N_{test}=10000, \sigma=0.005=.5\%$, and the designed system matrix in Fig. \[fig:lin\_highdim\_H\] has all nonnegative response curves. The designed system is given in Appendix \[app:lin\_highdim\_H\]. For the training of the ANN, new $N_{train} = 10000$ samples were generated every 100 epochs because of memory limitations, while also avoiding overfitting. The samples in the training and test sets are drawn according to the uniform distribution. The ANN is designed and tuned with the same parameter values as in the previous experiments, with the complexity of the network increased linearly as $M$ increases, in the double layers of $4\times M$ nodes and another layer of $M$ nodes. From Fig. \[fig:lin\_highdim\_H\], the correlations of the components whose indices are 11 – 20 must be significant because their overall envelope shapes are similar except for the valley shapes. These components have their information residing in the valleys, not in the envelope, and the resulting high correlations are seen in the red block in Fig. \[fig:A\_corr\_01\]. Because of the high correlations among components 11 – 20, their estimation errors are higher than those of components 1 – 10, as seen in Fig. \[fig:fig\_nonlinear\_cases\_004\]. The trained system matrix for the benchmark estimator is close to the true one because $$\| {\ensuremath{{\mathbf{H}}}} -\hat{{\ensuremath{{\mathbf{H}}}}} \|_F / \| {\ensuremath{{\mathbf{H}}}} \|_F = 0.0029386144131524146 .$$ The results on the test set using the oracle and benchmark estimators are thus similar. $$\begin{aligned} e_{oracle} &= 3.779764 ,\\ e_{benchmark} &= 3.72622 ,\\ e_{ann} &= 2.214486. \end{aligned}$$ The nonnegative high dimensional matrix, with a condition number of 360 compared to 3.23 for the low dimensional system, degrades the performance from $0.57\%$ to more than $2\%$ error. This can be seen visually in Fig. \[fig:lin\_highdim\_H\], where there are many overlapping, similarly shaped parts. However, the reported errors are still less than the theoretical bound for the unconstrained estimator, Eq. \[eq:oracle\_est\_error\], $$d_{oracle,uc} \times 100 = 4.7868905294620099 .$$ Moreover, the ANN approach outperforms the other two. Compared to the low dimensional linear case in Section \[sssection:linear\], the difference in the errors is significant. This must come from the locality of the ANN approach, which is specific to the training set, versus the globality of the methods based on matrix pseudo-inversion. In the experiment, even with uniform sampling on the simplex, the high dimensional simplex exhibits locality, with rare samples near the end-members ($100\% - \epsilon$) and relatively many samples away from them. High dimensional simplex spaces may seem counter-intuitive, particularly regarding the volume distribution. 
In fact, high dimensional simplices, along with other high dimensional polytopes, have the major volumn concentration on their surfaces but, near the corner, where the end-members are located, the volume diminishes as the dimension increases. This can be also demonstrated empirically by using uniform sampler on a simplex (see Appendix \[appen:thinVolCorner\]). This implies that under the uniform distribution in a high dimensional simplex, the chance of drawing samples close to any end-members is negligible. However, in controlled experiments where observations are measured based on fabricated or designed samples on a simplex domain, as known as designed compositions, we can have the measurements corresponding to end-member compositions or pure contents of only one individual composition, i.e., ${\ensuremath{{\mathbf{m}}}}={\ensuremath{{\mathbf{e}}}}_i$ for the $i$th end-member. Therefore, we can add the observations from end-members into our training set if we believe that the observations coming from near end-members are expected in practice. To test the locality of the ANN and globality of the other two based on matrix inversion, we performed a simple test with the observations only from the $M$ end-members. Here, for the benchmark estimator, the training and test sets coincide on the $M$ observations, while the ANN estimator was already trained using the $N_{train}$ training samples. $$\begin{aligned} e_{oracle} &= 4.690122 ,\\ e_{benchmark} &\approx 0 ,\\ e_{ann} &= 30.940643 .\end{aligned}$$ The oracle estimator is indepedent of the training set and uses the true matrix, whose error is now much closer to but still less than $d_{oracle,uc}=4.79$, the benchmark uses the trained matrix and again use it for testing, leading to close to zero error as expected, and the ANN approach produces a significantly large error because there were extremely rare samples among $N_{train}=10000$ training samples that are close to any end-members. Therefore, in practice if we believe there is a significant number of samples coming from near end-members, we should include them in the training data. High dimensional nonlinear systems {#ssec:nonlin_highdim} ---------------------------------- We defined a high dimensional nonlinear systems in Appendix. \[app:highdim\_nonlin\], where obfuscating variables and mixture models are also considered too. The system correlates some variables and transforms the original unknown vector with nonlinearly with fractional polynomials and exponential functions, thresholding, and shape changing with moving peaks and valleys. In this section, we experimented numerous ANN structures because of the higher order of complexity of the system: our base model with double layers of $4 \times M_{v}$, where $M_{v}$ is the number of components of interest or assumed, double layers of $16 \times M_v$, $32 \times M_v$, convolutional neural networks (CNN) of having a convolutional layer and then either double layers of $4 \times M_v$ or $32 \times M_v$ feedforward networks. Additionally, we tested two cases for the compositonal distributions. One is the uniform distribution and the other is a mixture model. In the designed mixture model, the mixture centers in percent are shown in Fig. \[fig:mixture\_centers\_01\], and the corresponding $\sigma_i$s, the sample proportions, and details are provided in Appendix. \[app:highdim\_nonlin\]. In the mixture model, there are still samples drawn from the uniform distribution. 
The drawn compositional vectors are truncated and normalized to satisfy the simplex condition. Also, when generating samples we discard those whose obfuscating variables $m_{19}, m_{20}$ are greater than $5\%$. The resulting samples in $S^{20}$ with the described specification, together with the corresponding noisy measurements from the nonlinear system with observational noise level $0.005=0.5\%$, constitute the training and test sets. In the experiments using the mixtures, we randomly shuffled the samples in the training and test sets. We retain the original compositional vector ${\ensuremath{{\mathbf{m}}}}^0 \in S^M (M=20)$, including the obfuscating variables in ${\ensuremath{{\mathbf{m}}}}^1$, to synthesize the noisy measurements, but use ${\ensuremath{{\mathbf{m}}}} \in S^{M_v} ( {M_v}=18)$ without those variables for comparisons (Eq. \[eq:ideal\_loss\_obs\]). In other words, even though the noisy observations embed the effects of the obfuscating variables, we do not use the obfuscating variables for training, and testing considers only the normalized version of the variables excluding the obfuscating variables. The performance in the high dimensional examples with $L=1000$ is demonstrated by considering numerous cases of sample distributions, system types, and neural network structures. We added two nonlinear systems whose response is divided by its maximum or its $\ell_2$ norm, resulting in added nonlinearity and slightly increased errors. We also tried convolutional neural networks (CNN). We placed the convolutional layers before the double layers. The CNN layers consist of a layer of 32 nodes and another of 16 nodes with kernel size 7, 3 strides, and ReLU activation. For completeness, we included the results from linear systems in this section. For linear systems $M=M_v=20$, and for nonlinear systems $M =20, M_v = 18$ with two obfuscating variables. The $\ell_2$ error, as the overall error, is computed using Eq. \[eq:l2error\] and reported in Table \[tab:errors\]. The component-wise $\ell_1$ error, the average absolute deviation, is computed using Eq. \[eq:l1error\] and illustrated in Fig. \[fig:fig\_nonlinear\_cases\_004\]. We note that the two linear cases, along with the largest models with double layers of $32 \times M$ or larger, achieve the minimal errors, due to the lowest complexity and the adaptive power, respectively. The two cases with double layers of $4\times M_v$ whose $\ell_2$ errors exceed 3 appear to suffer from estimation bias or insufficient optimization, because the optimization with this simple ANN structure empirically showed very slow convergence through many trials of different optimizers, tunings, and techniques. In other words, the simplest ANN structure applied to nonlinear systems may have under-fitting or convergence problems in practice. In particular, component 14, corresponding to the signal of a moving peak, is the most difficult variable to estimate, especially in the simpler models, while models of the order of $32 \times M$ or larger do not exhibit such problems (Fig. \[fig:fig\_nonlinear\_cases\_004\]). Generally, increasing the number of nodes, to $16$ or $32 \times M$ in our experiments, improves stability and accuracy without causing over-fitting, given training on sufficient data. Adding a convolutional layer to our base structure with double layers of $\{4,16,32\}\times M$ may help but has not been extensively explored in our work. 
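To illustrate the locality issue discussed above, and the motivation for explicitly adding end-member observations to the training set, the short sketch below draws uniform samples on $S^M$ and counts how often any sample lands near an end-member; the 0.9 threshold and the sample counts are illustrative assumptions.

```python
import numpy as np

# Fraction of uniform samples on S^M whose largest component exceeds 0.9,
# i.e., samples "near" an end-member, as the dimension M grows.
np.random.seed(4)
N = 100000
for M in (3, 5, 10, 20):
    X = np.random.dirichlet(np.ones(M), size=N)   # uniform sampling on the simplex S^M
    frac = np.mean(X.max(axis=1) > 0.9)
    print(M, frac)
# The fraction collapses quickly with M, which is why observations from end-members
# (m = e_i) may need to be added to the training set explicitly.
```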
  System type                               samples   ANN type or method                                   error
  ----------------------------------------- --------- ---------------------------------------------------- -------
  linear                                    uniform   double layers of $4\times M_v$                       2.21
  nonlinear                                 mixture   double layers of $4\times M_v$                       10.51
  nonlinear                                 mixture   double layers of $16\times M_v$                      3.53
  nonlinear                                 mixture   double layers of $32\times M_v$                      2.16
  nonlinear                                 uniform   double layers of $4\times M_v$                       2.36
  nonlinear, divided by its max             uniform   double layers of $4\times M_v$                       2.98
  nonlinear, divided by its $\ell_2$ norm   uniform   double layers of $4\times M_v$                       6.53
  linear                                    uniform   CNN layers + double layers of $4\times M_v$          2.45
  nonlinear                                 mixture   CNN layers + double layers of $32\times M_v$         2.87
  linear                                    uniform   pseudo-inverse of [${\mathbf{H}}$]{} + projection    3.78
  linear                                    uniform   kNN (optimal k=11)                                   2.54
  nonlinear                                 mixture   kNN (optimal k=11)                                   6.54
  nonlinear                                 uniform   kNN (optimal k=11)                                   13.21

  : Overall $\ell_2$ errors for the high dimensional experiments. []{data-label="tab:errors"}

\[h\]

For comparison, Table \[tab:errors\] also reports the results for the systems of Section \[ssec:nonlin\_highdim\] obtained using matrix inversion followed by projection, and using the k-nearest neighbor (kNN) interpolation method described in Appendix \[app:kNN\]. For fair comparisons between the ANN and kNN approaches, we use the same number of training samples, $N_{train} = 10000 , N_{test} = 1000 $. For stability of the kNN method, a computed distance is truncated to 0 when it is negative and to $10^{10}$ when it exceeds $10^{10}$. For the linear system responses, matrix inversion, even with the known system matrix, followed by the simplex projection is significantly inferior to the ANN and kNN methods, while the kNN method with the optimal setting ($k=11$) can compete with the ANN approaches. However, for the nonlinear systems, kNN produced large biases. To keep the figure simple, we did not include the kNN component-wise errors in Fig. \[fig:fig\_nonlinear\_cases\_004\]. Another drawback of kNN estimators is that the computation time in application or testing increases as the training set grows, because a test sample needs to be compared to the whole training set. This drawback can be mitigated by using tree building and searching algorithms, but that is beyond the scope of our work. One interesting observation for the kNN approach is that it performs better when interpolating concentrated samples, as in the designed mixture, than when extrapolating between scattered samples, as in the uniformly distributed case.

Conclusion
==========

Throughout our extensive experiments, we demonstrated the promising performance of our simple ANN design in estimating compositional unknown vectors.
The ANN approaches can compete with the optimal bounds for linear systems, where efficient convex optimization theory applies and there are guaranteed global optima. However, for complex nonlinear system inversions, we do not have such benchmarks or global properties. We therefore provided several surrogate bounds and analyses, and performed extensive experiments by designing numerous different types of nonlinear systems, both in low and high dimensions. In our experiments with a low noise level, we demonstrated that double layers of $4 \times M_v$ through $4 \times (LM_v)^{0.5}$ nodes in ANNs give good estimation performance. We thus conjecture that double layers of this order are sufficient for other nonlinear systems and other noise levels, and leave a systematic study to future work. The estimation performance may depend on the distribution of the data. We mostly simulated the uniform distribution on simplices because it is the most scattered distribution, i.e., the worst case with respect to the volume measure of the simplex. We also performed additional experiments using mixtures of concentrated distributions and the uniform distribution. It is worth noting that the uniform distribution on a high dimensional simplex shows a counter-intuitive characteristic: the chance of selecting samples near any end-member is very small. In this sense, the drawn samples concentrate away from compositions in which any single component is large, because the probability of selecting a sample near an end-member, whose major component is $(100\% - \epsilon)$ for a small positive number $\epsilon$, is low. Indeed, this probability decreases exponentially as the dimension increases. As was done in our experiments with mixtures, we can include samples concentrated near end-members in the training set for the estimation of such compositions. Even though the nonlinear system types considered in this paper may not cover all possible system types, this work covers numerous different types of nonlinear systems through our designs and extensive experiments. An investigation of other possible types is left for future work. Another direction for future work is to find the minimum or optimal depth of ANNs needed to effectively invert a nonlinear system that can be represented exactly with terms up to order $\alpha$, as in the Taylor series approximation of a nonlinear system in Appendix \[appen:TaylorExpansion\]. Such a study would further support our empirical conclusion that ANNs with nonlinear activations are sufficient to provide good inversions. Based on the numerous experiments we performed, we conjecture that on the order of $\alpha +1 $ layers would be sufficient.

Nonuniqueness of parameters of a shallow network for sum-constant vectors {#app:nonuniqueness_shallow}
-------------------------------------------------------------------------

The parameters ${\ensuremath{{\mathbf{W}}}},{\ensuremath{{\mathbf{b}}}}$ of a shallow network are not uniquely determined when learning [${\mathbf{A}}$]{} from a training set $\{{\ensuremath{{\mathbf{x}}}}_i,{\ensuremath{{\mathbf{y}}}}_i\}_i$ generated by ${\ensuremath{{\mathbf{y}}}} = {\ensuremath{{\mathbf{A}}}} {\ensuremath{{\mathbf{x}}}}$ with sum-constant inputs. Let ${\ensuremath{{{\ensuremath{{\mathbf{1}}}}}^{\mathsf{T}}}} {\ensuremath{{\mathbf{x}}}} = K$ and ${\ensuremath{{\mathbf{W}}}} = {\ensuremath{{\mathbf{A}}}} - {\ensuremath{{\mathbf{b}}}}{\ensuremath{{{\ensuremath{{\mathbf{1}}}}}^{\mathsf{T}}}} / K$ with an arbitrary constant vector [${\mathbf{b}}$]{}. Then the network output still matches ${\ensuremath{{\mathbf{y}}}}$ exactly:
$${\ensuremath{{\mathbf{W}}}}{\ensuremath{{\mathbf{x}}}} + {\ensuremath{{\mathbf{b}}}} = {\ensuremath{{\mathbf{A}}}}{\ensuremath{{\mathbf{x}}}} - {\ensuremath{{\mathbf{b}}}} \sum_i x_i / K + {\ensuremath{{\mathbf{b}}}} = {\ensuremath{{\mathbf{A}}}}{\ensuremath{{\mathbf{x}}}} - {\ensuremath{{\mathbf{b}}}} + {\ensuremath{{\mathbf{b}}}} = {\ensuremath{{\mathbf{A}}}}{\ensuremath{{\mathbf{x}}}} = {\ensuremath{{\mathbf{y}}}}$$

Optimal inversion estimator for a partially noninvertible thresholding operator {#appen:noninvertible_rule}
-------------------------------------------------------------------------------

Let a random variable $x \in \mathbb{R}^1$ to be estimated follow a uniform distribution on its domain $[0, U_x]$. We assume that a partially noninvertible thresholding operator, such as hard or soft thresholding, has a noninvertible region $[0,T]$ with $T<U_x$. Outside the thresholded region a perfect inversion is achieved, so the estimator satisfies $\hat{x} = x$ on $[T,U_x]$. Without loss of generality, we let $U_x = 1$ for a simple derivation. We can obtain the best estimator by minimizing the $\ell_2$ distance between $x$ and $\hat{x}$. The squared loss function is defined as the expectation with respect to the probability function $P(x)$: $$L_{threshold}^2 = {\ensuremath{{\mathbb{E}}}}\| x - \hat{x} \|_2^2 = \int_0^T (x - \hat{x})^2 d P(x) = T (\hat{x} - T/2)^2 + T^3/12$$ Therefore, the minimum $L_{threshold}^*$ is achieved when $\hat{x} = T/2$: $$L_{threshold}^* = \sqrt{T^3/12 }$$ Note that the maximum error is then $| x -\hat{x}| = T/2$. In practice, using Monte Carlo simulations, $L_{threshold}^2 \approx \| {\ensuremath{{\mathbf{x}}}} - \hat{{\ensuremath{{\mathbf{x}}}}} \|^2_2 / N $ with ${\ensuremath{{\mathbf{x}}}}\in [0, U_x]^N$, leading to $$L_{threshold} \approx \| {\ensuremath{{\mathbf{x}}}} - \hat{{\ensuremath{{\mathbf{x}}}}} \|_2 / \sqrt{N}$$ For example, in a simplex domain with thresholding on $[0,0.1]$, the best estimator, assuming a uniform distribution of the unknown, will predict $0.05$ for any input in that region. The maximum error $| x -\hat{x}|$ is $0.05$, while the overall loss is $L_{threshold} = 0.00913$, or roughly a 1% error.

Taylor expansion on a simplex {#appen:TaylorExpansion}
-----------------------------

A general model with a differentiable $h$ can be practically decomposed and well approximated with a Taylor expansion. Without loss of generality, let us consider the series centered at ${\ensuremath{{\mathbf{0}}}}$. A general system response is then $$\begin{aligned} h({\ensuremath{{\mathbf{m}}}}) &\approx \sum_{k} S_k \\ S_k &= \sum_{i_1} \cdots \sum_{i_k} {\ensuremath{{\mathbf{h}}}}_{i_1,...,i_k} \, m_{i_1} \cdots m_{i_k}\end{aligned}$$ where ${\ensuremath{{\mathbf{h}}}}_{i_1,...,i_k}$ is a derivative coefficient with respect to $m_{i_1} \cdots m_{i_k}$ and $i_j \in [1,2, ..., M]$ for $ \forall j$.

Volume concentration in a high dimensional simplex
---------------------------------------------------

### Thin concentration of volume in a self-similar corner of a high dimensional simplex {#appen:thinVolCorner}

Let $V_M$ be the volume of a polytope in $M$ dimensions without degenerate dimensions (unlike a simplex, which has one). Then the volume of a self-similar polytope, scaled down by a factor $\epsilon < 1 $, is $V_M \epsilon^M$. The volume of this smaller polytope decreases as the dimension increases, at the exact rate $\epsilon^M$. This smaller polytope can be placed to cover a corner inside the original polytope if the polytope is convex.
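Before specializing to the simplex below, this corner-volume suppression can be checked numerically. The following sketch draws uniform samples on $S^M$ (a symmetric Dirichlet distribution) and measures the fraction of samples having some entry above $1-\epsilon$; it should agree with the rate $M\epsilon^{M-1}$ derived next (Eq. \[eq:P\_epsilon\]) when the corner regions are disjoint. The sample size and the value of $\epsilon$ are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def frac_near_end_members(M, eps, n_samples=1_000_000):
    """Fraction of uniform simplex samples lying within eps of some end-member."""
    # Dirichlet(1, ..., 1) is the uniform distribution on the simplex S^M.
    x = rng.dirichlet(np.ones(M), size=n_samples)
    return np.mean(x.max(axis=1) >= 1.0 - eps)

eps = 0.5  # at or below this value the M corner regions are disjoint
for M in (3, 5, 10, 20):
    empirical = frac_near_end_members(M, eps)
    theory = M * eps ** (M - 1)
    print(f"M={M:2d}  empirical={empirical:.2e}  M*eps^(M-1)={theory:.2e}")
```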
Specifically, the volume of the $M$-simplex $S^M$, according to [@Park2019spe], is $$V(S^M) = \frac{\sqrt{M}}{ (M-1)!}$$ Without loss of generality, considering the first axis of $S^M$, we define the subset $S^M_{\epsilon,1}$ located around the corner of the first axis, i.e., the end-member ${\ensuremath{{\mathbf{e}}}}={\ensuremath{{[1,0,\cdots,0]}^{\mathsf{T}}}}$: $$S^M_{\epsilon, 1} =\{ (x_1, \ldots , x_M) \in S^M \, : \, 1-\epsilon \leq x_1 \leq 1 \} .$$ The volume of this self-similar $\epsilon$-sized polytope $S^M_{\epsilon, 1}$ is $$V(S^M_{\epsilon,1}) = V(S^M) \epsilon^{M-1} ,$$ with the dimensional factor $M-1$ due to the one degenerate dimension of a simplex. The volume ratio $V(S^M_{\epsilon,1}) / V(S^M) = \epsilon^{M-1} \rightarrow 0$ as $M$ increases. Therefore, the contribution of the corner volume at any vertex diminishes in high dimensions, because $M \epsilon^{M-1} \rightarrow 0$ as $M$ grows.

This behavior of a unit simplex in high dimensions differs from that of other polytopes such as a unit hypercube. Note that a unit hypercube in dimension $M$, denoted by $C^M = [0,1]^M$, has volume 1. Consider the thin ($\epsilon$-thick) slice of the cube covering the first coordinate value of 1: $$C^M_{\epsilon,1} =\{ (x_1, \ldots , x_M) \in [0,1]^M \, : \, 1-\epsilon \leq x_1 \leq 1 \}$$ Its volume is constant and does not decay with the dimension: $$V(C^M_{\epsilon,1}) = \epsilon$$

Therefore, a realization of a uniform random variable on a simplex produces samples near end-members more and more rarely as the dimension grows; under a uniform distribution in a simplex $S^M$, an arbitrary volume inside the simplex is proportional to the probability that a drawn sample of the uniform random variable lies within that volume. Therefore, the probability of a drawn sample being within $\epsilon$ distance from the $i$th end-member, $x_i=1$, is $P(x_i \in [1-\epsilon, 1] ) = \epsilon^{M-1}$ for a sample ${\ensuremath{{\mathbf{x}}}} \in S^M$. When $\epsilon \leq 0.5$ and $M \geq 2$, the $M$ corner regions are disjoint, so in $S^M$ the probability of a sample being within $\epsilon$ distance from any of the end-members is $$\begin{aligned} \label{eq:P_epsilon} P_\epsilon = P(x_i \in [1-\epsilon, 1] \mbox{ for some } i) = M \epsilon^{M-1}\end{aligned}$$ Empirically, we can verify this exponential decrease by drawing samples according to the uniform distribution and evaluating the ratio of the number of samples, at least one of whose entries is above a specified value $1-\epsilon$, to the total number of drawn samples. With $N=10^6$ samples drawn according to the uniform distribution, we set several values of $\epsilon := 1-T$ to observe samples whose first component is greater than $T$. The result of this experiment is presented in Fig. \[fig:P\_thr\_MC\].

\[h\] \[h\]

### Still thin concentration of volume above the center of a high dimensional simplex

We present another sampling behavior related to the mean of the uniform distribution on a simplex, i.e., the center of the simplex.
The mean of the uniform distribution on the simplex $S^M$ [@Park2019spe] is $\mu_M = {\ensuremath{{\mathbf{1}}}}/M$, and the probability of a drawn sample having one entry, e.g., $x_1$, greater than $c/M$ ($c$ times the mean) is $$\begin{aligned} \label{eq:P_cmu} P(x_1 \in (c\mu_M, 1] ) = P(x_1 > c\mu_M )= (1-c/M)^{M-1} \end{aligned}$$ This probability converges as follows: $$\begin{aligned} \label{eq:P_cmu_asym} P(x_1 > c\mu_M ) \rightarrow e^{-c}, \mbox{ as } M \rightarrow \infty \end{aligned}$$ The equivalent $\epsilon $ value in Eq.  is $1-c/M$, and $$\begin{aligned} P(x_i \in (c\mu_M, 1] \mbox{ for some } i) \leq M P(x_1 > c\mu_M ) \approx M e^{-c},\end{aligned}$$ for large $M$ and $c$. Several curves of the probability $P(x_1>c\mu)$ of Eq.  with the asymptotes of Eq.  are presented in Fig. \[fig:P\_x1\_cmu\] with the theoretical values and in Fig. \[fig:P\_x1\_cmu\_MC\] with Monte Carlo estimates using 10,000 samples.

\[h\] \[h\]

### High concentration of volume near the center in a high dimensional simplex

The probability of a sample lying outside a band centered at the mean, under the uniform distribution, is bounded by $$\begin{aligned} \label{eq:P_band} P( | x_1 -\mu | \geq a ) &\leq \frac{Var(x_1)}{a^2} \\ \label{eq:P_band2} &= \frac{1}{a^2} \frac{M-1}{M+1} \frac{1}{M^2} \\ \label{eq:P_band4} &= \frac{1}{k^2} \frac{M-1}{M+1} \mbox{ with $a=k/M$ } \\ \label{eq:P_band5} &\rightarrow \frac{1}{k^2} \mbox{ as $M$ increases} ,\end{aligned}$$ where the variance of the first component, $Var(x_1)$, is computed using the first and second moments in [@Park2019spe]. For a fixed bandwidth $2a$, Eq.  states that the chance of a drawn sample falling outside the band $[\mu - a, \mu+a]$ decreases with the asymptotic rate $1/M^2$. For the band $[ \mu-k/M , \mu+k/M]$, whose width decreases linearly as the dimension $M$ increases, Eq.  states that the chance of a drawn sample falling outside it is asymptotically constant. This analysis reveals the concentration near the mean in a high dimensional simplex in terms of its volume. This is comparable to the volume concentration near the surface or boundary in high dimensional cubes or spheres, because the above mean $\mu$, the volume center of a simplex, is close to its boundary in high dimensions.

Equations used to generate the linear system matrix in Section \[ssec:lin\_highdim\] {#app:lin_highdim_H}
-------------------------------------------------------------------------------------

We first define a radial basis function $\phi$: $$\begin{aligned} \phi(a,b) = \exp\left( -\dfrac{ ({\ensuremath{{\mathbf{v}}}} - a)^2} {2 b^2} \right), \end{aligned}$$ where $a,b$ are real numbers, ${\ensuremath{{\mathbf{v}}}}$ is an index vector ${\ensuremath{{\mathbf{v}}}} = {\ensuremath{{[1,2,\dots, L]}^{\mathsf{T}}}}$, and ${\ensuremath{{\mathbf{v}}}} - a := {\ensuremath{{\mathbf{v}}}} - a {\ensuremath{{\mathbf{1}}}} $.
We define ${\ensuremath{{\mathbf{h}}}}_i$ as follows: [$$\begin{aligned} {\ensuremath{{\mathbf{h}}}}_1 &= \phi(100,10) \\ {\ensuremath{{\mathbf{h}}}}_2 &= 0.2 \phi(120,15) + 0.7 \phi(520,30) \\ {\ensuremath{{\mathbf{h}}}}_3 &= 0.8 \phi(120,17) + 0.1 \phi(525,25) \\ {\ensuremath{{\mathbf{h}}}}_4 &= 0.6 \phi(200,40) \\ {\ensuremath{{\mathbf{h}}}}_5 &= 0.4 \phi(300,100) \\ {\ensuremath{{\mathbf{h}}}}_6 &= 0.6 \phi(400,40) \\ {\ensuremath{{\mathbf{h}}}}_7 &= 0.9 \phi(500,15) \\ {\ensuremath{{\mathbf{h}}}}_8 &= 0.5 \phi(600,10) \\ {\ensuremath{{\mathbf{h}}}}_9 &= \phi(700,60) \\ {\ensuremath{{\mathbf{h}}}}_{10} &= 0.2 \phi(800,15) + 0.4 \phi(330,30) \\ {\ensuremath{{\mathbf{h}}}}_{11} &= \phi(850,200) - 0.3 \phi(700,30) - 0.1\phi(890,8) \\ {\ensuremath{{\mathbf{h}}}}_{12} &= 3 \phi(1500,500) / \max(3 \phi(1500,500))\\ {\ensuremath{{\mathbf{h}}}}_{13} &= 0.7 \phi(850,200) + 0.2 \phi(1500,500) / \max( \phi(1500,500)) \\ {\ensuremath{{\mathbf{h}}}}_{14} &= \phi(850,200) - 0.7 \phi(900,10) - 0.9 \phi(810,6)\\ {\ensuremath{{\mathbf{h}}}}_{15} &= \phi(850,200) - 0.7 \phi(900,10) - 0.2 \phi(830,15) \\ {\ensuremath{{\mathbf{h}}}}_{16} &= \phi(850,200) - 0.7 \phi(900,10) - 0.1 \phi(830,20) \\ {\ensuremath{{\mathbf{h}}}}_{17} &= \phi(850,200) - 0.8 \phi(940,15) \\ {\ensuremath{{\mathbf{h}}}}_{18} &= \phi(850,200) - 0.5 \phi(800,10) \\ {\ensuremath{{\mathbf{h}}}}_{19} &= 0.1 \phi(850,200) + 0.17 \phi(350,30) + 0.1 \phi(450,20)\\ {\ensuremath{{\mathbf{h}}}}_{20} &= 0.04 \phi(850,500) . \end{aligned}$$ ]{} An example of high dimensional nonlinear systems {#app:highdim_nonlin} ------------------------------------------------ A soft-thresholding function is defined as the following $$\begin{aligned} \label{eq:highdim_nonline_f_T} f_T({\ensuremath{{\mathbf{m}}}}) = \max({\ensuremath{{\mathbf{m}}}} - T {\ensuremath{{\mathbf{1}}}}, {\ensuremath{{\mathbf{0}}}}),\end{aligned}$$ with $T=0.03=3\%$. A function that produces an extended version with correlated terms is defined as the following $$\begin{aligned} \label{eq:highdim_nonline_g} g({\ensuremath{{\mathbf{x}}}}) = {\ensuremath{{[x_1,x_2,x_3,x_1 x_2, 3 x_2 x_3]}^{\mathsf{T}}}} \in \mathbb{R}^5,\end{aligned}$$ with ${\ensuremath{{\mathbf{x}}}} \in \mathbb{R}_+$ (non-negative real set). The system function is designed as the following $$\begin{aligned} {\ensuremath{{\mathbf{h}}}} &= h({\ensuremath{{\mathbf{m}}}}) = {\ensuremath{{\mathbf{H}}}} {\ensuremath{{\mathbf{z}}}} + {\ensuremath{{\mathbf{y}}}}_{4} + {\ensuremath{{\mathbf{y}}}}_{14} + {\ensuremath{{\mathbf{y}}}}_{21} + {\ensuremath{{\mathbf{y}}}}_{22} \\ z_k &= m_k \mbox{ for } k = 1,2,3,8,{10}, 12,13, 15,16,19,20 \\ z_k &= 0 \mbox{ for } k = 4,14 \\ [z_5,z_6,z_7] &= [g_1,g_2,g_3] = [m_5,m_6,m_7]\\ {\ensuremath{{\mathbf{g}}}} &= g({\ensuremath{{[m_5,m_6,m_7]}^{\mathsf{T}}}}) =: {\ensuremath{{[g_1,g_2,g_3,g_4,g_5]}^{\mathsf{T}}}} \\ z_9 &= f_T(m_9) \\ z_{11} &= \exp( f_T(m_{11})) - 1 \\ z_{17} &= m_{17}^{1.5} \\ z_{18} &= m_{18}^{0.9} + m_{18}^2 \\ {\ensuremath{{\mathbf{y}}}}_{4} &= 0.6 \phi(peak_4,40) \times m_4\\ peak_4 &= 100 m_4 + 200 \mbox{ (moving peak) } \\ {\ensuremath{{\mathbf{y}}}}_{14} &= \left( \phi(850,200) - 0.7 \phi(valley_{14},10) - 0.9 \phi(810,6) \right) \times m_{14}\\ valley_{14} &= 100 (1-m_{14}) + 820 \mbox{ (moving valley) } \\ {\ensuremath{{\mathbf{y}}}}_{21} &= \phi(350,130) g_4 = \phi(350,130) m_5 m_6 \\ {\ensuremath{{\mathbf{y}}}}_{22} &= \phi(450,70) g_5 = \phi(450,70) m_6 m_7 ^3 \end{aligned}$$ with $f_T({\ensuremath{{\mathbf{m}}}})$ in Eq. 
, $g({\ensuremath{{\mathbf{x}}}})$ in Eq. , [${\mathbf{H}}$]{} in Appendix. \[app:lin\_highdim\_H\], and ${\ensuremath{{\mathbf{m}}}} \in S^{20}$, which leads to ${\ensuremath{{\mathbf{z}}}} \in \mathbb{R}_+^{20}$. For the $i$th mixture having the sample proportion of $p_i$, the samples are drawn according to Gaussian distribution with mean ${\ensuremath{{\mathbf{\mu}}}}_i$ and the covariance $\sigma_i {\ensuremath{{\mathbf{I}}}}$. The mixture centers in percent are defined as the followings also shown in Fig. \[fig:mixture\_centers\_01\] [$$\begin{aligned} \label{eq:mixture_centers} {\ensuremath{{\mathbf{\mu}}}}_1 &= {\ensuremath{{ [0.79, 1.59, 2.38, 3.17, 0.79, 53.17, 7.94, 1.59, 0.79, 1.59, 3.17, 0.79, 1.59, 0.79, 15.87, 0.79, 0.79, 0.79, 0.79, 0.79] }^{\mathsf{T}}}} \\ {\ensuremath{{\mathbf{\mu}}}}_2&= {\ensuremath{{ [43.69, 1.94, 12.62, 3.88, 0.97, 1.94, 0.97, 19.42, 0.97, 1.94, 0.97, 0.97, 1.94, 0.97, 1.94, 0.97, 0.97, 0.97, 0.97, 0.97] }^{\mathsf{T}}}} \\ {\ensuremath{{\mathbf{\mu}}}}_3&= {\ensuremath{{ [0.99, 9.90, 2.97, 3.96, 0.99, 16.83, 9.90, 1.98, 0.99, 1.98, 12.87, 0.99, 19.80, 0.99, 9.90, 0.99, 0.99, 0.99, 0.99, 0.99] }^{\mathsf{T}}}} \end{aligned}$$ ]{} and the corresponding $\sigma_i$s in percent are $$\begin{aligned} \label{eq:mixture_sigmas} \sigma_1 = 1 \% , \, \sigma_2 = 2 \% , \, \sigma_3 = 3 \%\end{aligned}$$ with the sample proportions $$\begin{aligned} \label{eq:mixture_proportions} p_1 = 0.2 , \,p_2= 0.2 , \,p_3= 0.3 \end{aligned}$$ while the remaining proportion of 0.3 ($=1-0.2-0.2-0.3$) is filled with the samples drawn according to the uniform distribution. Moreover, to satisfy the simplex condition on the samples coming from the mixtures, the negative components are truncated to zero and the components greater than one are truncated to one, followed by the scaling or normalization step with the $\ell_1$ norm of the possibly truncated vector. Note that the truncation can lead to the scaling of the vector for normalization and the final sample distribution can be non-Gaussian. However, because this truncation rarely occurs from our experiments with low $\sigma_i$s, the result distribution is approximately Gaussian. After this, we disgard the sample whose $m_{19}, m_{20}$, as obfuscating variables, are greater than $5\%$. If we want to include end-members in the training or test set, we generate them except the end-members of obfuscating variables. Now, let $G$ be such a sample generator function that generates the samples following the uniform distribution or a mixture in $S^{20}$ with the selected specification described above and corresponding noisy measurements with noise level $0.005$. In experiments, when the mixture model is used, we randomly shuffled the samples in training and test sets. We may retain the original compositional vector ${\ensuremath{{\mathbf{m}}}}^0$ including obfuscating variables in ${\ensuremath{{\mathbf{m}}}}^1$, which is used to synthesize noisy measurments, but use ${\ensuremath{{\mathbf{m}}}}$ without those variables for comparisons (Eq. \[eq:ideal\_loss\_obs\]). Nearest neighbors estimators {#app:kNN} ----------------------------- We provide the details of $k$-nearest neighbors (kNN) estimators that we use for comparisons. From the test observation vector [${\mathbf{y}}$]{}, we evaluate the distance $d({\ensuremath{{\mathbf{y}}}}_j,{\ensuremath{{\mathbf{y}}}})$, which the squared Euclidean distance between [${\mathbf{y}}$]{} and $j$th observation vector, ${\ensuremath{{\mathbf{y}}}}_j$, in the training set. 
Let $I_k({\ensuremath{{\mathbf{y}}}})$ be the index set of the training samples having the $k$ smallest distances to [${\mathbf{y}}$]{}. $$\begin{aligned} \hat{{\ensuremath{{\mathbf{x}}}}} = \sum_{j \in I_k({\ensuremath{{\mathbf{y}}}})} l_j {\ensuremath{{\mathbf{x}}}}_j \, / \sum_{j \in I_k({\ensuremath{{\mathbf{y}}}})} l_j , \end{aligned}$$ where $l_j = 1/d({\ensuremath{{\mathbf{y}}}}_j,{\ensuremath{{\mathbf{y}}}})$ and $d({\ensuremath{{\mathbf{y}}}}_j,{\ensuremath{{\mathbf{y}}}}) = \|{\ensuremath{{\mathbf{y}}}}_j - {\ensuremath{{\mathbf{y}}}} \|_2^2$. In the special case of $k=1$, the estimate is simply the closest neighbor in the training set. The performance of kNN depends on the number of neighbors $k$ and the distribution of the data. When $k$ is too small, the estimator does not use enough neighbor information. When it is too large, the estimator averages over too many neighbors and becomes insensitive to the given sample. Therefore, the optimal $k$ under a given data distribution should be an intermediate number. A detailed theoretical analysis of kNN performance is beyond the scope of this work, but we demonstrate this behavior empirically by evaluating the performance for our designed sample distributions and the corresponding system outputs. For fair comparisons with the ANN approaches, we set $N_{train} = 10000 , N_{test} = 1000 $. For stability of the method, a computed distance is truncated to 0 when it is negative and to $10^{10}$ when it exceeds $10^{10}$.

### kNN in a linear low dimension {#app:kNN_lin_lowdim}

We first considered the linear low dimensional case where $L=7, M=5, \sigma=0.005$ and [${\mathbf{H}}$]{} is generated according to the standard Gaussian distribution for its entries. The true composition data are generated according to the uniform distribution and used to synthesize the observations following Eq. \[eq:linearModel\_noiseless\] with additive noise. As discussed, the optimal performance is observed in the mid-range of $k=9$ or 11, as shown in Fig. \[fig:kNN\_performance\_01\] and \[fig:kNN\_performance\_02\], evaluated on the test data.

\[ht\]

### kNN in a linear high dimension {#app:kNN_lin_highdim}

The high dimensional linear case was run with the same settings as above except for $M=20, L=1000$. As in the low dimensional case, the optimal performance is observed in the mid-range of $k=9$ or 11, shown in Fig. \[fig:kNN\_performance\_M20\_01\] and \[fig:kNN\_performance\_M20\_02\]. The component-wise errors are comparable, but the overall error increased compared to the linear low dimensional case in Appendix \[app:kNN\_lin\_lowdim\].

\[ht\]

### kNN in a nonlinear high dimension with obfuscating variables {#app:kNN_nonlin_highdim}

We then simulated the high dimensional nonlinear case with the same settings as the high dimensional linear case ($M=20, L=1000$), but with the nonlinear system defined in Appendix \[app:highdim\_nonlin\], 2 obfuscating variables out of the $M=20$ components, and uniformly distributed compositional vectors. As in the previous cases, the optimal performance is observed in the mid-range of $k=9$ or 11, shown in Fig. \[fig:kNN\_performance\_nonlin\_01\] and \[fig:kNN\_performance\_nonlin\_02\]. The errors evaluated on the test data are clearly larger than those in the high dimensional linear case, due to the nonlinearity of the defined system.
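A compact sketch of this inverse-distance-weighted kNN estimator is given below. It follows the definition above (squared Euclidean distances and weights $l_j = 1/d$); the brute-force neighbor search and the small floor added to avoid division by an exactly zero distance are implementation choices of this sketch.

```python
import numpy as np

def knn_estimate(y, Y_train, X_train, k=11, d_max=1e10):
    """Inverse-distance-weighted kNN estimate of the composition for one observation y.

    Y_train: (N, L) training observations; X_train: (N, M) training compositions.
    """
    d = np.sum((Y_train - y) ** 2, axis=1)   # squared Euclidean distances d(y_j, y)
    d = np.clip(d, 0.0, d_max)               # truncation used for numerical stability
    idx = np.argsort(d)[:k]                  # indices of the k nearest neighbors
    w = 1.0 / np.maximum(d[idx], 1e-30)      # weights l_j = 1/d, floored at 1e-30
    return w @ X_train[idx] / np.sum(w)
```

Applying this estimator to each test observation and averaging the per-component absolute deviations reproduces the type of comparison reported in Table \[tab:errors\].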
\[ht\]

### kNN in a nonlinear high dimension with obfuscating variables and mixture models {#app:kNN_nonlin_highdim_mix}

This case adds more complexity to the high dimensional nonlinear case by using the mixtures defined in Appendix \[app:highdim\_nonlin\], so the compositional vectors are sampled from the mixture of three truncated, approximately Gaussian distributions and the uniform distribution. As in the previous cases, the optimal performance is observed in the mid-range of $k=9$ or 11, shown in Fig. \[fig:kNN\_performance\_nonlin\_mix\_01\] and \[fig:kNN\_performance\_nonlin\_mix\_02\]. Because the three Gaussian components are much more concentrated, and thus have dense neighborhoods, compared to the uniform distribution, which is the most scattered distribution and thus the worst case, the performance is improved significantly compared to the previous case with only uniformly distributed samples.

\[ht\]

Author Bio {#author-bio .unnumbered}
==========

[**Se Un Park**]{} is Senior Data Scientist at Schlumberger. His research interests include stochastic signal processing, variational Bayesian methods, blind deconvolution, machine learning, prognostics and health management, and mineralogy characterization for oil field reservoirs. He holds a PhD degree in Electrical Engineering and Computer Science from the University of Michigan, Ann Arbor, MI, USA.

[^1]: Se Un Park is with Schlumberger, Houston, TX 77077, USA. (e-mail: [email protected], [email protected]).

[^2]: For multiplicative noise, taking the log transformation of the observation leads to the same formula.

[^3]: The softmax activation did not train well for this shallow-layered ANN in our experiments, even with batch normalization after the weight multiplication.

[^4]: We performed our experiments multiple times with different random realizations of the system matrix, so the observed trend is valid.
--- author: - 'A. Blondel,' - 'F. Cadoux,' - 'S. Fedotov,' - 'M. Khabibullin,' - 'A. Khotjantsev,' - 'A. Korzenev,' - 'A. Kostin,' - 'Y. Kudenko,' - 'A. Longhin,' - 'A. Mefodiev,' - 'P. Mermod,' - 'O. Mineev,' - 'E. Noah,' - 'D. Sgalaberna,' - 'A. Smirnov,' - 'N. Yershov' title: 'A fully-active fine-grained detector with three readout views' --- Introduction {#sec:intro} ============ Plastic scintillator material is very commonly used in high-energy physics and astroparticle physics, providing detectors with good timing properties (often below 1 ns resolution) and measurement of the deposited energy. The advent of scintillating fibers and/or wavelength shifting fiber readout allows for very flexible geometrical and tracking properties. Neutrino experiments have used scintillators quite systematically, recent examples being the MINOS [@MINOS], Miner$\nu$a [@MINERvA] and the near detectors suite (ND280) of T2K [@Abe:2011ks; @Abe:2016tii]. In the case of ND280, the readout with silicon photo-multipliers was applied systematically to 60’000 channels, allowing a more compact geometry to be achieved. In the above examples, narrow plastic scintillator bars are disposed perpendicularly to the neutrino beam direction, a geometry that is suitable for neutrino interactions in beams of energy above typically a few GeV, for which the leading final state particles are predominantly emitted in the forward direction. For lower neutrino energies, the final state lepton of the neutrino charged current interactions is emitted more isotropically. In that case it is interesting to aim at a more isotropic geometry. At all energies, nevertheless, nuclear effects commensurate to the binding energy of the Carbon nucleus or its Fermi Momentum, occurring either in the initial state or in the final state of the neutrino interaction, affect the energy balance and the energy reconstruction [@t2k_longpaper_nuecc1pi]. It is thus important to be able to study the effect of nuclear activity by locating the energy deposited by additional nucleons originating from the interaction or resulting from nuclear breakup. In this context the natural granularity scale is around 1 cm, corresponding to the range in plastic of protons with momentum commensurate with the Fermi motion of about 220 MeV/c. Another example of application of a more isotropic geometry could be the astroparticle physics experiments where the detector orbits around the Earth and can detect particles produced by several different sources and coming from any direction. In case of a detector with scintillator bars disposed perpendicularly to the beam axis (hereinafter this axis referred to as $Z$), acceptance and resolution are highly direction-dependent: a particle traveling along a single scintillator bar cannot be tracked and the momentum cannot be defined. Furthermore, in a realistic situation several tracks can be produced and it often happens that the energy deposited cannot be uniquely identified and assigned to a particular spatial direction. In this case a three-dimensional readout of the signals will ensure a more isotropic acceptance and reconstruction. In this paper, concentrating on the case of the project of the ND280 detector upgrade, we present an attempt at such a 3-D design, keeping in mind the need to have the number of channels to a reasonable value and potential applications to other fields in physics. This article is organized as follows: the detector concept and design is described in Sec. \[sec:design\]; in Sec. 
\[sec:test\] the measurements performed on a small prototype with cosmic particles is shown; finally in Sec. \[sec:simulation\] the simulation results of the proposed detector are described. The design of the detector {#sec:design} ========================== The goal of the currently running long baseline experiments, T2K and NO$\nu$A [@NOvA], is to measure the CP violating phase in the neutrino sector, by measuring neutrino appearance phenomena, such as the $\nu_{\mu} \rightarrow \nu_{e}$ and $\bar{\nu}_{\mu} \rightarrow \bar{\nu}_{e}$ transitions. To this effect, one compares the neutrino event rate at a near detector, before oscillations occur, with the neutrino event rate at the far detector, whose position, in the case of T2K and NO$\nu$A, is located near the oscillation maximum. On a longer time scale, new experiments, Hyper-K [@t2hk] and DUNE [@dune], will start searching for CP violation with much larger data sample. For this reason it is timely to develop near detector designs in which as much as possible information is acquired, both by establishing the rate and flavour of neutrino interaction events and by understanding the measurement of neutrino energy – since neutrino energy is the quantity which governs the neutrino oscillations. This latter point requires a detailed knowledge of neutrino interactions. Furthermore, the possible differences between electron- vs muon- (anti)neutrino cross sections are essential for the precise measurement of the appearance oscillation phenomenon. Several experiments [@Abe:2011ks; @MINERvA; @WAGASCI; @NOvA] are currently measuring the neutrino interaction cross sections with scintillator detectors. However a dedicated effort is required to address the specific needs of the neutrino oscillation program. In the case of T2K, these have been spelled out in the T2K ND280 upgrade program: - the near detector measurements must cover the full polar angle range for the final state lepton with a well understood acceptance; - the near detector must be capable of measuring (at least the ratio of) electron and muon neutrino cross sections; - the near detector should be able to address the issue of nuclear effects and their impact on energy reconstruction. The detector must also be fully active and the amount of dead material must be minimized, in order to detect all the energy released by the produced particles and reconstruct with precision the energy of the interacting neutrino. Furthermore, in an experiment like T2K, Hyper-K and NO$\nu$A it becomes very important to have very similar nuclear targets at the near and far detectors, for instance $\text{H}_2\text{O}$ at T2K (Hyper-K) and liquid scintillator at NO$\nu$A. Organic scintillators, $^{12}{\text C}$-based, fulfill as much as possible this requirement. A good compromise would be given by a fine granularity detector, $\sim1~\text{cm}$, the range of a proton with the Fermi momentum, with a good acceptance over the full solid angle. An interesting solution is a full 3D detector [@calocube], that would solve the tracking ambiguity issues. However given the combination of large mass and very fine granularity, a prohibitively large number of readout channels ($O(1\text{M})$, since it scales with the detector volume) would be required if one would read out individually cm-size cubes, leading to high costs and a large amount of dead material. We propose here an alternative solution with more acceptable costs and less dead material inside the neutrino target. 
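The channel-count argument can be made concrete with the short sketch below, which compares an individual cube-by-cube readout (one channel per cube, scaling with the detector volume) with the three-orthogonal-fiber readout proposed below, where each fiber serves a full row of cubes and the channel count scales with the summed face areas. The detector dimensions are those of the design discussed in the following paragraphs; the sketch is only meant to illustrate the orders of magnitude.

```python
# Readout channel counting for a 1.8 x 0.6 x 2.0 m^3 fully active scintillator target.
def channel_counts(cube_edge_cm):
    """Number of cubes and of single-ended fiber channels for a given cube edge (cm)."""
    nx, ny, nz = 180 // cube_edge_cm, 60 // cube_edge_cm, 200 // cube_edge_cm
    n_cubes = nx * ny * nz                   # cube-by-cube readout: one channel per cube
    # Three orthogonal WLS fibers: one fiber (channel) per row of cubes along each axis.
    n_fibers = nx * ny + ny * nz + nx * nz
    return n_cubes, n_fibers

for edge in (1, 2):
    cubes, fibers = channel_counts(edge)
    print(f"{edge} cm cubes: {cubes/1e6:.2f}M cubes -> {fibers/1e3:.1f}k fiber channels")
# 1 cm cubes: 2.16M cubes -> 58.8k channels; 2 cm cubes: 0.27M cubes -> 14.7k channels.
```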
The proposed detector consists of many cubes of extruded scintillator, each one covered by a reflector and read out along three orthogonal directions by wavelength shifting fibers. The chosen scintillator is a composition of a polystyrene doped with 1.5% of paraterphenyl (PTP) and 0.01% of POPOP. The cubes produced by Uniplast, a company in Vladimir, Russia, are covered by a $\sim 50~\mu \text{m}$ thick diffusing layer. Depending on the physics case, different technologies can be used for the reflector to constrain the scintillation light inside the cube. In order to provide good light yield and uniformity in the neutrino target nuclei, the reflector is obtained by etching the scintillator surface in a chemical agent that results in the formation of a white micropore deposit, acting as a diffuser, over a polystyrene surface [@Kudenko:2001qj]. Each cube has three orthogonal cylindrical holes of 1.5 mm diameter drilled along X, Y and Z axes. Three 1.0 mm wavelength shifting (WLS) fibers, multi-clad Kuraray Y11, are inserted through the holes. The ideal size of each cube is $1\times1\times1~\text{cm}^3$, providing the required fine granularity. The axes $X$, $Y$ and $Z$ define respectively the width, the height and the length of the detector and the neutrino beam direction is supposed to be centered along the $Z$ axis. We consider a detector of the size of $1.8\times0.6\times2.0~\text{m}^3$, i.e. about 2 tons, that would correspond to approximately 59k readout channels. Should this number prove to be too high, the size of the cubes could be increased for instance up to $2\times2\times2~\text{cm}^3$, corresponding to only 15k readout channels. For a given detector size, the number of cubes is inversely proportional to the third power of the cube size, while the number of channels scales as the inverse of the square. A picture of a small prototype is shown in Figure \[prototype\]. The parameters of the detector and scintillator cubes are shown in Table \[Table:parameters\_detector\] and \[Table:parameters\_cube\]. ![\[prototype\]The picture of a small prototype is shown. Several cubes of extruded plastic scintillator with three WLS fibers inserted in the three holes are assembled. The size of each cube is $1\times1\times1~\text{cm}^3$.](cube "fig:"){height="5cm"}\ ![\[prototype\]The picture of a small prototype is shown. Several cubes of extruded plastic scintillator with three WLS fibers inserted in the three holes are assembled. The size of each cube is $1\times1\times1~\text{cm}^3$.](YuryTalk1_14_6_17 "fig:"){height="5cm"}\ ![\[prototype\]The picture of a small prototype is shown. Several cubes of extruded plastic scintillator with three WLS fibers inserted in the three holes are assembled. The size of each cube is $1\times1\times1~\text{cm}^3$.](YuryTalk2_14_6_17 "fig:"){height="5cm"}\ The power of such a detector is given by the three views that would provide a $4\pi$ acceptance and solve the tracking ambiguity efficiently. Thanks to this novel configuration we expect it will be possible to associate the particle hits to the right track for most of the neutrino events. The third view will help solving the ambiguity providing a great improvement compared to the two-views detectors, where different tracks can share the same hits in the 2D projection. This will be possible also thanks to the fact that O(1 GeV) neutrino interaction events are usually low multiplicity. Furthermore the track reconstruction itself, where hits are linked together, will help. 
Some ambiguities are still expected for hits close to the neutrino interaction vertex, within a few cms, where some low-momentum nucleons could be ejected by the nucleus within a very close range. In this case, thanks to the fine granularity, this detector can provide a precise calorimetry of the energy released, improving the final reconstruction of the neutrino energy. At the T2K most probable neutrino energy, typically 600 MeV, the event multiplicity is low and a full cell-by-cell readout, which would be prohibitive both in terms of number of readout channels and added passive material, is not necessary. Our first investigations show that a system with a three-view readout looks appropriate. The fine granularity would allow to measure protons with momenta down to 300 MeV/c. None of the above properties could be achieved with a scintillator bar detector. In addition the light is enclosed within the cube and the light yield is expected to be higher than in standard scintillator bars. In this configuration, the energy deposited by a given ionizing particle is simultaneously collected by three WLS fibers, instead of only one, improving pattern recognition and the light yield. Furthermore, in order to obtain three views, the energy deposited by a particle in a single cube is enough. This is a tremendous advantage compared to scintillator bar detectors where two views are provided by a particle depositing energy in at least two different bars. This detector is quite useful to distinguish pions and muons from protons and electrons. In addition it is essential for separating the electrons, which are a manifestation of the electron neutrino interactions, from the $\gamma \rightarrow e^+ e^-$ background. This detector could be also used to detect neutrons produced in the neutrino interaction. Indeed an additional coating of either Gadolinium or Lithium could be applied on the surface of each cube, similarly to what has been done for the SOLID experiment [@solid]: the neutrons are captured after being thermalized and photons, delayed with respect to the lepton produced by the interaction, are released. As already mentioned above, the conceived geometry could be useful also for astroparticle physics experiments, thanks to the full solid angle acceptance. Different configurations of the detector could be obtained in order to fulfill different requirements, such as an improved angular resolution. In this case half of the plastic cubes could be replaced with a very low density material, e.g. AIREX (with a density of $0.06 \text{ g/cm}^3$, about 6% density of that of plastic scintillator) allowing tracks to travel for a longer distance and improve their separation. These aspects are not addressed here. Parameter Cube edge: 1 cm Cube edge: 2 cm ---------------- ----------------- ----------------- \# of cubes 2.16M 270k \# of channels 58.8k 14.7k : Main parameters of the proposed detector of the size of $1.8 \times 0.6 \times 2.0~\text{m}^3$. []{data-label="Table:parameters_detector"} Parameter Value -------------------- ------------ Coating thickness $50~\mu m$ Hole diameter 1.5 mm WLS fiber diameter 1.0 mm : Main parameters of each scintillator cube with 1 cm edge. []{data-label="Table:parameters_cube"} Measurements {#sec:test} ============ In order to study the performance of the scintillator cubes, we carried out measurements of the light yield produced from cosmic ray muons using a small plastic counter. The test bench for detector measurements is shown in Figure \[fig:setup\]. 
![\[fig:setup\] Test bench for study of the parameters of scintillator cubes using cosmic muons. The tested array comprised nine $1\times1\times1$ cm$^3$ cubes. Two trigger counters, each of a $8\times 8$ mm$^2$ cross section, were located above and below the tested cubes. ](test_setup.png){width="12cm"} The ionization area within the tested cube was localized to a $8\times 8$ mm$^2$ spot defined by the trigger counter size. The readout of the scintillation light from each cube was provided by a 1.3 m long double-clad Kuraray Y11 WLS fiber coupled at one end to a photosensor, a Hamamatsu multi-pixel photodiode (MPPC). The sensor S12571-025C [@hamamatsu] consists of an array of 1600 independent $25\times 25$ $\mu$m avalanche photodiodes (pixels) operating in Geiger mode. The MPPC sensitive area is $1\times 1$ mm$^2$. The photo detection efficiency of this MPPC is about 35% (3.5 V overvoltage) for green light of 520 nm as emitted by a Y11 fiber. Signals from MPPCs were amplified by a custom-made preamplifier with a gain of 20, then sent to the 5 GHz sampling digitizer CAEN DT5742 with 12-bit resolution. The signal charge was calculated as an area of signal waveform normalized to number of photoelectrons. The signal timing was obtained at the 10% fraction of the signal amplitude. The position of all cubes along the WLS fibers inserted in the holes was fixed at the distance of 1 m from the photosensor. In order to increase the light yield, the far end of the fiber was covered by a teflon tape. The light yield of one scintillator cube in photoelectrons (p.e.) per minimum ionizing particle (MIP) obtained with one fiber is plotted in Figure \[fig:ly\]. About 55 p.e./MIP were measured with a 1 m long WLS fiber. For a fiber length of 2 m the estimated light yield is expected to be about 35 p.e. according to the attenuation length of Y11 for green light [@Mineev:2011xp]. ![\[fig:ly\] The light yield of a scintillator cube per a minimum ionizing particle with one WLS fiber measured at a distance of 1 m from the MPPC. ](light_yield.pdf){width="9cm"} Since the white chemical reflector does not fully contain the scintillation light, its leak of the scintillating light from one cube to neighboring ones was investigated. The cross-talk was measured between the fired central cube and adjacent cubes which surround the central one as shown in Figure \[fig:setup\]. The cross-talk is defined as the ratio of the light yield in an adjacent cube to the signal in the central cube. The MPPC dark noise was measured simultaneously during the test and subtracted from the signal in the adjacent cube. Figure \[fig:crosstalk\] shows that on average less than 3% of scintillating light penetrates from one cube to another. ![\[fig:crosstalk\]The ratio of the signal in the adjacent cube to the signal in the fired central cube. Negative values are caused by subtracting the average dark noise contribution on the event-by-event basis. Values higher than 0.2 are likely due not to light cross talk but, in large part, to delta-rays or multiple scattering of the cosmic ray in the scintillator cube that produce signal in the adjacent cubes. These entries are taken into account in the final cross talk calculation. The cross-talk is estimated to be 2.9%, that corresponds to the mean of the shown distribution. ](cross_talk.pdf){width="9cm"} In Figure \[fig:timeresolution\] the detector time resolution measured with the setup described above is shown. 
Thanks to the high light yield provided by this geometry combined with a “fast” readout electronics, we achieve a time resolution of a single MIP particle hit of about 0.91 ns in a single WLS fiber and about 0.63 ns for the case when the light is collected by two orthogonal WLS fibers. Further improvement can be obtained by considering the light collected simultaneously by all the three orthogonal WLS fibers. A double-end readout, though more expensive given the doubled number of readout channels, might also improve the time resolution by approximately 40%. ![\[fig:timeresolution\] The measured time distribution of the cosmic particle hit in one scintillator cube is shown when the light yield is collected in one WLS fiber (left) and two WLS fibers (right). ](time_resolution_1fiber_RMS.pdf "fig:"){width="7.5cm"} ![\[fig:timeresolution\] The measured time distribution of the cosmic particle hit in one scintillator cube is shown when the light yield is collected in one WLS fiber (left) and two WLS fibers (right). ](time_resolution_2fiber_RMS.pdf "fig:"){width="7.5cm"} Simulations {#sec:simulation} =========== The full detector, corresponding to the parameters in Table \[Table:parameters\_detector\], has been simulated with the GEANT4 software [@geant4]. The detector response was parametrized and the track reconstruction was performed, for the time being, without pattern recognition. However if there are more tracks in a single event they must fulfill separation criteria in order to be reconstructed. The axes $X$, $Y$ and $Z$ define respectively the width, the height and the length of the detector. A magnetic field of 0.2 T has been simulated along $X$, in the same configuration as ND280. About 230k neutrino interactions in the detector were simulated with the GENIE software [@genie] with the neutrino beam centered along the $Z$ axis and the energy spectrum expected at ND280. The expected performance of the proposed detector was compared to a plastic scintillator detector made of $1 \times 1 \text{ cm}^2$ cross-section bars directed along the $X$ and $Z$ direction, whose goal is to measure particles produced at about $90^{\circ}$ with respect to the neutrino beam direction. The results are shown in Figure \[eff\_neutrino\]. The reconstruction efficiency for muons is shown as a function of the muon angle with respect to the $Z$ axis. It is clear that the detector proposed in this article has a $4\pi$ angular acceptance, with a track reconstruction efficiency exceeding 90% for the whole angular range. Also the reconstruction efficiency as a function of the proton momentum is improved: the momentum threshold to detect protons is reduced from about 450 MeV/c down to 300 MeV/c. This also shows that in the proposed detector it may be possible to improve the reconstruction of the neutrino energy by measuring also low energy protons and pions. We confirmed with a dedicated study that the particle identification capability is similar to that of the $XZ$ scintillator bars detector. ![\[eff\_neutrino\] Track reconstruction efficiencies are shown for the particles produced by GENIE neutrino interactions with $10^{21}$ p.o.t. for both the 3-views detector (named SuperFGD) proposed in this article and a plastic scintillator detector made of bars along the X and Z directions (named FGD XZ). Left: muon reconstruction efficiency as a function of the truth muon $\cos \theta$. Right: proton reconstruction efficiency as a function of the truth proton momentum. 
](eff_superfgd_muon_costheta "fig:"){width="7.5cm"} ![\[eff\_neutrino\] Track reconstruction efficiencies are shown for the particles produced by GENIE neutrino interactions with $10^{21}$ p.o.t. for both the 3-views detector (named SuperFGD) proposed in this article and a plastic scintillator detector made of bars along the X and Z directions (named FGD XZ). Left: muon reconstruction efficiency as a function of the truth muon $\cos \theta$. Right: proton reconstruction efficiency as a function of the truth proton momentum. ](eff_superfgd_prot_mom "fig:"){width="7.5cm"}\ Conclusions {#sec:conclusion} =========== We have shown that the technique of extruded scintillator with wavelength shifting fiber readout can be extended to a three directional readout. The first simulations and the measurements with a small prototype of the conceived detector show encouraging results, with a light yield of more than 50 photo-electrons for one direction. A larger size prototype (aim is $5\times 5\times 5 \text{ cm}^3$ for a total of 125 cubes and 75 readout channels) is under construction to be exposed to tests at particle beams, in which the timing properties and the possible cross-talk between channels can be evaluated. Further simulation studies will establish the predicted performance, for e.g. electron and photon separation or PID by dE/dx, to be benchmarked at test beams. A challenging aspect of further design will be the mechanical integration providing sufficient rigidity as well as compact and light disposition of the readout. Acknowledgements {#sec:acknowledgements} ================ This work was initiated in the framework of the T2K ND280 upgrade task force, convened by M. Yokoyama and M. Zito. Fruitful discussions in this context with our colleagues from T2K are gratefully acknowledged. D. Sgalaberna was supported by the grant number 200020-172709 of the Swiss National Foundation. The work was supported in part by the RFBR/JSPS grant \# 17-52-50038. [99]{} P. Adamson [*et al.*]{}, “The MINOS scintillator calorimeter system”, IEEE Trans. Nucl. Sci. [**49**]{} (2002) 861-863 . L. Aliaga [*et al.*]{},“ Design, Calibration, and Performance of the MINERvA Detector”, Nucl.Instrum.Meth. A [**743**]{} (2014) 130-159 \[arXiv:1305.5199\]. K. Abe [*et al.*]{} \[T2K Collaboration\], “The T2K Experiment”, Nucl. Instrum. Meth. A [**659**]{} (2011) 106 \[arXiv:1106.1238 \[physics.ins-det\]\]. K. Abe [*et al.*]{}, “Proposal for an Extended Run of T2K to $20\times10^{21}$ POT”, arXiv:1609.04111 \[hep-ex\]. K. Abe [*et al.*]{}, “Measurement of neutrino and antineutrino oscillations by the T2K experiment including a new additional sample of $\nu_e$ interactions at the far detector”, arXiv:1707.01048 \[hep-ex\]. D. Ayres [*et al.*]{}, “NO$\nu$A Proposal to Build a 30 Kiloton Off-Axis Detector to Study Neutrino Oscillations in the Fermilab NuMI Beamline”, arXiv:hep-ex/0503053. K. Abe [*et al.*]{} \[Hyper-Kamiokande Proto- Collaboration\], “Physics potential of a long-baseline neutrino oscillation experiment using a J-PARC neutrino beam and Hyper-Kamiokande”, PTEP [**2015**]{} (2015) 053C02 \[arXiv:1502.05199 \[hep-ex\]\]. R.Acciarri [*et al.*]{}, “ Long-Baseline Neutrino Facility (LBNF) and Deep Underground Neutrino Experiment (DUNE) Conceptual Design Report Volume 2: The Physics Program for DUNE at LBNF”, arXiv:1512.06148 T. Koga [*et al.*]{}, “Water/CH Neutrino Cross Section Measurement at J-PARC (WAGASCI Experiment)”, JPS Conf. Proc. [**8**]{} (2015) 023003. E. 
Vannuccini [*et al.,*]{}, “CaloCube: A new-concept calorimeter for the detection of high-energy cosmic rays in space”, NIM A [**845**]{} (2017) 421-424. Y. G. Kudenko [*et al.*]{}, “Extruded plastic counters with WLS fiber readout”, Nucl. Instrum. Meth. A [**469**]{} (2001) 340. Y. Abreu [*et al.*]{}, “A novel segmented-scintillator antineutrino detector”, arXiv:1703.01683 http://www.hamamatsu.com/ O. Mineev [*et al.*]{}, “Scintillator detectors with long WLS fibers and multi-pixel photodiodes”, JINST [**6**]{} (2011) P12004 \[arXiv:1110.2651 \[physics.ins-det\]\]. J. Allison [*et al.*]{}, “Recent developments in Geant4”, Nucl. Instrum. Meth. A [**835**]{} (2016) 186-225. C.Andreopoulos et al., “The GENIE Neutrino Monte Carlo Generator”, Nucl.Instrum.Meth.A614 (2010) 87-104
--- abstract: 'We expect a detectable correlation between two seemingly unrelated quantities: the four point function of the cosmic microwave background (CMB) and the amplitude of flux decrements in quasar (QSO) spectra. The amplitude of CMB convergence in a given direction measures the projected surface density of matter. Measurements of QSO flux decrements trace the small-scale distribution of gas along a given line-of-sight. While the cross-correlation between these two measurements is small for a single line-of-sight, upcoming large surveys should enable its detection. This paper presents analytical estimates for the signal to noise (S/N) for measurements of the cross-correlation between the flux decrement and the convergence, $\langle\delta {\mathcal{F}}\kappa\rangle$, and for measurements of the cross-correlation between the variance in flux decrement and the convergence, $\langle(\delta {\mathcal{F}})^2 \kappa\rangle$. For the ongoing BOSS (SDSS III) and Planck surveys, we estimate an S/N of 30 and 9.6 for these two correlations. For the proposed BigBOSS and ACTPOL surveys, we estimate an S/N of 130 and 50 respectively. Since $\langle(\delta {\mathcal{F}})^2 \kappa\rangle \propto \sigma_8^4$, the amplitude of these cross-correlations can potentially be used to measure the amplitude of $\sigma_8$ at $z \sim 2$ to 2.5% with BOSS and Planck and even better with future data sets. These measurements have the potential to test alternative theories for dark energy and to constrain the mass of the neutrino. The large potential signal estimated in our analytical calculations motivate tests with non-linear hydrodynamical simulations and analyses of upcoming data sets.' author: - Alberto Vallinotto - Matteo Viel - Sudeep Das - 'David N. Spergel' bibliography: - 'VDSV.bib' - 'cmblensing.bib' - 'projects\_new.bib' title: | Cross-correlations of the Lyman-$\alpha$ forest with weak lensing convergence I:\ Analytical Estimates of S/N and Implications for Neutrino Mass and Dark Energy --- Introduction ============ The confluence of high resolution Cosmic Microwave Background (CMB) experiments and large-scale spectroscopic surveys in the near future is expected to sharpen our view of the Universe. Arcminute scale CMB experiments such as Planck [@PLANCK], the Atacama Cosmology Telescope [@ACT; @hincks/etal:2009], the South Pole Telescope [@SPT; @staniszewski/etal:2009], QUIET [@QUIET] and PolarBeaR [@POLAR], will chart out the small scale anisotropies in the CMB. This will shed new light on the primordial physics of inflation, as well as the astrophysics of the low redshift Universe through the signatures of the interactions of the CMB photons with large scale structure. Spectroscopic surveys like BOSS [@mcdonald05; @seljak05] and BigBOSS [@Schlegel:2009uw] will trace the large scale structure of neutral gas, probing the distribution and dynamics of matter in the Universe. While these two datasets will be rich on their own, they will also complement and constrain each other. An interesting avenue for using the two datasets would be to utilize the fact that the arcminute-scale secondary anisotropies in the CMB are signatures of the same large scale structure that is traced by the spectroscopic surveys, and study them in cross-correlation with each other. In this paper, we present the analytic estimates for one such cross correlation candidate - that between the gravitational lensing of the CMB and the flux fluctuations in the [Lyman-$\alpha$[ ]{}]{}forest. 
The gravitational lensing of the CMB, or CMB lensing for short, is caused by the deflection of the CMB photons by the large scale structure potentials [see @lewis.challinor:2006 for a review]. On large scales, WMAP measurements imply that the primordial CMB is well described as an isotropic Gaussian random field [@Komatsu:2008hk]. On small scales, lensing breaks this isotropy and introduces a specific form of non-Gaussianity. These properties of the lensed CMB sky can be used to construct estimators of the deflection field that lensed the CMB. Therefore, CMB lensing provides us with a way of reconstructing a line-of-sight (los) projected density field from zero redshift to the last scattering surface, with a broad geometrical weighting kernel that gets most of its contribution from the $z=1-4$ range [@hu.okamoto:2002; @hirata.seljak:2003; @yoo.zaldarriaga:2008]. While CMB lensing is mainly sensitive to the geometry and large scale projected density fluctuations, the [Lyman-$\alpha$[ ]{}]{}forest, the absorption in quasar (QSO) spectra caused by intervening neutral hydrogen in the intergalactic medium, primarily traces the small-scale distribution of gas (and hence, also matter) along the line of sight. A cross-correlation between these two effects gives us a unique way to study how small scale fluctuations in the density field evolve on top of large scale over- and under-densities, and how gas traces the underlying dark matter. This signal is therefore a useful tool to test to what extent the fluctuations in the [Lyman-$\alpha$[ ]{}]{}flux relate to the underlying dark matter. Once that relationship is understood, it can also become a powerful probe of the growth of structure on a wide range of scales. Since both massive neutrinos and dark energy alter the growth rate of structure at $z\sim 2$, these measurements can probe their effects. This new cross-correlation signal should also be compared with other existing cross-correlations between CMB and LSS that have already been observed and that are sensitive to different redshift regimes [@peiris.spergel:2000; @giannantonio08; @hirata08; @croft06; @Xia:2009dr]. In this work, we build an analytic framework based on simplifying assumptions to estimate the cross-correlation of the first two moments of the [Lyman-$\alpha$[ ]{}]{}flux fluctuation with the weak lensing convergence $\kappa$, obtained from CMB lensing reconstruction, measured along the same line of sight. The finite resolution of the spectrograph limits the range of parallel $k$-modes probed by the absorption spectra and the finite resolution of the CMB experiments limits the range of perpendicular $k$-modes probed by the convergence measurements. These two effects break the spherical symmetry of the $k$-space integration. However, we show that by resorting to a power series expansion it is still possible to obtain computationally efficient expressions for the evaluation of the signal. We then investigate the detectability of the signal in upcoming CMB and LSS surveys, and the extent to which such a signal can be used as a probe of neutrino masses and early dark energy scenarios. A highlight of our results is that the estimated cross-correlation signal seems to have significant sensitivity to the normalization of the matter power spectrum $\sigma_8$. Consistency with CMB measurements – linking power spectrum normalization and the sum of the neutrino masses – makes it possible to use this cross-correlation to put an additional constraint on the latter. The structure of the paper is as follows.
In Section \[sectI\] we introduce the two physical observables, the [Lyman-$\alpha$[ ]{}]{}flux and the CMB convergence (\[sectIa\]), the cross-correlation estimators (\[sectIb\]) and their variances (\[variance\]). Our main result is presented in section \[sn\] where the signal-to-noise ratios are computed. Section \[spectral\] contains a spectral analysis of the observables that aims at finding the [Lyman-$\alpha$[ ]{}]{}wavenumbers that contribute most to such a signal. We focus on two cosmologically relevant applications in sections \[neutrinos\] and \[ede\], for massive neutrinos and early dark energy models, respectively. We conclude with a discussion in section \[discuss\]. Analytical Results {#sectI} ================== Physical Observables {#sectIa} -------------------- ### Fluctuations in the [Lyman-$\alpha$[ ]{}]{}flux {#fluctuations-in-the-lyman-alpha-flux .unnumbered} Using the *fluctuating Gunn–Peterson approximation* [@Gunn:1965hd], the transmitted flux ${\mathcal{F}}$ along a los ${\hat{n}}$ is related to the density fluctuations of the intergalactic medium (IGM) $\delta_{\textrm{IGM}}$ by $${\mathcal{F}}({\hat{n}},z)=\exp\left[-A\left(1+\delta_{\textrm{IGM}}({\hat{n}},z)\right)^{\beta}\right],\label {eq:FGPA}$$ where $A$ and $\beta$ are two functions relating the flux fluctuation to the dark matter overdensities. These two functions depend on the redshift considered: $A$ is of order unity and is related to the mean flux level, baryon fraction, IGM temperature, cosmological parameters and the photoionization rate of hydrogen. A good approximation for its redshift dependence is $A(z)\approx 0.0023\,(1+z)^{3.65}$ (see [@kim07]). $\beta$ on the other hand depends on to the so-called IGM temperature-density relation and in particular on the power-law index of this relation (e.g. [@huignedin97; @mcdonald03]) and should be less dependent on redshift (unless temperature fluctuations due for example to reionization play a role, see [@mcquinn09]). For the calculation of signal/noise in the paper, we neglect the evolution of $A$ and $\beta$ with redshift. While the value of the correlators considered will depend on $A$ and $\beta$, their signal-to-noise (S/N) ratio will not. On scales larger than about $1\,{\,h^{-1}{\rm Mpc}}$ (comoving), which is about the Jeans length at $z=3$, the relative *fluctuations* in the [Lyman-$\alpha$[ ]{}]{}flux $\delta{\mathcal{F}}\equiv({\mathcal{F}}-\bar{{\mathcal{F}}})/\bar{{\mathcal{F}}}$ are proportional to the fluctuations in the IGM density field [@Bi:1996fh; @croft98; @croftweinberg02; @Viel:2001hd; @saitta08]. We assume that the IGM traces the dark matter on large scales, $$\begin{aligned} \delta{\mathcal{F}}({\hat{n}},\chi)&\approx& -A\beta\delta_{\rm IGM}({\hat{n}},\chi) \approx -A\beta\delta({\hat{n}},\chi) .\label{deltaF}\end{aligned}$$ The (variance of the) flux fluctuation in the redshift range covered by the [Lyman-$\alpha$[ ]{}]{}spectrum is then proportional to (the variance of) the fluctuations in dark matter $$\begin{aligned} \delta{\mathcal{F}}^r({\hat{n}})&=&\int_{\chi_i}^{\chi_Q}d\chi\, \delta{\mathcal{F}}^r({\hat{n}},\chi){\nonumber\\}&\approx& \int_{\chi_i}^{\chi_Q}d\chi\, \left(-A\beta\right)^r\delta^r({\hat{n}},\chi),\label{deltaF2}\end{aligned}$$ where the range of comoving distances probed by the [Lyman-$\alpha$[ ]{}]{}spectrum extends from $\chi_i$ to $\chi_Q$. The $r=1$ case corresponds to the fluctuations in the flux and the $r=2$ case corresponds to their variance. 
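For readers who wish to experiment with the FGPA mapping of Eqs. (\[eq:FGPA\]) and (\[deltaF\]), the short Python sketch below evaluates the flux and its linearized fluctuation for a toy line-of-sight overdensity; the redshift scaling of $A$ is the one quoted above, while the value of $\beta$ and the toy density field are purely illustrative placeholders.

```python
import numpy as np

def A_of_z(z):
    # Redshift scaling of the FGPA amplitude quoted in the text: A(z) ~ 0.0023 (1+z)^3.65
    return 0.0023 * (1.0 + z)**3.65

def fgpa_flux(delta_igm, z, beta=1.6):
    # Fluctuating Gunn-Peterson approximation: F = exp[-A (1 + delta_IGM)^beta]
    # (beta = 1.6 is an arbitrary illustrative choice, not a fitted value)
    return np.exp(-A_of_z(z) * (1.0 + delta_igm)**beta)

def delta_flux_linear(delta_igm, z, beta=1.6):
    # Linearized flux fluctuation, deltaF ~ -A beta delta, valid for small delta above the Jeans scale
    return -A_of_z(z) * beta * delta_igm

# Toy example: weak, large-scale fluctuations along a line of sight at z = 2.5
z = 2.5
delta = 0.05 * np.sin(np.linspace(0.0, 6.0 * np.pi, 200))   # illustrative small overdensities
F = fgpa_flux(delta, z)
dF_exact = F / F.mean() - 1.0                               # deltaF = (F - Fbar)/Fbar
dF_lin = delta_flux_linear(delta, z)
print("max |exact - linearized| deltaF:", np.abs(dF_exact - dF_lin).max())
```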
We stress that the above approximation is valid in linear theory neglecting not only the non-linearities produced by gravitational collapse but also those introduced by the definition of the flux and those produced by the thermal broadening and peculiar velocities. Note that while the assumption of “tracing” between gas and dark matter distribution above the Jeans length is expected in the standard linear perturbation theory [@eisensteinhu98], the one between the flux and the matter has been verified a-posteriori using semi-analytical methods ([@Bi:1996fh; @zaroubi06]) and numerical simulations ([@gnedinhui98; @croft98; @vhs06]) that successfully reproduce most of the observed [Lyman-$\alpha$[ ]{}]{}properties. Furthemore, non-gravitational processes such as temperature and/or ultra-violet fluctuations in the IGM should alter the [Lyman-$\alpha$[ ]{}]{}forest flux power and correlations in a distinct way as compared to the gravitational instability process and to linear evolution (e.g. [@fangwhite04; @croft04; @slosar09]). ### Cosmic Microwave Background convergence field {#cosmic-microwave-background-convergence-field .unnumbered} The effective weak lensing convergence $\kappa({\hat{n}})$ measured along a los in the direction ${\hat{n}}$ is proportional to the dark matter overdensity $\delta$ through $$\kappa(\hat{n},\chi_F)=\frac{3H_0^2\Omega_m}{2c^2}\int_0^{\chi_F}d\chi\,W_L (\chi,\chi_F)\frac{\delta(\hat{n},\chi)}{a(\chi)}\label{kappa},$$ where the integral along the los extends up to a comoving distance $\chi_{F}$ and where $W_L(\chi,\chi_{F})=\chi(\chi_{F}-\chi)/\chi_{F}$ is the lensing window function. In what follows we consider the cross-correlation of [Lyman-$\alpha$[ ]{}]{}spectra with the convergence field measured from the CMB, as in Vallinotto et al. [@Vallinotto:2009wa], in which case $\chi_F$ is the comoving distance to the last scattering surface. Note however that it is straightforward to extend the present treatment to consider the cross-correlation of the [Lyman-$\alpha$[ ]{}]{}flux fluctuations with convergence maps constructed from other data sets, like optical galaxy surveys. It is necessary to stress here that Eq. (\[eq:FGPA\]) above depends on the density fluctuations in the IGM, which in principle are distinct from the ones in the dark matter, whereas $\kappa$ depends on the dark matter overdensities $\delta$. If the IGM and dark matter overdensity fields were completely independent, the cross-correlation between them would inevitably yield zero. If however the fluctuations in the IGM and in the dark matter are related to one another, then cross-correlating $\kappa$ and $\delta{\mathcal{F}}$ will yield a non-zero result. The measurement of these cross-correlations tests whether the IGM is tracing the underlying dark matter field and quantifies the bias between flux and matter. The Correlators {#sectIb} --------------- ### Physical Interpretation {#physical-interpretation .unnumbered} The two correlators $\langle \delta{\mathcal{F}}\kappa\rangle$ and $\langle \delta{\mathcal{F}}^2 \kappa\rangle$ have substantially different physical meaning: $\kappa$ is proportional to the over(under)density integrated along the los and is dominated by long wavelength modes with $k\sim 10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$. Intuitively $\kappa$ therefore measures whether a specific los is probing an overall over(under)dense region. If the IGM traces the dark matter field, then by Eq. 
(\[deltaF2\]) $\delta{\mathcal{F}}$ is expected to measure the dark matter overdensity along the same los extending over the redshift range $\Delta z$ spanned by the QSO spectrum. This implies that - $\langle \delta{\mathcal{F}}\kappa\rangle$ quantifies whether and how much the overdensities traced by the [Lyman-$\alpha$[ ]{}]{}flux contribute to the overall overdensity measured all the way to the last scattering surface. Because both $\kappa$ and $\delta{\mathcal{F}}$ are proportional to $\delta$, it is reasonable to expect that this correlator will be dominated by modes with wavelengths of the order of hundreds of comoving Mpc. As such, this correlator may be difficult to measure as it may be more sensitive to the calibration of the [Lyman-$\alpha$[ ]{}]{}forest continuum. - $\langle\delta{\mathcal{F}}^2\kappa\rangle$ measures the relationship between long wavelength modes in the density and the amplitude of the variance of the flux. The variance on small scales and the amplitude of fluctuations on large-scales are not coupled in linear theory. However, in non-linear gravitational theory regions of higher mean density have higher matter fluctuations. These lead to higher amplitude fluctuations in flux [@2001ApJ...551...48Z]. Since $\langle\delta{\mathcal{F}}^2\kappa\rangle$ is sensitive to this interplay between long and short wavelength modes, this correlator is much more sensitive than $\langle \delta{\mathcal{F}}\kappa\rangle$ to the structure growth rate. Furthermore, because $\delta{\mathcal{F}}^2$ is sensitive to short wavelengths, this signal is dominated by modes with shorter wavelength than the ones dominating $\langle \delta{\mathcal{F}}\kappa\rangle$. As such, this signal should be less sensitive to the fitting of the continuum of the [Lyman-$\alpha$[ ]{}]{}forest. ### Tree level approximation {#tree-level-approximation .unnumbered} In what follows we focus on obtaining analytic expressions for the correlations between the (variance of the) flux fluctuations in the [Lyman-$\alpha$[ ]{}]{}spectrum and the CMB convergence $\kappa$ measured along the same los. From Eqs. (\[deltaF2\], \[kappa\]) above it is straightforward to obtain the general expression for the signal $$\begin{aligned} \label{eq:deltaFmK_1} \langle \delta{\mathcal{F}}^r(\hat{n}) \kappa(\hat{n})\rangle&=\frac{3H_0^2\Omega_m} {2c^2}\int_0^{\chi_F}d\chi_c \frac{W_L(\chi_c,\chi_F)}{a(\chi_c)} {\nonumber\\}&\times\int_{\chi_i}^{\chi_Q}d\chi_q \,(-A\beta)^r\, \langle \delta^r(\hat{n},\chi_q) \, \delta(\hat{n},\chi_c) \rangle.\end{aligned}$$ Since the QSOs used to measure the [Lyman-$\alpha$[ ]{}]{}forest lie at $z>2$, it is reasonable to expect that non-linearities induced by gravitational collapse will not have a large impact on the final results. In the following we therefore calculate the $r=1$ and $r=2$ correlators at *tree-level* in cosmological perturbation theory. While beyond the scope of the current calculation, we could include the effects of non-linearities induced by gravitational collapse by applying the *Hyperextended Perturbation Theory* of Ref. [@Scoccimarro:2000ee] to the terms in Eq. (\[eq:deltaFmK\_1\]). At tree level in perturbation theory the redshift dependence of the matter power spectrum factorizes into $P(k,\chi_c,\chi_q)=P_L(k)\,D(\chi_c)\,D(\chi_q)$, where $P_L(k)$ denotes the zero-redshift linear power spectrum and $D(\chi)$ the growth factor at comoving distance $\chi$. Furthermore, the correlator appearing in the integrand of Eq. 
(\[eq:deltaFmK\_1\]) depends on the separation $\Delta\chi=\chi_q-\chi_c$ between the two points running on the los and in general it will be significantly non-zero only when $|\Delta\chi|\leq \Delta\chi_0\approx 150 {\,h^{-1}{\rm Mpc}}$. Also, at tree level in perturbation theory these correlators carry $2r$ factors of $D$.[^1] Using the approximation $$\begin{aligned} D(\chi_c)&=&D(\chi_q-\Delta\chi)&\approx D(\chi_q),\label{eq:approx_D}\\ W_L(\chi_c,\chi_F)&=&W_L(\chi_q-\Delta\chi,\chi_F)&\approx W_L(\chi_q,\chi_F),\\ a(\chi_c)&=&a(\chi_q-\Delta\chi)&\approx a(\chi_q),\label{eq:approx_a}\end{aligned}$$ we can then write $\langle \delta^r(\hat{n},\chi_q) \, \delta(\hat{n},\chi_c) \rangle \approx \xi_r(\Delta\chi)D^{2r}(\chi_q)$ and trade the double integration (over $ \chi_c$ and $\chi_q$) for the product of two single integrations over $\Delta\chi$ and $\chi_q$. Equation (\[eq:deltaFmK\_1\]) factorizes into $$\begin{aligned} \label{eq:deltaFmK_2} \langle \delta{\mathcal{F}}^r \kappa\rangle&\approx(-A \beta)^r\frac{3H_0^2\Omega_m}{2c^2}\int_{\chi_i}^{\chi_Q}d\chi_q \frac{W_L(\chi_q,\chi_F)}{a(\chi_q)} D^{2r} (\chi_q) {\nonumber\\}&\times\int_{-\Delta\chi_0}^{\Delta\chi_0}d\Delta\chi \, \xi_r(\Delta\chi).\end{aligned}$$ This is the expression used to evaluate the signal. The determination of an expression for $\xi_r$ and of an efficient way for evaluating it is the focus of the rest of the section. ### Window Functions {#window-functions .unnumbered} The experiments that measure the convergence and the flux fluctuations have finite resolutions. We approximate the effective window functions of these experiments by analytically tractable Gaussian function. These two window functions act differently: the finite resolution of the CMB convergence measurements limits the accessible range of modes perpendicular to the los, $\vec{k}_{\perp}$, and the finite resolution of the [Lyman-$\alpha$[ ]{}]{}spectrum limits the range of accessible modes $k_{\parallel}$ parallel to the los. This separation of the modes into the ones parallel and perpendicular to the los is intrinsically dictated by the nature of the observables and it cannot be avoided once the finite resolution of the various observational campaigns is taken into account. Because of this symmetry, the calculation is most transparent in cylindrical coordinates: $\vec{k}=k_{\parallel}{\hat{n}}+\vec{k}_{\perp}$. The high-$k$ (short wavelength) cutoff scales for the CMB and [Lyman-$\alpha$[ ]{}]{}modes are denoted by $k_C$ and $k_L$ respectively. Furthermore, we also add a low-$k$ (long wavelength) cutoff for the [Lyman-$\alpha$[ ]{}]{}forest, to take into account the fact that wavelengths longer than the spectrum will appear in the spectrum itself as a background. We denote this low-$k$ cutoff by $k_l$. After defining the auxiliary quantities $$\begin{aligned} \bar{k}^2&\equiv&\frac{k_L^2\,k_l^2}{k_L^2+k_l^2},\\ \hat{k}^2&\equiv&\frac{k_L^2\,k_l^2}{2k_l^2+k_L^2},\end{aligned}$$ the window functions acting on the [Lyman-$\alpha$[ ]{}]{}and on the CMB modes, denoted respectively by $W_{\alpha}$ and $W_{\kappa}$, are defined through $$\begin{aligned} W_{\alpha}(k_{\parallel},k_L,k_l)&\equiv&\left[1-e^{-(k_{\parallel}/k_l)^2}\right]e^{- (k_{\parallel}/k_L)^2}\nonumber\\ &=&e^{-(k_{\parallel}/k_L)^2}-e^{-(k_{\parallel}/\bar{k})^2},\label{eq:klkL}\\ W_{\kappa}(\vec{k}_{\perp},k_C)&\equiv&e^{(-\vec{k}^2_{\perp}/k^2_C)},\end{aligned}$$ where the direction dependence of the two window functions has been made explicit. 
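The window functions of Eq. (\[eq:klkL\]) are simple to tabulate; the following minimal Python sketch (with illustrative cutoff values rather than the survey-specific ones derived below) implements $W_{\alpha}$, $W_{\kappa}$ and the combined scale $\bar{k}$.

```python
import numpy as np

def kbar(kL, kl):
    # bar{k}^2 = kL^2 kl^2 / (kL^2 + kl^2), the combined scale entering W_alpha
    return kL * kl / np.sqrt(kL**2 + kl**2)

def W_alpha(k_par, kL, kl):
    # Line-of-sight window: [1 - exp(-(k/kl)^2)] exp(-(k/kL)^2)
    #                      = exp(-(k/kL)^2) - exp(-(k/kbar)^2)
    return np.exp(-(k_par / kL)**2) - np.exp(-(k_par / kbar(kL, kl))**2)

def W_kappa(k_perp, kC):
    # Transverse (CMB convergence) window: exp(-k_perp^2 / kC^2)
    return np.exp(-(k_perp / kC)**2)

# Illustrative cutoffs in h/Mpc (placeholders, not the Planck or ACTPOL values of the text)
kL, kl, kC = 1.0, 1.0e-2, 0.1
k = np.logspace(-4, 1, 500)
print("W_alpha peak value:", W_alpha(k, kL, kl).max())
print("W_kappa at k_perp = kC:", W_kappa(kC, kC))   # = exp(-1)
```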
We determine the values of the cutoff scales as follows. For the [Lyman-$\alpha$[ ]{}]{}forest, we consider the limitations imposed by the spectrograph, adopting the two cutoff scales $k_L$ and $k_l$ according to the observational specifications. For the reconstruction of the CMB convergence map we compute the minimum variance lensing reconstruction noise following Hu and Okamoto [@Hu:2001kj]. We then identify the multipole $l_c$, where the signal power spectrum equals the noise power spectrum for the reconstructed deflection field (for $l>l_c$ the noise is higher than the signal). Finally, we translate the angular cutoff $l_c$ into a 3-D Fourier mode $k_C$ at the relevant redshift so to keep only modes with $k\le k_C$ in the calculation. Note that if we had used the shape of the noise curve instead of this Gaussian cutoff, we would have effectively retained more Fourier modes, thereby increasing the signal. However, to keep the calculations simple and conservative we use the above Gaussian window. In what follows, we will present results for convergence map reconstructions from the datasets of two CMB experiments: Planck and an hypothetical CMB polarization experiment based on a proposed new camera for the Atacama Cosmology Telescope (ACTPOL). For the former, we adopt the sensitivity values of the 9 frequency channels from the Blue Book [@PlanckBB]. For the latter we assume a hypothetical polarization based CMB experiment with a $3$ arcmin beam and $800$ detectors, each having a noise-equivalent-temperature (NET) of 300 $\mu$K-$\sqrt{s}$ over $8000$ sq. deg., with an integration time of $3\times 10^7$ seconds. We further assume that both experiments will completely cover the $8000$ sq. deg footprint of BOSS. ![image](dK_S_Planck_LCDM_CMB.eps){width="49.00000%"} ![image](dK_S_Pol_LCDM_CMB.eps){width="49.00000%"} ### Auxiliary Functions {#auxiliary-functions .unnumbered} Because the calculation has cylindrical rather than spherical symmetry, the evaluation of the correlators of Eq. (\[eq:deltaFmK\_2\]) is more complicated, particularly for $r > 1$. As shown in the appendix, it is possible to step around this complication and to obtain results that are computationally efficient with the adoption of a few auxiliary functions that allow the integrations in $k$-space to be carried out in two steps, first integrating on the modes perpendicular to the los, and subsequently on the ones parallel to the los. 
The perturbative results for the correlators are expressed as combinations of the following auxiliary functions: $$\begin{aligned} \tilde{H}_m(k_{\parallel};k_C)&\equiv \int_{|k_{\parallel}|}^{\infty}\frac{k\,dk}{2\pi}\, \frac{P_L(k)}{m!}\,\left(\frac{k^2-k_{\parallel}^2}{k_C^2}\right)^{m}\,\exp\left(-\frac {k^2-k_{\parallel}^2}{k_C^2}\right), & \label{eq:DefA}\\ \tilde{L}_m(k_{\parallel};k_C)&\equiv \int_{|k_{\parallel}|}^{\infty}\frac{dk}{2\pi k} \,\frac{P_L(k)}{m!}\,\left(\frac{k^2-k_ {\parallel}^2}{k_C^2}\right)^{m}\,\exp\left(-\frac{k^2-k_{\parallel}^2}{k_C^2}\right),& \label{eq:Defbeta}\\ f^{(n)}_m(\Delta\chi;k_C,k_L)&\equiv \int_{-\infty}^{\infty}\frac{dk_{\parallel}}{2\pi}\, \left(\frac{k_{\parallel}}{k_L}\right)^n\exp\left[-\frac{k_{\parallel}^2}{k_L^2}+ik_ {\parallel}\Delta\chi\right]\tilde{f}_m(k_{\parallel};k_C) &\textrm{ with } f=\{L,H\}, \label{eq:Deff_m^n}\\ \bar{f}_0^{(n)}(s)&\equiv \int_{-\infty}^{\infty}\frac{dk}{2\pi}\left(\frac{k}{s}\right)^n \left[e^{-2k^2/s^2}-e^{-k^2/\hat{k}^2}\right]\tilde{f}_0(k;\infty) &\textrm{ with } f=\{L,H \}.\label{eq:Deff_bar}\end{aligned}$$ Equations (\[eq:DefA\]) and (\[eq:Defbeta\]) above represent an intermediate step, where the integration on the modes perpendicular to the los is carried out. Equations (\[eq:Deff\_m\^n\]) and (\[eq:Deff\_bar\]) are then used to carry out the remaining integration over the modes that are parallel to the los. The symmetry properties of the auxiliary functions are as follows. The functions $\tilde{f}_m$ are *real and even* in $k_{\parallel}$ regardless of the actual value of $m$. This in turn implies that $f^{(n)}_m$ are real and even (imaginary and odd) in $\Delta\chi$ when $n$ is even (odd). Furthermore, the coefficients $\bar{f}_0^{(n)}$ are real and non-zero only if $n$ is even, thus ensuring that $\xi_r(\Delta\chi)$ is always real-valued. ![image](S_Planck_LCDM_CMB.eps){width="49.00000%"} ![image](S_Pol_LCDM_CMB.eps){width="49.00000%"} ### The $\langle \delta{\mathcal{F}}\kappa\rangle$ correlator {#the-langle-deltamathcalfkapparangle-correlator .unnumbered} In the $r=1$ case it is straightforward to identify $\xi_1(\Delta\chi)$ with a two point correlation function measured along the los. However, the intrinsic geometry of the problem and the inclusion of the window functions leads to evaluate this correlation function in a way that is different from the usual case, where the spherical symmetry in $k$-space can be exploited. In the present case we have $$\xi_1(\Delta\chi)=H_0^{(0)}(\Delta\chi;k_C,k_L)- H_0^{(0)}(\Delta\chi;k_C,\bar{k}). \label{eq:xi_1}$$ It is then straightforward to plug Eq. (\[eq:xi\_1\]) into Eq. (\[eq:deltaFmK\_2\]) to obtain $\langle \delta{\mathcal{F}}(\hat{n}) \kappa(\hat{n})\rangle$.[^2] In Fig. \[Fig:dK\_S\_3\_2\] we show the absolute value of the cross-correlation of the convergence $\kappa$ of the CMB with the [Lyman-$\alpha$[ ]{}]{}flux fluctuations $\delta{\mathcal{F}}$ observed for a quasar located at redshift $z$ and whose spectrum spans a range of redshift $\Delta z$. The cosmological model used (and assumed throughout this work) is a flat universe with $\Omega_{\rm m}=0.25$, $h=0.72$ and $\sigma_8=0.84$ consistent with the WMAP-5 cosmology [@Komatsu:2008hk]. The left and right panel show the results for the resolution of Planck and of the proposed ACTPOL experiment. 
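As an illustration of the two-step $k$-space integration, the sketch below tabulates $\tilde{H}_0(k_{\parallel};k_C)$ of Eq. (\[eq:DefA\]) and then carries out the line-of-sight Fourier integral of Eq. (\[eq:Deff\_m\^n\]) to obtain $H_0^{(0)}(\Delta\chi)$ and $\xi_1(\Delta\chi)$ of Eq. (\[eq:xi\_1\]); the power-law $P_L(k)$, the cutoffs and the integration grids are placeholders chosen only for illustration, not the CAMB spectrum and survey values used for our figures.

```python
import numpy as np

def P_L(k, amp=1.0e4, kp=0.02, n=0.96):
    # Toy linear power spectrum (a stand-in for a CAMB P(k)); arbitrary normalization
    return amp * (k / kp)**n / (1.0 + (k / 0.1)**3.5)

def H0_tilde(k_par, kC, kmax=50.0, nk=4000):
    # Eq. (DefA) with m = 0: integral over k from |k_par| to infinity (truncated at kmax)
    k = np.linspace(abs(k_par), kmax, nk)
    integrand = k * P_L(k) * np.exp(-(k**2 - k_par**2) / kC**2) / (2.0 * np.pi)
    return np.trapz(integrand, k)

def H0_00(dchi, kC, kL, nk=2000):
    # Eq. (Deff_m^n) with n = m = 0; H0_tilde is even in k_par, so the result is real
    k_par = np.linspace(0.0, 5.0 * kL, nk)
    Ht = np.array([H0_tilde(kp, kC) for kp in k_par])
    integrand = np.cos(k_par * dchi) * np.exp(-(k_par / kL)**2) * Ht / np.pi
    return np.trapz(integrand, k_par)

def xi_1(dchi, kC, kL, kl):
    # Eq. (xi_1): difference of the two H_0^(0) terms, with kbar^2 = kL^2 kl^2 / (kL^2 + kl^2)
    kbar = kL * kl / np.sqrt(kL**2 + kl**2)
    return H0_00(dchi, kC, kL) - H0_00(dchi, kC, kbar)

# Illustrative cutoffs (h/Mpc) and separations (Mpc/h), not the survey values
print([xi_1(dchi, kC=0.1, kL=1.0, kl=1.0e-3) for dchi in (0.0, 10.0, 50.0)])
```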
We artificially set $A=\beta=1$, effectively “turning off” the physics of the IGM: this choice is not dictated by any physical argument but by the fact that it makes the dynamics of structure formation apparent. The behavior of $\langle \delta{\mathcal{F}}\, \kappa\rangle$ shown in Fig. \[Fig:dK\_S\_3\_2\] makes physical sense. Recall that this correlator is sensitive to the overdensity integrated along the redshift interval $\Delta z$ (spanned by the QSO spectrum) that contributes to the CMB convergence. It then increases almost linearly with the length of the QSO spectrum $\Delta z$. It also increases if the resolution of the CMB experiment $k_C$ is increased. An increased value of $\Delta z$ corresponds to a longer [Lyman-$\alpha$[ ]{}]{}spectrum, carrying a larger amount of information and thus leading to a larger correlation. Similarly, an increased value of $k_C$ corresponds to a higher resolution of the reconstructed convergence map and therefore more modes – and information – being included in the correlation. Increasing the source’s redshift (while keeping $A$ and $\beta$ fixed) on the other hand results in a *decrease* in $\langle \delta{\mathcal{F}}\, \kappa\rangle$. This fact is related to the growth of structure: the spectrum of a higher redshift QSO is probing regions where structure is less clumpy and therefore the absolute value of the correlation is smaller. Finally, once the redshift dependence of $A$ is turned on ($\beta$ is only mildly redshift dependent) the above results change, leading to a final signal that increases with redshift. We stress here that the values of the correlators will be different when $A$ and $\beta$ are different from unity. Ultimately these values should be recovered from a full non-linear study based on large-scale, high-resolution hydrodynamical simulations. However, numerical studies based on hydrodynamical simulations have shown convincingly that for both the flux power spectrum (2-pt function) and flux bispectrum (3-pt function) the shape is very similar to that of the matter power spectrum and bispectrum, while the amplitude is usually matched for values of $A$ and $\beta$ that are different from linear predictions (see discussion in [@vielbisp]). In this framework, non-linear hydrodynamical simulations should ultimately provide the “effective” values for $A$ and $\beta$ that will match the observed correlators, and our results can be recast in terms of these new parameters in a straightforward way.

### The $\langle \delta{\mathcal{F}}^2 \kappa\rangle$ correlator {#the-langle-deltamathcalf2-kapparangle-correlator .unnumbered}

The $r=2$ case, where the variance of the flux fluctuation $\delta{\mathcal{F}}^2$ integrated along the los is cross-correlated with $\kappa$, is more involved. Looking back at Eqs. (\[eq:deltaFmK\_1\], \[eq:deltaFmK\_2\]) it is possible to realize that the cumulant correlator $\langle \delta^2(\hat{n},\chi_q) \, \delta(\hat{n},\chi_c) \rangle=\xi_2(\Delta\chi)$ corresponds to a collapsed three-point correlation function, as two of the $\delta$’s refer to the same physical point. The evaluation of $\xi_2$ is complicated by the introduction of the window functions $W_{\alpha}$ and $W_{\kappa}$. For the sake of clarity, we report here only the final results at tree level in cosmological perturbation theory, relegating the lengthy derivation to the appendix. Letting $$\xi_2(\Delta\chi)=\langle\delta^2_q\delta_c\rangle_{1,2} +2\langle\delta^2_q\delta_c\rangle_{2,3},\label{eq:Def_ddd12_ddd23}$$ and using the auxiliary functions defined in Eqs.
(\[eq:DefA\]-\[eq:Deff\_bar\]) above, it is possible to obtain the following series solution $$\begin{aligned} \langle\delta^2\delta\rangle_{1,2}&=& 2\sum_{m=0}^{\infty}\left\{\frac{5}{7}\, \left[H_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-H_m^{(0)}(\Delta\chi,\chi_q;k_C,\bar{k}) \right]^2\right.\nonumber\\ &+&\left[k_L\,H_m^{(1)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}\,H_m^{(1)}(\Delta\chi, \chi_q;k_C,\bar{k})\right] \left[k_L\,L_m^{(1)}(\Delta\chi, \chi_q;k_C,k_L)-\bar{k}L_m^{(1)}(\Delta\chi, \chi_q;k_C,\bar{k})\right] {\nonumber\\}&-&m\,k_C^2\,\left[H_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-H_m^{(0)}(\Delta\chi, \chi_q;k_C,\bar{k})\right]\left[L_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-L_m^{(0)}(\Delta \chi,\chi_q;k_C,\bar{k})\right] {\nonumber\\}&+& \frac{2}{7}\left[k_L^2\,L_m^{(2)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}^2\,L_m^{(2)} (\Delta\chi,\chi_q;k_C,\bar{k})\right]^2{\nonumber\\}&-&\frac{4m}{7}\, k_C^2\,\left[k_L\,L_m^{(1)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}\,L_m^{(1)}(\Delta\chi, \chi_q;k_C,\bar{k})\right]^2 {\nonumber\\}&+&\left.\frac{m(2m-1)}{ 7}\,k_C^4\,\left[L_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-L_m^{(0)}(\Delta\chi, \chi_q;k_C,\bar{k})\right]^2 \right\},\label{d2d_12_kLkl_final}\\ \langle\delta_q^2\delta_c\rangle_{2,3}&=& 2\sum_{m=0}^{\infty}\frac{(-1)^m\,2^m}{m!} \left[\frac{6}{7}\bar{H}_0^{(m)}(k_L)H_0^{(m)}(\Delta\chi;k_C,k_L)\right. +\frac{1}{2}k_L^2\bar{L}_0^{(m+1)}(k_L)\,H_0^{(m+1)}(\Delta\chi;k_C,k_L){\nonumber\\}&+&\frac{1}{2}k_L^2\bar{H}_0^{(m+1)}(k_L)\,L_0^{(m+1)}(\Delta\chi;k_C,k_L) +\frac{3}{7}k_L^4\,\bar{L}_0^{(m+2)}(k_L)\,L_0^{(m+2)}(\Delta\chi;k_C,k_L){\nonumber\\}&-&\left. \frac{k_L^2}{7} \bar{H}_0^{(m)}(k_L)\,L_0^{(m+2)}(\Delta\chi;k_C,k_L) -\frac{k_L^2}{7} \bar{L}_0^{(m+2)}(k_L)\,H_0^{(m)}(\Delta\chi;k_C,k_L) + (k_L\rightarrow\bar{k})\right].\label{eq:d2d_23final_kLkl} \end{aligned}$$ In Fig. \[Fig:S\_3\_2\] we show the result obtained using the tree level expression for $\langle\delta{\mathcal{F}}^2\kappa\rangle$, Eqs. (\[eq:Def\_ddd12\_ddd23\]-\[eq:d2d\_23final\_kLkl\]). As before, we focus on the physics of structure formation and we turn off the IGM physics by setting $A=\beta=1$. First, it is necessary to keep in mind that $\langle\delta{\mathcal{F}}^2\kappa\rangle$ is sensitive to the interplay of long and short wavelength modes and it probes the enhanced growth of short wavelength overdensities that lie in an environment characterized by long wavelength overdensities. The behavior of $\langle\delta{\mathcal{F}}^2\kappa\rangle$ with respect to $z$ and $\Delta z$ is similar to that of $\langle\delta{\mathcal{F}}\kappa\rangle$: it increases if $\Delta z$ is increased or if the QSO redshift is decreased. However, the effect of the growth of structure is in this case stronger than in the previous case. This does not come as a surprise, as the growth of structure acts coherently in two ways on $\langle\delta{\mathcal{F}}^2\kappa\rangle$. Since in a model all modes grow at the same rate, a lower redshift for the source QSO implies larger overdensities on large scales which in turn enhance even further the growth of overdensities on small scales. Thus by lowering the source’s redshift two factor play together to enhance the signal: first the fact that long and short wavelength modes have both grown independently, and second the fact that being coupled larger long-wavelength modes boost the growth of short wavelength modes by a larger amount. This dependence is also made explicit in Eq. 
(\[eq:deltaFmK\_2\]), where we note that $\langle\delta{\mathcal{F}}^2\kappa\rangle$ depends on four powers of the growth factor. Finally, as before, the higher the resolution of the CMB experiment the larger is $d\langle\delta{\mathcal{F}}^2\kappa\rangle/d\Delta z$. This too makes physical sense, as a larger resolution leads to more modes contributing to the signal and therefore to a larger cross-correlation. Variance of correlators {#variance} ----------------------- To assess whether the correlations between fluctuations in the flux and convergence are detectable we need to estimate the signal-to-noise ratio, which in turn requires the evaluation of the noise associated with the above observable. As mentioned above, both instrumental noise and cosmic variance are considered. We then move to estimate the variance of our correlator $$\begin{aligned} \sigma_r^2\equiv\langle\delta{\mathcal{F}}^{2r}\kappa^2\rangle-\langle\delta{\mathcal{F}}^{r} \kappa\rangle^2.\end{aligned}$$ Since $\langle\delta{\mathcal{F}}^{r}\kappa\rangle^2$ is just the square of the signal, we aim here to obtain *estimates* for $\langle\delta{\mathcal{F}}^{2r}\kappa^2\rangle$. From Eq. (\[eq:deltaFmK\_1\]), we get: $$\begin{aligned} \langle \delta{\mathcal{F}}^{2r} \kappa^2\rangle&=\left(A^r\beta^r \frac{3H_0^2\Omega_m} {2c^2}\right)^2\int_0^{\chi_F}d\chi_c \frac{W_L(\chi_c,\chi_F)}{a(\chi_c)}{\nonumber\\}&\times \int_0^{\chi_F}d\chi'_c \frac{W_L(\chi'_c,\chi_F)}{a(\chi'_c)} \int_{\chi_i}^{\chi_Q}d\chi_q \,\int_{\chi_i}^{\chi_Q}d\chi'_q {\nonumber\\}&\times \langle \delta^{r}(\hat{n},\chi_q) \,\delta^{r}(\hat{n},\chi'_q)\, \delta(\hat{n},\chi_c) \delta(\hat{n},\chi'_c) \rangle \, , \label{eq:deltaF2mK2_1}\end{aligned}$$ where there are now two integrals running along the convergence los (on $\chi_c$ and $\chi_c'$) and two running along the [Lyman-$\alpha$[ ]{}]{}spectrum (on $\chi_q$ and $\chi_q'$). The correlator appearing in the integrand of Eq. (\[eq:deltaF2mK2\_1\]) is characterized by an even ($2r+2$) number of $\delta$ factors. This implies that an approximation to its value can be obtained using Wick’s theorem. When Wick’s theorem is applied, many different terms will in general appear. Adopting for sake of brevity the notation $\delta(\hat{n},\chi'_i)\equiv\delta_i$, terms characterized by the contraction of $\delta_i$ and $\delta_j$ will receive non-negligible contributions over the overlap of the respective los. The terms providing the largest contribution to $\langle \delta{\mathcal{F}}^{2r} \kappa^2\rangle$ are the ones where $\delta_c$ is contracted with $\delta_{c'}$: these terms in fact contain the value of the cosmic variance of the convergence and receive significant contributions from all points along the los from the observer all the way to the last scattering surface. On the other hand, whenever we consider the cross-correlation between a $\delta_c$ and a $\delta_q$, this will acquire a non-negligible value only for those set of points where the los to the last scattering surface overlaps with the [Lyman-$\alpha$[ ]{}]{}spectrum. As such, these terms are only proportional to the length of the [Lyman-$\alpha$[ ]{}]{}spectrum, and thus sensibly smaller than the ones containing the variance of the convergence. We note in passing that the same argument should also apply to the connected part of the correlator, which should be significantly non-zero only along the [Lyman-$\alpha$[ ]{}]{}spectrum. Mathematically, these facts become apparent from Eq. 
(\[eq:deltaF2mK2\_1\]) above, where terms containing $\langle\delta_c\delta_{c'}\rangle$ are the only ones for which the integration over $\chi_c$ and $\chi'_{c}$ can be traded for an integration over $\Delta\chi_c$ *and* an integration over $\chi_c$ that extends *all the way to* $\chi_F$. If on the other hand $\delta_c$ is contracted with a $\delta_q$ factor, then the approximation scheme of Eqs. (\[eq:approx\_D\]-\[eq:approx\_a\]) leads to an integral over $\Delta\chi$ and to an integral over $\chi_q$ that extends only over the length probed by the [Lyman-$\alpha$[ ]{}]{}spectrum. It seems therefore possible to safely neglect terms where the $\delta$’s referring to the convergence are not contracted with each other. ### The variance of $\delta{\mathcal{F}}\,\kappa$ {#the-variance-of-deltamathcalfkappa .unnumbered} We start by considering the variance of $\delta{\mathcal{F}}\,\kappa$. Setting $r=1$ in Eq.  (\[eq:deltaF2mK2\_1\]) and using Wick’s theorem we obtain $$\begin{aligned} \langle \delta_q\,\delta_{q'}\, \delta_c\,\delta_{c'}\rangle&\approx& 2\langle \delta_q \delta_c\rangle \langle \delta_{q'}\delta_{c'}\rangle+\langle \delta_q \delta_{q'}\rangle \langle \delta_{c}\delta_{c'}\rangle.\end{aligned}$$ We notice immediately that the first term is twice the square of $\langle\delta{\mathcal{F}}\, \kappa\rangle$, while the second term is proportional to two correlation function characterized by cutoffs acting *either* on the modes that are parallel *or* perpendicular to the los, *but not on both*. It is then possible to show that $$\begin{aligned} \langle \delta_{q}\delta_{q'}\rangle &=&D(\chi_q)\,D(\chi_{q'})\left[H_0^{(0)}(\Delta\chi_q;\infty,\,k_L/\sqrt{2})\right.{\nonumber\\}&-&\left. H_0^{(0)}(\Delta\chi_q;\infty,\,\bar{k}/\sqrt{2})\right],\label{Adqdq}\\ \langle \delta_{c}\delta_{c'}\rangle &=&D(\chi_c)\,D(\chi_{c'})\,H_0^{(0)}(\Delta\chi_c;k_C/\sqrt{2}, \infty)\label{Adcdc},\\ \langle\delta_q\delta_{c}\rangle&=&D(\chi_c)D(\chi_q){\nonumber\\}&\times& \left[H_0^0(\Delta\chi;k_C, k_L)-H_0^0(\Delta\chi;k_C, \bar{k})\right],\\ \langle\delta_q^2\rangle&=&D^2(\chi_q)\left[\bar{H}_0^{(0)}(\chi_q,k_L) +\bar{H}_0^{(0)}(\chi_q,\bar{k})\right],\end{aligned}$$ where the last two equations have been added here for sake of completeness, as they will be useful in what follows. The variance of $\delta{\mathcal{F}}\kappa$ is then $$\begin{aligned} \sigma_1^2&\approx&\langle\delta{\mathcal{F}}\,\kappa\rangle^2{\nonumber\\}&+& \left(A\beta\frac{3H_0^2\Omega_m}{2c^2}\right)^2 \int_0^{\chi_F}d\chi_c \frac{W_L^2(\chi_c,\chi_F)}{a^2(\chi_c)}D^2(\chi_c) {\nonumber\\}&\times&\int_{\chi_i}^{\chi_Q}d\chi_q D^2(\chi_q) \int_{-\Delta\chi_{c,0}}^{\Delta\chi_{c,0}} d\Delta\chi_c H_0^{(0)}(\Delta\chi_c;k_C/\sqrt{2}, \infty){\nonumber\\}&\times& \int_{-\Delta\chi_{q,0}}^{\Delta\chi_{q,0}} d\Delta\chi_q \left[H_0^{(0)}(\Delta\chi_q;\infty,\,k_L/\sqrt{2})\right.{\nonumber\\}&-&\left. H_0^{(0)}(\Delta\chi_q;\infty,\,\bar{k}/\sqrt{2})\right].\label{var_dK_full}\end{aligned}$$ In the upper panels of Fig. \[Fig:N\_dFK\] we show the values obtained for the *standard deviation* of $\delta{\mathcal{F}}\kappa$ for two different CMB experiments’ resolution, again turning off the IGM physics evolution and focusing on the growth of structure. 
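The Wick factorization invoked above is easy to verify directly on correlated Gaussian variables. The toy Monte Carlo below (with an arbitrary illustrative covariance, unrelated to $P_L(k)$) checks the full Gaussian identity $\langle\delta_q\delta_{q'}\delta_c\delta_{c'}\rangle=\langle\delta_q\delta_{q'}\rangle\langle\delta_c\delta_{c'}\rangle+\langle\delta_q\delta_c\rangle\langle\delta_{q'}\delta_{c'}\rangle+\langle\delta_q\delta_{c'}\rangle\langle\delta_{q'}\delta_c\rangle$; in the text the two $q$–$c$ pairings are then approximated as equal, which is the origin of the factor of two.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative 4x4 covariance for (delta_q, delta_q', delta_c, delta_c');
# any symmetric positive-definite matrix works for this check.
A = rng.normal(size=(4, 4))
C = A @ A.T + 4.0 * np.eye(4)

# Draw Gaussian samples and measure the four-point function <d0 d1 d2 d3>
x = rng.multivariate_normal(np.zeros(4), C, size=1_000_000)
four_pt = np.mean(x[:, 0] * x[:, 1] * x[:, 2] * x[:, 3])

# Wick's theorem: sum over the three pairings of the four indices
wick = C[0, 1] * C[2, 3] + C[0, 2] * C[1, 3] + C[0, 3] * C[1, 2]

print("Monte Carlo estimate:", four_pt)
print("Wick prediction     :", wick)
```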
![image](dK_N_Planck_LCDM_CMB.eps){width="49.00000%"} ![image](dK_N_Pol_LCDM_CMB.eps){width="49.00000%"} ![image](N_Planck_LCDM_CMB.eps){width="49.00000%"} ![image](N_Pol_LCDM_CMB.eps){width="49.00000%"} ### The variance of $\delta{\mathcal{F}}^2\kappa$ {#the-variance-of-deltamathcalf2kappa .unnumbered} Setting $r=2$ in Eq. (\[eq:deltaF2mK2\_1\]), we then apply Wick’s theorem to $ \langle\delta^{2}_q\delta^{2}_{q'}\delta_c\delta_{c'}\rangle$. Neglecting again terms where the $\delta_c$’s are not contracted with one another, we obtain $$\begin{aligned} \langle\delta^{2}_q\delta^{2}_{q'}\delta_c\delta_{c'}\rangle&\approx& 2\langle\delta^{2}_q\delta_c\rangle\langle\delta^{2}_{q'}\delta_{c'}\rangle{\nonumber\\}&+&\langle\delta_{c}\delta_{c'}\rangle\,\left(\langle\delta^2_{q}\rangle\langle \delta^2_{q'}\rangle+2\langle\delta_q\delta_{q'}\rangle^2\right),\label {N:d2qd2qdcdc}\end{aligned}$$ which then leads to the expression for $\sigma_2^2$ $$\begin{aligned} \sigma_2^2 &\approx \langle\delta{\mathcal{F}}^2\,\kappa\rangle^2{\nonumber\\}&+\left(A\beta\frac{3H_0^2\Omega_m}{2c^2}\right)^2\int_0^{\chi_F}d\chi_c \frac {W_L^2(\chi_c,\chi_F)}{a^2(\chi_c)}D^2(\chi_c){\nonumber\\}&\times\int_{-\Delta\chi_{c,0}}^{\Delta\chi_{c,0}} d\Delta\chi_c H_0^{(0)}(\Delta\chi_c;k_C/\sqrt{2}, \infty){\nonumber\\}&\times \left\{ \left[\bar{H}_0^{(0)}(\chi_q,k_L) +\bar{H}_0^{(0)}(\chi_q,\bar{k})\right]^2\left[\int_{\chi_i}^{\chi_Q}d\chi_q D^2 (\chi_q)\right]^2\right.{\nonumber\\}&+2\int_{\chi_i}^{\chi_Q}d\chi_q D^4(\chi_q) \int_{-\Delta\chi_{q,0}}^{\Delta\chi_{q,0}} d\Delta\chi_q \left[H_0^{(0)}(\Delta\chi_q;\infty,\,k_L/\sqrt{2})\right.{\nonumber\\}&\left.\left.- H_0^{(0)}(\Delta\chi_q;\infty,\,\bar{k}/\sqrt{2})\right]^2 \right\}.\label{eq:sigma_2^2}\end{aligned}$$ In the lower panels of Fig. \[Fig:N\_dFK\] we show the estimates for the *standard deviation* $\delta{\mathcal{F}}^2 \kappa$ along a *single* line-of-sight for the two different CMB experiment. We note in Fig. \[Fig:N\_dFK\] the same trends that have been pointed out for the correlator itself in Fig. \[Fig:dK\_S\_3\_2\] and \[Fig:S\_3\_2\]: the standard deviation of $\delta{\mathcal{F}}\kappa$ and of $\delta{\mathcal{F}}^2 \kappa$ increase almost linearly with increasing length of the [Lyman-$\alpha$[ ]{}]{}spectrum $\Delta z$ and it decreases as the source redshift $z$ is increased because of the fact that the spectrum probes regions that are less clumpy. Also, by increasing the resolution of the CMB experiment used to reconstruct the convergence map, the deviation of $\delta{\mathcal{F}}\kappa$ and $\delta{\mathcal{F}}^2 \kappa$ also increase: if on one hand more modes carry more information, on the other hand they also carry more cosmic variance. One last aspect to note here is that while the signal for $\langle\delta{\mathcal{F}}^2\kappa\rangle$ arises from a three point correlation function (which in the gaussian approximation would yield zero), the dominant terms contributing to its variance arise from products of two point correlation functions. In particular, it is possible to show that the terms appearing in the second line of Eq. (\[N:d2qd2qdcdc\]) significantly outweight the square of the signal that appears in the first line. 
![image](dK_SN_Planck_LCDM_CMB.eps){width="49.00000%"} ![image](dK_SN_Pol_LCDM_CMB.eps){width="49.00000%"} ![image](SN_Planck_LCDM_CMB.eps){width="49.00000%"} ![image](SN_Pol_LCDM_CMB.eps){width="49.00000%"} Signal-to-Noise ratio {#sn} --------------------- We now have all the pieces to assess to what extent the $\langle\delta{\mathcal{F}}^r\kappa\rangle$ correlations will be detectable by future observational programs. Even before moving to plot the S/N ratios for $\delta{\mathcal{F}}\kappa$ and $\delta{\mathcal{F}}^2\kappa$ it is possible to point out a couple of features of these ratios. First, we note that the S/N ratio for $\delta{\mathcal{F}}\kappa$ and $\delta{\mathcal{F}}^2\kappa$ do present a radical difference in their dependence on the QSO source redshift. This is because the signal for $\delta{\mathcal{F}}^2\kappa$ is characterized by mode coupling, whereas the dominant contributions to the variance are not. Physically, the signal for $\delta{\mathcal{F}}^2\kappa$ is more sensitive to the growth of structure with respect to its variance: while for the former the growth of long wavelength modes enhances the growth of structure on small scales, for the latter long and short wavelength modes grow independently at the same rate. Mathematically, this is apparent when comparing Eq. (\[eq:deltaFmK\_2\]) with Eq. (\[eq:sigma\_2\^2\]): while the $\lg\delta{\mathcal{F}}^2\kappa\rg$ signal carries four powers of the growth factor, the dominant terms contributing to its variance carry only six. In this case then the S/N is characterized by four growth factors in the numerator and only three in the denominator, thus leading to a “linear” dependence of S/N on the redshift (modulo integration over the los and behaviour of the lensing window function). Note that this is in stark contrast with the $\langle\delta{\mathcal{F}}\kappa\rangle$ case, where the signal is not characterized by mode coupling and the number of growth factors are equal for the signal and its standard deviation, thus leading to a S/N ratio with no dependence on the source’s redshift. Second, we note that S/N does not depend on the value of any constant. In particular, regardless of their redshift dependence, the S/N ratio will not depend on the functions $A$ and $\beta$ used to describe the IGM. This is of course very important since in such a way, at least in linear theory and using the FGPA at first order, the dependence on the physics of the IGM cancels out when computing the S/N ratio. In Fig. \[Fig:dk\_SN\_3\_2\] we show the estimates for the S/N *per los* of the $\lg\delta{\mathcal{F}}\kappa\rg$ (upper panels) and $\lg\delta{\mathcal{F}}^2\kappa\rg$ (lower panels) measurements. As expected, while the S/N for $\lg\delta{\mathcal{F}}\kappa\rg$ does not show any strong redshift dependence, the S/N for $\lg\delta{\mathcal{F}}^2\kappa\rg$ decreases linearly with increasing source redshift: the growth of structure is indeed playing a role and shows that QSOs lying at lower redshift will yield a larger S/N. Also, in both cases an increase in the resolution of the experiment measuring the convergence field translates in a larger S/N and in a larger derivative of the S/N with respect to $\Delta z$. This is not surprising, as it is reasonable to expect that a higher resolution convergence map will be carrying a larger amount of information about the density field. All this suggests that depending on what is the correlator that one is interested in measuring, different strategies should be pursued. 
In case of $\lg\delta{\mathcal{F}}\kappa\rg$ increasing the length of the spectra will provide a better S/N. In case of $\lg\delta{\mathcal{F}}^2\kappa\rg$, however, Fig. (\[Fig:dk\_SN\_3\_2\]) suggests that an increase in the number of quasar will be more effective in producing a large S/N, whereas an increase in the redshift range spanned by the spectrum will increase the S/N only marginally. Having obtained the S/N per los, we can then estimate the total S/N that will be obtained by cross-correlating the BOSS sample ($1.6\cdot10^5$ QSOs) and the proposed BigBOSS sample [@Schlegel:2009uw] ($10^6$ QSOs) with the convergence map measured by Planck or by the proposed ACTPOL experiment considered. Assuming a mean QSO redshift of $\bar{z}=2.5$ and a mean [Lyman-$\alpha$[ ]{}]{}spectrum length of $\Delta z=0.5$, a rough estimate of the S/N for the measurements of $\langle \delta{\mathcal{F}}\kappa \rangle$ and of $\langle \delta{\mathcal{F}}^2\kappa \rangle$ are given in Tab. \[Tab1:SN\_dFK\] and \[Tab1:SN\_dF2K\]. ---------- ----------- ----------- ------------ CMB Exp. S/N Total S/N Total S/N per *los* in BOSS in BigBOSS Planck 0.075 30 75 ACTPOL 0.130 52 130 ---------- ----------- ----------- ------------ : \[Tab1:SN\_dFK\] Estimates of the total and per single *los* signal–to–noise (S/N) of the $\langle\delta{\mathcal{F}}\,\kappa\rangle$ cross–correlation for different CMB experiments combined with BOSS and BigBOSS.[]{data-label="table1"} It is necessary to point out here that despite that the value of the S/N for $\langle \delta{\mathcal{F}}\kappa \rangle$ is almost three times larger than the one for $\langle \delta{\mathcal{F}}^2\kappa \rangle$, the actual measurement of the former correlator strongly depends on the ability of fitting the continuum of the [Lyman-$\alpha$[ ]{}]{}spectrum. The $\langle \delta{\mathcal{F}}^2\kappa \rangle$ correlator, on the other hand, is sensitive to the interplay between long and short wavelength modes and as such should be less sensitive to the continuum fitting procedure. Therefore, even if it is characterized by a lower S/N, it may actually be the easier to measure in practice. The numbers obtained above are particularly encouraging since the S/N values are typically very large and well above unity. ---------- ----------- ----------- ------------ CMB Exp. S/N Total S/N Total S/N per *los* in BOSS in BigBOSS Planck 0.024 9.6 24 ACTPOL 0.05 20.0 50 ---------- ----------- ----------- ------------ : \[Tab1:SN\_dF2K\] Estimates of the total and per single *los* signal–to–noise (S/N) of the $\langle\delta{\mathcal{F}}^2\,\kappa\rangle$ cross–correlation for different CMB experiments combined with BOSS and BigBOSS.[]{data-label="table2"} Analysis {#spectral} -------- Having developed a calculation framework for estimating $\langle \delta{\mathcal{F}}^r\kappa \rangle$ and the S/N for their measurement, we turn to estimate what is the range of [Lyman-$\alpha$[ ]{}]{}wavelengths contributing to the signal and what is the effect of changing the parameters that control the experiments’ resolution. ### Spectral Analysis {#spectral-analysis .unnumbered} We investigate here how the different [Lyman-$\alpha$[ ]{}]{}modes contribute to the correlators. This should tell us whether long wavelength modes have any appreciable effect on our observables and what is the impact of short and very short wavelength modes (in particular the ones that are expected to have entered the non-linear regime). 
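As a quick consistency check of the totals quoted in Tabs. \[Tab1:SN\_dFK\] and \[Tab1:SN\_dF2K\], the snippet below simply scales the per-los values by $\sqrt{N_{\rm QSO}}$, i.e. it assumes independent lines of sight (an assumption that neglects overlapping or correlated sightlines).

```python
import numpy as np

# Per-line-of-sight S/N values from the tables above
per_los = {("Planck", "dFk"): 0.075, ("ACTPOL", "dFk"): 0.130,
           ("Planck", "dF2k"): 0.024, ("ACTPOL", "dF2k"): 0.05}
n_qso = {"BOSS": 1.6e5, "BigBOSS": 1.0e6}

# Assuming independent sightlines, total S/N ~ per-los S/N * sqrt(N_QSO)
for (cmb, corr), sn in per_los.items():
    for survey, n in n_qso.items():
        print(f"{corr:5s} {cmb:7s} x {survey:8s}: total S/N ~ {sn * np.sqrt(n):5.1f}")
```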
Since the mean flux $\bar{{\mathcal{F}}}$ appearing in the definition of the flux fluctuation $\delta{\mathcal{F}}=({\mathcal{F}}-\bar{{\mathcal{F}}})/\bar{{\mathcal{F}}}$ is a *global* quantity which is usually estimated from a statistically significant sample of high resolution QSO spectra (see the discussion in [@seljak03] for the impact that such a quantity has on some derived cosmological parameters), $\delta{\mathcal{F}}$ is sensitive also to modes with wavelengths longer than the [Lyman-$\alpha$[ ]{}]{}spectrum. These modes appear as a “background” in each spectrum but they still have to be accounted for when cross-correlating $\delta{\mathcal{F}}$ with $\kappa$ because the fluctuation in the flux is affected by them. More specifically, a QSO that is sitting in an overdense region that extends beyond the redshift range spanned by its spectrum will see its flux decremented by a factor that in its spectrum will appear as a *constant* decrement. On the other hand, if the QSO spectrum extends beyond the edge of such an overdensity, this mode would appear as a fluctuation (and not as a background) in the spectrum. This extreme scenario is somewhat mitigated by the fact that present and future QSO surveys will have many QSOs with los separated by a few comoving Mpc [@BOSS_th]: as such, fluxes from neighboring QSOs lying in large overdense regions should present similarities that should in principle allow such large overdensities to be detected in 3D tomographical studies [@saitta08]. To measure the contributions of the different modes to the correlators, we vary $k_l$ and $k_L$ to build appropriate filters. As can be seen from Fig. \[Fig:Filters\], where three such filters are plotted for $\{k_l=0.001, k_L=0.01\}$, $\{k_l=0.01, k_L=0.1\}$ and $\{k_l=0.1, k_L=1\}$, the Gaussian functional form assumed for the window function does not provide very sharp filters (hence this spectral analysis will not reach high resolution). Also, if $k_L=10\,k_l$ the filters add up to approximately one. This allows us to measure the contributions of the different wavenumber decades to the correlators and their standard deviations.

![\[Fig:Filters\] Three filters used to calculate the contribution of the different modes to the correlators, their variance and the S/N ratio. The filters have $\{k_l=10^{-3}, k_L=10^{-2}\}$ (solid curve), $\{k_l=10^{-2}, k_L=10^{-1}\}$ (dotted curve) and $\{k_l=10^{-1}, k_L=1\}$ (dashed curve). Also shown is the sum of the filters (red dashed-dotted curve).](Filters.eps){width="49.00000%"}

  --------------------------------------------------------------------------------------------------------------------
  $k_l$      $k_L$      $|\langle\delta{\mathcal{F}}\kappa\rangle|$   $\sigma_{\delta{\mathcal{F}}\kappa}$   Ratio
  ---------- ---------- --------------------------------------------- -------------------------------------- ----------
  1.00e-04   1.00e-03   1.66e-04                                      1.77e-04                               9.39e-01
  1.00e-03   1.00e-02   1.20e-03                                      1.21e-03                               9.87e-01
  1.00e-02   1.00e-01   2.12e-04                                      6.29e-04                               3.37e-01
  1.00e-01   1.00e+00   6.11e-07                                      1.42e-03                               4.30e-04
  1.00e+00   1.00e+01   7.26e-08                                      2.44e-03                               2.97e-06
  --------------------------------------------------------------------------------------------------------------------

  : \[Table:dK\_Spectral\] Contribution of the different wavenumbers (split over decades) to the absolute value of the correlator $\langle\delta{\mathcal{F}}\kappa\rangle$, its standard deviation $\sigma_{\delta{\mathcal{F}}\kappa}$ and the ratio of the two quantities. In this calculation we took into account the evolution of $A$ with redshift.
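The decade filters of Fig. \[Fig:Filters\] and Tab. \[Table:dK\_Spectral\] are straightforward to reproduce; the sketch below builds the three $\{k_l,k_L\}$ pairs quoted above and evaluates their sum (the red dashed-dotted curve of the figure), which stays close to unity over the range covered by the filters.

```python
import numpy as np

def W_alpha(k, kL, kl):
    # Line-of-sight filter, Eq. (klkL): [1 - exp(-(k/kl)^2)] exp(-(k/kL)^2)
    return (1.0 - np.exp(-(k / kl)**2)) * np.exp(-(k / kL)**2)

# The three decade filters plotted in Fig. [Fig:Filters] (wavenumbers in h/Mpc)
pairs = [(1.0e-3, 1.0e-2), (1.0e-2, 1.0e-1), (1.0e-1, 1.0)]
k = np.logspace(-4, 1, 400)
filters = [W_alpha(k, kL, kl) for (kl, kL) in pairs]
total = np.sum(filters, axis=0)
print("sum of the three filters near k = 0.03 h/Mpc:", total[np.argmin(np.abs(k - 0.03))])
```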
Tables \[Table:dK\_Spectral\] and \[Table:Spectral\] summarize the results for $\langle\delta{\mathcal{F}}\kappa\rangle$ and $\langle\delta{\mathcal{F}}^2\kappa\rangle$ respectively. Considering $\langle\delta{\mathcal{F}}\kappa\rangle$ we note immediately that the signal and the S/N ratio both peak around $k\simeq 10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$, as expected from the fact that this signal is proportional to the two point correlation function, which in turn receives its largest contribution from the wavelengths that dominate the power spectrum: isolating the long wavelength modes of the [Lyman-$\alpha$[ ]{}]{}flux would increase the S/N. However, this procedure is considerably complicated by the continuum fitting procedures that are needed to correctly reproduce the long wavelength fluctuations of the [Lyman-$\alpha$[ ]{}]{}flux. The behavior of the variance is interesting, as it oscillates over the first three decades. This is due to the different weights of the two terms appearing in Eq. (\[var\_dK\_full\]) for each range of wavelengths. In particular, for $k\lesssim 10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$ the variance of $\langle\delta{\mathcal{F}}\kappa\rangle$ is dominated by the first term, which is just the square of the signal. However, as the signal gets smaller with increasing $k$, for $k\gtrsim 10^{-1}$ ${\,h\,{\rm Mpc}^{-1}}$ it is the second term that dominates the variance.

  --------------------------------------------------------------------------------------------------------
  $k_l$      $k_L$      $\langle\delta{\mathcal{F}}^2\kappa\rangle$   $\sigma_{\delta{\mathcal{F}}^2\kappa}$   Ratio
  ---------- ---------- --------------------------------------------- ---------------------------------------- ----------
  1.00e-04   1.00e-03   1.08e-04                                      2.18e-02                                 4.99e-03
  1.00e-03   1.00e-02   6.69e-03                                      1.96e-01                                 3.40e-02
  1.00e-02   1.00e-01   5.92e-02                                      1.31e+00                                 4.52e-02
  1.00e-01   1.00e+00   3.39e-01                                      7.06e+00                                 4.80e-02
  1.00e+00   1.00e+01   9.92e-01                                      2.07e+01                                 4.79e-02
  --------------------------------------------------------------------------------------------------------

  : \[Table:Spectral\] Contribution of the different wavenumbers (split over decades) to the correlator $\langle\delta{\mathcal{F}}^2\kappa\rangle$, its standard deviation $\sigma_{\delta{\mathcal{F}}^2\kappa}$ and the ratio of the two quantities. In this calculation we took into account the evolution of $A$ with redshift.

![image](S+N_kL_kC__nolowk_090527.eps){width="49.00000%"} ![image](SN_kL_kC__nolowk_090527.eps){width="49.00000%"}

Regarding $\langle\delta{\mathcal{F}}^2\kappa\rangle$, it is necessary to point out two aspects. First, short wavelength (high-$k$) modes provide the largest contribution to *both* the correlator *and* its standard deviation. Second, for $k\gtrsim 10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$ the ratio of the contributions to the correlator and to its standard deviation remains almost constant. This means that above $10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$ the different frequency ranges contribute roughly in the same proportion. This fact is both good news and bad news at the same time. It is bad news because it means that increasing the resolution of the [Lyman-$\alpha$[ ]{}]{}spectra does not automatically translate into increasing the *precision* with which the correlator will be measured, as the high-$k$ modes that are introduced will boost both the correlator and its variance in the same way.
On the other hand, this also appears to be good news because it tells us that low resolution spectra *which do not record non-linearities on small scales* can be successfully used to measure this correlation. To increase the S/N ratio and to achieve a better precision for this measurement, it is better to increase the number of QSO spectra than to increase the resolution of each single spectrum. Finally, cutting off the long-wavelength modes with $k\lesssim 10^{-2}\, {\,h\,{\rm Mpc}^{-1}}$ should *not* have a great impact on the S/N ratio or on the measured value of the correlator: if on the one hand the contributions of the modes with $k\lesssim 10^{-2} \, {\,h\,{\rm Mpc}^{-1}}$ are noisier due to cosmic variance, on the other hand the absolute value of such contributions to the correlator and to its variance are negligible compared to the ones arising from $k\gtrsim 10^{-2} \, {\,h\,{\rm Mpc}^{-1}}$. We can also see this by comparing the last column of Tab. \[Table:Spectral\] with the right panel of Fig. \[Fig:kL\_kC\], where the *absolute value* of the S/N ratio is plotted for varying values of the cutoffs $k_L$ and $k_C$. By looking at the last column of Tab. \[Table:Spectral\] we see that the ratio between the correlator and its standard deviation increases until about $k\simeq 10^{-2}{\,h\,{\rm Mpc}^{-1}}$ where it levels off. Looking at the right panel of Fig. \[Fig:kL\_kC\] we notice exactly the same trend: increasing the resolution of the spectrum $k_L$ above $10^{-2}{\,h\,{\rm Mpc}^{-1}}$ does not dramatically improve the S/N ratio. This is because from that point on each new mode contributes almost equally to the correlator *and to its standard deviation*.

### Dependence on experimental resolutions {#dependence-on-experimental-resolutions .unnumbered}

To analyze the impact of a change in the resolution of the experiments measuring the CMB convergence map or the [Lyman-$\alpha$[ ]{}]{}flux, we consider a single QSO at redshift $z_0=2.6$ whose spectrum covers $\Delta z=0.5$ and vary $k_L$ and $k_C$. In this case we set $k_l=0$. In Fig. \[Fig:kL\_kC\] we show the value of $\lg\delta{\mathcal{F}}^2 \kappa\rg$, of its standard deviation and of its S/N ratio for varying values of $k_L$ and $k_C$. We note that both the correlator and its standard deviation increase with increasing resolution: this makes physical sense as increasing the resolution increases both the amount of information carried by each experiment and the cosmic variance associated with it. Except for very low values of $k_C$, an increase in the resolution of the [Lyman-$\alpha$[ ]{}]{}spectrum is characterized by an almost equal amount of increase in both the correlator and its cosmic variance. This implies that the S/N becomes roughly constant for $k_L\gtrsim 10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$. On the other hand, increasing $k_C$ increases both the correlator and its cosmic variance only up to the point where $k_C\simeq k_L$.

Cosmological Applications
=========================

Neutrinos
---------

![image](Neutrinos_Planck+Boss.eps){width="49.00000%"} ![image](Neutrinos_Pol+Boss.eps){width="49.00000%"} ![image](Neutrinos_Planck+BigBoss.eps){width="49.00000%"} ![image](Neutrinos_Pol+BigBoss.eps){width="49.00000%"}

Massive neutrinos are known to suppress the growth of structure in the early universe on intermediate to small scales $k\gtrsim 10^{-2}$ ${\,h\,{\rm Mpc}^{-1}}$ [@Lesgourgues:2006nd].
Since $\langle\delta{\mathcal{F}}^2\kappa\rangle$ is mostly sensitive to the same range of scales, it seems reasonable to examine to what extent massive neutrinos will alter the $\langle\delta{\mathcal{F}}^2\kappa\rangle$ signal. The argument could also be turned around, asking how well a measurement of $\langle\delta{\mathcal{F}}^2\kappa\rangle$ would constrain the sum of the neutrino masses. In this first work, we take the first route and simply calculate how the $\langle\delta{\mathcal{F}}^2\kappa\rangle$ signal is affected by different values of the neutrino masses. We leave the analysis of the constraining power of $\langle\delta{\mathcal{F}}^2\kappa\rangle$ to a forthcoming work. Quite generally, massive neutrinos affect the matter density power spectrum in a scale-dependent way (see [@Lesgourgues:2006nd] for a review). Accounting for this effect exactly would require substantial modifications of the formalism and of the code that we currently use to evaluate $\langle\delta{\mathcal{F}}^2\kappa\rangle$. In particular, it would no longer be possible to separate the integrations over the comoving distance from the ones over the wavenumbers $k$. We leave this important development to a future project and for the purposes of this work we rely on the following approximation [@Hu:1997vi] for the growth of the dark matter perturbations $$\delta_{\textrm{cdm}}\propto D(a)^{1-\frac{3}{5}f_{\nu}},$$ where $f_{\nu}\equiv\Omega_{\nu}/\Omega_m$. The above expression should be accurate from very large scales down to the mildly non-linear scales of the [Lyman-$\alpha$[ ]{}]{}forest. Departures at small scales are best handled with N-body or hydrodynamical codes [@hanne08]. The second aspect that we need to take into account before proceeding with the calculation is that consistency with CMB data requires that a change in the sum of the neutrino masses be accompanied by a change in the power spectrum normalization $\sigma_8$ [@Komatsu:2008hk]. This fact has a profound consequence. Just by counting the number of powers of the power spectrum that enter the different expressions, it is straightforward to note that $\langle\delta{\mathcal{F}}^2\kappa\rangle\sim\sigma_8^4$, that $\sigma^2_{\langle\delta {\mathcal{F}}^2\kappa\rangle}\sim\sigma_8^6$ and that its S/N ratio is proportional to $\sigma_8$. Consequently, a change in the neutrino masses, which requires a change in $\sigma_8$ to maintain consistency with CMB data, will cause a change in $\langle\delta{\mathcal{F}}^2\kappa\rangle$. To take this into account we proceed as follows. First we consider the set of values allowed by the WMAP-5 data in the $\sigma_8-\Sigma m_{\nu}$ space at 95% CL. These correspond to the dark red area of the center panel of Fig. 17 in Komatsu et al. [@Komatsu:2008hk]. We then choose three flat models with massive neutrinos consistent with the WMAP-5 data and we use CAMB to generate the respective dark matter power spectra to be used in the calculation. The values of the cosmological parameters used for each model are summarized in Tab. \[Table:nu\_cosm\_par\].
  Num.   $\Omega_m$   $\Omega_{\Lambda}$   $\Omega_{\nu}$   $\Sigma\,m_{\nu}$ (eV)   $\sigma_8$   $h$
  ------ ------------ -------------------- ---------------- ------------------------ ------------ ------
  1      0.269        0.719                1.2e-2           0.54                     0.657        0.70
  2      0.269        0.722                8.8e-3           0.40                     0.708        0.70
  3      0.269        0.728                3.3e-3           0.15                     0.786        0.70
  4      0.256        0.744                0.0              0.0                      0.841        0.72

  : \[Table:nu\_cosm\_par\] Values of the cosmological parameters assumed to estimate the effect of massive neutrinos on $\langle\delta{\mathcal{F}}^2\kappa\rangle$. All models assume flat geometry.

One last point remains to be considered. Note that the S/N ratio for $\langle\delta{\mathcal{F}}^2\kappa\rangle$, although increasing with $\Delta z$, does not increase at a very high rate. It therefore seems reasonable to expect that subdividing the [Lyman-$\alpha$[ ]{}]{}spectra into sub-spectra, each of length $dz=0.1$, despite lowering the S/N ratio of each individual sub-spectrum, would yield a better measurement of the redshift dependence of the signal. Figure \[Fig:Neutrinos\] below shows the result of applying the latter procedure. The black, orange and red data points represent the predicted values of the $\langle \delta{\mathcal{F}}^2\kappa\rangle$ correlator for $\sum\,m_{\nu}= \{0.54,0.4,0.15\}$ eV respectively, while the dashed black line shows the value of the correlator for a $\Lambda$CDM cosmology with massless neutrinos. As one can see, the cross-correlation signal is quite sensitive to the presence of massive neutrinos, and already BOSS and Planck could provide constraints on the strength of such correlators. As pointed out above, this is due to the fact that larger neutrino masses require a smaller value of $\sigma_8$, which in turn depresses the signal. It is here necessary to point out one important caveat. In this paper, we make a tree-level approximation to the growth rate of the $k$ modes: this enables us to separate the integrations along the comoving distance from the integrations over the different modes. As previously mentioned, this approximation does not include the scale-dependent effects of neutrinos on the growth rate of structure. Similarly, this approximation also does not allow us to take into account the non-linearities induced by gravitational collapse, which on the other hand tend to enhance the power spectrum on small scales. We will need to either use Hyper-Extended Perturbation Theory results or non-linear simulations to evaluate these effects.

Early Dark Energy {#ede}
-----------------

![\[Fig:EDE\_model\] Growth factors for the WMAP-5 flat $\Lambda$CDM cosmology (dotted curve) and for the early dark energy (EDE) model assumed in this section for comparison (solid curve).](Growth_factors.eps){width="0.99\columnwidth"}

Since early dark energy or deviations from general relativity affect the growth rate of structure as a function of scale, the measurements of $\langle \delta{\mathcal{F}}^2\kappa\rangle(z)$ can in principle probe these effects. Here we focus on early dark energy (EDE) models, where dark energy makes a significant contribution to the energy density of the universe over a wide range of redshifts. The differences between EDE models and pure $\Lambda$CDM are particularly evident at high redshift, where the former have been shown to influence the growth of the first cosmic structures both in the linear and in the non-linear regime.
We consider here the EDE model proposed in [@linder06] and recently constrained by [@Xia:2009ys] (model EDE1 of [@Xia:2009ys]). We compare this model with the $\Lambda$CDM cosmology assumed until now. The difference in the growth factors of these two models is shown in Fig. \[Fig:EDE\_model\] (the difference in the Hubble parameter evolution is smaller).

![image](EDE_Planck+Boss.eps){width="49.00000%"}![image](EDE_Pol+Boss.eps){width="49.00000%"} ![image](EDE_Planck+BigBoss.eps){width="49.00000%"} ![image](EDE_Pol+BigBoss.eps){width="49.00000%"}

We quantify the departure of the correlators predicted for the EDE model from those predicted for $\Lambda$CDM using the following expression: $$\Delta\chi^2=\sum_i\frac{\left(\langle\delta{\mathcal{F}}^n\kappa\rangle_{{\rm EDE},i}-\langle\delta{\mathcal{F}}^n\kappa\rangle_{\Lambda {\rm CDM},i}\right)^2}{\sigma_{{\rm EDE},i}^2} \,,$$ where the sum runs over the data points shown in Fig. \[figede\]. The results are shown in Fig. \[figede\] and are summarized in Tab. \[Table:DChi2\]. In this case the differences between EDE and $\Lambda$CDM are very small and could only be detected with some significance with an advanced CMB experiment like ACTPOL and by increasing the number of spectroscopic QSOs with BigBOSS. However, it is worth stressing that the two models presented here are in perfect agreement with all the low-redshift probes and the large-scale structure measurements provided by galaxy power spectra, the CMB, Type Ia supernovae and the [Lyman-$\alpha$[ ]{}]{}forest. Therefore, possible departures from $\Lambda$CDM can be investigated only by exploiting the capabilities of this intermediate-redshift regime, with such correlations or with similar observables in this redshift range.

  QSO sample                 CMB Experiment   $\Delta\chi^2$
  -------------------------- ---------------- ----------------
  $1.6\cdot10^5$ (BOSS)      Planck           0.3451
  $1.6\cdot10^5$ (BOSS)      ACTPOL           2.157
  $1.0\cdot10^6$ (BigBOSS)   Planck           1.458
  $1.0\cdot10^6$ (BigBOSS)   ACTPOL           9.117

  : \[Table:DChi2\] Summary of the estimated $\Delta\chi^2$ between EDE and $\Lambda$CDM for four different combinations of future QSO and CMB experiments using the $\langle\delta{\mathcal{F}}^2\kappa\rangle$ correlator.

Conclusions {#discuss}
===========

This work presents a detailed investigation of the cross-correlation signals between the transmitted [Lyman-$\alpha$[ ]{}]{}flux and the weak lensing convergence of the CMB along the same line of sight. One of the motivations behind this work is that the [Lyman-$\alpha$[ ]{}]{}forest has already been shown to be a powerful cosmological tool, and novel ways of exploring and deepening our understanding of the flux/matter relation could significantly improve our knowledge of the high-redshift universe. These correlators are able to provide astrophysical and cosmological information: since they are sensitive to both the flux/matter relation and the values of the cosmological parameters, they can in principle be used to put constraints on both. The correlators investigated in the present work have a clear physical meaning. The correlation of $\delta{\mathcal{F}}$ with $\kappa$ measures to what extent the fluctuations along the los mapped by the [Lyman-$\alpha$[ ]{}]{}forest contribute to the CMB convergence field. This correlation is dominated by long-wavelength modes ($k \lesssim 10^{-1}\,{\,h\,{\rm Mpc}^{-1}}$) and as such is more sensitive to [Lyman-$\alpha$[ ]{}]{}forest continuum fitting procedures.
The correlation of the flux variance $\delta{\mathcal{F}}^2$ with $\kappa$ measures to what extent the growth of short-wavelength modes (mapped by the [Lyman-$\alpha$[ ]{}]{}flux) is enhanced or depressed by the fact that the latter are sitting in regions that are overdense or underdense on large scales. This interplay between short- and long-wavelength modes is well exemplified by the redshift dependence of the S/N ratio for $\langle\delta{\mathcal{F}}^2\kappa\rangle$: lowering the redshift increases the S/N ratio because, while the variance of $\langle\delta{\mathcal{F}}^2\kappa\rangle$ is dominated by the independent growth of long- and short-wavelength modes, the value of $\langle\delta{\mathcal{F}}^2\kappa\rangle$ itself receives an extra contribution from the fact that the growth of the short-wavelength modes is enhanced by the presence (and independent growth) of the long-wavelength modes. Furthermore, this correlator is sensitive to intermediate-to-small scales ($k\gtrsim 10^{-2}\, {\,h\,{\rm Mpc}^{-1}}$) and as such it should be less sensitive to [Lyman-$\alpha$[ ]{}]{}forest continuum fitting procedures. To estimate the values of the correlators, their variance and their S/N ratio we rely on linear theory and simple approximations, such as the fluctuating Gunn-Peterson approximation at first order. Although the framework is simplified, the results are by no means obvious, since different modes enter non-trivially into these quantities and into their signal-to-noise ratio. We estimate that such correlations may be detectable at a high significance level by Planck and the SDSS-III BOSS survey, experiments that are already collecting data. Moreover, our investigation of the modes of the [Lyman-$\alpha$[ ]{}]{}forest that contribute to $\langle\delta{\mathcal{F}}^2\kappa\rangle$ shows that the low-resolution [Lyman-$\alpha$[ ]{}]{}spectra measured by SDSS-III (which is aimed at the measurement of BAO at $z=2-4$ [@mcdonald.eisenstein:2007; @slosar09]) should have enough resolution to yield a significant S/N. The peculiar dependence of $\langle\delta{\mathcal{F}}^2\kappa\rangle$ on intermediate-to-small scales and its sensitivity to the value of the power spectrum normalization $\sigma_8$ make it a very useful cosmological tool for testing all models characterized by variations of the power spectrum on such scales. In particular, we applied our estimates to evaluate the sensitivity of $\langle\delta{\mathcal{F}}^2\kappa \rangle$ to changes in $\sigma_8$ due to variations in the sum of the neutrino masses, and to show how promising this measurement could be in constraining the latter. Finally, some caveats are in order. First, the code developed to estimate $\langle \delta{\mathcal{F}}^2\kappa\rangle$ and its variance is based on the tree-level perturbation theory results reported here. As such, the results shown do not take into account nonlinearities induced by gravitational collapse. The extension of the analytic results to take this aspect into account is actually quite straightforward, as it only requires the implementation of the so-called “HyperExtended Perturbation Theory” for the bispectrum [@Scoccimarro:2000ee]. However, the implementation of such changes in a numerical code is less trivial, as the integrations over the power spectrum and over the comoving distance can no longer be factored. We nonetheless have reason to expect that the nonlinearities induced by gravitational collapse will not dramatically change the picture outlined here.
In the redshift range spanned by the [Lyman-$\alpha$[ ]{}]{}forest, nonlinearities are normally mild and confined to small scales. Furthermore, as shown in Sec. \[spectral\], the S/N ratio for $\langle\delta {\mathcal{F}}^2\kappa\rangle$ is dominated by modes with $k\gtrsim 10^{-2} \,{\,h\,{\rm Mpc}^{-1}}$, but all decades above $10^{-2} \,{\,h\,{\rm Mpc}^{-1}}$ contribute in the same proportion to both the signal *and* its variance. It is therefore conceivable to filter the shortest scales, which are the most affected by nonlinearities, out of the [Lyman-$\alpha$[ ]{}]{} spectra and still retain a non-negligible S/N. The second caveat pertains to the estimate of the correlators’ variance. We point out that Wick’s theorem has been applied to obtain such *estimates*. Whether or not the use of Wick’s theorem leads to an accurate result for the variance of $\langle \delta{\mathcal{F}}^{2} \kappa\rangle$ is debatable. On one hand, the largest part of the signal arises at small separations, where the value of the correlator is dominated by its connected part. One could therefore argue that the use of Wick’s theorem may lead to underestimating the correlators’ variance. An exact evaluation of the variance of $\langle \delta{\mathcal{F}}^{2} \kappa\rangle$, however, requires the exact calculation of a six-point function, which, to the best of our knowledge, has never been determined. On the other hand, the connected part of $\langle\delta_c\delta_{c'}\delta_q^2\delta_{q'}^2\rangle$ will be significantly non-zero only when the distances between the different points are small. As such, this term will give a non-zero contribution proportional to the length of the [Lyman-$\alpha$[ ]{}]{}spectrum, which should be subdominant with respect to the terms considered in section \[variance\], which are proportional to the distance from the observer all the way to the last scattering surface. The third caveat pertains to the expansion of the expression for the flux, Eq. (\[eq:FGPA\]). Although the expansion carried out in Eq. (\[deltaF\]) is correct on scales larger than about 1 ${\,h^{-1}{\rm Mpc}}$, the flux as expressed in Eq. (\[eq:FGPA\]) is intrinsically a non-linear function of the overdensity field. It is therefore reasonable to wonder whether the non-linearities induced by this non-linear mapping would somehow affect the conclusions presented here. A simple way to sidestep this question is to undo the non-linear mapping by defining a new observable $\hat{{\mathcal{F}}}=-\ln({\mathcal{F}})=A(1+\delta_{\rm IGM})^{\beta}$ and to measure its correlations instead. The best way to assess to what extent the above caveats affect the estimates reported in the present work is through numerical simulations: calculating the convergence field on a light cone, extracting synthetic [Lyman-$\alpha$[ ]{}]{}forest spectra at the same time, and cross-correlating the two. This will be the next step in our investigation and the focus of the next publication. Finally, on the analytical side we still need to address the estimate of the correlators when the power spectrum shows evolution in redshift *and* on different scales at the same time. As pointed out, $\langle \delta{\mathcal{F}}^{2} \kappa\rangle$ is sensitive to scales $k\gtrsim 10^{-2} \,{\,h\,{\rm Mpc}^{-1}}$.
As such, this correlator is an ideal tool to test modifications of gravity that show scale-dependent growth. At the same time, this development would also allow the implementation of the hyperextended perturbation theory results and thus an analytic treatment of the impact of gravity-induced nonlinearities on the value of the correlators.

*Acknowledgements:* We thank S. Matarrese, F. Bernardeau, S. Dodelson, J. Frieman, E. Sefusatti, N. Gnedin, R. Scoccimarro, S. Ho, D. Weinberg and J. P. Uzan for useful conversations. AV is supported by the DOE at Fermilab. MV is supported by grants PD51, ASI-AAE and a PRIN MIUR. DNS and SD are supported by NSF grant AST/0707731 and NASA theory grant NNX08AH30G. DNS thanks the APC (Paris) for its hospitality in spring 2008 when this project was initiated. AV thanks IAP (Paris) for hospitality during different stages of this project.

Derivation of perturbative results for $\langle \delta{\mathcal{F}}^2\, \kappa\rangle $
=====================================================================================

In this section we derive the expression for $\langle \delta{\mathcal{F}}^2\, \kappa\rangle$ shown in the text, Eqs. (\[eq:Def\_ddd12\_ddd23\]-\[eq:d2d\_23final\_kLkl\]). We start from Eq. (\[eq:deltaFmK\_1\]) and need to find an efficient way to evaluate $\langle\delta^2({\hat{n}},\chi_q)\delta({\hat{n}},\chi_c)\rangle$. We begin by Fourier transforming this cumulant correlator to get $$\begin{aligned}
\langle\delta^2_q\delta_c\rangle&=& \int \frac{d^3\vec{k}_1}{(2\pi)^3} \, \frac{d^3\vec{k}_2}{(2\pi)^3}\,\frac{d^3\vec{k}_3}{(2\pi)^3}\ e^{i\left[(\vec{k}_1+\vec{k}_2)\cdot\vec{x}_q+\vec{k}_3\cdot\vec{x}_c\right]}\, W_{\alpha}(k_{1,\parallel})W_{\alpha}(k_{2,\parallel})W_{\kappa}(k_{3,\perp}) \langle\delta(\vec{k}_1)\delta(\vec{k}_2)\delta(\vec{k}_3)\rangle \nonumber\\ &=&\int \frac{d^3\vec{k}_1}{(2\pi)^3} \, \frac{d^3\vec{k}_2}{(2\pi)^3}\,\frac{d^3\vec{k}_3}{(2\pi)^3}\ e^{i\left[(\vec{k}_1+\vec{k}_2)\cdot\vec{x}_q+\vec{k}_3\cdot\vec{x}_c\right]}\, (2\pi)^3\delta_D^3(\vec{k}_1+\vec{k}_2+\vec{k}_3) W_{\alpha}(k_{1,\parallel})W_{\alpha}(k_{2,\parallel})W_{\kappa}(k_{3,\perp}) B(\vec{k}_1,\vec{k}_2,\vec{k}_3) \nonumber\\ &=&\int \frac{d^3\vec{k}_1}{(2\pi)^3} \, \frac{d^3\vec{k}_2}{(2\pi)^3}\,\frac{d^3\vec{k}_3}{(2\pi)^3}\ e^{i\left[(\vec{k}_1+\vec{k}_2)\cdot\vec{x}_q+\vec{k}_3\cdot\vec{x}_c\right]}\, (2\pi)^3\delta_D^3(\vec{k}_1+\vec{k}_2+\vec{k}_3) W_{\alpha}(k_{1,\parallel})W_{\alpha}(k_{2,\parallel})W_{\kappa}(k_{3,\perp}){\nonumber\\}&\times&2\left[\,F_2(\vec{k}_1,\vec{k}_2)P_L(\vec{k}_1,\chi_1)\,P_L(\vec{k}_2,\chi_2) +\,F_2(\vec{k}_2,\vec{k}_3)P_L(\vec{k}_2,\chi_2)\,P_L(\vec{k}_3,\chi_3)+F_2(\vec{k}_3,\vec{k}_1)P_L(\vec{k}_3,\chi_3)\,P_L(\vec{k}_1,\chi_1)\right].{\nonumber\\}\label{d2d_fullexpression}\end{aligned}$$ In the second line we introduced the bispectrum $B(\vec{k}_1,\vec{k}_2,\vec{k}_3)$, while in the third line we replaced the bispectrum with its expression in terms of the kernel $F_2$ and products of the linear matter power spectrum $P_L(\vec{k},\chi)$. For the sake of brevity, we keep implicit the dependence of the window functions on the cutoff scales: $W_{\alpha}(k_{i,\parallel})=W_{\alpha}(k_{i,\parallel},k_L,k_l)$ and $W_{\kappa}(\vec{k}_{i,\perp})=W_{\kappa}(\vec{k}_{i,\perp},k_C)$. Next, we point out that the evaluation of Eq.
(\[d2d\_fullexpression\]) requires, in general, an integration over a six-dimensional $k$-space, which is further complicated by the fact that the different window functions break the spherical symmetry that one would normally exploit. In what follows we adopt the tree-level approximation to the bispectrum kernel, $$\begin{aligned}
F_2(\vec{k}_i,\vec{k}_j)&=&\frac{5}{7}\, +\frac{1}{2}\frac{\vec{k}_i\cdot\vec{k}_j}{k_i^2\,k_j^2}(k_i^2+k_j^2)\, +\frac{2}{7}\left(\frac{\vec{k}_i\cdot\vec{k}_j}{k_i\,k_j}\right)^2,\,\label{F2}\end{aligned}$$ which can readily be obtained from the more general expression derived by Scoccimarro and Couchman [@Scoccimarro:2000ee] $$\begin{aligned}
F_2^{HEPT}(\vec{k}_i,\vec{k}_j)&=&\frac{5}{7}\,a(n,k_i)\,a(n,k_j) +\frac{1}{2}\frac{\vec{k}_i\cdot\vec{k}_j}{k_i^2\,k_j^2}(k_i^2+k_j^2)\,b(n,k_i)\,b(n,k_j) +\frac{2}{7}\left(\frac{\vec{k}_i\cdot\vec{k}_j}{k_i\,k_j}\right)^2\,c(n,k_i)\,c(n,k_j), \label{F2_HEPT}\end{aligned}$$ by setting to unity the three auxiliary functions $a(k)$, $b(k)$ and $c(k)$ that account for the non-linear growth of structure. A generalization of the results shown below to the more general formulation of Eq. (\[F2\_HEPT\]) is straightforward to derive. To proceed further we note that each of the three terms appearing in the square bracket of Eq. (\[d2d\_fullexpression\]) depends only on *two* of the three wavevectors. When moving from the second to the third line, it is then essential *not* to carry out the integration over the delta function right away: for *each* of these terms we instead integrate the Dirac $\delta$ over the wavevector that does not appear in the corresponding $F_2$ kernel, so as to obtain an expression that depends only on the same two wavevectors that appear in that kernel. The fact that two of the three physical points are the same also spoils the cyclic symmetry of the bispectrum. In particular, the $\{1,2\}$ term will differ from the $\{2,3\}$ and $\{3,1\}$ terms. We therefore let $$\langle\delta^2_q\delta_c\rangle=\langle\delta_q^2\delta_c\rangle_{1,2}+2\langle\delta_q^2\delta_c\rangle_{2,3},$$ and start by considering $\langle\delta^2_q\delta_c\rangle_{1,2}$. Integrating over the $\delta_D$ function in order to eliminate $\vec{k}_3$ in favor of $\vec{k}_1$ and $\vec{k}_2$, and then adopting a cylindrical coordinate system in $k$-space, we get $$\begin{aligned}
\langle\delta^2\delta\rangle_{1,2}&=&2\,\int \frac{dk_{1,\parallel}}{2\pi}\frac{dk_{2,\parallel}}{2\pi} e^{i(k_{1,\parallel}+k_{2,\parallel})\Delta\chi\,}W_{\alpha}(k_{1,\parallel})\, W_{\alpha}(k_{2,\parallel}) \int_{|k_{1,\parallel}|}^{\infty} \frac{k_1 dk_1}{(2\pi)^2} P(\vec{k}_1,\chi_1)\,\int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2}P(\vec{k}_2,\chi_2)\,{\nonumber\\}&\times&\int d\phi\,\int d\theta_{\perp} F_2(\vec{k}_1,\vec{k}_2)\,W_{\kappa}[|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|].\label{d2d_window1}\end{aligned}$$ As also recognized in [@Bernardeau:1995ty], the most challenging part of the calculation consists of the integration over the angular variables. This is because the convergence window function depends on $|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|$. The integration over the angular variables does not, in this case, automatically lead to an expression that is numerically efficient to evaluate. In particular, we aim to keep the integrations factored as much as possible.
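As a side note, the tree-level kernel of Eq. (\[F2\]) is simple enough to be coded up and sanity-checked directly. The following minimal Python sketch (ours, assuming NumPy is available; it is not part of the original derivation) implements it and verifies the expected limits $F_2=2$ for equal aligned wavevectors and $F_2=0$ for equal and opposite ones.

```python
import numpy as np

def f2_tree(k1, k2):
    """Tree-level F_2 kernel of Eq. (F2) for two wavevectors k1, k2."""
    k1sq = np.dot(k1, k1)
    k2sq = np.dot(k2, k2)
    dot12 = np.dot(k1, k2)
    return (5.0 / 7.0
            + 0.5 * dot12 * (k1sq + k2sq) / (k1sq * k2sq)
            + (2.0 / 7.0) * dot12**2 / (k1sq * k2sq))

k = np.array([0.1, 0.0, 0.0])
print(f2_tree(k, k))    # 2.0  (equal, aligned wavevectors)
print(f2_tree(k, -k))   # 0.0  (equal and opposite wavevectors)
```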
Our first goal is then to integrate $$\begin{aligned}
\int d\phi\,\int d\theta_{\perp} F_2(\vec{k}_1,\vec{k}_2)\,W_{\kappa}[|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|]&=&2\pi \, \exp\left(-\frac{k^2_{1,\perp}}{k_C^2}\right)\, \exp\left(-\frac{k^2_{2,\perp}}{k_C^2}\right){\nonumber\\}&\times& \int d\theta_{\perp} F_2(\vec{k}_1,\vec{k}_2)\,\exp\left[-2\frac{k_{1,\perp}k_{2,\perp}\cos(\theta_{\perp})}{k_C^2}\right],\end{aligned}$$ where $\theta_{\perp}$ is the angle between $\vec{k}_{1,\perp}$ and $\vec{k}_{2,\perp}$. Now, as far as the integration over the angular variable is concerned, the kernel $F_2$ can be written as $$F_2(\vec{k}_1,\vec{k}_2)=R+S\,\cos(\theta_{\perp})+T\,\cos^2(\theta_{\perp}),$$ where we have decomposed $\vec{k}$ into its components parallel and perpendicular to the los according to $\vec{k}=k_{\parallel}{\hat{n}}+\vec{k}_{\perp}$ and collected the terms proportional to the different powers of $\cos(\theta_{\perp})$: $$\begin{aligned}
R&=&\frac{5}{7}\, +\frac{1}{2}\frac{k_{1,\parallel}\,k_{2,\parallel}}{k_1^2\,k_2^2}(k_1^2+k_2^2)\, +\frac{2}{7}\left(\frac{k_{1,\parallel}\,k_{2,\parallel}}{k_1\,k_2}\right)^2,\\ S&=&\frac{1}{2}\frac{k_{1,\perp}\,k_{2,\perp}}{k_1^2\,k_2^2}(k_1^2+k_2^2)+\frac{4}{7}\frac{k_{1,\parallel}\,k_{2,\parallel}\,k_{1,\perp}\,k_{2,\perp}}{k_1^2\,k_2^2},\\ T&=&\frac{2}{7}\left(\frac{k_{1,\perp}\,k_{2,\perp}}{k_1\,k_2}\right)^2.\end{aligned}$$ Integration over the angular variable can then be carried out by remembering that $$\begin{aligned}
\int_0^{2\pi}d\theta\,\exp\left[-\alpha\cos(\theta)\right]&=&2\pi\,I_0(\alpha),\\ \int_0^{2\pi}d\theta\,\exp\left[-\alpha\cos(\theta)\right] \cos(\theta)&=&-2\pi\,I_1(\alpha),\\ \int_0^{2\pi}d\theta\,\exp\left[-\alpha\cos(\theta)\right] \cos^2(\theta)&=&\frac{2\pi}{\alpha}\left[\,I_1(\alpha)+\alpha\,I_2(\alpha)\right],\end{aligned}$$ where $I_n$ denotes the modified Bessel function of the first kind of $n$-th order. The integration over the angular variables yields $$\begin{aligned}
&\int d\phi\,\int d\theta_{\perp} F_2(\vec{k}_1,\vec{k}_2)\,W_{\kappa}[|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|] = (2\pi)^2\, \exp\left(-\frac{k^2_{1,\perp}}{k_C^2}\right)\, \exp\left(-\frac{k^2_{2,\perp}}{k_C^2}\right){\nonumber\\}&\times \left\{R\,I_0\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right) -S\,I_1\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)+T\left[\frac{k_C^2}{2\,k_{1,\perp}k_{2,\perp}}I_1\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)+I_2\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)\right]\right\}. \end{aligned}$$ The difficulty with this result is that every term depends on the product $k_{1,\perp} k_{2,\perp}$. As such, we are facing a 2D *joint* integration over the whole $[k_{1,\perp},k_{2,\perp}]$ domain. While this is doable, we are more interested in obtaining a final result that is a product of integrals rather than the integral of a product.
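The three angular integrals quoted above are standard, but they are easy to verify numerically; the short script below (ours, assuming SciPy is available) checks them for an arbitrary value of $\alpha$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv   # modified Bessel function of the first kind, I_n

alpha = 1.7  # arbitrary positive test value

lhs = [quad(lambda t: np.exp(-alpha * np.cos(t)) * np.cos(t)**p, 0.0, 2.0 * np.pi)[0]
       for p in (0, 1, 2)]
rhs = [2.0 * np.pi * iv(0, alpha),
       -2.0 * np.pi * iv(1, alpha),
       (2.0 * np.pi / alpha) * (iv(1, alpha) + alpha * iv(2, alpha))]

print(np.allclose(lhs, rhs))   # True
```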
It is possible to get around this obstacle by recalling that (Abramowitz and Stegun [@Abramowitz:1965hc], 9.6.10) $$\begin{aligned}
I_{\nu}(z)&=&\sum_{n=0}^{\infty}\frac{1}{n!\Gamma(\nu+n+1)}\left(\frac{z}{2}\right)^{2n+\nu}=\sum_{n=0}^{\infty}I_{\nu}^{(n)}\left(\frac{z}{2}\right)^{2n+\nu}.\end{aligned}$$ We can thus write the modified Bessel functions splitting the dependence on $k_{1,\perp}$ and $k_{2,\perp}$ as $$\begin{aligned}
I_0\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)&=&\sum_{n=0}^{\infty}I_{0}^{(n)}\left(\frac{k_{1,\perp}^2}{k_C^2}\right)^{n}\left(\frac{k_{2,\perp}^2}{k_C^2}\right)^{n},\\ I_1\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)&=&\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\sum_{n=0}^{\infty}I_{1}^{(n)}\left(\frac{k_{1,\perp}^2}{k_C^2}\right)^{n}\left(\frac{k_{2,\perp}^2}{k_C^2}\right)^{n},\\ I_2\left(2\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)&=&\left(\frac{k_{1,\perp}k_{2,\perp}}{k_C^2}\right)^2\sum_{n=0}^{\infty}I_{2}^{(n)}\left(\frac{k_{1,\perp}^2}{k_C^2}\right)^{n}\left(\frac{k_{2,\perp}^2}{k_C^2}\right)^{n},\end{aligned}$$ where for the sake of brevity we use the following notation for the coefficients $$\begin{aligned}
I_0^{(n)}&=&\frac{1}{n!^2},\\ I_1^{(n)}&=&\frac{1}{n!(n+1)!}=\frac{I_0^{(n)}}{n+1},\\ I_2^{(n)}&=&\frac{1}{n!(n+2)!}=\frac{I_0^{(n)}}{(n+1)(n+2)}.\end{aligned}$$ Now, however complicated, this form allows us to factor the different integrals. Let’s start by considering the term $R\,I_0$. We have $$\begin{aligned}
R\,I_0&=&\sum_{m=0}^{\infty}I_{0}^{(m)}\left(\frac{k_{1,\perp}^2}{k_C^2}\right)^{m} \left(\frac{k_{2,\perp}^2}{k_C^2}\right)^{m} \left[ \frac{5}{7} +\frac{1}{2}\frac{k_{1,\parallel}\,k_{2,\parallel}}{k_1^2\,k_2^2}(k_1^2+k_2^2) +\frac{2}{7}\left(\frac{k_{1,\parallel}\,k_{2,\parallel}}{k_1\,k_2}\right)^2 \right]\nonumber\\ &=&\sum_{m=0}^{\infty}I_{0}^{(m)}\left(\frac{k_1^2-k_{1,\parallel}^2}{k_C^2}\right)^{m}\left(\frac{k_2^2-k_{2,\parallel}^2}{k_C^2}\right)^{m}\frac{5}{7}\nonumber\\ &+&\sum_{m=0}^{\infty}I_{0}^{(m)}\left(\frac{k_1^2-k_{1,\parallel}^2}{k_C^2}\right)^{m}\left(\frac{k_2^2-k_{2,\parallel}^2}{k_C^2}\right)^{m} \left[\frac{1}{2}\frac{k_{1,\parallel}\,k_{2,\parallel}}{k_1^2\,k_2^2} (k_1^2+k_2^2)\right]\nonumber\\ &+&\sum_{m=0}^{\infty}I_{0}^{(m)}\left(\frac{k_1^2-k_{1,\parallel}^2}{k_C^2}\right)^{m}\left(\frac{k_2^2-k_{2,\parallel}^2}{k_C^2}\right)^{m} \left[\frac{2}{7}\left(\frac{k_{1,\parallel}\,k_{2,\parallel}}{k_1\,k_2}\right)^2\right],\end{aligned}$$ where in going from the first to the second step we expressed $k_{\perp}^2$ as a function of $k$ and $k_{\parallel}$ using the fact that $k^2=k_{\parallel}^2+k_{\perp}^2$. This is necessary because the power spectrum is a function of $k$ and not of $k_{\perp}$. We can then proceed by defining the following functions $$\begin{aligned}
\tilde{H}_m(k_{\parallel},\chi;k_C)&\equiv&\int_{|k_{\parallel}|}^{\infty}\frac{k\,dk}{2\pi}\,\sqrt{I_0^{(m)}}\,P(k,\chi)\,\left(\frac{k^2-k_{\parallel}^2}{k_C^2}\right)^{m}\, \exp\left(-\frac{k^2-k_{\parallel}^2}{k_C^2}\right),\label{def:Htilde}\\ \tilde{L}_m(k_{\parallel},\chi;k_C)&\equiv& \int_{|k_{\parallel}|}^{\infty}\frac{dk}{2\pi k} \,\sqrt{I_0^{(m)}}\,P(k,\chi)\,\left(\frac{k^2-k_{\parallel}^2}{k_C^2}\right)^{m}\,\exp\left(-\frac{k^2-k_{\parallel}^2}{k_C^2}\right).\label{def:Ltilde}\end{aligned}$$ It is important to notice that, because of the integration domain, *all* the above functions are *even* in $k_{\parallel}$, regardless of the value of $m$.
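In practice, Eqs. (\[def:Htilde\])–(\[def:Ltilde\]) are simple one-dimensional quadratures over the linear power spectrum. The sketch below (ours; the power-law toy spectrum is only a placeholder for a tabulated CAMB output, and the parameter values are arbitrary) shows one way to evaluate them and confirms numerically that they are even in $k_{\parallel}$.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def P_lin(k):
    """Toy stand-in for the linear power spectrum P(k, chi); illustration only."""
    return k / (1.0 + (k / 0.02)**2)**2

def H_tilde(m, k_par, k_C, P=P_lin):
    """Eq. (def:Htilde); note sqrt(I_0^(m)) = 1/m!."""
    c = 1.0 / factorial(m)
    f = lambda k: ((k / (2.0 * np.pi)) * c * P(k)
                   * ((k**2 - k_par**2) / k_C**2)**m
                   * np.exp(-(k**2 - k_par**2) / k_C**2))
    return quad(f, abs(k_par), np.inf)[0]

def L_tilde(m, k_par, k_C, P=P_lin):
    """Eq. (def:Ltilde); same integrand with a 1/(2 pi k) measure."""
    c = 1.0 / factorial(m)
    f = lambda k: (c * P(k) / (2.0 * np.pi * k)
                   * ((k**2 - k_par**2) / k_C**2)**m
                   * np.exp(-(k**2 - k_par**2) / k_C**2))
    return quad(f, abs(k_par), np.inf)[0]

# Evenness in k_par, as stated in the text:
print(np.isclose(H_tilde(2, 0.05, 1.0), H_tilde(2, -0.05, 1.0)))   # True
print(np.isclose(L_tilde(1, 0.05, 1.0), L_tilde(1, -0.05, 1.0)))   # True
```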
With the help of these functions we then have $$\begin{aligned}
&& \int_{|k_{1,\parallel}|}^{\infty} \frac{k_1 dk_1}{(2\pi)^2} P(\vec{k}_1,\chi_1)\,\int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2}P(\vec{k}_2,\chi_2) \int d\phi\,\int d\theta_{\perp} R\,W_{\kappa}[|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|]\nonumber\\ &=&\frac{5}{7}\sum_{m=0}^{\infty}\tilde{H}_m(k_{1,\parallel},\chi_1)\tilde{H}_m(k_{2,\parallel},\chi_2) +\frac{2\,k_{1,\parallel}^2\,k_{2,\parallel}^2}{7}\sum_{m=0}^{\infty}\tilde{L}_m(k_{1,\parallel},\chi_1)\tilde{L}_m(k_{2,\parallel},\chi_2)\nonumber\\ &+&\frac{k_{1,\parallel}\,k_{2,\parallel}}{2}\left[\sum_{m=0}^{\infty} \tilde{H}_m(k_{1,\parallel},\chi_1)\tilde{L}_m(k_{2,\parallel},\chi_2) +\sum_{m=0}^{\infty}\tilde{L}_m(k_{1,\parallel},\chi_1)\tilde{H}_m(k_{2,\parallel},\chi_2)\right].\end{aligned}$$ We have therefore succeeded in obtaining an expression that has the dependence on $k_{1,\parallel}$ and $k_{2,\parallel}$ *completely factored*. The sums over $m$ and the fact that each term is a product of factors that depend only either on $k_{1,\parallel}$ or on $k_{2,\parallel}$ allow us to integrate term by term and, at the same time, to bypass the two-dimensional joint integration. We can then proceed in exactly the same way for the other two terms, $S\,I_1$ and $T\,I_2$, with the only difference that, in order to obtain expressions in which only the coefficients $I_0^{(m)}$ of the modified Bessel function of $0$-th order appear, we use the fact that $I_1^{(m)}=(m+1)\,I_0^{(m+1)}$. We then obtain for the $S$ term the following expression $$\begin{aligned}
&& \int_{|k_{1,\parallel}|}^{\infty} \frac{k_1 dk_1}{(2\pi)^2} P(\vec{k}_1,\chi_1)\,\int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2}P(\vec{k}_2,\chi_2)\, \int d\phi\,\int d\theta_{\perp} S\,\cos(\theta_{\perp})\,W_{\kappa}[|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|]\\ &=&-\frac{k_C^2}{2}\sum_{m=0}^{\infty}m\,\left[\tilde{H}_m(k_{1,\parallel},\chi_1)\tilde{L}_m(k_{2,\parallel},\chi_2) +\tilde{L}_m(k_{1,\parallel},\chi_1)\tilde{H}_m(k_{2,\parallel},\chi_2)\right] -\frac{4\,k_C^2\,k_{1,\parallel}\,k_{2,\parallel}}{7}\sum_{m=0}^{\infty}\,m\,\tilde{L}_m(k_{1,\parallel},\chi_1)\tilde{L}_m(k_{2,\parallel},\chi_2).\nonumber\end{aligned}$$ Finally, the $T$ term gives $$\begin{aligned}
&&\int_{|k_{1,\parallel}|}^{\infty} \frac{k_1 dk_1}{(2\pi)^2} P(\vec{k}_1,\chi_1)\,\int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2}P(\vec{k}_2,\chi_2) \int d\phi\,\int d\theta_{\perp} T\,\cos^2(\theta_{\perp})\,W_{\kappa}[|\vec{k}_{1,\perp}+\vec{k}_{2,\perp}|] \nonumber\\ &=&\frac{k_C^4}{7}\sum_{m=0}^{\infty}\,m\,(2m-1)\,\tilde{L}_m(k_{1,\parallel},\chi_1)\tilde{L}_m(k_{2,\parallel},\chi_2).\end{aligned}$$ With the introduction of the definitions (\[def:Htilde\]-\[def:Ltilde\]) and with the series expansion for the modified Bessel function we have therefore managed to carry out the integration over the perpendicular part of the wavevector. We are then left with the integration over $k_{\parallel}$. First recall that the window functions acting on the [Lyman-$\alpha$[ ]{}]{}flux are $$\begin{aligned}
W_{\alpha}(k_{\parallel},k_L,k_l)&\equiv&\left[1-e^{-(k_{\parallel}/k_l)^2}\right]e^{-(k_{\parallel}/k_L)^2} =e^{-(k_{\parallel}/k_L)^2}-e^{-(k_{\parallel}/\bar{k})^2},\end{aligned}$$ and that in Eq. (\[d2d\_window1\]) they decouple from one another.
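The two forms of $W_{\alpha}$ quoted above are equivalent provided $1/\bar{k}^2 = 1/k_L^2 + 1/k_l^2$, which is the combination implied by the equality. The minimal check below (ours, with arbitrary cutoff values) confirms this numerically.

```python
import numpy as np

def W_alpha(k_par, k_L, k_l):
    """High-k Gaussian cutoff at k_L combined with a low-k suppression below k_l."""
    return (1.0 - np.exp(-(k_par / k_l)**2)) * np.exp(-(k_par / k_L)**2)

def W_alpha_split(k_par, k_L, k_l):
    """Equivalent two-Gaussian form, with 1/kbar^2 = 1/k_L^2 + 1/k_l^2."""
    kbar = 1.0 / np.sqrt(1.0 / k_L**2 + 1.0 / k_l**2)
    return np.exp(-(k_par / k_L)**2) - np.exp(-(k_par / kbar)**2)

k = np.linspace(1e-4, 5.0, 7)    # arbitrary test wavenumbers
print(np.allclose(W_alpha(k, 1.0, 0.01), W_alpha_split(k, 1.0, 0.01)))   # True
```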
We can proceed further by defining the following functions $$\begin{aligned}
f^{(n)}_m(\Delta\chi,\chi;k_C,k_L)&\equiv&\int_{-\infty}^{\infty}\frac{dk_{\parallel}}{2\pi}\,\left(\frac{k_{\parallel}}{k_L}\right)^n\exp\left[-\frac{k_{\parallel}^2}{k_L^2}+ik_{\parallel}\Delta\chi\right] \tilde{f}_m(k_{\parallel},\chi;k_C),\label{f_m^n}\end{aligned}$$ where $f=\{H,L\}$. It is straightforward to note that, because all the tilde functions are even in $k_{\parallel}$, the above Fourier transforms are purely real and *even* functions of $\Delta\chi$ when $n$ is even, and purely imaginary and *odd* functions of $\Delta\chi$ when $n$ is odd. Carrying out the integration over $k_{\parallel}$ is then straightforward, as it just corresponds to the replacement $k_{\parallel}^n \tilde{f}_m(k_{\parallel},\chi;k_C)\rightarrow k_L^n f_m^{(n)}(\Delta\chi,\chi;k_C,k_L)- \bar{k}^n f_m^{(n)}(\Delta\chi,\chi;k_C,\bar{k})$. Finally, from a computational point of view this approach is rather efficient, as the tilded functions need to be calculated only once and can then be used to construct the two-index functions. With the help of these auxiliary functions we can finally obtain the following expression for the cumulant correlator $\langle\delta^2\delta\rangle_{1,2}$ $$\begin{aligned}
\langle\delta^2\delta\rangle_{1,2}&=& 2\sum_{m=0}^{\infty}\left\{\frac{5}{7}\, \left[H_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-H_m^{(0)}(\Delta\chi,\chi_q;k_C,\bar{k})\right]^2\right.\nonumber\\ &+&\left[k_L\,H_m^{(1)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}\,H_m^{(1)}(\Delta\chi,\chi_q;k_C,\bar{k})\right] \left[k_L\,L_m^{(1)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}\,L_m^{(1)}(\Delta\chi,\chi_q;k_C,\bar{k})\right] {\nonumber\\}&-&m\,k_C^2\,\left[H_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-H_m^{(0)}(\Delta\chi,\chi_q;k_C,\bar{k})\right]\left[L_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-L_m^{(0)}(\Delta\chi,\chi_q;k_C,\bar{k})\right] {\nonumber\\}&+& \frac{2}{7}\left[k_L^2\,L_m^{(2)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}^2\,L_m^{(2)}(\Delta\chi,\chi_q;k_C,\bar{k})\right]^2{\nonumber\\}&-&\frac{4m}{7}\, k_C^2\,\left[k_L\,L_m^{(1)}(\Delta\chi,\chi_q;k_C,k_L)-\bar{k}\,L_m^{(1)}(\Delta\chi,\chi_q;k_C,\bar{k})\right]^2 {\nonumber\\}&+&\left.\frac{m(2m-1)}{7}\,k_C^4\,\left[L_m^{(0)}(\Delta\chi,\chi_q;k_C,k_L)-L_m^{(0)}(\Delta\chi,\chi_q;k_C,\bar{k})\right]^2 \right\}.\label{eq:d2d_12_kLkl_final}\end{aligned}$$ A cautionary note is in order. As mentioned above, the functions defined through Eq. (\[f\_m\^n\]) are purely imaginary if the index $(n)$ is odd. However, notice that in Eq. (\[eq:d2d\_12\_kLkl\_final\]) above such functions always appear in pairs (as in the case of $H_m^{(1)} L_m^{(1)}$), thus ensuring that $\langle\delta^2\delta\rangle_{1,2}$ is always real valued. Let us now move on to calculate $\langle\delta^2\delta\rangle_{2,3}$. Notice incidentally that this term is exactly equal to $\langle\delta^2\delta\rangle_{3,1}$.
We start from the now usual expression $$\begin{aligned}
\langle\delta^2\delta\rangle_{2,3}&=&2\,\int \frac{dk_{2,\parallel}}{2\pi}\frac{dk_{3,\parallel}}{2\pi} e^{-i\,k_{3,\parallel}\Delta\chi}\, W_{\alpha}(-k_{2,\parallel}-k_{3,\parallel})\, W_{\alpha}(k_{2,\parallel}) \int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2} P(\vec{k}_2,\chi_2)\,\int_{|k_{3,\parallel}|}^{\infty} \frac{k_3 dk_3}{(2\pi)^2}P(\vec{k}_3,\chi_3)\, {\nonumber\\}&\times&\int d\phi\,\int d\theta_{\perp} F_2(\vec{k}_2,\vec{k}_3)\,W_{\kappa}(\vec{k}_{3,\perp}),\label{d2d_window23}\end{aligned}$$ where, as previously, we have traded the integrations over $k_{i,\perp}$ for the ones over $k_i$. In this case the integration over the angular variables does not pose any problem, as the window function $W_{\kappa}$ is actually a function of $k_{3,\perp}$ only and it can be safely pulled out of the angular integrals $$\begin{aligned}
\int d\phi \int d\theta_{\perp} F_2(\vec{k}_2,\vec{k}_3) &=&\frac{(2\pi)^2}{7}\,\left(5+1\right) +\frac{(2\pi)^2}{2}\,k_{2,\parallel}\,k_{3,\parallel} \left(\frac{1}{k_2^2}+\frac{1}{k_3^2}\right) +\frac{(2\pi)^2}{7}\, \left[3\frac{k_{2,\parallel}^2\,k_{3,\parallel}^2}{k_2^2\,k_3^2}-\left(\frac{k_{2,\parallel}^2}{k_2^2}+\frac{k_{3,\parallel}^2}{k_3^2}\right)\right].\label{Int_ang_vars_nowindow}\end{aligned}$$ It is here necessary to point out that, since $W_{\kappa}$ depends only on $k_{3,\perp}$, the tilded functions that will appear when the integration over $k_2$ is carried out will contain no filter function. We characterize these functions by replacing $k_C$ with the $\infty$ symbol, since to all intents and purposes the Gaussian filter with $k_C\rightarrow\infty$ just yields unity. We then have $$\begin{aligned}
&& \int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2} P(\vec{k}_2,\chi_2)\,\int_{|k_{3,\parallel}|}^{\infty} \frac{k_3 dk_3}{(2\pi)^2}P(\vec{k}_3,\chi_3) \int d\phi\,\int d\theta_{\perp} F_2(\vec{k}_2,\vec{k}_3)\,W_{\kappa}(\vec{k}_{3,\perp})\nonumber\\ &=&\int_{|k_{2,\parallel}|}^{\infty} \frac{k_2 dk_2}{(2\pi)^2} P(\vec{k}_2,\chi_2) \int_{|k_{3,\parallel}|}^{\infty} \frac{k_3 dk_3}{(2\pi)^2}P(\vec{k}_3,\chi_3)\exp\left(-\frac{k_{3}^2-k_{3,\parallel}^2}{k_C^2}\right)\nonumber\\ &\times&(2\pi)^2\left[\frac{5}{7} +\frac{1}{2}\,k_{2,\parallel}\,k_{3,\parallel} \left(\frac{1}{k_2^2}+\frac{1}{k_3^2}\right)+\frac{1}{7} \,\frac{1}{k_2^2\,k_3^2} \left(2\,k_{2,\parallel}^2\,k_{3,\parallel}^2+k_{2,\perp}^2\,k_{3,\perp}^2\right)\right]\nonumber\\ &=&\frac{6}{7}\,\tilde{H}_0(k_{2,\parallel},\chi_2;\infty)\tilde{H}_0(k_{3,\parallel},\chi_3;k_C) +\frac{1}{2}\,k_{2,\parallel}\,k_{3,\parallel}\tilde{L}_0(k_{2,\parallel},\chi_2;\infty)\tilde{H}_0(k_{3,\parallel},\chi_3;k_C) {\nonumber\\}&+&\frac{1}{2}\,k_{2,\parallel}\,k_{3,\parallel}\tilde{H}_0(k_{2,\parallel},\chi_2;\infty)\tilde{L}_0(k_{3,\parallel},\chi_3;k_C) +\frac{3}{7} \,k_{2,\parallel}^2\,k_{3,\parallel}^2\tilde{L}_0(k_{2,\parallel},\chi_2;\infty)\tilde{L}_0(k_{3,\parallel},\chi_3;k_C){\nonumber\\}&-&\frac{1}{7} k_{3,\parallel}^2\,\tilde{H}_0(k_{2,\parallel},\chi_2;\infty)\,\tilde{L}_0(k_{3,\parallel},\chi_3;k_C) -\frac{1}{7}k_{2,\parallel}^2\,\tilde{L}_0(k_{2,\parallel},\chi_2;\infty)\tilde{H}_0(k_{3,\parallel},\chi_3;k_C).\end{aligned}$$ The expression for the window function acting on the [Lyman-$\alpha$[ ]{}]{}flux is in this case $$\begin{aligned}
W_{\alpha}(-k_{2,\parallel} -k_{3,\parallel})\,W_{\alpha}(k_{2,\parallel}) &=&\left[1-e^{-\left(\frac{k_{2,\parallel}+k_{3,\parallel}}{k_l}\right)^2}\right]e^{-\left
(\frac{k_{2,\parallel}+k_{3,\parallel}}{k_L}\right)^2} \left[1-e^{-\left(\frac{k_{2,\parallel}}{k_l}\right)^2}\right]e^{-\left(\frac{k_{2,\parallel}} {k_L}\right)^2} {\nonumber\\}&=&e^{-k_{3,\parallel}^2/k_L^2}\left(e^{-2k_{2,\parallel}^2/k_L^2}-e^{-k_ {2,\parallel}^2/\hat{k}^2}\right) \sum_n\frac{(-2)^n}{n!}\left(\frac{k_{2,\parallel}}{k_L}\right)^n \left(\frac{k_{3,\parallel}}{k_L}\right)^n{\nonumber\\}&+&e^{-k_{3,\parallel}^2/\bar{k}^2} \left(e^{-2k_{2,\parallel}^2/\bar{k}^2}-e^{-k_{2,\parallel}^2/\hat{k}^2}\right) \sum_n\frac{(-2)^n}{n!}\left(\frac{k_{2,\parallel}}{\bar{k}}\right)^n \left(\frac{k_{3,\parallel}}{\bar{k}}\right)^n,\end{aligned}$$ where we have recast the window function in a combination that is suitable for furthering the calculation. Notice in fact that the first and second term in the sum differ only by the presence of $k_L$ or $\bar{k}$ in the denominators of the exponentials. Furthermore, the terms in square brackets are functions of $k_ {2,\parallel}$ only. We then define the coefficients $$\begin{aligned} \bar{f}_m^{(n)}(\chi;k_L)&\equiv&\int_{-\infty}^{\infty}\frac{dk_{\parallel}}{2\pi}\left (\frac{k_{\parallel}}{k_L}\right)^n\left[e^{-2k_{\parallel}^2/k_L^2}-e^{-k_{\parallel} ^2/\hat{k}^2}\right]\tilde{f}_m(k_{\parallel},\chi,\infty),\\ \bar{f}_m^{(n)}(\chi;\bar{k})&\equiv&\int_{-\infty}^{\infty}\frac{dk_{\parallel}}{2\pi}\left (\frac{k_{\parallel}}{\bar{k}}\right)^n\left[e^{-2k_{\parallel}^2/\bar{k}^2}-e^{-k_ {\parallel}^2/\hat{k}^2}\right] \tilde{f}_m(k_{\parallel},\chi,\infty).\end{aligned}$$ A point worth making is that the second expression can be obtained from the first one with the substitution $k_L\rightarrow \bar{k}$ in the denominators but *not* in the expression for $\hat{k}$, hence the necessity of two separate definitions. Considering then the following generic term, it is possible to show that $$\begin{aligned} && \int\frac{dk_2}{2\pi}\frac{dk_3}{2\pi}\,k_2^p\,k_3^q\, \tilde{f}_i(k_2,\chi_2;\infty)\tilde{g}_j(k_3,\chi_3;k_C) W_{\alpha}(-k_2 -k_3)\,W_{\alpha}(k_2)\,e^{-ik_3\Delta\chi}{\nonumber\\}&=&\sum_m\frac{(-2)^m}{m!} \left[k_L^{(p+q)}g_j^{(q+m)}(\Delta\chi,\chi;k_C,k_L)\bar{f}_i^{(p+m)}(\chi_2,k_L) +\bar{k}^{(p+q)}g_j^{(q+m)}(\Delta\chi,\chi;k_C,\bar{k})\bar{f}_i^{(p+m)}(\chi_2,\bar {k})\right],\end{aligned}$$ which then leads directly to $$\begin{aligned} \langle\delta_q^2\delta_c\rangle_{2,3}&=& 2\sum_{m=0}^{\infty}\frac{(-1)^m\,2^m}{m!} \left[\frac{6}{7}\bar{H}_0^{(m)}(k_L)H_0^{(m)}(\Delta\chi;k_C,k_L)\right. +\frac{1}{2}k_L^2\bar{L}_0^{(m+1)}(k_L)\,H_0^{(m+1)}(\Delta\chi;k_C,k_L){\nonumber\\}&+&\frac{1}{2}k_L^2\bar{H}_0^{(m+1)}(k_L)\,L_0^{(m+1)}(\Delta\chi;k_C,k_L) +\frac{3}{7}k_L^4\,\bar{L}_0^{(m+2)}(k_L)\,L_0^{(m+2)}(\Delta\chi;k_C,k_L){\nonumber\\}&-&\left. \frac{k_L^2}{7} \bar{H}_0^{(m)}(k_L)\,L_0^{(m+2)}(\Delta\chi;k_C,k_L) -\frac{k_L^2}{7} \bar{L}_0^{(m+2)}(k_L)\,H_0^{(m)}(\Delta\chi;k_C,k_L) + (k_L\rightarrow\bar{k})\right].\label{d2d_23final_kLkl}\end{aligned}$$ Note that in the above expression while $\bar{f}_m^{(n)}$ are always real, the $f_m^{(n)}$ can be real or imaginary depending on whether $n$ is even or odd. However, the fact that $\bar{f}_m^{(n)}$ is zero whenever the upper index is odd guarantees that $\langle\delta^2\delta\rangle_{2,3}$ is always real valued. Also, notice that while the coefficients $\bar{f}_m^{(n)}(\chi_Q;k_L)$ are decreasing with $m$, the coefficients $\bar{f}_m^{(n)}(\chi_Q;\bar{k})$ are actually *increasing* with $m$. 
However, the $m!$ factor present in the denominator more than compensates for these increasing coefficients and allows the series to be truncated in an actual calculation. Finally, it is worth pointing out that the case without a cutoff on the long-wavelength modes is recovered from the above expression simply by setting $k_l=0$ and then noticing that in this case $\bar{k}=0$, so that the corresponding terms appearing in Eqs. (\[eq:d2d\_12\_kLkl\_final\], \[d2d\_23final\_kLkl\]) disappear.

[^1]: Notice that even though in the $r=2$ case it would be reasonable to expect three factors of $D$, the first non-zero contribution to the three-point function carries four factors of $D$ because the Gaussian term vanishes exactly.

[^2]: We checked that in the limit where $k_L\rightarrow\infty$, $k_C \rightarrow\infty$ and $k_l\rightarrow 0$ the usual two-point correlation function is recovered. Whereas one would naively expect that letting $k_L=k_C$ and $k_l=0$ would recover the usual two-point function calculated exploiting spherical symmetry in $k$-space with a cutoff scale equal to the common $k_L$, this is actually *not* the case. The reason is that the volume of $k$-space over which the integration is carried out is different for the two choices of coordinate system. In particular, the spherical case always includes fewer modes than the cylindrical one. The two results therefore coincide *only* in the $k_L \rightarrow \infty$ limit.
--- abstract: 'We present a new approach to study galaxy evolution in a cosmological context. We combine cosmological merger trees and semi-analytic models of galaxy formation to provide the initial conditions for multi-merger hydrodynamic simulations. In this way we exploit the advantages of merger simulations (high resolution and inclusion of the gas physics) and semi-analytic models (cosmological background and low computational cost), and integrate them to create a novel tool. This approach allows us to study the evolution of various galaxy properties, including the treatment of the hot gaseous halo from which gas cools and accretes onto the central disc, which has been neglected in many previous studies. This approach has several advantages over other methods. As only the particles in the regions of interest are included, the run time is much shorter than in traditional cosmological simulations, leading to greater computational efficiency. Using cosmological simulations, we show that multiple mergers are expected to be more common than sequences of isolated mergers, and therefore studies of galaxy mergers should take this into account. In this pilot study, we present our method and illustrate the results of simulating ten Milky Way-like galaxies since $z=1$. We find good agreement with observations for the total stellar masses, star formation rates, cold gas fractions and disc scale length parameters. We expect that this novel numerical approach will be very useful for pursuing a number of questions pertaining to the transformation of galaxy internal structure through cosmic time.' author: - | Benjamin P. Moster$^{1}$ [^1], Andrea V. Macciò$^{2}$, Rachel S. Somerville$^{3}$\ $^1$ Max-Planck Institut für Astrophysik, Karl-Schwarzschild Straße 1, 85748 Garching, Germany\ $^2$ Max-Planck-Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany\ $^3$ Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Rd., Piscataway, NJ 08854 bibliography: - 'moster2012.bib' title: 'Numerical hydrodynamic simulations based on semi-analytic galaxy merger trees: method and Milky-Way like galaxies' ---

\[firstpage\]

Galaxy: evolution – galaxies: evolution, interactions, structure – methods: numerical, N-body simulation

Introduction {#sec:intro}
============

Large galaxy redshift surveys, such as the Two Degree Field Galaxy Redshift Survey [@folkes1999] and the Sloan Digital Sky Survey [@stoughton2002], have measured and quantified the large-scale structure of the Universe. On the theoretical side, computational advances also allow accurate predictions of the distribution of dark matter on large scales based on numerical simulations within the $\Lambda$ Cold Dark Matter (CDM) paradigm. The predictions of $\Lambda$CDM are in excellent agreement with observations on large scales. On small scales, however, it is much more difficult to explain and predict galaxy structure, as this requires an understanding of the baryonic processes that shape galaxy properties. The standard picture of hierarchical galaxy formation was introduced by @white1978 and has been subsequently extended. In this picture, small initial density fluctuations grow with time due to gravitational instability. While the dark matter collapses through violent relaxation into haloes in a quasi-equilibrium state, the baryonic matter falls into the potential wells of these haloes, forming a hot gaseous halo in which self-gravity is balanced by pressure gradients.
The gas in this halo is able to cool, which reduces the pressure support and causes the accretion of cold gas from the halo onto a central disc. Here, stars form on a characteristic time-scale that depends on the gas density, resulting in a rotating stellar disc. A fraction of the newly created stars is short-lived, and these stars explode as supernovae (SNe), which can heat the surrounding gas, reducing the efficiency of star formation and in some cases blowing gas out of the galaxy in a galactic wind. Another potential source of feedback is provided by the accretion of gas onto a supermassive black hole (BH) in the centre of the galaxy, known as an active galactic nucleus (AGN). Due to the hierarchical nature of the $\Lambda$CDM scenario, a dark matter halo constantly accretes new material and eventually other galactic systems merge with it. Such a merger may be accompanied by a strong burst of star formation if the merging galaxies have a similar mass and contain significant amounts of cold gas. At the same time, angular momentum is transferred to the stars in the disc. In a major merger, the orbits of the disc stars are randomised, resulting in the destruction of the discs and the creation of an elliptical galaxy. After such a merger a new gas disc can be created and a new stellar disc formed [@moster2011a]. In minor mergers, the disc of the central galaxy typically survives, although it may be thickened; some of the stars can be removed from the disc and transferred to a spherical component, while the satellite is completely destroyed [@moster2010b; @moster2011b]. In this way, galaxy mergers affect the morphology of galaxies and shape their properties. One of the major goals of galaxy formation modeling is to understand the correlations of galaxy internal properties with their formation history and environment. In addition to global properties such as luminosity or stellar mass, star formation rate, and color, we can also study the internal structure and kinematics of galaxies. For example, there are well-known scaling relations between galaxy luminosity or stellar mass, radial size, and rotation velocity or velocity dispersion (called the Fundamental Plane; the Tully-Fisher relation for late-type galaxies or Faber-Jackson for early types are projections of this plane). Moreover, there are strong correlations between galaxy morphological or structural properties (e.g. spheroid vs. disk dominated) and spectrophotometric properties (color or specific star formation rate), which are not yet fully understood. In order to study the connection between large-scale environment and galaxy internal properties, we can specify several requirements for a galaxy formation modeling approach. The first prerequisite is the inclusion of gas and the physical processes to which it is subject. While this may seem obvious, there are many studies in the literature that neglect the gaseous component and use initial conditions that include only the collisionless components of stars and dark matter. However, not only does the gas form stars, subsequently altering the density and gravitational potential, but the gas is also able to radiate energy away through cooling. By this mechanism, the total energy content of a system can be reduced, which is not possible in purely collisionless studies. The next requirement is high enough resolution to capture the internal structure of galaxies.
Many cosmological simulations of galaxy formation have $\sim$ kpc spatial resolution, which clearly does not allow us to say anything about the internal structure or morphology of galaxies. Third, it is important to study the evolution of galaxies within a cosmological background. The cosmological framework has a large impact on the evolution of a galaxy. It specifies when a halo and a galaxy form and how much mass they accrete during their evolution. Furthermore, the cosmology determines the merger histories of the galaxies, i.e. when and how often a galaxy undergoes a merger event. Finally, while it can be useful to study a small number of systems in great detail, in order to interpret the significance of any results and make use of the available large observational samples of galaxies, it is helpful to be able to simulate a statistically significant sample of systems, spanning a range of the relevant properties such as halo mass, formation history, etc. In this paper we present a novel approach that allows us to carry out numerical hydrodynamic simulations of galaxies at high resolution, within a proper cosmological context, with much greater computational efficiency than standard methods.

Numerical simulations
---------------------

Numerical simulations are a powerful tool for studying galaxy formation and evolution. Dissipationless $N$-body methods, which neglect the gas physics, have been extensively employed. State-of-the-art $N$-body simulations include the Millennium simulation [@springel2005d], the Bolshoi simulation [@klypin2011], the Via Lactea simulation [@diemand2007], the Aquarius project [@springel2008] and the GHALO simulations [@stadel2009]. Collectively, $N$-body simulations have characterized the formation of dark matter structures, from the internal structure of dark matter haloes to the largest structures we expect to form on scales of hundreds of Mpc. When a dissipational component is included in the simulation, the complexity of the problem increases substantially, as does the computational time. One consequence is that cosmological hydrodynamical simulations typically cannot reach the same spatial and mass resolution as $N$-body runs. Due to the limited resolution, as well as imperfect treatment of feedback processes, cosmological hydrodynamic simulations have long suffered from a set of intertwined problems. They have tended to produce galaxies that are too compact and too bulge-dominated (the angular momentum catastrophe), and with stellar or baryonic masses that are too large compared with their halo masses (the overcooling problem). They do not typically reproduce the red colors and very low star formation rates of observed massive galaxies (another manifestation of the overcooling problem). Recent efforts have demonstrated that with very high resolution, achievable through “zoom” techniques, as well as improved treatment of star formation and stellar feedback, it may be possible to produce disk-dominated galaxies with sizes and baryon fractions that are consistent with observations [@governato2004; @robertson2004; @okamoto2005; @governato2010; @agertz2011; @guedes2011; @stinson2012]. This is encouraging, but a drawback of this approach is the enormous computational expense. Even with very large expenditures of computational resources, it is only feasible to simulate a handful of systems at this resolution, particularly if one wishes to run the simulations to $z=0$.
A third approach neglects the cosmological background and simulates an isolated galaxy or the interaction of two pre-formed galaxies in a binary merger. With this technique, it is possible to study galaxy transformations at very high resolution employing gas physics [@noguchi1988; @combes1990; @mihos1996; @cox2006b; @naab2006; @robertson2006; @moster2011a]. However, in this approach, the initial conditions (such as the properties of the progenitor galaxies and the orbit) are not derived from a cosmological context, but are specified a priori, based on observations or simply spanning a desired range of values. Perhaps more importantly, on the timescale of the merger process (a few Gyr), one expects a significant amount of new material (gas and dark matter) to be accreted in a cosmological framework, but this accretion has been neglected in most binary merger studies. This accretion can greatly affect the evolution of the galaxies [@moster2011a]. Furthermore, many galaxies experience multiple mergers over their lifetime, and indeed, these mergers frequently occur in rapid enough succession that the galaxy does not have time to relax in between (see section \[sec:mwmergers\]). Therefore this process may not be accurately simulated via a sequence of binary mergers. Semi-analytic models -------------------- The semi-analytic approach was proposed by @white1991, based on ideas presented in @white1978. This method, which could only predict average quantities, was later extended to model the properties of individual systems, based on merger histories as predicted by the CDM scenario [@kauffmann1993; @cole1994; @kauffmann1994; @baugh1996; @somerville1999a; @bower2006; @croton2006; @somerville2008a]. In a semi-analytic model (SAM) one starts by specifying a cosmological model and then traces the merger histories for a series of dark matter haloes using either an $N$-body simulation or the analytic extended Press-Schechter (EPS) formalism. The evolution of the baryonic component in the haloes is followed by using simple, yet physical analytic recipes for gas cooling and accretion, star formation and feedback. Cooling converts hot gas into cold gas, star formation converts cold gas into stars, and feedback converts cold gas into hot gas and in some cases ejects it from the halo. When two haloes merge, it is assumed that the galaxies merge on a time scale set by dynamical friction, possibly altering the properties of both systems. A sample of model galaxies that represents the galaxy population in the present-day Universe can then be built by modelling a large number of haloes that sample the halo mass function. The resulting star formation and chemical evolution histories can be convolved with stellar population synthesis models to predict observables such as luminosity and colour. For an excellent recent review on SAMs we refer to @baugh2006. Thus, in a SAM the complicated astrophysical processes that are responsible for the formation and evolution of galaxies are modelled as a set of recipes which carry a number of free parameters (it should be kept in mind, however, that numerical hydrodynamic simulations carry the same free parameters to characterize the sub-grid physics). As the current understanding of many of these processes is limited, the model parameters cannot be derived from first principles. Instead, the model is normalised with a set of observational constraints, and with the help of more detailed hydrodynamical simulations. 
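To make the recipe-based bookkeeping described above more concrete, here is a deliberately simplified toy sketch (ours, in Python); the rates, parameter values and time-stepping are placeholders chosen for illustration and do not correspond to any published SAM.

```python
def sam_step(m_hot, m_cold, m_star, dt, t_cool=2.0, t_sf=3.5, eta=1.0):
    """One Euler step of a toy semi-analytic recipe (masses in arbitrary units,
    times in Gyr). Cooling moves hot gas to the cold disc, star formation turns
    cold gas into stars, and SN feedback reheats cold gas; baryons are conserved."""
    cooling = m_hot / t_cool        # hot -> cold
    sfr = m_cold / t_sf             # cold -> stars
    reheated = eta * sfr            # cold -> hot (SN feedback)
    m_hot += (reheated - cooling) * dt
    m_cold += (cooling - sfr - reheated) * dt
    m_star += sfr * dt
    return m_hot, m_cold, m_star

state = (1.0, 0.1, 0.0)             # initial hot gas, cold gas, stars
for _ in range(100):                # 100 steps of 0.05 Gyr = 5 Gyr
    state = sam_step(*state, dt=0.05)
print(state)
```

A real SAM replaces each of these constant rates with physically motivated, halo-dependent expressions and adds further ingredients such as merger-driven starbursts, disc sizes and AGN feedback, as described above.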
The advantage of SAMs is their flexibility and their computational efficiency. It is possible to run many realisations in a short amount of time, and thereby test the effects of the various assumptions and model parameters. It is also possible to create ‘mock catalogs’ of large samples of galaxies (millions), which can be compared with large observational samples. Moreover, one can include more physical processes, such as AGN feedback, albeit in a schematic way. Currently, SAMs are much more successful than numerical cosmological simulations at reproducing the global properties of galaxies. Furthermore, SAMs reproduce quite well the results of hydrodynamic “zoom” simulations when similar physics is included [@hirschmann2012]. The main drawback of SAMs is that they yield only approximate and schematic information about the internal structure and morphology of galaxies. For example, most SAMs provide an estimate of the ratio of the spheroid mass to the disc mass, based on the merger history of the galaxy and the assumption that near-equal mass mergers destroy discs and build spheroids [e.g. @fontanot2011]. Many SAMs also include a simple recipe for computing the radial size and rotation velocity of the disc component based on angular momentum considerations [e.g. @somerville2008a], and recent work provides an approach for predicting the size and velocity dispersion of spheroids formed in mergers [@covington2011]. However, in the existing SAMs, these recipes are largely based on binary merger simulations, which we have argued may not accurately capture the associated physics because of the neglect of cosmological accretion. In summary, we may consider the SAM to be an efficient tool for predicting the global properties of large samples of galaxies, but of limited use for studying galaxy internal properties.

Combining simulations and semi-analytic models
----------------------------------------------

In this paper, we present a new approach that leverages the complementary strengths of semi-analytic models and hydrodynamic merger simulations to efficiently simulate the evolution of galaxies at high resolution and within a cosmological context. We start with a dark matter halo merger tree extracted from a dissipationless $N$-body simulation. We run the SAM within this halo merger tree to construct a *galaxy* merger tree, which specifies when each galaxy enters a larger halo and becomes a satellite, and the global properties of this galaxy (e.g. stellar mass of a bulge and disc component, cold gas mass, radial size) when it enters the halo. We use these SAM predictions to specify the initial conditions for a sequence of mergers, which we simulate with a full numerical hydrodynamic code. Specifically, we evolve the galaxy in the main branch with our hydrodynamical code, and include satellite galaxies in the simulation at the time when they cross the virial radius of the larger halo (with this time specified by the $N$-body simulation) using the satellite properties predicted by the SAM. In addition, we include the growth of the main halo due to cosmological accretion, as specified by the merger tree. This approach is particularly well suited for studies that focus on the evolution of the internal structure of galaxies, including different components such as a thin and thick disc, spheroid, and stellar halo. While many applications of our model focus on the evolution of the central galaxy, we can also address the evolution of the satellite population.
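As an illustration of the event-driven structure of this approach, the following toy sketch queues satellites by the time they cross the virial radius and only "activates" them at that moment. The `Satellite` container and the `plan_simulation` helper are illustrative constructs for this sketch, not part of the SAM or of the simulation code; the example entry times and stellar masses are loosely based on one of the merger trees simulated later in the paper.

```python
# Toy illustration of the event-driven structure of the method: satellites,
# with properties predicted by the SAM, are queued by the time they cross
# the virial radius of the main halo and are only added to the simulation
# at that moment.  All names and numbers here are illustrative only.

from collections import namedtuple

Satellite = namedtuple("Satellite", "name t_enter logmstar")

def plan_simulation(t_start, t_final, satellites):
    """Return the ordered list of (time, action) steps for one merger tree."""
    events = [(t_start, "initialise central galaxy from the SAM prediction")]
    for sat in sorted(satellites, key=lambda s: s.t_enter):
        if t_start <= sat.t_enter <= t_final:
            events.append((sat.t_enter,
                           f"insert {sat.name} (log M* = {sat.logmstar}) "
                           "at the virial radius"))
    events.append((t_final, "stop and analyse the remnant"))
    return events

sats = [Satellite("Sat 1", 7.72, 7.5), Satellite("Sat 2", 7.88, 9.3),
        Satellite("Sat 3", 11.76, 9.2)]
for t, action in plan_simulation(6.25, 13.7, sats):
    print(f"t = {t:5.2f} Gyr: {action}")
```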
Finally, it is our goal to develop generalised semi-analytic prescriptions based on the results of our simulations. In this way, the existing semi-analytic recipes can also be improved. Methods {#sec:methods} ======= In this section we briefly describe the tools that are used in this paper. These are the simulation code [GADGET-2]{} that was employed both for the cosmological $N$-body simulation from which our merger trees were extracted and the hydrodynamic merger simulations, the code to create initial galaxy models, and the semi-analytic model used to populate N-body merger trees with galaxies. Throughout this paper we adopt cosmological parameters chosen to be consistent with results from [WMAP]{}-3 [@spergel2007] for a flat $\Lambda$CDM cosmological model: $\Omega_m=0.26$, $\Omega_{\Lambda}=0.74$, $h=H_0/(100$ km s$^{-1}$ Mpc$^{-1})=0.72$, $\sigma_8=0.77$ and $n=0.95$. These parameters differ only slightly from the currently favored WMAP7 parameters [@komatsu2011]. We adopt a @kroupa2001 IMF and compute all stellar masses accordingly. N-body and Smoothed Particle Hydrodynamics code {#sec:ncode} ----------------------------------------------- For all simulations in this work we employ the parallel TreeSPH-code [GADGET-2]{} [@springel2005a]. The code uses Smoothed Particle Hydrodynamics to evolve the gas using an entropy conserving scheme [@springel2002]. Radiative cooling is implemented for a primordial mixture of hydrogen and helium following @katz1996, and a spatially uniform time-independent local UV background in the optically thin limit [@haardt1996] is included. All simulations have been performed with a high force accuracy of $\alpha_{\rm force}=0.005$ and a time integration accuracy of $\eta_{\rm acc}=0.02$ [for further details see @springel2005a]. Star formation and the associated heating by supernovae (SN) is modelled following the sub-resolution multiphase ISM model described in @springel2003. The ISM in the model is treated as a two-phase medium with cold clouds embedded in a hot component at pressure equilibrium. Cold clouds form stars in dense ($\rho>\rho_{th}$) regions on a timescale chosen to match observations [@kennicutt1998]. The threshold density $\rho_{th}$ is determined self-consistently by demanding that the equation of state (EOS) is continuous at the onset of star formation. SN-driven galactic winds are included as proposed by @springel2003. In this model the mass-loss rate carried by the wind is proportional to the star formation rate (SFR) $\dot M_w= \eta \dot M_*$, where the mass-loading-factor $\eta$ quantifies the wind efficiency. Furthermore, the wind is assumed to carry a fixed fraction of the supernova energy, such that there is a constant initial wind speed $v_w$ (energy-driven wind). We do not include feedback from accreting black holes (AGN feedback) in our simulations. We compute the parameters for the multiphase feedback model following the procedure outlined in @springel2003 in order to match the Kennicutt Law. For a Kroupa IMF the mass fraction of massive stars is $\beta=0.16$ resulting in a cloud evaporation parameter of $A_0=1250$ and a SN “temperature” of $T_{\rm SN}=1.25\times10^8{\rm~K}$. Finally, the star formation timescale is set to $t_*^0=3.5{\rm~Gyr}$. For the galactic winds we adopt a mass-loading factor of $\eta = 1$ and a wind speed of $v_w \sim 500 \kms$. Merger Trees {#sec:trees} ------------ For this study we use dark matter merger trees drawn from an $N$-body simulation run with the [GADGET-2]{} code. 
The initial conditions for the [WMAP]{}-3 cosmology were generated using the [GRAFIC]{} software package [@bertschinger2001]. The simulation was done in a periodic box with a side length of $100$ Mpc, and contains $512^3$ particles with a particle mass of $2.8\times 10^8\Msun$ and a comoving force softening of $3.5$ kpc. Starting at a redshift of $z=43$, 94 snapshots were stored until $z=0$, equally spaced in expansion factor ($\Delta a=0.01$). Dark matter haloes were identified in the simulation snapshots using a Friends of Friends (FOF) halo finder with a linking parameter of $b=0.2$. Substructures inside the FOF groups are then identified using the [SUBFIND]{} code. For the most massive subgroup in a FOF group the virial radius and mass are determined with a spherical overdensity criterion using the fitting function by @bryan1998. The minimum particle number for haloes is set to 20, resulting in a minimum halo mass of $5.6\times10^9\Msun$. The number of parent haloes found at $z=0$ with this method is $\sim135~000$ and parent halo masses range between $10^{10}\Msun$ and $10^{15}\Msun$. The merger trees are constructed for all parent haloes at $z=0$ by connecting haloes between the 94 catalogues. The branches of the trees were determined by linking every halo to its most massive progenitor at previous snapshots. In total, we have 41 000 merger trees.

Galaxy models {#sec:models}
-------------

To construct the galaxy models used in our simulations we apply the method described in @springel2005b with the extension by @moster2011a. Each system is composed of a cold gaseous disc, a stellar disc and a stellar bulge with masses $M_{\rm cg}$, $M_{\rm disc,*}$ and $M_{\rm bulge}$, and embedded in a halo that consists of hot gas and dark matter with masses $M_{\rm hg}$ and $M_{\rm dm}$. The gaseous and stellar discs are rotationally supported and have exponential surface density profiles. The scale length of the gaseous disc is related to that of the stellar disc by $r_{\rm gas} = \chi r_{\rm disc}$. The vertical structure of the stellar disc is described by a radially independent sech$^2$ profile with a scale height $z_0$, and the vertical velocity dispersion is set equal to the radial velocity dispersion. The gas temperature is fixed by the EOS, rather than the velocity dispersion. The vertical structure of the gaseous disc is computed self-consistently as a function of the surface density by requiring a balance of the galactic potential and the pressure given by the EOS. The spherical stellar bulge is non-rotating and is constructed using the @hernquist1990 profile with a specified scale length. The dark matter halo has a @hernquist1990 profile with a scale length $r_s$, a concentration parameter $c=r_{\rm vir}/r_s$ and a dimensionless spin parameter $\lambda$. The hot gas is modelled as a slowly rotating halo with a spherical density profile following @moster2011a. We employ the observationally motivated $\beta$-profile [@cavaliere1976; @jones1984; @eke1998]: $$\rho_{\rm hg}(r) = \rho_0 \left[1+\left(\frac{r}{r_c}\right)^2\right]^{-\frac{3}{2}\beta}\;,$$ which has three free parameters: the central density $\rho_0$, the core radius $r_c$ and the outer slope parameter $\beta$. We adopt $\beta = 2/3$ [@jones1984], $r_c=0.22\,r_s$ [@makino1998] and fix $\rho_0$ such that the hot gas mass within the virial radius is $M_{\rm hg}$. The temperature profile is fixed by assuming an isotropic model and hydrostatic equilibrium inside the galactic potential, such that the gas is supported by pressure.
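In practice the normalisation $\rho_0$ can be fixed numerically by requiring that the profile integrates to $M_{\rm hg}$ within the virial radius. A minimal sketch of this step is given below; the function names and the numerical values for $M_{\rm hg}$, $r_{\rm vir}$ and $r_s$ are illustrative assumptions, not the actual implementation.

```python
# Minimal sketch: fix rho_0 so that the beta-profile contains M_hg inside
# the virial radius.  The example values for M_hg, r_vir and r_s are
# illustrative assumptions.

import numpy as np
from scipy.integrate import quad

def beta_profile(r, rho0, r_c, beta=2.0 / 3.0):
    """Hot-gas density of the beta-profile at radius r."""
    return rho0 * (1.0 + (r / r_c) ** 2) ** (-1.5 * beta)

def normalise_rho0(M_hg, r_vir, r_c, beta=2.0 / 3.0):
    """Central density rho_0 such that the hot gas mass within r_vir is M_hg."""
    shape_mass, _ = quad(lambda r: 4.0 * np.pi * r ** 2 *
                         beta_profile(r, 1.0, r_c, beta), 0.0, r_vir)
    return M_hg / shape_mass

# e.g. M_hg = 5e10 Msun, r_vir = 160 kpc, r_c = 0.22 * r_s with r_s = 30 kpc
rho0 = normalise_rho0(5e10, 160.0, 0.22 * 30.0)
```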
In addition, the hot gaseous halo is rotating around the spin axis of the disc with a specific angular momentum $j_{\rm hg}$, which is a multiple of the specific angular momentum of the dark matter halo $j_{\rm dm}$ such that $ j_{\rm hg} = \alpha j_{\rm dm}$. The angular momentum distribution is assumed to scale with the product of the cylindrical distance from the spin axis $R$ and the circular velocity at this distance: $j(R) \propto R \; v_{\rm circ}(R)$. High resolution cosmological simulations that succeed in reproducing disc sizes find that at low redshift ($z\lta2$), $\alpha$ is greater than unity. This is because feedback processes preferentially remove low angular momentum material from the halo [@governato2010]. Similarly, @moster2011a constrain $\alpha$ by using isolated simulations of a MW-like galaxy and demanding that the evolution of the average stellar mass and scalelength found observationally be reproduced. The model that agrees best with the observational constraints has a spin factor of $\alpha=4$. We use this value throughout this work. Semi-analytic model ------------------- In this paper we use the SAM constructed by @somerville1999a in its current implementation [@somerville2008a; @somerville2012]. In this model each dark matter halo is assigned two properties: the spin parameter $\lambda$ and the concentration parameter $c$. Each top-level halo is assigned a value of $\lambda$ by selecting values randomly from a lognormal distribution with mean $\bar \lambda=0.05$ and width $\sigma_{\lambda}=0.5$ [@antonuccio2010]. The initial density profile of each halo is described by the NFW form. For $N$-body merger trees the concentration parameter and the halo position are extracted from the simulation. After accretion the orbital evolution of each subhalo is computed using the dynamical friction formula given by @boylankolchin2008, including the effects of tidal stripping and destruction. The SAM adopts a simple but fairly standard spherical cooling model, which makes use of the multi-metallicity cooling function of @sutherland1993. The hot gas density profile is assumed to be that of a singular isothermal sphere. In order to model the effects of a photo-ionising background, the SAM follows @gnedin2000 and @kravtsov2004 and defines a filtering mass $M_{\rm F}$ which is a function of redshift and depends on the re-ionisation history of the Universe. The cold gas profile is assumed to be an exponential disc with a scalelength proportional to the scalelength of the stellar disc. Only gas lying above a critical surface density threshold is available for star formation. In the quiescent phase star formation is based on the empirical Schmidt-Kennicutt law with the appropriate normalisation for a Kroupa IMF. Massive stars and supernovae impart thermal and kinetic energy to the cold interstellar medium, and cold gas may be “reheated” from the disc and either deposited in the hot halo or ejected from the halo altogether. When a subhalo loses its orbital energy due to dynamical friction the satellite merges with the central galaxy, driving a starburst. The efficiency of star formation in a merger-triggered burst is parametrized as a function of the mass ratio of the merging pairs and the gas fraction of the progenitors and calculated with the model proposed by @hopkins2009a, which is based on binary merger simulations. 
It is assumed that during every merger with a mass ratio above $\mu=0.1$, a fraction of the disc stars is transferred to the spheroidal or bulge component, again following the prescription outlined in @hopkins2009a. The SAM contains recipes for black hole growth and AGN feedback due to both “bright mode” AGN-driven winds and “radio mode” heating by radio jets [@somerville2008a]. However, at the halo mass scale considered here, AGN feedback has little impact on the predictions.

Multiple mergers in a $\Lambda$CDM universe {#sec:mwmergers}
===========================================

In order to help motivate this work we first address whether most mergers can be treated as “isolated” or whether multiple mergers are expected to be cosmologically significant. In an isolated merger (also called binary merger), two galaxies merge and become dynamically relaxed before the remnant merges with another galaxy. In multiple mergers, two galaxies have not had enough time to merge and become relaxed before another galaxy enters their common halo. If isolated mergers are the norm, then it is possible to study single merger events with specified parameters (e.g. in simulations) and then stack these merger events for a given merger history, in order to determine the effects of the mergers on the galaxy. This can be seen as a linear process. However, if multiple mergers are more common, then it may not be possible to string the results from binary merger simulations together. One then has to study the events with three or more galaxies involved, which will have a different impact on the galaxy properties than a sequence of binary mergers. The merger process then becomes non-linear and the (already large) parameter space for the merger parameters multiplies. In order to study whether binary or multiple mergers are more common, we make use of the cosmological merger trees described in Section \[sec:trees\]. We divide the merger trees according to the mass of their parent halo at $z=0$ with a bin size of $\Delta \log(M/\Msun)=0.5$. We then identify all merger trees of these systems with two or more halo mergers and for every pair of mergers we record the time difference between the halo mergers. If e.g. the halo of the satellite galaxy A merges with the halo of the central galaxy C at a cosmic time $t_1=8.0\Gyr$ and later the halo of the satellite galaxy B merges with the halo of the central galaxy at $t_2=10.0\Gyr$, we store the time difference $\Delta t=2.0\Gyr$. We divide this time by the dynamical time of the halo $t_{\rm dyn}=r_{\rm vir}/V_{\rm vir}$ at the time of the first halo merger $t_1$ and distribute the resulting number in bins of width $\Delta t/t_{\rm dyn}=1$. Finally, for each time bin we count the number of merger pairs with a given mass-ratio sequence that have occurred since a given redshift. The sequence 1:4 $\rightarrow$ 1:10 for example means that the first merger has a mass ratio above 1:4 and the second merger has a mass ratio above 1:10. In order to get the probability for a merger sequence we normalise this number by dividing it by the number of mergers with the mass ratio of the first merger. We show the results of this analysis in Figure \[fig:ptdyn\]. The probability for two mergers happening within a few dynamical times of the halo is always higher than the probability for a larger time difference.
This indicates that usually the second satellite galaxy enters the parent halo before the first satellite has had time to merge with the central galaxy, which happens only after several dynamical times, depending on the mass ratio of the merger. For the mergers that happen since $z=1$ (upper row in the figure) this trend is most obvious. Here, for more than 50 per cent of all merger pairs the second satellite enters the halo within three dynamical times after the first one, independent of mass ratio. If we start at a higher redshift, the merger pairs are on average a little more separated. Still, for more than 50 per cent of all merger pairs since $z=5$, the second satellite enters the halo within five dynamical times after the first one. In summary, we find that multiple mergers are more common than sequences of isolated binary mergers. Therefore it is not optimal to focus on binary mergers and afterwards try to string their effects together. Thus merger simulations which consider several satellites entering the halo and orbiting together within the halo have to be performed and analysed. If parameters for the galaxies and the orbits are chosen from a multidimensional grid, the parameter space is then too large to cover. This means that one has to reduce this huge parameter space by selecting only those parameters that are common for the chosen cosmology. $N$-body merger trees populated with galaxies using a SAM offer this option and are therefore very practical to use as initial conditions for the merger simulations. In this way, mergers that are common in the Universe are automatically selected. Thus, the important regions of the parameter space are naturally covered.

Simulations of galaxy merger trees {#sec:smt}
==================================

Galaxy merger trees obtained from $N$-body merger trees in combination with a SAM provide natural initial conditions for merger simulations with high resolution. However, if we compare the galaxy models predicted by the SAM to the model galaxies used in merger simulations, we immediately recognise an important difference: the standard approach in binary merger simulations neglects the accretion of material coming from outside the initial virial radius. Standard codes used to create model galaxies only employ a dark matter halo of fixed mass. This means that all material that falls into the halo which is not included in the merger tree (i.e. does not come through satellites) is not accounted for. On the other hand, cosmological $N$-body simulations show a substantial increase of the mass of galactic dark matter haloes through “smooth accretion”, which must then be taken into account in a cosmologically based study of galaxy evolution. We now describe our method for including this smooth accretion in our simulations.

Modelling the growth of the dark halo {#sec:smooth}
-------------------------------------

In order to construct a merger tree one must define a lower mass limit (mass resolution) below which the tree is truncated. This means that all haloes that are smaller than this mass limit and fall into a larger halo are not considered as merging haloes, but as so-called smooth accretion. Thus, even if there are no mergers in a given time-step, the parent halo gains mass. In (non-cosmological) simulations of galaxies or mergers this smooth accretion is usually neglected. We model this accretion by placing the additional dark matter particles into small spherical systems which will be denoted as DM-spheres in the following.
We place all DM-spheres at a specific distance from the halo centre such that the accretion rate of the DM-spheres matches the smooth accretion rate of the merger tree. The DM-spheres are uniformly distributed around the halo. In order to prevent the DM-spheres from falling to the very centre of the halo and thereby disturbing the disc, we give the DM-spheres an initial velocity in a random direction orthogonal to the radius vector. This, of course, reduces the initial distance to the centre compared to freely falling particles. As we require a density profile that leads to a quick dissolving of the DM-sphere once it enters the halo (since the DM-spheres are not meant to represent substructure but smooth accretion), we employ a three-dimensional Gaussian density profile. Note that we do not add gas particles to the DM-spheres.

### Creating a spherical particle distribution

We model each DM-sphere with a three-dimensional Gaussian density profile $$\rho(r) = \frac{M_{\rm DMS}}{\left(\sqrt{2\pi}r_{\rm DMS}\right)^3}\exp\left(-\frac{r^2}{2r_{\rm DMS}^2}\right)\,,$$ where $M_{\rm DMS}$ is the total mass of the DM-sphere and $r_{\rm DMS}$ is a scalelength. The mass $M(r)$ enclosed within a radius $r$ is then found by integrating the density profile, and the gravitational potential is found by integrating Poisson’s equation. Using $N$ particles of mass $m_i$ for each DM-sphere, a Gaussian density profile can be achieved simply by drawing a random number from a Gaussian distribution with width $r_{\rm DMS}/\sqrt3$ for each Cartesian coordinate. Deriving the velocity for each particle is more complicated and has to be done using the distribution function as a function of the relative energy $f(\mathcal{E})$, which can be obtained with the so-called Eddington inversion [see e.g. @binney1987]. In order to solve for $f(\mathcal{E})$, we create a logarithmically spaced array in radius with $10^5$ bins. We fix the minimum and maximum of the bins at $10^{-3}$ and $10^2$ times the scalelength. On this finely spaced grid, we define the values for $\rho(r)$, $M(r)$ and $\Psi(r)$ and obtain the derivatives by finite differencing. The distribution function is then obtained by numerically integrating the Eddington equation. We find that it is well approximated by the fitting function $$\begin{aligned} f(\mathcal{E}) &=& \frac{2.2\times10^{-3}\Msun}{(\kpc\kms)^3} \; \sqrt{\frac{\Msun\kpc^3}{M_{\rm DMS}r_{\rm DMS}^3}}\nonumber\\ && \times Q~\left[\left(\frac{Q}{0.3}\right)^{-3.871}+\left(\frac{Q}{0.3}\right)^{0.166}\right]^{-5.4}\,,\end{aligned}$$ where $Q=\mathcal{E}/\Psi_0$ and $\Psi_0=-\Phi_0=\sqrt{2/\pi}GM_{\rm DMS}/r_{\rm DMS}$ is the maximum relative potential. With the distribution function, we can now find the velocity for each particle with a rejection sampling technique. The direction of the velocity is randomly chosen from the unit sphere. We test this model by creating $N$-body realisations of a DM-sphere of mass $M_{\rm DMS}=10^9\Msun$. This mass is below the resolution limit of the cosmological simulation used in section \[sec:mwmergers\] and is thus accounted for as smooth accretion. We model the DM-sphere with $N=100, 200$ and $1000$ particles and a scalelength of $r_{\rm DMS}=2, 5$ and $10\kpc$. We let each realisation evolve in isolation for $5\Gyr$ and measure the scalelength at every time-step. The results of this analysis are shown in Figure \[fig:dmsevo\]. All models are stable over the $5\Gyr$ of evolution, independent of the initial scalelength.
For $N=100$ the scalelength increases by about 50 per cent until the end of the simulation, while for larger particle numbers the scalelength deviates only slightly from the initial value. For all particle numbers, models with a larger initial scalelength are more stable.

### Placing the dark matter systems around the halo

With the model presented in the last section we can create DM-spheres that are stable in isolation, but due to their shallow potential they dissolve quickly when orbiting a dark matter halo. The next step in modelling the smooth accretion of a dark matter halo is to place the dark matter at the right position, such that the accretion history of the halo is reproduced. Thus, the main difficulty here is where to position the DM-spheres and with which mass. As we want to simulate merger trees, we first extract the mass accretion history of the main branch from the $N$-body simulation, i.e. the mass of the main halo and its most massive progenitors as a function of cosmic time. The next step is to choose a starting redshift $z_{\rm start}$ which corresponds to a starting time $t_{\rm start}$, at which we start the simulation. If we assume that the profile of the parent halo does not change, which is true for our initial conditions, the virial mass of the parent halo increases as the background density decreases towards lower redshift (for a lower $\rho_{\rm crit}$ the radius that contains a certain overdensity becomes larger). At every time-step we thus subtract the mass gained purely through this growth of the virial radius, so that we are left with the genuinely accreted mass. Furthermore, we subtract the mass that has been accreted through mergers of resolved smaller haloes, as those will be explicitly simulated as merging haloes. We are thus left with the mass $M_{\rm smooth}(t)$ that has been added to the parent halo due to smooth accretion. In the next step we choose the number of DM-spheres $N_{\rm DMS}$, which we set equal to the number of time-steps we will use for the model. We then interpolate $M_{\rm smooth}(t)$ for every time-step $t_i$ using a Bezier curve which traces the mass accretion history and is equal to the mass found in the merger tree at $t_{\rm start}$ and $t_{N_{\rm DMS}}$. We then compute the mass that is accreted within every time-step, $M_{\rm DMS}(t_i)$, and use this mass for the DM-sphere that corresponds to this time-step. If this mass is smaller than zero (i.e. the parent halo loses mass), we set the mass of the corresponding DM-sphere to zero (i.e. the DM-sphere is omitted). However, we store this lost mass and set the mass of the DM-spheres that correspond to the following time-steps to zero, until the lost mass is balanced by accreted mass. Thus the overall mass growth via smooth accretion over the whole accretion history is reproduced. Finally, we set the particle mass for each DM-sphere particle equal to the mass of the dark matter halo particles $m_{\rm dm}$ and compute the number of particles of each DM-sphere: $N_i = M_{\rm DMS}(t_i)/m_{\rm dm}$. Having determined the masses of the DM-spheres, we now need to specify where to place them. For this we make use of the free-fall time for a point mass at a radius $r$: $t_{\rm ff} = \pi [r^3/8GM(r)]^{1/2}$, where $M(r)$ is the total mass enclosed within $r$.
The time it takes a DM-sphere to enter the virial radius of the halo, $t_{\rm enter}= t_i-t_{\rm start}$, is shorter than the free-fall time and only slightly larger than the free-fall time at radius $r$ minus the free-fall time at the virial radius $r_{\rm vir}$, as the DM-sphere already has a velocity towards the centre when crossing the virial radius. We can thus assume that $t_{\rm enter}$ has a value of $t_{\rm enter}\gta t_{\rm ff}(r_i)-t_{\rm ff}(r_{\rm vir})$, where $r_i$ is the initial distance of the DM-sphere from the halo centre and $r_{\rm vir}$ is the virial radius of the halo at $t_i$. The initial distance of each DM-sphere is then given by $$r_i \approx \left(\frac{\sqrt{8GM(r)}}{\pi}~t_{\rm enter} + r_{\rm vir}^{3/2}\right)^{2/3}\,,$$ and we place the centre of each DM-sphere randomly on a sphere with radius $r_i$. In order to prevent the DM-spheres from passing through the centre of the halo, we assign an initial velocity to every DM-sphere in a random direction orthogonal to the radius vector. This increases the time taken for a DM-sphere to enter the virial radius, which partly compensates for the underestimated value of $t_{\rm enter}$. We are thus left with three free parameters: the number of DM-spheres $N_{\rm DMS}$, their scalelength $r_{\rm DMS}$, and their initial velocity $v_{\rm init}$. In order to fix these values we select a merger tree from the simulation box, presented in section \[sec:mwmergers\], that has no mergers above a mass ratio $\mu=0.02$. We can thus assume that all mass accreted by the parent halo is from smooth accretion. The index number of this merger tree is *1811* and the virial mass at $z=0$ is $10^{12}\Msun$. We start the simulation at $z_{\rm start}=1$, where the parent halo has a virial mass of $6.5\times10^{11}\Msun$ and a virial velocity of $V_{\rm vir}=150\kms$. We construct an $N$-body realisation of this dark matter halo with the initial conditions generator as described in section \[sec:models\], but omitting all baryonic components. We employ $N_{\rm halo}=200~000$ particles, and add dark matter particles in DM-spheres to this model with the method outlined above. For this we choose the fiducial parameters as $N_{\rm DMS}=400$, $r_{\rm DMS}=5\kpc$ and $v_{\rm init}=30\kms$, and vary each parameter around these values, while keeping the other two parameters fixed. For about 400 DM-spheres the number of particles per DM-sphere is $\approx200$, which we have shown to be sufficient for stable DM-spheres in isolation. We evolve each system until $z=0$ with a softening length of $\epsilon=0.7\kpc$.

![Projected dark matter surface density for the smooth accretion model using the fiducial parameters. Blue colour represents regions of low density and red colour depicts high densities. Each panel measures 1 Mpc on a side and the redshift is displayed in the upper left corner of each panel. The virial radius of the halo is given by the envelope of the green region.[]{data-label="fig:dms1mpc"}](dms1mpc.jpg){width="45.00000%"}

For each simulation we measure the virial mass as a function of cosmic time using a spherical overdensity criterion and the fitting function by @bryan1998. The results are presented in Figure \[fig:dms\], where the virial mass measured in the simulations is compared to that found in the merger tree. For our fiducial parameters, we are able to reproduce the accretion history of the halo very well. In the left panel the number of DM-spheres is varied.
We see that neither a lower nor a higher value than the fiducial one leads to a noticeable difference in the accreted mass. In the middle panel we vary the scalelength of the Gaussian profile. For this parameter, too, the result is insensitive to the exact value. In the right panel the initial velocity is varied. Values of $v_{\rm init}=0$ and $30\kms$ lead to a very similar mass accretion history. For $v_{\rm init}=60\kms$ the DM-spheres enter the virial radius slightly later, as the orbital energy is higher and the DM-spheres need to lose some angular momentum first. For higher values ($v_{\rm init}\gta90\kms$) the virial mass of the halo is too low compared to the merger tree. The reason is that for some DM-spheres the initial velocity is so high that they orbit within the virial radius of the parent halo only for a short time, or never enter it at all; instead, their pericentric distance is comparable to the virial radius. Thus they do not fall towards the centre, but orbit the halo at a large distance, where the density is too low for them to lose their angular momentum through dynamical friction. This means that the initial velocities of the DM-spheres should not be larger than half of the virial velocity of the halo. Altogether we find that our model is able to reproduce the smooth accretion history of a dark matter halo very well and does not depend on the exact values of the model parameters.

![Same as Figure \[fig:dms1mpc\], but for the core of the halo with a panel side length of 50 kpc. The envelope of the green region corresponds to approximately half the Hernquist scaleradius.[]{data-label="fig:dms50kpc"}](dms50kpc.jpg){width="45.00000%"}

We finally need to check whether the Gaussian DM-spheres remain bound when they enter the parent halo, or whether they are quickly dissolved, such that they can be regarded as smooth accretion as intended. For this we compute surface density maps of the system at six redshifts. In Figure \[fig:dms1mpc\] the maps are plotted on the scale of 1 Mpc for the model with the fiducial parameters. As the virial radius is of the order of 200 kpc (depending on redshift), the entire halo is shown. In the first panel at $z=1$, all 400 DM-spheres are clearly visible, as their density is larger than the surrounding background density. As time elapses, the DM-spheres fall towards the centre of the halo. When the DM-spheres enter the halo, they are able to stay bound for a short amount of time, corresponding to less than half an orbital period, and are then dissolved. In the last panel at $z=0$, most of the DM-spheres have been destroyed. Only those DM-spheres that have just entered the halo are still bound, as they have not been within the halo long enough to be tidally destroyed. As most of the DM-spheres can remain bound for almost half an orbital period, they are able to reach the pericentre of their orbit before they are dissolved. If the DM-spheres were freely falling, this would be problematic for a central disc of stars, since the constant bombardment by small objects could disturb the disc and possibly thicken it. Therefore we model the DM-spheres with an initial velocity, such that the pericentre is large enough and the DM-spheres do not perturb the disc. In Figure \[fig:dms50kpc\] the surface density maps are plotted on a scale of 50 kpc such that the region within half of the Hernquist scalelength of the halo is shown. This is the typical scale up to which exponential discs with scalelengths of a few kpc can be resolved.
As we can see, the density profile remains the same throughout the simulation, and no DM-spheres can be identified in this region as here the density of a DM-sphere is lower than the density of the halo. This shows that our method, which models the smooth accretion with small bound objects instead of a uniform distribution of dark matter particles, does not lead to a perturbation of central objects. Simulations of Semi-Analytic Merger Trees {#trees} ----------------------------------------- In this section we describe how we use the SAMs to populate $N$-body merger trees with galaxies and then use them as the initial conditions for hydrodynamical multiple merger simulations. A schematic view of our method is presented in Figure \[fig:samtosim\]. In a first step we select a dark matter merger tree from the large-scale $N$-body simulation and use the SAM to predict the properties of the baryonic components of each halo at every timestep. A resulting galaxy merger tree is shown in the left side of Figure \[fig:samtosim\] with time running from top to bottom. We then choose a starting time $t_i$ from which we want to simulate this tree. In our simple example, the main system experiences four mergers after $t_i$. We then use the predictions of the SAM for the central galaxy of the main halo at $t_i$ and create a particle realisation with the galaxy generator, as indicated by the brown arrow. This model galaxy is shown in the top right of Figure \[fig:samtosim\] and in our example consists of a dark matter halo (grey), a hot gaseous halo (red), a cold gaseous disc (blue), a stellar disc (yellow) and a small stellar bulge (green). We evolve this galaxy with our hydrodynamical code until the time $t_1$ when the first satellite galaxy $S_1$ enters the main halo. A particle realisation of the $S_1$ system (dark halo and galaxy) is then created using the semi-analytic prediction for the galaxy properties and included in the simulation at the virial radius of the main halo. The orbital parameters of $S_1$ (position and velocity at the time of accretion) are taken directly from the $N$-body simulation. We evolve this merger with the hydrodynamical code, until the next satellite galaxy enters the main halo. In our example, two galaxies enter the halo at the same time, and thus we create particle realisations using the predictions of the SAM for both galaxies $S_2$ and $S_3$ and include them in the simulation by positioning them at the virial radius. This procedure is repeated for all merging satellites until the final time $t_f$. In this way, we naturally include multiple mergers, when an already merging galaxy has not been fully accreted as the next galaxy is entering the halo. This is indicated at $t_3$, where the galaxy $S_3$ is still orbiting while the satellite $S_4$ enters the halo. In the example shown in Figure \[fig:samtosim\], the central galaxy grows both through accretion of dark matter and gas (e.g. from $t_i$ to $t_1$ or from $t_3$ to $t_f$) and through mergers of satellite galaxies (e.g. from $t_1$ to $t_2$). Our approach thus requires two main steps: creating particle realisations of galaxies as predicted by the SAM and combining these initial conditions in simulations as determined by the merger tree (i.e. every galaxy has to enter the simulation at the specified time and position). 
This means we first have to specify how the information about the galaxy properties that is computed with the SAM is transformed into three-dimensional, particle-based galaxy models that can be simulated with a hydrodynamic code. Then we have to determine how the satellite galaxies are included in the simulation, i.e. their positions and velocities have to be calculated. We note that the starting time $t_i$, or the starting redshift $z_i$, respectively, can also be chosen such that the simulation starts at a very early epoch. In this case the central galaxy would consist only of a dark matter halo and hot gas in this halo. The dark matter will then grow by mergers and smooth accretion, and the stellar disc and bulge will form as a result of cooling and accretion of gas, and merger events.

### Creating particle realisations of semi-analytic galaxies

In order to create particle realisations of the semi-analytic galaxies, we need to specify those galaxies that are included in our simulation. For this we first have to choose a starting redshift $z_i$ and a minimum merger ratio $\mu_{\rm min}$. We define this ratio as the mass of the dark matter in the entering subhalo divided by the dark matter mass of the main halo at the time the satellite passes the virial radius. The starting redshift and the minimum merger ratio are in principle free parameters and can be chosen according to the physical problem one wants to address. This makes our approach very flexible. In a semi-analytic merger tree we identify the central galaxy at the starting redshift, all satellite galaxies within the virial radius at this time (which have not merged with the central galaxy yet) and all satellite galaxies that enter the main halo at a later time. From these galaxies we only select those that fulfill our merger mass ratio criterion. For every selected galaxy we record the time the galaxy enters the main halo $t_{\rm enter}$, the virial mass of the dark matter halo $M_{\rm vir}$, its concentration $c$, its spin parameter $\lambda$, and the semi-analytic predictions for the masses of the hot gaseous halo $M_{\rm hg}$, the cold gaseous disc $M_{\rm cg}$, the stellar disc $M_{\rm disc,*}$ and the stellar bulge $M_{\rm bulge}$, and the scalelength of the stellar disc $r_{\rm disc}$. All these quantities are taken at $t_i$ for the central galaxy and for satellites that are already within the main halo at $t_i$, and at $t_{\rm enter}$ for all satellite galaxies that enter the main halo after $t_i$. Additionally, for all satellite galaxies we record the dynamical friction time. The remaining structural parameters of every galaxy that are not directly predicted by the SAM are based on empirical scalings. The scaleheight of the stellar disc is assumed to be a fraction of its scalelength, $z_0=\zeta r_{\rm disc}$, where typically $\zeta=0.15$. Similarly the scalelength of the gaseous disc is related to that of the stellar disc by $r_{\rm gas}=\chi r_{\rm disc}$, with a standard value of $\chi=1.5$. The core radius $r_c$ of the hot gaseous halo is related to the scalelength of the dark matter halo $r_s$ by $r_c=\xi r_s$ with a fiducial value of $\xi=0.22$, and the slope parameter of the $\beta$-profile is set to $\beta_{\rm hg}=2/3$.
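These empirical scalings can be collected in a small helper, as in the sketch below; the dataclass and its field names are our own bookkeeping for this illustration, not part of the SAM or of the galaxy generator.

```python
# Small helper collecting the empirical structural scalings quoted above.
# The class and its names are illustrative, not the actual interface.

from dataclasses import dataclass

@dataclass
class GalaxyStructure:
    r_disc: float   # stellar disc scalelength [kpc], predicted by the SAM
    r_s: float      # dark matter halo (Hernquist) scalelength [kpc]

    def derived(self, zeta=0.15, chi=1.5, xi=0.22):
        """Structural parameters that are not predicted directly by the SAM."""
        return {
            "z0":    zeta * self.r_disc,   # stellar disc scaleheight
            "r_gas": chi * self.r_disc,    # gaseous disc scalelength
            "r_c":   xi * self.r_s,        # hot gaseous halo core radius
        }

# e.g. a disc with r_disc = 2.5 kpc in a halo with r_s = 30 kpc
print(GalaxyStructure(r_disc=2.5, r_s=30.0).derived())
```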
In order to select the number of particles in every component of the galaxies, we determine the semi-analytic prediction for the final stellar mass of the central galaxy $M_{*,f}$ and choose the number of stellar particles $N_*$ that we would like to obtain in the central galaxy at the end of our simulation, giving the stellar particle mass $m_*=M_{*,f}/N_*$. As the simulation code produces $N_g$ generations of stellar particles from every gas particle due to star formation (where $N_g$ is typically 2), we set the mass of every gas particle to $m_{\rm gas}=N_gm_*$. In this way, all stellar particles (old and new) have the same mass. The mass of the dark matter particles is finally selected as $m_{\rm dm}=\kappa m_*$, where $\kappa$ is a free parameter. If $\kappa$ is chosen too low, the dark matter particles have a very low mass, which results in a very large number of dark matter particles and thus in a high computational cost. For high values of $\kappa$ the mass of the dark matter particles can become too large, such that these massive particles perturb the disc component, which results in numerical disc heating. Typical values of $\kappa\sim15$ are both computationally efficient and lead to no measurable heating. In order to prevent a component from being unstable due to a low number of particles, we remove a component if its number of particles is lower than a minimum particle threshold $N_{\rm min}$. Additionally, for simulations where we are interested in the satellite population and not in the central galaxy, we can allow for a higher resolution in the satellite galaxies. This is done by dividing the mass of the particles of every satellite by a number $N_{\rm res,sat}$. This number, however, should not be chosen too high, since the particle masses of the satellites would then be much lower than the particle masses of the main system. This can lead to mass segregation, which means that the particles of higher mass (i.e. the central galaxy particles) will preferentially settle at the bottom of the potential. We wish to avoid this numerical effect, which requires that $N_{\rm res,sat}\lta10$. The last parameter that needs to be fixed is the gravitational softening length $\epsilon$. In order to ensure that the maximum gravitational force exerted by a particle is independent of its mass, we scale the softening lengths of all particle species (dark matter, gas and stars) with the square root of the particle mass [@dehnen2001]. The normalisation of this relation is obtained with a free parameter $\epsilon_1$ that specifies the softening length for a particle of one internal mass unit of the code (i.e. $10^{10}\Msun$). The softening length is thus given as $\epsilon = \epsilon_1 \sqrt{m_{\rm part}/10^{10}\Msun}$, where $m_{\rm part}$ is the mass of the particle. For our fiducial choice of $\epsilon_1=32\kpc$ and a typical particle mass of $10^{5}\Msun$ this results in a softening length of 100 pc. Using this recipe, we create particle realisations of all selected galaxies. These can now be used as initial conditions for the hydrodynamical simulation. The next task is thus to specify when a galaxy enters the main halo, at which position and with what velocity.
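A minimal sketch of this particle-mass and softening assignment, assuming the fiducial values quoted above (the function name and the example final stellar mass are our own choices for this illustration):

```python
# Sketch of the particle-mass and softening assignment described above.
# The example final stellar mass (3e10 Msun) is an illustrative assumption.

import numpy as np

def particle_setup(M_star_final, N_star=200_000, N_g=2, kappa=15.0,
                   eps1_kpc=32.0):
    """Particle masses [Msun] and softening lengths [kpc]."""
    m_star = M_star_final / N_star   # stellar particle mass
    m_gas = N_g * m_star             # gas particles spawn N_g star generations
    m_dm = kappa * m_star            # dark matter particle mass
    # softening scales with sqrt(m) so the maximum force is mass-independent
    softening = {name: eps1_kpc * np.sqrt(m / 1e10)
                 for name, m in (("star", m_star), ("gas", m_gas), ("dm", m_dm))}
    return {"star": m_star, "gas": m_gas, "dm": m_dm}, softening

masses, softenings = particle_setup(3e10)
# a stellar particle of ~1.5e5 Msun then gets a softening of ~0.12 kpc,
# consistent with the ~100 pc quoted above for a 1e5 Msun particle
```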
### Performing the multiple merger simulation

  Parameter          & Description                                                          & Fiducial value\
  $z_i$              & Redshift at the start of the simulation                              & 1.0\
  $\mu_{\rm min}$    & Minimum dark matter mass ratio                                       & 0.03\
  $\zeta$            & Ratio of scaleheight and scalelength of the stellar disc             & 0.15\
  $\chi$             & Ratio of scalelengths between gaseous and stellar disc               & 1.5\
  $\xi$              & Ratio of gaseous halo core radius and dark matter halo scaleradius   & 0.22\
  $\beta_{\rm hg}$   & Slope parameter of gaseous halo                                      & 0.67\
  $\alpha$           & Ratio of specific angular momentum between gaseous and dark halo     & 4.0\
  $N_*$              & Expected final number of stellar particles in the central galaxy     & $200\,000$\
  $\kappa$           & Ratio of dark matter and stellar particle mass                       & 15.0\
  $N_{\rm res,sat}$  & Ratio of satellite and central galaxy particle mass                  & 1.0\
  $N_{\rm min}$      & Minimum number of particles in one component                         & 100\
  $\epsilon_1$       & Softening length in kpc for particle of mass $m=10^{10}\Msun$        & 32.0\
  $t_0^*$            & Gas consumption time-scale in Gyr for star formation model           & 3.5$^\dagger$\
  $A_0$              & Cloud evaporation parameter for star formation model                 & 1250.0$^\dagger$\
  $\beta_{\rm SF}$   & Mass fraction of massive stars for star formation model              & 0.16$^\dagger$\
  $T_{\rm SN}$       & Effective supernova temperature in K for feedback model              & $1.25\times10^{8}\,^\dagger$\
  $\eta$             & Mass loading factor for wind model                                   & 1.0\
  $v_{\rm wind}$     & Initial wind velocity in $\kms$ for wind model                       & 500.0\

\[t:smtparameters\]

We start with the particle realisation of the central galaxy and move into its rest frame. For all satellite galaxies we have to compute the relative position and velocity with respect to the central galaxy at the time when they are included in the simulation. Since we use merger trees drawn from simulations, we can directly extract the relative initial positions and velocities from the tree. As $t_{\rm enter}$ is defined as the time when a satellite galaxy passes the virial radius of the main halo, the initial distance is always equal to the virial radius at $t_{\rm enter}$. However, we have to distinguish between those galaxies that are already within the main halo at $z_i$ and those galaxies that enter the halo at a later time. For those galaxies that have entered the halo before $z_i$, we expect that they have already lost some of their angular momentum due to dynamical friction and are therefore closer to the central galaxy than the virial radius. Therefore we scale their initial distance and velocity by $$r^\prime = r \sqrt{1-\frac{t_i-t_{\rm enter}}{t_{\rm df}}} \quad {\rm and} \quad v^\prime = v \sqrt{1-\frac{t_i-t_{\rm enter}}{t_{\rm df}}}\,,$$ where $t_i$ is the starting time of the simulation, $t_{\rm enter}$ is the time the satellite entered the main halo and $t_{\rm df}$ is the dynamical friction time (i.e. the time it takes the satellite to merge with the central galaxy). The directions of the position and velocity vectors are unaltered. Before the simulation is started we include in the initial conditions all satellite galaxies that have entered the main halo before $t_i$ and all satellites that are entering the halo at $t_i$. We evolve this system with the hydrodynamical code until the time when the first satellite enters the main halo, $t_{{\rm enter}, 1}$, as specified by the merger tree. At this time we interrupt the evolution of the simulation and include the particle realisation of this satellite galaxy in the simulation. The simulation is then resumed and this process is repeated until the final time of the simulation $t_f$.
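A minimal sketch of the scaling applied to satellites that are already inside the main halo at the start of the simulation is given below. The function name and the example numbers are illustrative; in practice the unscaled position and velocity vectors are taken directly from the $N$-body merger tree.

```python
# Sketch of the orbit scaling for satellites that entered the main halo
# before the starting time t_i: directions are kept, only the magnitudes
# of the position and velocity are reduced.  Example numbers are
# illustrative assumptions.

import numpy as np

def scale_initial_orbit(r_vec, v_vec, t_i, t_enter, t_df):
    """Shrink the orbit of a satellite that entered the halo at t_enter < t_i.

    t_df is the dynamical friction time, i.e. the time the satellite needs
    to merge with the central galaxy.
    """
    factor = np.sqrt(1.0 - (t_i - t_enter) / t_df)
    return factor * np.asarray(r_vec), factor * np.asarray(v_vec)

# e.g. a satellite that entered 1 Gyr before t_i and merges after 6 Gyr
r0, v0 = [160.0, 0.0, 0.0], [0.0, 120.0, 0.0]      # kpc, km/s at entry
r_i, v_i = scale_initial_orbit(r0, v0, t_i=6.25, t_enter=5.25, t_df=6.0)
```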
The main advantage of including the satellite galaxies only when they enter the main halo is that the computational cost is reduced. The satellites are only simulated when they affect (or are affected by) the main system. Up to this point we use the SAM to compute the evolution of the satellite galaxy progenitor’s properties. In Table \[t:smtparameters\], we summarise the free parameters of our model and present our fiducial values. We have tested these values mainly on merger trees of MW-like galaxies. For systems of higher or lower halo mass they may have to be adjusted accordingly. We also list the free parameters that are used in the hydrodynamical code and its cooling, star formation and wind models. Simulations of Milky Way-like galaxies {#sec:sims} ====================================== [@lrrrrrrrrr@]{} ID & $t_{\rm enter}$ & $t_{\rm merge}~^a$ &  $\mu^{-1}$ & $\log(M_h)$ &      $c$ & $\log(M_*)$ &  $B/T$ &  $f_{\rm gas}~^b$ &  $r_{\rm disc}$\ \ Main & 6.25 & - & - & 11.55 & 4.71 & 10.16 & 0.15 & 0.41 & 2.53\ Sat 1 & 7.72 & 14.47 & 14.42 & 10.43 & 7.01 & 7.47 & 0.00 & 0.90 & 1.13\ Sat 2 & 7.88 & 13.33 & 9.48 & 10.68 & 6.77 & 9.25 & 0.37 & 0.37 & 1.36\ Sat 3 & 11.76 & 15.47 & 8.04 & 10.96 & 9.01 & 9.22 & 0.00 & 0.43 & 1.87\ \ Main & 6.25 & - & - & 11.70 & 4.56 & 10.11 & 0.00 & 0.51 & 1.65\ Sat 1 & 6.90 & 13.48 & 9.87 & 10.76 & 6.02 & 8.92 & 0.00 & 0.37 & 1.04\ Sat 2 & 7.07 & 12.52 & 15.58 & 10.62 & 6.30 & 8.73 & 0.00 & 0.58 & 1.66\ Sat 3 & 10.42 & 22.29 & 22.80 & 10.57 & 8.71 & 8.66 & 0.18 & 0.53 & 1.43\ \ Main & 6.25 & - & - & 11.53 & 4.73 & 9.88 & 0.00 & 0.41 & 2.08\ Sat 1 & 6.25 & 10.03 & 7.80 & 10.64 & 5.73 & 8.81 & 0.00 & 0.56 & 1.06\ Sat 2 & 6.74 & 14.16 & 13.12 & 10.53 & 6.17 & 8.78 & 0.00 & 0.41 & 0.91\ Sat 3 & 9.49 & 24.84 & 16.86 & 10.51 & 8.14 & 8.17 & 0.00 & 0.71 & 1.46\ Sat 4 & 9.64 & 23.02 & 10.57 & 10.75 & 7.91 & 8.76 & 0.00 & 0.70 & 2.69\ Sat 5 & 9.96 & 15.56 & 8.77 & 10.93 & 7.83 & 9.26 & 0.05 & 0.56 & 2.53\ Sat 6 & 13.31 & 27.59 & 23.16 & 10.58 & 10.95 & 6.41 & 0.00 & 1.00 & 4.19\ \ Main & 6.25 & - & - & 11.60 & 4.66 & 10.02 & 0.00 & 0.41 & 3.09\ Sat 1 & 6.08 & 13.76 & 10.52 & 10.52 & 5.72 & 8.22 & 0.00 & 0.70 & 1.38\ Sat 2 & 6.41 & 11.04 & 6.08 & 10.86 & 5.61 & 8.80 & 0.00 & 0.74 & 1.96\ Sat 3 & 6.57 & 20.96 & 21.52 & 10.41 & 6.21 & 7.96 & 0.00 & 0.81 & 1.65\ Sat 4 & 12.05 & 39.86 & 21.00 & 10.57 & 9.96 & 8.57 & 0.06 & 0.70 & 2.13\ \ Main & 6.25 & - & - & 11.57 & 4.69 & 9.84 & 0.05 & 0.68 & 3.57\ Sat 1 & 5.27 & 8.31 & 6.08 & 10.61 & 5.07 & 8.62 & 0.16 & 0.61 & 0.95\ Sat 2 & 7.39 & 14.30 & 24.23 & 10.24 & 7.05 & 7.70 & 0.00 & 0.82 & 0.92\ Sat 3 & 7.56 & 12.81 & 5.42 & 10.97 & 6.18 & 9.33 & 0.09 & 0.52 & 2.50\ Sat 4 & 7.56 & 18.16 & 10.87 & 10.66 & 6.56 & 8.11 & 0.24 & 0.92 & 2.30\ Sat 5 & 8.37 & 22.92 & 26.00 & 10.46 & 7.41 & 8.11 & 0.08 & 0.79 & 1.52\ Sat 6 & 8.85 & 27.09 & 21.83 & 10.58 & 7.57 & 8.31 & 0.00 & 0.82 & 2.23\ \ Main & 6.25 & - & - & 11.37 & 4.90 & 9.66 & 0.00 & 0.53 & 2.15\ Sat 1 & 6.41 & 11.35 & 6.81 & 10.55 & 5.94 & 8.28 & 0.00 & 0.74 & 1.55\ Sat 2 & 6.74 & 13.44 & 10.13 & 10.50 & 6.22 & 8.17 & 0.04 & 0.70 & 0.84\ Sat 3 & 9.80 & 18.33 & 14.04 & 10.63 & 8.18 & 8.72 & 0.17 & 0.78 & 2.87\ Sat 4 & 10.11 & 15.01 & 10.39 & 10.83 & 8.10 & 9.07 & 0.00 & 0.30 & 1.45\ \ \ \ \[t:mergertrees\] [**Table \[t:mergertrees\]**]{} cont.\ \ [@lrrrrrrrrr@]{} ID & $t_{\rm enter}$ & $t_{\rm merge}~~$ &  $\mu^{-1}$ & $\log(M_h)$ &      $c$ & $\log(M_*)$ &  $B/T$ &  $f_{\rm gas}~~$ &  $r_{\rm disc}$\ \ Main & 6.25 & - & - & 11.50 & 4.76 & 9.97 & 0.01 & 0.36 & 1.15\ Sat 1 & 
6.57 & 11.26 & 9.81 & 10.58 & 6.03 & 8.48 & 0.00 & 0.86 & 3.37\ Sat 2 & 6.90 & 11.80 & 8.49 & 10.78 & 6.01 & 8.82 & 0.00 & 0.67 & 2.39\ Sat 3 & 7.23 & 15.89 & 20.99 & 10.51 & 6.59 & 9.35 & 0.03 & 0.00 & 1.07\ \ Main & 6.25 & - & - & 11.39 & 4.88 & 9.79 & 0.24 & 0.56 & 2.15\ Sat 1 & 6.08 & 10.61 & 14.27 & 10.20 & 6.07 & 7.84 & 0.00 & 0.83 & 1.66\ Sat 2 & 6.25 & 13.08 & 11.11 & 10.34 & 6.04 & 8.28 & 0.00 & 0.78 & 1.54\ Sat 3 & 8.37 & 20.41 & 11.29 & 10.68 & 7.12 & 9.01 & 0.00 & 0.56 & 2.76\ \ Main & 6.25 & - & - & 11.52 & 4.75 & 9.87 & 0.05 & 0.38 & 1.12\ Sat 1 & 7.23 & 11.65 & 8.07 & 10.69 & 6.32 & 8.84 & 0.00 & 0.42 & 1.14\ Sat 2 & 7.56 & 14.06 & 11.33 & 10.67 & 6.57 & 8.94 & 0.00 & 0.40 & 1.55\ Sat 3 & 11.91 & 17.82 & 8.30 & 10.90 & 9.22 & 9.34 & 0.30 & 0.55 & 2.60\ Sat 4 & 12.20 & 22.71 & 14.57 & 10.72 & 9.79 & 8.74 & 0.00 & 0.67 & 2.36\ Sat 5 & 13.45 & 23.69 & 21.42 & 10.63 & 10.97 & 7.97 & 0.00 & 0.94 & 3.38\ \ Main & 6.25 & - & - & 11.46 & 4.80 & 9.66 & 0.00 & 0.70 & 3.90\ Sat 1 & 7.56 & 10.64 & 6.62 & 10.70 & 6.53 & 8.79 & 0.00 & 0.44 & 1.25\ Sat 2 & 8.37 & 11.02 & 13.31 & 10.54 & 7.32 & 7.07 & 0.00 & 0.98 & 1.79\ Sat 3 & 8.37 & 19.33 & 20.04 & 10.36 & 7.56 & 8.23 & 0.00 & 0.72 & 1.54\ Sat 4 & 8.53 & 12.18 & 9.71 & 10.74 & 7.15 & 8.87 & 0.00 & 0.49 & 1.44\ Sat 5 & 8.85 & 17.34 & 10.50 & 10.81 & 7.24 & 9.50 & 0.03 & 0.21 & 1.38\ We employ our model in a series of simulations of MW-like galaxies, i.e. disc galaxy systems with a dark matter halo mass of $M_h\approx10^{12}\Msun$. The aim of this study is to investigate how a central disc galaxy evolves from redshift $z=1$ to $z=0$ while experiencing minor mergers. We use dark matter merger trees drawn from the $N$-body simulation presented in section \[sec:mwmergers\]. As we are interested in systems that experience only minor mergers after $z=1$, we select 10 trees with a final halo mass of $M_h\approx10^{12}\Msun$, that have only mergers with a mass ratio of $\mu<0.2$. The properties of these trees are presented in Table \[t:mergertrees\]. As a starting redshift we use $z_i=1$ which corresponds to a cosmic time of $t=6.25\Gyr$. We employ the fiducial model parameters presented in Table \[t:smtparameters\] and simulate all trees up to a redshift $z=0$. ![image](2126face.jpg){width="92.00000%"} ![image](2126edge.jpg){width="92.00000%"} The surface density for a typical merger tree (*2126*) is shown in Figures \[fig:2126face\] (face-on projection) and \[fig:2126edge\] (edge-on projection) for the stellar component (upper panels) and the gaseous component (lower panels). The central galaxy evolves in isolation for $\sim1\Gyr$ until the first two satellites enter the halo. These satellites pass the central galaxy at a pericentric distance of 55 and 90 kpc, respectively. While the first satellite continues orbiting around the central galaxy and subsequently loses angular momentum and energy, which leads to a decrease of the pericentric distance, the second satellite has enough orbital energy to leave the main halo again. Near the end of the simulation the last three satellites enter the halo and pass the central galaxy. During the evolution of the simulation, the stellar disc of the central galaxy is heated and thickened. At the same time, a stellar halo forms as a result of two processes. First, due to the close passage of the satellites, some stars in the disc are scattered out of the disc, as energy and angular momentum is transferred. Second, some satellite stars are accreted onto the central galaxy. 
These stars still retain some of the angular momentum of the satellite such that they do not settle in the disc, but remain in the halo. Additionally, we can identify several stellar streams which originate from the destruction of satellites. However, due to the limited resolution in this pilot study, the number of particles in these streams is not very high, so that a detailed study is not possible. This limitation can be overcome in future studies by increasing the parameter $N_{\rm res,sat}$, so that there are more particles in the satellites and thus more particles in the streams. When satellite galaxies enter the halo, a fraction of their cold gas is stripped and eventually can fall towards the centre of the system. Thus some of the stars in the central galaxy form from gas that has been stripped from satellites and accreted onto the central gaseous disc. Evolution of the stellar mass ----------------------------- In this pilot study, we analyse the properties of the central galaxy in each merger tree and these galaxies’ evolution during the course of the simulation. Clearly, the stellar mass of the central galaxy is an important quantity. In order to measure the stellar mass in the simulation we identify FOF groups for every snapshot with a linking parameter of $b=0.2$. Each FOF group is then split with the [ Subfind]{} code into a set of disjoint subhaloes that contain all particles bound to a local overdensity. The main halo which contains the central galaxy is the most massive subhalo of the FOF group. The stellar mass of the central galaxy is the total mass of all stellar particles that belong to the main halo. We plot the resulting evolution of the stellar mass for all merger trees in Figure \[fig:smass\]. The results of the simulations are given by the red line, while the semi-analytic prediction is given by the blue line. We find that all systems increase their stellar mass and have a final stellar mass that is more than twice the value at $z=1$. The stellar masses found in the simulation at $z=0$ range from $\log(m_*/\Msun) = 10.3$ to $10.7$, while the majority of systems have a stellar mass close to $\log(m_*/\Msun) = 10.5$. These results are in excellent agreement with those obtained by @moster2010a using a statistical halo occupation model. For some merger trees the prediction by the SAM agrees very well with the result of the simulation (e.g. the trees *1775*, *1975* and *2126*), while for other trees the semi-analytic prediction for the final stellar mass is much higher than what we find in the merger simulation (e.g. the trees *1968*, *1990* and *2048*). We note that the semi-analytic values for the stellar mass tend to exceed, by a small amount, the average observed stellar mass for a halo of mass $10^{12}\Msun$. This is because the version of the SAM that we used had been optimized for use with different merger trees, and we did not re-tune the parameters to specifically match this constraint. In the evolution of the stellar mass, we see that at some times the mass increases quickly in a short time interval. This is not due to an increased SFR due to an interaction-triggered starburst, but is a result of the accretion of the satellite itself. The mass of the accreted satellite is much larger than the newly formed mass as a result of a starburst. Examples can be seen for tree *1975* ($t=11.5\Gyr$), tree *2048* ($t=9\Gyr$) and tree *2181* ($t=9.6, 11.2$ and $12.1\Gyr$). 
Overall, however, we find that most stars in the central galaxy have formed from the cold gaseous disc that is accreted from the gaseous halo (in-situ formation), and only a few stars in the central galaxy originate from accreted satellites (ex-situ formation).

Evolution of the cold gas fraction
----------------------------------

We further study the evolution of the cold gas component, i.e. all gas particles with a temperature below $T=2\times10^4\,$K that reside in the central disc. The resulting gas fractions are plotted in Figure \[fig:fgas\]. We find that all systems have high gas fractions at $z=1$, with typical values of 30 to 50 per cent. Towards $z=0$ the gas fraction then decreases to lower values of 20 to 40 per cent. These results are in very good agreement with those obtained by @stewart2009 based on indirect gas fraction estimates from star formation rate densities. In all systems that have just accreted a subhalo, the gas fraction is elevated. This happens during the first passage of the satellite, in which a considerable amount of its cold gas is unbound by tidal stripping. This gas then falls onto the central galaxy, leading to an increased SFR. We note that unlike in the SAM, this usually happens during the first passage, and not in the final coalescence, which can be much later. The semi-analytic predictions for the gas fractions agree very well with those that are found in the simulations. This is due to the very similar cooling rates in the halo for the SAM and the simulations. The accretion of cooled gas from the halo is much larger than the accretion of stripped cold gas from satellite galaxies. Although the amount of accreted gas from satellites is different in the SAM and the simulations, this effect is small, so that the total gas fractions in the central discs agree quite well.

Evolution of the scale length and height
----------------------------------------

Finally, we study the evolution of the scale parameters of the stellar component, i.e. the scalelength and scaleheight of the stellar disc. In order to measure these quantities in our simulations, we first use the decomposition of the particles into disc and spheroidal components. For the disc component we then compute the face-on projected surface density profile and fit the exponential disc scalelength. Similarly, we compute the projected edge-on surface density at a radius of $R=8\kpc$, and fit a ${\rm sech}^2$ function to it to obtain the disc scaleheight. We plot the resulting scale parameters in Figure \[fig:scalelength\] for all merger trees. The disc scalelength $R_{\rm disc}$ is given by the red line and the disc scaleheight $z_0$ is given by the green line. For comparison we include the prediction for the disc scalelength by the SAM, which is shown by the blue line. For some systems, the scalelength in the simulation and the SAM agree very well (trees *1775*, *1990* and *2092*), while for other systems the scalelength predicted by the SAM differs from the one found in the simulation (trees *1808*, *1877* and *1975*). Interestingly, while the scalelength in the SAM always increases with time, the scalelength measured in the simulation can both increase and decrease. Overall, we find a broad range of scalelengths for MW-like systems, consistent with observations [@barden2005]. The disc scaleheight evolves slowly with time, as long as no satellite merges with the central galaxy. For most systems the ratio between the scaleheight and scalelength is roughly constant throughout the simulation. 
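The two profile fits just described (an exponential fit to the face-on surface density and a ${\rm sech}^2$ fit to the vertical profile around $R=8\kpc$) can be sketched in a few lines; the binning choices and function names below are illustrative assumptions rather than the actual analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_disc_scales(x, y, z, mass, r_max=20.0, dz=0.1):
    """Illustrative fit of the disc scalelength and scaleheight.
    Inputs are disc-particle positions (kpc, disc aligned with z) and masses."""
    R = np.hypot(x, y)

    # Face-on: surface density in annuli, fitted with Sigma_0 * exp(-R / R_d).
    r_edges = np.linspace(0.0, r_max, 41)
    r_mid = 0.5 * (r_edges[1:] + r_edges[:-1])
    m_R, _ = np.histogram(R, bins=r_edges, weights=mass)
    sigma_R = m_R / (np.pi * (r_edges[1:]**2 - r_edges[:-1]**2))
    good = sigma_R > 0
    expdisc = lambda r, s0, rd: s0 * np.exp(-r / rd)
    (s0, R_d), _ = curve_fit(expdisc, r_mid[good], sigma_R[good],
                             p0=[sigma_R[good][0], 3.0])

    # Edge-on: vertical mass profile in a ring around R = 8 kpc,
    # fitted with rho_0 * sech^2(z / z_0).
    ring = (R > 7.0) & (R < 9.0)
    z_edges = np.arange(-5.0, 5.0 + dz, dz)
    z_mid = 0.5 * (z_edges[1:] + z_edges[:-1])
    m_z, _ = np.histogram(z[ring], bins=z_edges, weights=mass[ring])
    sech2 = lambda zz, rho0, z0: rho0 / np.cosh(zz / z0)**2
    (rho0, z_0), _ = curve_fit(sech2, z_mid, m_z, p0=[m_z.max(), 0.5])

    return R_d, abs(z_0)
```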
Final scaleheights range from $z_0=0.6$ to $1.2\kpc$, consistent with results from observations of MW-like systems [@schwarzkopf2000; @yoachim2006]. In some merger trees (trees *1975* and *2048*), the thin disc is completely destroyed due to mergers and only a thick disc with a final scaleheight of $\approx4\kpc$ remains.

Discussion and Outlook {#sec:discussion}
======================

In order to study the connection between the internal structure of galaxies and their formation histories and large scale environment, a galaxy formation model must have high resolution, include gas physics, and include the cosmological background. The purpose of this paper is to present a novel approach to study the evolution of galaxies by combining semi-analytic models with numerical hydrodynamic merger simulations. Using the predictions of the semi-analytic model as the initial conditions for multiple merger simulations, we were able to achieve high resolution at a fraction of the computational cost of standard cosmological hydrodynamic methods.

Hydrodynamical simulations of binary galaxy mergers have been used extensively to study the evolution and morphological transformation of single galaxies due to merger effects, although additional information must be used to place them in a cosmological context. They can be regarded as a method to study the effects of a single encounter in a detailed manner, rather than the complete evolution of a galaxy. For these merger simulations one creates pre-formed galaxies that are stable in isolation, with parameters drawn from a grid or motivated by observations. These model galaxies are then set on an orbit and evolved with a hydrodynamical code. Although this technique is useful for studying the evolution of a galaxy during a single encounter, it is not able to predict the typical galaxy properties at a given redshift, as the cosmological background, i.e. the merger history of each galaxy, is not taken into account.

For this reason, we studied the merger histories of galaxies using merger trees drawn from a large cosmological $N$-body simulation. We studied whether most mergers are binary or whether they usually involve multiple galaxies, i.e. if two galaxies generally have enough time to merge after their haloes have merged, before the remnant merges with another galaxy. We found that the probability for two mergers happening within a few halo dynamical times is always higher than the probability for many dynamical times to elapse between mergers. In our merger trees, for more than 50 per cent of all merger pairs, the second satellite enters the halo within three dynamical times after the first satellite entered, independent of mass ratio. This indicates that multiple mergers are more common than sequences of isolated binary mergers. As a consequence, it is not sensible to focus on binary mergers, but rather on merger simulations which consider several satellites that enter the parent halo one after the other. Due to the large number of parameters involved in mergers (orbit, masses, gas fractions, merger ratios, etc.) it is impossible to cover the whole parameter space in merger simulations by drawing the parameters uniformly from a grid. Instead, a simple and elegant path is to use semi-analytic galaxy merger trees to generate the initial conditions for galaxy merger simulations. In this way, we automatically select mergers that are expected to be common in the Universe. 
However, in order to do this, we had to extend the code that creates the particle representations of galaxies used in simulations, as smooth accretion of dark matter and the hot gaseous halo component were not taken into account before. In order to model the cooling and accretion of gas onto the disc we included a slowly rotating hot gaseous halo in the initial conditions generator. We modelled the smooth accretion of dark matter material that is too small to be resolved as a halo in the merger trees, by placing additional dark matter particles around the halo. These particles were placed into small spherical systems with a Gaussian density profile to represent the many sub-resolution systems that are expected to be accreted. The distance to the halo centre was chosen such that they fall into the virial radius of the halo at a specified time extracted from the merger tree. In order to model a system that is stable in isolation, we computed the distribution function for a Gaussian profile and the velocity of each particle with a rejection sampling technique. We tested this smooth accretion model for an isolated MW-sized halo and found an excellent agreement with the results from a full cosmological simulation. Furthermore we verified that the small spherical systems are quickly dissolved once they enter the halo. With the extended model for the initial galaxies, we were able to develop our novel approach that uses semi-analytic predictions as initial conditions for a multiple merger simulation. Choosing a starting redshift, we first created particle representations of the central galaxy at this time, and the satellite galaxies at the time when they enter the main halo. We set the mass resolution by requiring that the final number of stellar particles equals a fixed parameter, provided the final stellar mass in the SAM and the simulation are equal. The multiple merger simulation was then performed by evolving the central galaxy until the first satellite enters the main halo, at which point the particle realisation of the satellite was included in the simulation at the virial radius. After this the simulation was resumed and the procedure repeated until the end of the run. We applied our method for ten merger trees drawn from a large dissipationless N-body simulation, for which the main systems are MW-sized at $z=0$ and have no major merger after the starting redshift $z=1$. For all ten trees we analysed the evolution of the properties of the central galaxy and found good agreement between the stellar mass in our simulations at $z=0$ and observational constraints. Similarly, the disc scale lengths in our simulations agree well with the observed range of scale lengths. Overall we also found good agreement with the predictions by the SAM. There are many differences in the details of the star formation and stellar feedback prescriptions in the hydrodynamic simulations and the SAMs, and the values of the parameters controlling the subgrid physics, and we made no attempt to tune these parameters to match. The fact that there is none-the-less reasonable agreement between the important parameters of the galaxies indicates that the two tools can be used together despite the differences in the details. In the future, it would be straightforward to modify the physical ingredients of the SAM and/or hydro simulations to achieve even better agreement. 
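As an illustration of the rejection-sampling step mentioned above, the following sketch draws positions for one of the small "smooth accretion" systems from a spherical Gaussian density profile and speeds by rejection sampling. The Maxwellian speed distribution used here is only a stand-in for the actual distribution function of the Gaussian profile, and all parameter names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gaussian_blob(n, sigma_r, sigma_v):
    """Sample n particles: positions from a spherical Gaussian density profile,
    isotropic velocities with speeds drawn by rejection sampling from an
    assumed target g(v) ~ v^2 exp(-v^2 / 2 sigma_v^2)."""
    pos = rng.normal(scale=sigma_r, size=(n, 3))

    g = lambda v: v**2 * np.exp(-v**2 / (2.0 * sigma_v**2))
    v_max = 5.0 * sigma_v
    g_max = g(np.sqrt(2.0) * sigma_v)          # maximum of g on [0, v_max]

    speeds = np.empty(n)
    filled = 0
    while filled < n:
        v_try = rng.uniform(0.0, v_max, size=n)
        keep = rng.uniform(0.0, g_max, size=n) < g(v_try)   # rejection step
        take = v_try[keep][: n - filled]
        speeds[filled:filled + take.size] = take
        filled += take.size

    u = rng.normal(size=(n, 3))                # isotropic unit vectors
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    return pos, speeds[:, None] * u
```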
Although we demonstrated the basic elements of our new approach here, there are still a number of improvements that can be made and physical processes that can be included. For example, cosmological hydrodynamic simulations have shown that filamentary large scale structures can efficiently feed cold gas into massive halos via “cold streams” [@keres2005; @dekel2006]. Because our method assumes accretion from a spherically symmetric halo, it does not include this accretion mode. This process is dominant in massive haloes at a redshift of $z>2$. Thus one possible improvement of our method is to model these cold streams. Another improvement is the inclusion of additional or more realistic feedback processes. In our simulations we have implemented winds with constant mass loading factor and constant wind velocities for all galaxies. It has been shown that better agreement with observations can be obtained with more sophisticated wind models, in which the mass loading factor and wind velocity scales with galaxy properties, e.g. momentum-driven winds [@oppenheimer2006]. More detailed chemical evolution recipes that follow multiple elements can be implemented to track abundance ratios in various galactic components. Another form of feedback that is not considered in our simulations yet is AGN feedback. The processes that result from gas accretion onto a central BH can provide a large amount of energy, which may be able to reduce cooling from the hot halo after a major merger. We plan to use the results of simulations carried out with our new approach to study the efficiency of starbursts and morphological transformation (formation of spheroids) in mergers. This will lead to more physical recipes that can then be fed back into the SAM. In conclusion, we anticipate that the approach developed here will open a path to many possible future applications, including (but not limited to) the study of the formation of stellar halos through the accretion and stripping of satellites, sub-structure and streams in galaxy halos, the build-up of spheroids and thickening of galactic discs through mergers, and the evolution of spheroid sizes and internal velocities through mergers. Acknowledgements {#acknowledgements .unnumbered} ================ We thank Hans-Walter Rix and Kathryn Johnston, for enlightening discussions and useful comments on this work. The numerical simulations used in this work were performed on the THEO cluster of the Max-Planck-Institut für Astronomie at the Rechenzentrum in Garching. AVM acknowledges support from the Sonderforschungsbereich SFB 881 “The Milky Way System” (sub-project A1) of the German Research Foundation (DFG). \[lastpage\] [^1]: [email protected]
--- abstract: 'Hydrodynamical simulations of galaxy formation in spatially flat Cold Dark Matter (CDM) cosmologies with and without a cosmological constant ($\Lambda$) are described. A simple star formation algorithm is employed and radiative cooling is allowed only after redshift $z=1$ so that enough hot gas is available to form large, rapidly rotating stellar discs if angular momentum is approximately conserved during collapse. The specific angular momenta of the final galaxies are found to be sensitive to the assumed background cosmology. This dependence arises from the different angular momenta contained in the haloes at the epoch when the gas begins to collapse and the inhomogeneity of the subsequent halo evolution. In the $\Lambda$-dominated cosmology, the ratio of stellar specific angular momentum to that of the dark matter halo (measured at the virial radius) has a median value of $\sim 0.24$ at $z=0$. The corresponding quantity for the $\Lambda=0$ cosmology is over $3$ times lower. It is concluded that the observed frequency and angular momenta of disc galaxies pose significant problems for spatially flat CDM models with $\Lambda=0$ but may be consistent with a $\Lambda$-dominated CDM universe.' author: - | Vincent Eke, George Efstathiou & Lisa Wright\ Institute of Astronomy, Madingley Road, Cambridge CB3 OHA. title: The Cosmological Dependence of Galactic Specific Angular Momenta --- galaxies: formation – galaxies: evolution – galaxies: spiral – cosmology: theory – dark matter Introduction {#sec:intro} ============ Smoothed Particle Hydrodynamical (SPH) simulations of galaxy formation in Cold Dark Matter (CDM) dominated universes have repeatedly failed to create Milky Way-type extended discs (Navarro & Benz 1991; Katz 1992; Navarro, Frenk & White 1995; Steinmetz & Muller 1995; Navarro & Steinmetz 1997; Weil, Eke & Efstathiou 1998, hereafter WEE98). Gas is found to cool very effectively at high redshifts into the centres of haloes before gravitational torques from surrounding perturbations can supply it with sufficient angular momentum. Furthermore the lumpy nature of the subsequent halo evolution leads to an outward transfer of angular momentum, compounding the problem. Navarro & Steinmetz (1997) showed that including a photoionizing background to suppress the early cooling of gas, worsened the problem by preferentially decreasing the amount of higher angular momentum gas accreted at late epochs. Almost all analytic and semi-analytic models of galaxy formation assume that angular momentum is conserved during disc formation (Mestel 1963; Fall & Efstathiou 1980; Gunn 1982; van der Kruit 1987; Cole 1994; Dalcanton, Spergel & Summers 1997; Mo, Mao & White 1998), in marked contrast to what is found in numerical simulations. It is interesting to note that Cole (1994) actually find that their predicted Tully-Fisher relation has galaxies spinning $60$ per cent too rapidly at fixed luminosity, or alternatively being underluminous at fixed circular velocity. Evidently, the degree to which angular momentum is conserved during galaxy formation is of vital importance in developing realistic theoretical models for comparison with observations. 
Given that high angular momentum gas is required to produce a rapidly spinning large stellar disc, and recognising that the collapse of gas to form such a disc will, if anything, transfer angular momentum to the halo, it is apparent that a significant reservoir of hot gas must be maintained in the galactic halo at least until the epoch at which the halo specific angular momentum has grown to match that of real galaxies. As discussed above, numerical simulations have generally failed to do this. Navarro & Benz (1991), WEE98 and others, have suggested that the inclusion of a more effective feedback mechanism, in particular, energy injection from supernovae (see [*e.g.*]{} Larson 1974, Dekel and Silk 1986) is the most likely solution to this problem. To illustrate the importance of feedback, WEE98 performed several SPH simulations of haloes selected from CDM initial conditions in an $\Omega=1$, $\Lambda = 0$, universe with radiative cooling suppressed until a redshift $z=1$. They did indeed find that suppressing the collapse of gas until late epochs had a dramatic effect on the specific angular momentum of the final galaxies. However, only two out of five carefully selected haloes produced objects with comparable angular momenta to those of real disc galaxies. This suggests that even with an extreme feedback prescription, there may be a problem explaining the frequency and angular momenta of disc galaxies in a critical density CDM universe with $\Lambda=0$. This problem is investigated in more detail in this letter and the sensitivity of the specific angular momenta of the final objects to the assumed background cosmology is tested. The same simplified feedback prescription of WEE98 is adopted, [*i.e.*]{} radiative cooling is suppressed until $z=1$, and applied to a total of $60$ SPH simulations for two spatially flat CDM universes, one with $\Omega_m=1$ and $\Lambda=0$ and the other with $\Omega_m = 0.3$ and $\Omega_\Lambda = 0.7$. Section \[sec:sims\] contains details of the two cosmological models, the initial conditions of the SPH simulations and the simulations themselves. The results are presented in Section \[sec:res\] and discussed in Section \[sec:conc\]. Simulation details {#sec:sims} ================== Model parameters {#ssec:models} ---------------- The two spatially flat CDM cosmological models that are investigated have scale invariant initial fluctuations and a post-recombination power spectrum given by the parametrisation of Efstathiou, Bond & White (1992) with a shape parameter $\Gamma=0.2$. One model, referred to as  (following the nomenclature of Thomas 1998) has a matter density parameter $\Omega_m = 1$ and zero cosmological constant. The other model,  has $\Omega_m = 0.3$ and a cosmological constant contributing $\Omega_\Lambda = 0.7$ to the density parameter; the parameters for this model are close to those favoured by anisotropies in the cosmic microwave background radiation and the distances of Type Ia supernovae (see [*e.g.*]{} Efstathiou 1999). The amplitude of the mass fluctuations has been normalised to reproduce the present day abundance of galaxy clusters, thus the linear theory [*rms*]{} mass fluctuations in spheres of radius $8\Mpc$ are $\sigma_8=0.52$ and $0.90$ for the  and  models respectively (Eke 1996). In both models the Hubble constant is set to $h=0.65$[^1], giving an age for the  universe of $10.0$ Gyr and $14.5$ Gyr for . 
The baryonic contribution to the critical density is set to $\Omega_{\rm b}=0.06$ in both models, consistent with the predictions of primordial nucleosynthesis and the deuterium abundance measurements reported by Burles & Tytler (1998). Dark matter simulations {#ssec:ap3m} ----------------------- To create initial conditions for the SPH simulations, a dark matter only calculation was performed for each cosmology using the AP$^3$M code of Couchman (1991). For each cosmology, a $32.5$ $\Mpc$ cube containing $128^3$ particles was evolved from $z=24$ to the present employing $2000$ timesteps of equal size in expansion factor. The effective Plummer gravitational force softening was fixed in comoving coordinates at $7$ kpc. Identical random phases were used in both simulations so that the same haloes in both cosmologies could be simulated at higher resolution with the SPH code. Halo selection {#ssec:halo} -------------- The spherical overdensity group-finding algorithm (Lacey & Cole 1994) was applied to the final outputs of the dark matter simulations to locate virialised haloes, with the virial radius defined by the spherical collapse model. A subset of these were chosen for resimulation with the SPH code. To qualify for resimulation, a halo had to be at least $1 \Mpc$ from any other containing at least $100$ particles at $z=0$. This constraint was applied so that the effort in the resimulation was concentrated on the central object rather than a more massive companion. It effectively biases against haloes in dense environments and can be loosely thought of as selecting a sample of field galaxies. Twenty haloes that could clearly be identified as the same object in both  and  simulations with circular velocities in the range $170<v_{\rm c}/~\kms<250$ in the  run were selected for resimulation (referred to as ‘common’ haloes hereafter). As the corresponding  haloes had systematically lower circular velocities (by about $30$ per cent), $5$ additional larger  and $15$ smaller  haloes were also chosen to increase the overlap in circular velocity between the two models. WEE98 adopted similar algorithms to select haloes, but also imposed an additional constraint that the haloes should not have merged with a comparable mass system between $z=1$ and $z=0$. This criterion was imposed to bias against haloes that suffered a major merger at late times and hence to select haloes more favourable to the formation of disc systems. No such criterion was applied in generating initial conditions for the simulations described in this paper. SPH Simulations {#ssec:resim} --------------- The procedure for creating high resolution initial conditions for resimulation is as described by WEE98. Briefly, this involved tracing back to the initial redshift all dark matter particles within $400$ kpc of the selected halo centres at $z=0$. Extra particles were placed in a ‘high resolution’ cube containing the region of interest (and including the short wavelength fluctuations associated with this improved resolution) and more massive particles were added to sample the distant density field. $34^3$ dark matter and $34^3$ gas particles (initially at identical positions to the dark particles) were used in the central cube, while the outer regions were represented by $\sim 5000$ particles with radially increasing masses. 
The sizes of the high resolution cubes were typically about $3.2\Mpc$ and $4.0\Mpc$ for  and  simulations, yielding gas particle masses of $\sim 2\times10^7{\rm M_\odot}$ and $4\times10^7{\rm M_\odot}$ respectively. Higher resolution runs with $2\times 43^3$ particles were also performed for some of the  haloes that produced inadequately resolved stellar discs. The evolution of the simulation was performed using the GRAPESPH code outlined in WEE98 and $5$ GRAPE$-3A$ boards (Sugimoto 1990) connected to a Sun Ultra$-2$ workstation. The Plummer gravitational force softening for gas and star particles was $0.8$ kpc and the dark matter had softenings of $4.1$ kpc and $2.7$ kpc for  and  respectively. Up to $40000$ timesteps for  and $60000$ for  runs were used to evolve the particles from $z=24$ to $z=0$. Typical run times were $2$ days for each  simulation and $3-4$ days for a  simulation. Radiative cooling was switched on at $z=1$ in all cases to model feedback crudely, as described by WEE98. Each gas particle that remained in a collapsing region with $\rho>7\times10^{-23}$ kg m$^{-3}$ (see Navarro & White 1993, WEE98) for a local dynamical time was converted to a star particle. Results {#sec:res} ======= The same number of particles is used in the high resolution regions even though these differ in volume from run-to-run. The numerical resolution of the SPH simulations is therefore variable and correlated with the mass of the halo. Navarro & Steinmetz (1997) discussed in detail the outward transport of angular momentum when the SPH artificial viscosity acts on a poorly resolved gaseous disc. As a result of the star formation algorithm adopted here, the gaseous discs do not monotonically increase their masses, and thus the effect of this viscous transport is probably increasing as the simulations approach $z=0$ and the accretion rate diminishes. However, by analysing the conservation of angular momentum as a function of the number of star particles within $20$ kpc of the halo centre, it was found that only systems with less than $\sim 1000$ stars showed any significant trend of increasing angular momentum with increasing mass resolution. Consequently, results will only be given for simulations that had at least $1000$ stars in the central object at $z=0$. Figure \[fig:dircomp\] shows an object by object comparison of the stellar specific angular momenta for the common haloes in the two sets of simulations. The stellar angular momentum is measured for the central galaxy-like object within $20$ kpc of its centre. There is essentially no correlation between the two quantities. However, the galaxies in the  simulations tend to have significantly more angular momentum than their  counterparts. The reasons for this difference will be discussed in more detail below. Figure \[fig:jom\] shows the absolute specific angular momenta of all the dark matter haloes (not just the common haloes), measured at the virial radius, and their largest stellar occupants, as a function of halo circular velocity. The dashed lines in Figure \[fig:jom\] show the ranges of specific angular momenta of real disc galaxies[^2] (see figure 1 of WEE98). While the haloes in both cosmologies occupy a similar locus (for $120 < v_{\rm c}/\kms < 180$, the  haloes are only $\sim 50$ per cent higher than those for ), the  stellar objects are at systematically much lower values than those from the  simulations. 
For both sets of simulations the fraction of halo specific angular momentum retained in the stellar objects decreases with increasing circular speed. Thus in comparing the two cosmologies, attention will be restricted to haloes with circular speeds in the same range, $120 < v_{\rm c}/\kms < 180$. For the $12$  and $17$  haloes that this range includes, the median ratios of stellar to halo specific angular momenta at $z=0$ are $0.07$ and $0.24$ respectively. One reason for this difference is that although haloes in the  and  models with the same circular speed have similar angular momenta at $z=0$, the angular momentum growth depends on the background cosmology. As a consequence of the continual evolution of structure in the  model, haloes acquire relatively more angular momentum since $z=1$ than haloes in the  model. This is illustrated in Figure \[fig:dmjomrat\]. Since typically half of the stars are formed by $z=0.4$ in the  simulations and by $z=0.6$ for , it is more appropriate to compare final stellar angular momenta with those of the parent haloes at $z=1$ when disc formation begins. For haloes with circular speeds in the range $120 < v_{\rm c}/\kms < 180$ the median ratios of stellar to halo specific angular momenta at $z=1$, are $0.17$ and $0.32$ for  and  respectively. Thus a major reason for the differences in final stellar angular momenta in the two cosmologies is that  haloes have lower angular momenta at $z\sim 1$ compared with  haloes of the same circular speed. The other main cause of the difference in stellar angular momenta in the two cosmologies stems from the more inhomogeneous evolution at $z < 1$ suffered by  haloes. As a consequence of the higher frequency of merger events in the  cosmology, the accretion at late times of high angular momentum gas from the outer parts of the halo is disrupted. Either the gas is gravitationally shocked and remains extended, or its angular momentum is transported to larger radii by torques from the anisotropic halo. Figure \[fig:mvirrat\] shows how the fraction of conserved specific angular momentum varies with the halo growth, parametrised by the ratio of halo virial masses at $z=1$ to $z=0$. It is clear that the  haloes accrete significantly less mass than those evolving in the  model. Galaxies forming in the  cosmology experience fewer merger events and conserve a greater fraction of their angular momentum. Conclusions {#sec:conc} =========== The aim of this work has been to investigate how the background cosmological model affects the efficiency with which angular momentum is conserved during galaxy formation. The large set of simulations described here shows that the specific angular momenta of galaxies with halo circular speeds in the range $120$–$180\; \kms$ forming in a  universe are typically $3-4$ times higher than those in a  universe. This large difference arises from the more turbulent merger histories of the  haloes and from their lower specific angular momenta. The median specific angular momentum for observed disc galaxies (see the dashed lines in Figure \[fig:jom\] and figure 1 of WEE98) is, nevertheless, about $2.5$ times as large as that of the  galaxies simulated here, and an order of magnitude above the  galaxies. This discrepancy for the  model is extremely large, confirming the suspicion of WEE98 that the observed frequency of disc galaxies is difficult to reproduce in a spatially flat CDM model with $\Lambda=0$. 
Allowing cooling at higher redshift would exacerbate this problem (see WEE98), and it seems unlikely that a more realistic feedback process could resolve this discrepancy. In both cosmologies there is a strong trend for galaxies with high circular speeds to lose more angular momentum, suggesting that it is difficult to form large disc galaxies within haloes with circular speeds of $\gsim 200 \kms$ at the virial radius. This is qualitatively consistent with the observed lack of spiral discs with high circular speeds. Most of the simulated galaxies in the  cosmology lie below the observed range of specific angular momenta for real disc systems, though the magnitude of the discrepancy is much lower than in the  cosmology. This may not be an insurmountable problem for the  model. The observational range plotted in Figure \[fig:jom\] is computed from the specific angular momenta of only the disc components, whereas the specific angular momenta of the simulated galaxies have been calculated using the entire stellar system. In a more realistic scenario, the low angular momentum gas might be expected to give rise to a bulge-type stellar component in addition to providing the feedback energy to maintain the reservoir of hot, extended and high angular momentum gas at large redshift. Figure \[fig:hole\] shows that removing all stars within a $3$ kpc sphere of the centres of the final stellar objects can bring the  galaxies with circular speeds $\lsim 180 \kms$ into agreement with the observations. The choice of $3$ kpc is arbitrary and has been adopted merely to illustrate the importance of correctly distinguishing between an inner bulge and an extended disc before comparing simulations with observations. To do this more accurately will require a more realistic feedback prescription and much larger numerical simulations in which bulge and disc components can be distinguished kinematically. Until such simulations are performed, it is unclear whether the  model can account for the observed frequency of disc galaxies. However, galaxies forming in a -like universe experience a severe ‘angular momentum catastrophe’ and this seems to be a fundamental problem for such a model. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== VE, GE and LW acknowledge the support of a PPARC postdoctoral fellowship, senior research fellowship and research studentship respectively. We thank the Institute of Astronomy for providing funds towards the purchase of the GRAPE boards. Burles S., Tytler D., 1998, ApJ, 507, 732 Cole S., Aragon-Salamanca A., Frenk C.S., Navarro J.F., Zepf S., 1994, MNRAS, 271, 781 Couchman H.M.P., 1991, ApJ, 429, L23 Dalcanton J.J., Spergel S.N., Summers F.J., 1997, ApJ, 482, 669 Dekel A., Silk J., 1986, ApJ., 303, 39 Efstathiou G., Bond J.R., White S.D.M., 1992, MNRAS, 258, L1 Efstathiou G., Bridle S.L., Lasenby A.N., Hobson M.P., Ellis R., 1999, MNRAS, 303, L47. Eke V.R., Cole S., Frenk C.S., 1996, MNRAS, 282, 263 Fall S.M., Efstathiou G., 1980, MNRAS, 193, 189 Gunn J.E., 1982, in Astrophysical Cosmology, ed. H.A. Bruck, G. Coyne and M.S. 
Longair, (Vatican Pontificia Acadamia Scientiarum), 191 Katz N., 1992, ApJ, 391, 502 Kauffmann G., White S.D.M., Guiderdoni B., 1993, MNRAS, 264, 201 Lacey C., Cole S., 1994, MNRAS, 271, 676 Larson R.B., 1974, MNRAS, 169, 229 Mestel L., 1963, MNRAS, 126, 553 Mo H.J., Mao S., White S.D.M., 1998, MNRAS, 295, 319 Navarro J.F., Benz W., 1991, ApJ, 380, 320 Navarro J.F., White S.D.M., 1993, MNRAS, 265, 271 Navarro J.F., Frenk C.S., White S.D.M., 1995, MNRAS, 275, 56 Navarro J.F., Frenk C.S., White S.D.M., 1996, ApJ, 462, 536 Navarro J.F., Steinmetz M., 1997, ApJ, 478, 13 Steinmetz M., Muller E., 1995, MNRAS, 276, 549 Sugimoto D., Chikada Y., Makino J., Ito T., Ebisuzaki T., Umemura M., 1990, Nat, 345, 33 Thomas P.A. , 1998, MNRAS, 296, 1061 van der Kruit P.C., 1987, A&A, 173, 59 Weil M.L., Eke V.R., Efstathiou G., 1998, MNRAS, 300, 773 [^1]: $h$ is defined such that $H_{0}=100h~ \kms{\rm Mpc}^{-1}$. [^2]: The observed specific angular momenta are calculated assuming flat rotation curves of amplitude $v_c$. For dark haloes with the Navarro, Frenk and White (1996) profile, the disc rotation velocity may exceed the halo circular speed at the virial radius by as much as $30$–$40$ %, depending on the concentration of the stellar disc. This difference is neglected in this paper.
--- abstract: 'Based on observations of points uniformly distributed over a convex set in $\R^d$, a new estimator for the volume of the convex set is proposed. The estimator is minimax optimal and also efficient non-asymptotically: it is nearly unbiased with minimal variance among all unbiased oracle-type estimators. Our approach is based on a Poisson point process model and as an ingredient, we prove that the convex hull is a sufficient and complete statistic. No hypotheses on the boundary of the convex set are imposed. In a numerical study, we show that the estimator outperforms earlier estimators for the volume. In addition, an adjusted set estimator for the convex body itself is proposed.' author: - bibliography: - 'ref.bib' title: | Unbiased estimation of the volume\ of a convex body --- Keywords: volume estimation, convex hull, Poisson point process, UMVU, stopping set, exact oracle inequality, missing volume\ MSC code: 60G55, 62G05, 62M30\

Introduction
============

Driven by applications in image analysis and signal processing, the estimation of the support of a density attracts a lot of statistical activity. In many cases it is natural to assume a convex shape for the support set. First fundamental results for convex support estimation have been achieved by [@korostelev1993minimax; @korostelev1994asymptotic] who prove minimax-optimal rates of convergence in Hausdorff distance for a set estimator. In particular, [@korostelev1993minimax] prove that the convex hull of the points $\Ch_n $, which is a maximum likelihood estimator for the set $C$, is rate-optimal. Interestingly, the volume $ |\Ch_n| $ of the convex hull is not rate-optimal for estimation of the volume $|C|$ of the convex set and an alternative two-step estimator, optimal up to a logarithmic factor, was proposed. A fully rate-optimal estimator for the volume of a convex set with smooth boundary was then constructed by [@gayraud1997] based on three-fold sample splitting. For various extensions and applications of convex support estimation, let us refer to [@mammen1995asymptotical; @guntuboyina2012optimal; @brunel:tel-01066977] and the literature cited there. Related ideas under Hölder and monotonicity constraints, respectively, have been adopted by [@ReissSelk14] for a one-sided regression model.

Our contribution is the construction of a very simple volume estimator which is not only rate-optimal over all convex sets without boundary restrictions, but even adaptive in the sense that it attains almost the parametric rate if the convex set is a polytope. Our approach is non-asymptotic and provides much more precise properties. The analysis is based on a Poisson point process (PPP) observation model with intensity $\lambda>0$ on the convex set $C{\subseteq}\R^d$. We thus observe
$$X_1, \ldots, X_N \sim U(C), \qquad N \sim \text{Poiss}\bigl(\lambda{\lvert C \rvert}\bigr),$$
where $(X_n), N$ are independent, see Section \[theoretical\_digression\_section\] below for a concise introduction to the PPP model. Using Poissonisation and de-Poissonisation techniques, one can show that this model exhibits the same asymptotic properties as the uniform model, i.e. a sample of $n=\lambda{\lvert C \rvert}$ random variables $X_1,\ldots,X_n$ distributed uniformly on $C$. The beautiful geometry of the PPP model, however, allows for much more concise ideas and proofs, see also [@meister2013asymptotic] for connections between PPP and regression models with irregular error distributions. From an applied perspective, PPP models are often natural, e.g. for spatial count data of photons or other emissions. 
For known intensity $\lambda$ of the PPP, we construct in Section \[oracle\_case\_section\] an [*oracle*]{} estimator ${\widehat{{\vartheta}}}_{oracle}$. Theorem \[ThmOracle\] shows that this estimator is UMVU (uniformly minimum variance unbiased) and rate-optimal. To this end, moment bounds from stochastic geometry for the missing volume of the convex hull, obtained by [@barany1988convex] and [@dwyer1988convex], are essential. Moreover, we derive results of independent interest: the convex hull $\Ch=\text{conv}\{X_1,\ldots,X_N\}$ forms a sufficient and complete statistic (Proposition \[PropSuffCompl\]) and the Poisson point process, conditionally on $ \Ch$, remains Poisson within its convex hull (Theorem \[LemMeasCond\]). For the more realistic case of unknown intensity $\lambda$, we analyse in Section \[unknown\_intensity\_section\] our final estimator
$${\widehat{{\vartheta}}} = \frac{N+1}{N_\circ + 1}\,{\lvert \Ch \rvert}\,,$$
where $N_\circ$ denotes the number of observed points in the interior of $\Ch$. We are able to prove a sharp oracle inequality, comparing the risk of this estimator to that of ${\widehat{{\vartheta}}}_{oracle}$. Here, very recent and advanced results by [@reitzner2003random; @pardon2011; @reitzner2015] on the variance of the number of points $N_{\partial}$ on the boundary of $\Ch$ and the missing volume ${\lvert C\setminus\Ch \rvert}$ are of key importance. This fascinating interplay between stochastic geometry and statistics prevails throughout the work. The lower bound showing that ${\widehat{{\vartheta}}}$ is indeed minimax-optimal is proved in Theorem \[lower\_bound\_theorem\] by adapting the proof of the lower bound in the uniform model by [@gayraud1997]. A small simulation study is presented in Section \[numerical\_study\_section\]. Moreover, we propose to enlarge the convex hull set by the factor $((N + 1)/(N_\circ + 1))^{1/d}$ and we study its error as an estimator of the set $C$ itself. The proof of Lemma \[lemma\_bias\_plug\_in\] is deferred to the Appendix.

Digression on Poisson Point Processes {#theoretical_digression_section}
=====================================

Most of the results and notation are adapted from [@karr1991point]. We fix a compact convex set $\mathbf E$ in $\R^d$ with non-empty interior as a state space and denote by $\mathcal{E}$ its Borel $\sigma$-algebra. We define the family of convex subsets $\C = \{C \subseteq \mathbf{E}, \text{convex}, \text{closed} \}$ (this implies that all sets in $\C$ are compact) and the family of compact subsets $\K = \{K \subseteq \mathbf{E}, \text{compact} \}$. It is natural to equip the space $\C$ (resp. $\K$) with the Hausdorff-metric $d_H$ and its Borel $\sigma$-algebra $\B_{\C}$ (resp. $\B_{\K}$). Then $(\C,d_H)$ is a compact and thus separable space and the mapping $(x_1,\ldots,x_k)\mapsto \text{conv}\{x_1,\ldots,x_k\}$, which generates the convex hull of points $x_i\in{\mathbf E}$, is continuous from ${\mathbf E}^k$ to $(\C,d_H)$. On $(\mathbf{E}, \mathcal{E})$ we define the set of point measures $\mathbf{M} = \{m \text{ measure on } \mathcal{E}\,:\,m(A) \in \mathbb{N},\,\, \forall A \in \mathcal{E}\}$ equipped with the $\sigma$-algebra $\mathcal{M} = \sigma(m\mapsto m(A), A \in\mathcal{E} )$. Let $C_c^{+}(\mathbf{E})$ be the collection of continuous functions $\mathbf{E} \mapsto [0,\infty)$ with compact support. A useful topology for $\mathbf{M}$ is the *vague topology* which makes $\mathbf{M}$ a complete, separable metric space, cf. Section 3.4 in [@resnick2013extreme]. 
A sequence of point measures $m_n \in \mathbf{M}$ then converges vaguely to a limit $m \in \mathbf{M}$ if and only if $m_n[f] \to m[f]$ for all $f \in C_c^{+}(\mathbf{E}) $ where $m[f] = \int_{\mathbf{E}} f \,d m $. Let $( {\varOmega}, \F, \P)$ be an abstract probability space. We call a measurable mapping $ \N: {\varOmega}\to \mathbf{M}$ a Poisson point process (PPP) of intensity $\lambda>0$ on $C\in\mathbf{C}$ if

-   for any $A \in \mathcal{E}$, we have $\N(A) \sim \text{Poiss}\bigl(\lambda{\lvert A\cap C \rvert}\bigr)$, where ${\lvert A\cap C \rvert}$ denotes the Lebesgue measure of $A\cap C$;

-   for all mutually disjoint sets $A_1,..., A_n \in \mathcal{E}$, the random variables $\N(A_1),...,\N(A_n) $ are independent.

For statistical inference, we assume the Poisson point process to be defined on a set of non-zero Lebesgue measure, i.e. $|C| >0$. A more constructive and intuitive representation of the PPP $\N$ is $\N = \sum_{i = 1}^{N} \delta_{X_i}$ for $N \sim \text{Poiss}(\lambda{\lvert C \rvert})$ and i.i.d. random variables $(X_i)$, independent of $N$ and distributed uniformly on $C$, i.e. $\P(X_i \in A) = |A \cap C| / |C|$, so that $\N(A) = \sum_{i = 1}^{N} {\bf 1}( X_i \in A )$ for any $A \in \mathcal{E}$.

We consider the convex hull of the PPP points ${\widehat{C}} : \mathbf{M } \to \mathbf{C}$ defined by ${\widehat{C}}(\N) : =\text{conv}\{X_1, ..., X_N\}$, which by the above continuity property of the convex hull is a random element with values in the Polish space $(\C,d_H)$, see also [@davis1987convex] for a detailed study of the continuity of the convex hull. For a short notation, we shall further write $\Ch$ to denote the convex hull of the process $\N$. In the sequel, conditional expectations and probabilities with respect to ${\widehat{C}}$ are thus well defined. We can also evaluate the probability $$\P_C\bigl({\widehat{C}} \in A\bigr) = \sum_{k = 0}^{\infty} \frac{\ex^{-\lambda{\lvert C \rvert}} \lambda^k}{k!} \int_{C^k} {\bf 1}(\text{conv}\{x_1, ..., x_k\} \in A ) d(x_1,..., x_k)$$ for $A \in \B_{\C}$. Usually, we only write the subscript $C$ or sometimes $(C,\lambda)$ when different probability distributions are considered simultaneously. The likelihood function $\frac{d \P_{C,\lambda}}{d \P_{\mathbf{E},\lambda_0}}$ for $C \in{\mathbf C}$ and $\lambda,\lambda_0>0$ is then given by
$$\begin{aligned}
\label{fPPeb}
\frac{d \P_{C,\lambda}}{d \P_{\mathbf{E},\lambda_0}}(X_1,...,X_N) &= \ex^{\lambda_0{\lvert \mathbf{E} \rvert}-\lambda{\lvert C \rvert}}\,(\lambda/\lambda_0)^N\, {\bf 1}\bigl(\forall\, i=1,...,N:X_i\in C\bigr)\\
&= \ex^{\lambda_0{\lvert \mathbf{E} \rvert}-\lambda{\lvert C \rvert}}\,(\lambda/\lambda_0)^N\, {\bf 1}\bigl(\Ch\subseteq C\bigr),\end{aligned}$$
cf. Thm. 1.3 in [@Kutoyants1998]. For the last line, we have used that a point set is in $C$ if and only if its convex hull is contained in $C$. For the set-indexed process $(\N(K),K\in\K)$ we define its natural set-indexed filtration
$$\F_K \eqdef \sigma\bigl(\{\N(U)\,;\; U \subseteq K,\, U \in \K\}\bigr)$$
for any $ K \in \K$. The filtration $(\F_K, K \in \K)$ possesses the following properties:

-   monotonicity: $\F_{K_1} \subseteq \F_{K_2} $ for any $K_1, K_2 \in \K$ with $K_1 \subseteq K_2 $,

-   continuity from above: $\F_K = \cap_{i=1}^{\infty}\F_{K_i}$ if $K_i \downarrow K $;

cf. [@zuyev1999stopping]. By construction, the restriction $\N_K = \N(\cdotp \cap K)$ of the point process $\N$ onto $K \in \mathbf{K}$ is $\F_K$-measurable (in fact, $\F_K = \sigma(\{\N_K(U); U \in \mathbf{K}\})$). In addition, it can be easily seen that $\N_K$ is a Poisson point process in $\mathbf{M}$, cf. the Restriction Theorem in [@kingman1992poisson], and thus $\Ch(\N_K) = \text{conv}(\{X_1,\ldots,X_N\}\cap K)$ is by the above arguments $\F_K$-measurable. 
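For illustration, the constructive representation above can be simulated directly; the following minimal sketch uses the unit square as $C$ and `scipy.spatial.ConvexHull`, and these choices are illustrative assumptions rather than the implementation used for the numerical study of Section \[numerical\_study\_section\].

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def simulate_ppp_square(lam):
    """PPP of intensity lam on C = [0,1]^2 (so |C| = 1):
    draw N ~ Poisson(lam * |C|) points uniformly on C."""
    n = rng.poisson(lam * 1.0)
    return rng.uniform(size=(n, 2))

def hull_statistics(points):
    """Return |Chat|, N, N_boundary and N_interior for a point cloud.
    For uniform points, the points on the boundary of the hull are
    almost surely exactly its vertices."""
    n = len(points)
    if n < 3:                      # degenerate hull of zero volume
        return 0.0, n, n, 0
    hull = ConvexHull(points)
    n_bdry = len(hull.vertices)
    return hull.volume, n, n_bdry, n - n_bdry   # in 2D, .volume is the area

pts = simulate_ppp_square(lam=500.0)
vol, N, N_bdry, N_int = hull_statistics(pts)
print(vol, N, N_bdry, N_int)
```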
A random compact set $\Kh$ is a measurable mapping $\Kh: (\mathbf{M},\mathcal{M}) \to (\K,\B_\K)$. Note that [@zuyev1999stopping] defines a random compact set as a measurable mapping from $ (\mathbf{M},\mathcal{M})$ to $(\K,\sigma_{\K})$ where $\sigma_{\K}$ is the so-called *Effros* $\sigma$-algebra generated by the sets $\{ F \in \K: F \cap K \neq \emptyset \}$, $K \in \K$. Thanks to Thm. 2.7 in [@molchanov2006], the Effros $\sigma$-algebra $\sigma_{\K}$ induced on the family of compact sets $\K$ coincides with the Borel $\sigma$-algebra $\B_\K$, and we prefer to stick to the first definition of a random compact set for convenience. Next, we recall the definition of stopping sets from [@rozanov1982markov] in complete analogy with stopping times. A random compact set $\Kh$ is called an $\F_K$-stopping set if $\{\Kh \subseteq K\} \in \F_K$ for all $K \in \K$. The sigma-algebra of $\Kh$-history is defined as $\F_{\Kh} = \{A \in \mathfrak{F}: A \cap \{ \Kh \subseteq K \} \in \F_K\,\,\, \forall K \in \K \}$, where $\mathfrak{F} = \sigma(\F_K; K \in \K)$. For a set $A{\subseteq}{\mathbf E}$ let $A^c$ denote its complement.

The set ${\widehat{\Kh}}\eqdef\overline{\Ch^c}$, the closure of the complement of the convex hull, is an $(\F_K)$-stopping set. We claim ${\widehat{\Kh}}\subseteq K$ if and only if $K^c\subseteq \text{conv}(\{X_1,\ldots,X_N\}\cap K)$. Indeed, if ${\widehat{\Kh}}\subseteq K$ holds, then the boundary $\partial\Ch=\partial{\widehat{\Kh}}$ is in $K$ which implies $\text{conv}(\{X_1,\ldots,X_N\}\cap K)=\Ch$. Consequently, $K^c\subseteq {\widehat{\Kh}}^c\subseteq \Ch=\text{conv}(\{X_1,\ldots,X_N\}\cap K)$ holds. Conversely, $K^c\subseteq \text{conv}(\{X_1,\ldots,X_N\}\cap K)$ implies immediately $K^c\subseteq \Ch$ and thus $\Ch^c\subseteq K$. Since $K$ is closed, we obtain ${\widehat{\Kh}}\subseteq K$. Since $\{X_1,\ldots,X_N\}\cap K$ are the realisations of the point process inside $K$ and the convex hull is measurable, we conclude $\{K^c\subseteq \text{conv}(\{X_1,\ldots,X_N\}\cap K)\}\in\F_K$.

We shall further use the following short notation: $N =\N(C)$ denotes the total number of points, $N_\circ = \N({\Ch}^\circ)$ the number of points in the interior of the convex hull $\Ch$ and $ N_{\partial} = \N(\partial\Ch)=\N(\partial{\widehat{\Kh}}) $ the number of points on the boundary of the convex hull. For asymptotic bounds we write $f(x) = O(g(x))$ or $f(x) \lesssim g(x)$ if $f(x)$ is bounded by a constant multiple of $g(x)$ and $f(x) \thicksim g(x)$ if $f(x) \lesssim g(x)$ as well as $g(x) \lesssim f(x)$.

Oracle case: intensity $\lambda$ is known {#oracle_case_section}
=========================================

For a PPP on $C\in{\mathbf C}$ with intensity $\lambda>0$, we know $N\sim\operatorname{Poiss}(\lambda{\lvert C \rvert})$. In the oracle case, when the intensity $\lambda $ is known, $N/\lambda$ estimates ${\lvert C \rvert}$ without bias and yields the classical parametric rate in $\lambda$:
$$\E\bigl[(N/\lambda - {\lvert C \rvert})^2\bigr] = \lambda^{-2}\Var(N) = \frac{{\lvert C \rvert}}{\lambda}\,.$$
Another natural idea might be to use the plug-in estimator $ |\Ch|$ whose error is given by the missing volume and satisfies
$$\E\bigl[({\lvert \Ch \rvert}-{\lvert C \rvert})^2\bigr] = \E\bigl[{\lvert C\setminus\Ch \rvert}^2\bigr] = O\bigl({\lvert C \rvert}^{2(d-1)/(d+1)}\lambda^{-4/(d+1)}\bigr),$$
where the bound is obtained similarly to \[thm\_upper\_bound\] and \[EcbCh\] below. This means that its error is of smaller order than $\lambda^{-1}$ for $d{\leqslant}2$, but larger for $d{\geqslant}4$. For any $d{\geqslant}2$, however, both convergence rates are worse than the minimax-optimal rate $\lambda^{-(d+3)/(d+1)}$, established below. 
The way to improve these estimators is to observe that by the likelihood representation for $\lambda=\lambda_0$ and the Neyman factorisation criterion the convex hull is a sufficient statistic. Consequently, by the Rao-Blackwell theorem, the conditional expectation of $ N / \lambda$ given the convex hull $\Ch $ is an estimator with smaller mean squared error (MSE). The number of points $N$ can be split into the number $N_{\partial}$ of points on the boundary and the number $N_\circ $ of points in the interior of the convex hull. The following theorem is essential in deriving the oracle estimator. Although the statement of the theorem is quite intuitive and already used in [@privault2012invariance], the proof turns out to be nontrivial and is deferred to the Appendix.

\[LemMeasCond\] The number $N_{\partial}$ of points on the boundary of the convex hull is measurable with respect to the sigma-algebra of ${\widehat{\Kh}}$-history $\F_{{\widehat{\Kh}}}$. The number of points in the interior of the convex hull $N_\circ$ is, conditionally on $ \F_{{\widehat{\Kh}}}$, Poisson-distributed: $$\label{NccCs} N_\circ \cond \F_{{\widehat{\Kh}}} \sim \text{Poiss}(\lambda_\circ)\text{ with }\lambda_\circ \eqdef \lambda | \Ch |.$$ In addition, we have $\F_{{\widehat{\Kh}}} = \sigma(\Ch)$, where the latter is the sigma-algebra $ \sigma(\{\Ch \subseteq B, B \in \C\})$ completed with the null sets in $\mathfrak{F}$.

With Theorem \[LemMeasCond\] at hand, we obtain the *oracle* estimator
$$\label{sbvcbxcvb} {\widehat{{\vartheta}}}_{oracle} \eqdef \E\Bigl[\frac{N}{\lambda}\,\Big|\,\Ch\Bigr] = {\lvert \Ch \rvert}+ \frac{N_{\partial}}{\lambda}\,,$$
where conditioning on $\Ch$ means conditioning on $\sigma(\Ch)=\F_{{\widehat{\Kh}}}$.

\[ThmOracle\] For known intensity $\lambda > 0$, the oracle estimator ${\widehat{{\vartheta}}}_{oracle}$ is unbiased and of minimal variance among all unbiased estimators (UMVU). It satisfies $$\Var({\widehat{{\vartheta}}}_{oracle})= \frac{1}{\lambda} \E[ |C \setminus \Ch|]\,.$$ Its worst case mean squared error over $\C$ decays as $\lambda\uparrow \infty$ like $\lambda^{-(d+3)/(d+1)}$ in dimension $d$:
$$\limsup_{\lambda\to\infty}\ \lambda^{(d+3)/(d+1)}\sup_{C \in \C,\, {\lvert C \rvert}>0} \Bigl\{{\lvert C \rvert}^{-(d-1)/(d+1)}\,\E\bigl[({\widehat{{\vartheta}}}_{oracle}-{\lvert C \rvert})^2\bigr] \Bigr\} < \infty\,.$$

\[RemAdapt\] The theorem implies that the rate of convergence for the RMSE (root mean-squared error) of the estimator ${\widehat{{\vartheta}}}_{oracle}$ is $\lambda^{-(d+3)/(2d+2)}$. In Theorem \[lower\_bound\_theorem\] below, we prove that the lower bound on the minimax risk in the PPP model is of the same order implying that the rate is minimax-optimal. Even more, the oracle estimator is *adaptive* in the sense that its rate is faster if the missing volume decays faster. In particular, for polytopes $C$ it is shown in [@barany1988convex] and independently in [@dwyer1988convex] that $\E[ |C \setminus \Ch|] \thicksim \lambda ^{-1} (\log (\lambda|C|))^{d-1}, $ which implies a faster (almost parametric) rate of convergence for the RMSE of the oracle estimator.

The unbiasedness follows immediately from the definition \[sbvcbxcvb\]. By the law of total variance, we obtain
$$\begin{aligned}
\label{bcbnfwe}
\Var({\widehat{{\vartheta}}}_{oracle}) &= \Var\Bigl(\frac{N}{\lambda}\Bigr) - \E\Bigl[\Var\Bigl(\frac{N}{\lambda}\,\Big|\,\Ch\Bigr)\Bigr] = \frac{{\lvert C \rvert}}{\lambda} - \frac{\E[{\lvert \Ch \rvert}]}{\lambda}\\
&= \frac{\E[{\lvert C\setminus\Ch \rvert}]}{\lambda}\,.\end{aligned}$$
Proposition \[PropSuffCompl\] below affirms that the convex hull $\Ch$ is not only a sufficient, but also a complete statistic such that by the Lehmann-Scheffé theorem, the estimator ${\widehat{{\vartheta}}}_{oracle}$ has the UMVU property. Finally, we bound the expectation of the missing volume $ |C \setminus \Ch|$ by Poissonisation, i.e. 
using that the convex hull $\Ch$ in the PPP model conditionally on the event $\{N = k\}$ is distributed as the convex hull $\Ch_k = \text{conv}\{X_1, ..., X_k\}$ in the model with $k$ uniform observations on $C$, for which the following upper bound is known (e.g., [@barany1988convex]):
$$\label{thm_upper_bound} \sup_{C \in \C,\, {\lvert C \rvert}>0}\ \frac{\E\bigl[{\lvert C\setminus\Ch_k \rvert}\bigr]}{{\lvert C \rvert}} = O\bigl(k^{-2/(d+1)}\bigr)\,.$$
Thus, it follows by a Poisson moment bound
$$\begin{aligned}
\label{EcbCh}
\sup_{C \in \C,\, {\lvert C \rvert}>0}{\lvert C \rvert}^{-(d-1)/(d+1)}\,\E\bigl[{\lvert C\setminus\Ch \rvert}\bigr] &= \sup_{C \in \C,\, {\lvert C \rvert}>0}{\lvert C \rvert}^{-(d-1)/(d+1)}\sum_{k=0}^{\infty}\P(N=k)\,\E\bigl[{\lvert C\setminus\Ch_k \rvert}\bigr]\\
&= O\bigl(\lambda^{-2/(d+1)}\bigr)\,.\end{aligned}$$
This bound, together with the variance identity \[bcbnfwe\], yields the assertion.

The lower bound for the risk in the PPP framework can be derived from the lower bound in the uniform model with a fixed number of observations, see Thm. 6 in [@gayraud1997].

\[lower\_bound\_theorem\] For estimating $|C|$ in the PPP model with parameter class $\C$, the following asymptotic lower bound holds
$$\label{lower_bound_statement} \liminf_{\lambda\to\infty}\ \lambda^{(d+3)/(d+1)}\,\inf_{{\widehat{{\vartheta}}}_\lambda}\ \sup_{C \in \C}\ \E_C\bigl[({\lvert C \rvert}-{\widehat{{\vartheta}}}_\lambda)^2\bigr] > 0\,,$$
where the infimum extends over all estimators ${\widehat{{\vartheta}}}_\lambda$ in the PPP model with intensity $\lambda$.

We use that an estimator ${\widehat{{\vartheta}}}_\lambda$ in the PPP model is an estimator in the uniform model on the event $\{N = n \}$. Then, due to the lower bound in the uniform model in [@gayraud1997], for a constant $c> 0$ and for all $n \in \mathbb{N}$ there exists a set $C_n \in \C$ with $|C_n| \sim 1$ such that for all $ k = 1,...,n$,
$$\E_{C_n}\bigl[({\lvert C_n \rvert}-{\widehat{{\vartheta}}}_\lambda)^2\,\big|\,N=k\bigr] > c\, n^{-(d+3)/(d+1)}\,,\quad\text{a.s.}$$
Then, in the PPP model for $C = C_{ \lfloor \lambda \rfloor}$ with $\lambda |C| {\geqslant}1$, we have
$$\begin{aligned}
\E_{C}\bigl[({\lvert C \rvert}-{\widehat{{\vartheta}}}_\lambda)^2\bigr] &= \sum_{k{\geqslant}0}\E_{C}\bigl[({\lvert C \rvert}-{\widehat{{\vartheta}}}_\lambda)^2\,\big|\,N=k\bigr]\,\P(N=k)\\
&{\geqslant}\sum_{1{\leqslant}k{\leqslant}n}\E_{C}\bigl[({\lvert C \rvert}-{\widehat{{\vartheta}}}_\lambda)^2\,\big|\,N=k\bigr]\,\P(N=k)\\
&> c\, n^{-(d+3)/(d+1)}\bigl(1-\P(N=0)-\P( N > n)\bigr)\\
&\thicksim \lambda^{-(d+3)/(d+1)}\,,\end{aligned}$$
with $n = \lfloor\lambda\rfloor$, applying Chernoff’s inequality to $N \sim \text{Poiss}( \lambda |C| )$ for the last line. Thus, the lower bound follows.

\[PropSuffCompl\] For known intensity $\lambda>0$, the convex hull $\Ch = \text{conv}\{X_1, ..., X_N\}$ is a complete statistic.

We need to show the implication
$$\forall\, C \in \C:\ \E_C\bigl[T(\Ch)\bigr]= 0 \quad\Longrightarrow\quad T(\Ch) = 0\ \ \P_{\mathbf E}\text{-a.s.}$$
for any $\B_{\C}$-measurable function $T: \C \to \R$. From the likelihood in \[fPPeb\] for $\lambda=\lambda_0$, we derive
$$\E_C\bigl[T(\Ch)\bigr] = \ex^{\lambda({\lvert \mathbf{E} \rvert}-{\lvert C \rvert})}\,\E_{\mathbf E}\bigl[T(\Ch)\,{\bf 1}(\Ch\subseteq C)\bigr]\,.$$
Since $\exp(\lambda{\lvert {\mathbf E}\setminus C \rvert})$ is deterministic, $\E_C\bigl[T(\Ch)\bigr] = 0$ for all $C\in{\C}$ implies
$$\forall\, C \in \C:\quad \E_{\mathbf E}\bigl[T(\Ch)\,{\bf 1}(\Ch\subseteq C)\bigr]= 0\,.$$
For $C \in \C $, define the family of convex subsets of $C$ as $[C] = \{A\in \C | A \subseteq C \}$ such that ${\widehat{C}}\subseteq C \iff {\widehat{C}}\in[C]$. Splitting $T = T^{+} - T^{-}$ with non-negative $\B_{\C}$-measurable functions $T^{+}$ and $T^{-}$, we infer that the measures $\mu^\pm(B)=\E_{\mathbf E}[T^{\pm}({\widehat{C}}){\bf 1}({\widehat{C}} \in B)] $, $B\in\B_{\C}$, agree on $\{[C] \,|\, C \in \C \}$. Note that the brackets $\{[C] | C \in \C \}$ are $\cap$-stable due to $[A] \cap [C] = [A \cap C]$ and $A \cap C \in \C$. If the $\sigma$-algebra $\Cs$ generated by $\{[C]\,|\,C\in\C\}$ contains $\B_{\C}$, the uniqueness theorem asserts that the measures $\mu^+,\mu^-$ agree on all Borel sets in $ \B_\C$, in particular on $\{T > 0\} $ and $\{T < 0\} $, which entails $\E_{\mathbf E}[T^+({\widehat{C}})]=\E_{\mathbf E}[T^-({\widehat{C}})]=0$. Thus, in this case, $T(\Ch) = 0$ holds $\P_{\mathbf E}$-a.s. It remains to show that $\Cs=\sigma([C],\,C\in \C)$ equals the Borel $\sigma$-algebra $\B_\C$. This can be derived as a non-trivial consequence of Choquet’s theorem, see Thm. 7.8 in [@molchanov2006], but we propose a short self-contained proof here. 
Let us define the family $\langle C \rangle = \{B \in \C | C \subseteq B\}$ of convex sets containing $C$. Then the closed Hausdorff ball with center $C$ and radius ${\varepsilon}>0$ has the representation
$$B_{\varepsilon}(C) \eqdef \{A \in \C\,|\, d_H(A, C) {\leqslant}{\varepsilon}\} = \{A \in \C\,|\, U_{-{\varepsilon}}(C) \subseteq A \subseteq U_{\varepsilon}(C) \},$$
with $U_{\varepsilon}(C)=\{x\in{\mathbf E}\,|\,\text{dist}(x,C){\leqslant}{\varepsilon}\}$, $U_{-{\varepsilon}}(C)=\{x\in C\,|\,\text{dist}(x,{\mathbf E}\setminus C){\leqslant}{\varepsilon}\}$. Noting that $U_{\varepsilon}(C),U_{-{\varepsilon}}(C)$ are closed and convex and thus in ${\mathbf C}$, we obtain
$$\label{BeC} B_{\varepsilon}(C) = \bigl\langle U_{-{\varepsilon}}(C)\bigr\rangle \cap \bigl[U_{{\varepsilon}}(C)\bigr]\,.$$
Since $(\C,d_H)$ is separable, our problem is reduced to proving that all angle sets $\langle C\rangle$ for $C\in{\mathbf C}$ are in $\Cs$. A further reduction is achieved by noting $\langle C\rangle=\bigcap_{x\in C}\langle x\rangle=\bigcap_{x\in C\cap\Q^d}\langle x\rangle$, setting $\langle x\rangle=\langle \{x\}\rangle$ for short, so that it suffices to prove $\langle x\rangle\in\Cs$ for all $x\in{\mathbf E}$.

*(Figure \[HB\]: a point $x\notin C$, the convex set $C$, the direction $v$ and the separating hyperplane $H_{\delta,v}$.)*

Now, let $x \in{\mathbf E}$ and $C \in \C$ such that $x \notin C$. Then, by the Hahn-Banach theorem, there are $\delta > 0, v \in \R^d$ such that $\langle v, c-x\rangle {\geqslant}\delta$ holds for all $c \in C $. By a density argument, we may choose $\delta\in\Q^+$ and $v\in\Q^d$. Denoting the corresponding hyperplane intersected with $\mathbf E$ by $H_{\delta,v} = \{\xi \in {\mathbf E} \, | \, \langle v, \xi-x \rangle {\geqslant}\delta \}$, see Figure \[HB\], we conclude
$$\langle x \rangle^c = \bigcup_{\delta \in \Q^{+}}\ \bigcup_{v \in \Q^d}\ \bigl[H_{\delta,v}\bigr]\,.$$
Consequently, $\langle x \rangle \in \Cs$ and thus $\B_\C\subseteq\Cs$ hold.

Unknown intensity $\lambda$: nearly unbiased estimation {#unknown_intensity_section}
=======================================================

In case the intensity $\lambda$ is unknown and the oracle estimator ${\widehat{{\vartheta}}}_{oracle}$ in \[sbvcbxcvb\] is inaccessible, the maximum-likelihood approach suggests using $ N / |\Ch|$ as an estimator for $\lambda$ in \[sbvcbxcvb\]. This yields the *plug-in* estimator for the volume,
$${\widehat{{\vartheta}}}_{plugin} \eqdef {\lvert \Ch \rvert}+ \frac{N_{\partial}}{N}\,{\lvert \Ch \rvert}\,.$$
In the unlikely event $N={\lvert \Ch \rvert}=0$, we define ${\widehat{{\vartheta}}}_{plugin}=0$. This estimator has a significant bias due to the following result, which is proved in the appendix.

\[lemma\_bias\_plug\_in\] For the bias of the plug-in MLE estimator ${\widehat{{\vartheta}}}_{plugin}$, it follows with some universal constant $c >0$
$$\label{statement_bias_plug_in} {\lvert C \rvert}- \E_C\bigl[{\widehat{{\vartheta}}}_{plugin}\bigr] \;{\geqslant}\; c\,\frac{\E\bigl[{\lvert C\setminus\Ch \rvert}\bigr]^2}{{\lvert C \rvert}}\,,\qquad C \in \C\,.$$

The maximal bias over $C\in\C$ is thus at least of order $\lambda^{-4/(d+1)}$, which is worse than the minimax rate $\lambda^{-(d+3)/(2d+2)}$ for $d>5$. Yet, in the two-dimensional finite sample study of Section \[numerical\_study\_section\] below, its performance is quite convincing. We surmise that ${\widehat{{\vartheta}}}_{plugin}$ is rate-optimal for $d {\leqslant}5$, but we leave that question aside because the final estimator we propose will be nearly unbiased and will satisfy an *exact* oracle inequality. In particular, it is rate-optimal in any dimension. The new idea is to exploit that the number of interior points of $\Ch$ satisfies $N_\circ \cond \Ch \sim \text{Poiss}(\lambda_\circ)$, see Theorem \[LemMeasCond\]. 
There is no conditionally unbiased estimator for $\lambda_\circ^{-1}$ based on observing $N_\circ \cond \Ch \sim \text{Poiss}(\lambda_\circ)$ for $\lambda_\circ$ ranging over some open (non-empty) interval. Otherwise, an estimator ${\widetilde{\mu}}(N_\circ)$ for $\lambda_\circ^{-1}$ would satisfy $\E[{\widetilde{\mu}}(N_\circ) | \Ch] =\lambda_\circ^{-1} $ implying \_[k = 0]{}\^ (k) \^[-\_]{} = \_\^[-1]{} \_[k = 0]{}\^ (k) = \_[k = 0]{}\^ . The coefficient for the constant term in the left and right power series would thus differ ($0$ versus $1$), in contradiction with the uniqueness theorem for power series. We provide an almost unbiased estimator for $\lambda_\circ^{-1}$ by noting that the first jump time of a time-indexed Poisson process with intensity $\nu$ is $\operatorname{Exp}(\nu)$-distributed and thus has expectation $\nu^{-1}$. Taking conditional expectation of the first jump time with respect to the value of the Poisson process at time 1, we conclude that $${\widehat{\mu}}(N_\circ, \lambda_\circ) \eqdef \begin{cases} (N_\circ+1)^{-1},& \text{ for }N_\circ{\geqslant}1,\\ 1+\lambda_\circ^{-1},&\text{ for } N_\circ=0\end{cases}$$ satisfies $\E[{\widehat{\mu}}(N_\circ,\lambda_\circ )|\Ch]=\lambda_\circ^{-1}$. Omitting the term $\lambda_\circ^{-1}$, depending on $\lambda_\circ$, in the unlikely case $N_\circ=0$, we define our final estimator $${\widehat{{\vartheta}}} \eqdef | \Ch | +\frac{ N_{\partial}}{N_\circ + 1} | \Ch | \,.$$ For the proofs, we also define the *pseudo*-estimator $${\widehat{{\vartheta}}}_{pseudo} \eqdef | \Ch | + | \Ch |N_{\partial}\Big(\frac{1}{N_\circ + 1}+ \frac{\ex^{-\lambda_\circ}}{\lambda_\circ}\Big)\,.$$ \[estimator\_bias\] The pseudo-estimator ${\widehat{{\vartheta}}}_{pseudo}$ is unbiased and the estimator ${\widehat{{\vartheta}}}$ is asymptotically unbiased in the sense that with constants $c_1, c_2 > 0$ depending on $d$, $d>1$, whenever $\lambda{\lvert C \rvert}{\geqslant}1$: \[CEt\] 0 |C | - c\_1 |C| (-c\_2(|C|)\^[(d-1)/(d+1)]{} ), C . We have \[vxczvc214\] = \^[-\_]{}\_\^[-1]{} (\_[k=0]{}\^+1)=\_\^[-1]{} , which by ${\lvert \Ch \rvert}\lambda_\circ^{-1}=\lambda^{-1}$ and $\E[{\widehat{{\vartheta}}}_{oracle}]={\lvert C \rvert}$ implies unbiasedness of ${\widehat{{\vartheta}}}_{pseudo}$. Thus, it follows that $$|C|-\E[{\widehat{{\vartheta}}}] = \E \bigl[ | \Ch |N_{\partial} \ex^{-\lambda_\circ}\lambda_\circ^{-1} \bigr] = \lambda^{-1} \E \bigl[ N_{\partial} \ex^{-\lambda |\Ch|} \bigr] \,\,.$$ We exploit the deviation inequality from Thm. 1 in [@Bru14c] and derive the bound for the exponential moment of the missing volume in the model with fixed number of points b\_1 , k 2, for positive constants $b_1, b_2$, depending on the dimension according to [@Bru14c]. For the cases $k = 0,1$, we have the identity $\E[\exp{(\lambda|C \setminus \Ch_k|)}] = \exp{(\lambda|C|)} $. By Poissonisation, similarly to , we derive \[sCPCbl\] (-|C|) b\_3 , for positive constants $b_3, c_2$, depending on the dimension. Hence, using the Cauchy-Schwarz inequality and the bound for the moments of the points on the convex hull, \[MomNumPoints\] = O( (|C|)\^[q(d-1)/(d+1)]{}) , q , see e.g. Section 2.3.2 in [@brunel:tel-01066977], we derive for a constant $c_1 > 0$ \^[-1]{} & & \^[-1]{} \^[-|C|]{} \^[1/2]{} \^[1/2]{}\ & & c\_1 \^[-2/(d+1)]{}|C|\^[(d-1)/(d+1)]{}\ & & c\_1 |C| . 
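For concreteness, ${\widehat{{\vartheta}}}$ can be evaluated directly on a simulated sample. The following sketch is an illustration only and not part of the original analysis: it assumes $C=[0,1]^2$, so that $N\sim\text{Poiss}(\lambda)$ and the observations are uniform on $C$, uses `scipy.spatial.ConvexHull` to obtain ${\lvert \Ch \rvert}$ and the vertex count $N_\partial$, and ignores the degenerate case of fewer than $d+1$ points. The plug-in estimator of this section is included for comparison.

```python
import numpy as np
from scipy.spatial import ConvexHull

def volume_estimators(points):
    """Return |Chat|, the plug-in estimator and the nearly unbiased estimator
    theta_hat for one realisation of the point process."""
    n_total = len(points)
    hull = ConvexHull(points)            # convex hull Chat of the sample
    vol_hull = hull.volume               # |Chat| (ConvexHull.volume is the area for d = 2)
    n_boundary = len(hull.vertices)      # N_partial: points that are vertices of Chat
    n_interior = n_total - n_boundary    # N_circ: points in the interior of Chat
    theta_plugin = vol_hull * (1.0 + n_boundary / n_total)
    theta_hat = vol_hull * (1.0 + n_boundary / (n_interior + 1.0))
    return vol_hull, theta_plugin, theta_hat

# toy example: C = [0,1]^2 with |C| = 1 and intensity lambda = 500
rng = np.random.default_rng(0)
n = rng.poisson(500.0)
sample = rng.uniform(size=(n, 2))
print(volume_estimators(sample))         # all three estimates close to |C| = 1
```

Both correction terms are driven by the number of hull vertices, so the plug-in and the nearly unbiased version differ little whenever $N_\partial \ll N$, in line with the discussion in Section \[numerical\_study\_section\].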
\[lexlC\] The next step of the analysis is to compare the variance of the [pseudo]{}-estimator ${\widehat{{\vartheta}}}_{pseudo} $ with the variance of the oracle estimator ${\widehat{{\vartheta}}}_{oracle} $, which is UMVU. \[estimator\_variance\] The following oracle inequality holds with a universal constant $c>0$ and dimension-dependent constants $c_1,c_2 >0$ for all $C \in \C$ with $\lambda{\lvert C \rvert}{\geqslant}1$: (\_[pseudo]{} )(1 + c (, C))(\_[oracle]{}) + r(,C), where $$\begin{aligned} \alpha(\lambda, C) &= \frac{1}{{\lvert C \rvert}} \Bigl( \frac{1}{\lambda} + \frac{ \Var({\lvert C\setminus\Ch \rvert} )}{ \E[ {\lvert C\setminus\Ch \rvert}]} + \E[ {\lvert C\setminus\Ch \rvert}] \Bigr)\,,\\ r(\lambda,C) &=c_1 (\lambda |C |)^{2(d-1)/(d+1)} \exp{\big(-c_2(\lambda|C|)^{(d-1)/(d+1)}\big)}\,.\end{aligned}$$ By the law of total variance, we obtain (\_[pseudo]{} ) &=& () +\ &=& (\_[oracle]{}) + . In view of $N_\circ\,|\,\Ch \sim \text{Poiss}(\lambda_\circ)$, a power series expansion gives $$\E[(N_\circ+1)^{-2}|\,\Ch]=\lambda_\circ^{-1}\ex^{-\lambda_\circ}\int_0^{\lambda_\circ}(\ex^{t}-1)/t\,dt\,.$$ The conditional variance can for $\lambda_\circ\to\infty$ thus be bounded by $$\begin{aligned} \Var((1+N_\circ)^{-1}|\,\Ch) &{\leqslant}\lambda_\circ^{-1}\ex^{-\lambda_\circ}\int_{\lambda_\circ/2}^{\lambda_\circ}\ex^{t}/t\,dt-(\lambda_\circ)^{-2}+O(\ex^{-\lambda_\circ/4})\\ &=(\lambda_\circ)^{-1}\int_{0}^{\lambda_\circ/2} \ex^{-s}\Big(\frac1{\lambda_\circ-s}-\frac1{\lambda_\circ} \Big)\,ds+O(\ex^{-\lambda_\circ/4})\\ &=\lambda_\circ^{-3}(1+o(1))\,,\end{aligned}$$ where we have used $(\lambda_\circ-s)^{-1}-\lambda_\circ^{-1}=s\lambda_\circ^{-1}(\lambda_\circ-s)^{-1}$, $\int_0^\infty s\ex^{-s}ds=1$ and dominated convergence. Thanks to $(N_\circ+1)^{-1}\in[0,1]$ we conclude for some constant $c{\geqslant}1$ $$\Var((1+N_\circ)^{-1}|\,\Ch){\leqslant}c(1\wedge \lambda_\circ^{-3}).$$ Consequently, we have (\_[pseudo]{} ) && (\_[oracle]{}) +\ &=& (\_[oracle]{}) + c , and with & & 1 + c\ & = &1 + c . \[fdfsadf1\] Define the ‘good’ event $\mathcal{G} = \{|\Ch| > |C|/2\}$, on which $\bigl( (\lambda | \Ch |)^2 \wedge (\lambda |\Ch|)^{-1}\bigr) {\leqslant}2(\lambda |C|)^{-1}$. On the complement $\mathcal{G}^c$, we infer from $A^2\wedge A^{-1}{\leqslant}1$ for $A>0$ &&\ &&\^[1/2]{} ¶(|C| |C|/2)\^[1/2]{}\ &&c\_1 (|C |)\^[2(d-1)/(d+1)]{} , \[EBNchlC\] for some positive constant $c_1$ and $c_2$, using and . It remains to estimate the upper bound on $\mathcal{G}$ = ( + ) . \[laCfCf\] Using the identity (17) in [@reitzner2015] for the factorial moments for the number of vertices $N_{\partial}$, we derive $\Var (N_{\partial}) {\leqslant}\lambda^2 \Var({\lvert C\setminus\Ch \rvert}) + \lambda\E[ {\lvert C\setminus\Ch \rvert}] $ in view of $ \E [N_{\partial}] = \lambda \E[ {\lvert C\setminus\Ch \rvert}]$. Thus, is bounded by ( + + ), which yields the assertion. As a result, we obtain an *oracle inequality* for the estimator ${\widehat{{\vartheta}}}$. \[new\_estimator\_risk\] It follows for the risk of the estimator ${\widehat{{\vartheta}}}$ for all $C \in \C$ whenever $\lambda{\lvert C \rvert}{\geqslant}1$: \^[1/2]{} (1+c (, C))\^[1/2]{} +r(,C), with constant $c > 0$ and $\alpha(\lambda, C),r(\lambda,C)$ from Theorem \[estimator\_variance\]. For any $C\in\C$ and $\lambda>0$ we have $\alpha(\lambda,C){\leqslant}1+\frac{1}{\lambda{\lvert C \rvert}}$. 
In view of $\lambda_\circ=\lambda{\lvert \Ch \rvert}$, we have ${\widehat{{\vartheta}}} = {\widehat{{\vartheta}}}_{pseudo} - \lambda^{-1}N_{\partial}\ex^{-\lambda{\lvert \Ch \rvert}}$ and we derive as in and with some constants $c_1,c_2>0$ $$\begin{aligned} \E[({\widehat{{\vartheta}}} -{\widehat{{\vartheta}}}_{pseudo})^2] &{\leqslant}\lambda^{-2}\E[N_{\partial}^4]^{1/2}\E[\ex^{-4\lambda{\lvert \Ch \rvert}}]^{1/2} {\leqslant}c_1^2 \exp{\big(-2c_2 (\lambda{\lvert C \rvert})^{(d-1)/(d+1)}\big)}.\end{aligned}$$ To establish the oracle inequality, we apply the triangle inequality in $L^2$-norm together with Theorems \[ThmOracle\] and \[estimator\_variance\]. The universal bound on $\alpha(\lambda,C)$ follows from the rough bound $\E[{\lvert C\setminus\Ch \rvert}^2]{\leqslant}{\lvert C \rvert}\E[{\lvert C\setminus\Ch \rvert}]$. Note that the remainder term $r(\lambda,C)$ is exponentially small in $\lambda{\lvert C \rvert}$. Therefore, an immediate implication of Theorem \[new\_estimator\_risk\] is that asymptotically our estimator ${\widehat{{\vartheta}}}$ is minimax rate-optimal in all dimensions, where the lower bound is proved in the next section. Yet, even more is true: the oracle inequality is in all well-studied cases *exact* in the sense that $\alpha(\lambda,C)\to 0$ holds for $\lambda\to\infty$, so that the UMVU risk of ${\widehat{{\vartheta}}}_{oracle}$ is attained asymptotically. \[LemOracle\] We have tighter bounds on $\alpha(\lambda, C)$ from Theorem \[estimator\_variance\] in the following cases:

1. for $d = 1,2$ and $ C \in \C$ arbitrary: $ \alpha(\lambda, C) \lesssim (\lambda |C|)^{-2/(d+1)}$,
2. for $ d {\geqslant}2$, $C$ with $C^2$-boundary of positive curvature: $ \alpha(\lambda, C) \lesssim (\lambda |C|)^{-2/(d+1)}$,
3. for $ d {\geqslant}2$ and $C$ a polytope: $ \alpha(\lambda, C) \lesssim \lambda^{-1} (\log(\lambda|C|))^{d-1}$.

Let us restrict to $|C| = 1$; the case of general volume follows by rescaling. In view of the expectation upper bound , the main issue is to bound $ \Var({\lvert C\setminus\Ch \rvert} ) / \E[ {\lvert C\setminus\Ch \rvert}]$ uniformly. Case (1) follows from [@pardon2011], where $ \lambda \Var({\lvert C\setminus\Ch \rvert}) \thicksim \E[ {\lvert C\setminus\Ch \rvert}] $ is established. For case (2) with smooth boundary, the upper bound for the variance, $\Var({\lvert C\setminus\Ch \rvert} ) \lesssim \lambda^{-(d+3)/(d+1)}$, was obtained in [@reitzner2005central], while the lower bound for the first moment, $ \E[ {\lvert C\setminus\Ch \rvert}] \gtrsim \lambda^{-2/(d+1)}$, is due to [@schutt1994random]. For the case (3) of polytopes, the upper bound $\Var({\lvert C\setminus\Ch \rvert} ) \lesssim \lambda^{-2}(\log \lambda)^{d-1}$ was obtained in [@barany2010variance], while the lower bound for the first moment, $ \E[ {\lvert C\setminus\Ch \rvert}] \gtrsim \lambda^{-1}(\log \lambda)^{d-1}$, was proved in [@barany1988convex]. The expectation upper bound from Remark \[RemAdapt\] thus yields the result. One could conjecture that $ \lambda \Var({\lvert C\setminus\Ch \rvert}) \thicksim \E[ {\lvert C\setminus\Ch \rvert}] $ holds universally for all convex sets in arbitrary dimensions and thus that the oracle inequality is always exact. Proving such a universal bound is a challenging open problem in stochastic geometry, strongly connected to the discussion on universal variance asymptotics in terms of the floating body by [@barany2010variance].
Finite sample behaviour and dilated hull estimator {#numerical_study_section} ================================================== ![The two convex sets (blue), observations (points), their convex hulls (black lines) and dilated convex hulls (black dashed lines).[]{data-label="polygon_ellipse_figures"}](ellipse_polygon.png "fig:"){width="0.8\linewidth"}\ In this section, we demonstrate the performance of the main estimator ${\widehat{{\vartheta}}}$ numerically and compare it to other estimators including the naive estimator $|\Ch|$, the naive oracle estimator $N / \lambda$, the UMVU oracle estimator ${\widehat{{\vartheta}}}_{oracle}$ and the plug-in MLE estimator ${\widehat{{\vartheta}}}_{plugin} = |\Ch| (1+ N_{\partial}/ N)$. The main competitor from the literature is a rate-optimal estimator proposed in [@gayraud1997]. In their construction, the whole sample is divided into three equal parts $X$, $X^\prime$ and $X^{\prime\prime}$ of sizes $N^\star $ (without loss of generality $N^\star\in\mathbb{N}$) and the estimator is given by \_[G]{} = || + \_[i=1]{}\^[N\^]{} [**1**]{} (X\^\_i ), where $\Ch^{\prime\prime}$ is the convex hull of the third sample $X^{\prime\prime}$. The data points are simulated for two convex sets: an ellipse and a polygon; see Figure \[polygon\_ellipse\_figures\]. ![Monte Carlo RMSE estimates for the studied estimators for the volume of two convex sets: a polygon and an ellipse.](rmse_ellipse_polygon.png){width="\linewidth"} \[rmse\_estimates\_plot\] The RMSE estimate normalised by the area of the true set is based on $M = 500$ Monte Carlo iterations in each case. The results of the simulations are depicted in Figure \[rmse\_estimates\_plot\] where $n=\lambda{\lvert C \rvert}$ denotes the expected total number of points. The worst convergence rate of $N/\lambda$ is clearly visible. More importantly, we see that the RMSE of ${\widehat{{\vartheta}}}$ approaches the oracle risk for larger $n$ (i.e. $\lambda$) as the oracle inequality predicts. It is also conspicuous that in the studied cases the plug-in estimator ${\widehat{{\vartheta}}}_{plugin}$ and the estimator ${\widehat{{\vartheta}}}$ perform rather similarly. This is explained by the fact that the number of points $N_{\partial}$ on the convex hull increases with a moderate speed in the two-dimensional case, $\E[N_{\partial}] = O(\lambda^{1/3})$, which results in a small difference between the multiplication factors $N_{\partial} / N$ and $N_{\partial} / (N_\circ + 1) $. The simulations in two dimensions were implemented using the R package “spatstat” by [@spatstat]. To illustrate the sub-optimality of the plug-in estimator ${\widehat{{\vartheta}}}_{plugin}$ in high dimensions, we provide results of numerical simulations in dimensions $d = 3,4,5,6$ for the case when the true set $C $ is a unit cube $C = [0,1]^d$, see Figure \[d3456\]. The simulations were implemented using the R package “geometry” by [@geometry]. As an application of the obtained results, we propose a new estimator for the convex set itself: & & {x\_0+()\^[1/d]{} (x-x\_0)|x}\ &=& {x\_0+()\^[1/d]{} (x-x\_0)|x} , which is just the dilation of the convex hull $\Ch$ from its barycentre ${\widehat{x}}_0$, see the dashed polygons in Figure \[polygon\_ellipse\_figures\]. Since the convex hull is a sufficient statistic (for known $\lambda$), the points in its interior do not bear any information on the shape of $C$ itself such that the barycentre is a reasonable choice. 
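The dilation itself needs only the hull vertices, the barycentre and the volume estimate. A minimal sketch, in the same illustrative Python/SciPy setting as above (the simulations reported in this section were carried out in R): the barycentre ${\widehat{x}}_0$ of $\Ch$ is obtained from a Delaunay triangulation of the hull vertices, and the vertices are then rescaled about it by the factor $({\widehat{{\vartheta}}}/{\lvert \Ch \rvert})^{1/d}$.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

def hull_barycentre(vertices):
    """Centre of mass of the polytope spanned by `vertices`, in any dimension:
    simplex centroids averaged with weights proportional to simplex volumes."""
    tri = Delaunay(vertices)
    centroids, weights = [], []
    for simplex in tri.simplices:
        pts = vertices[simplex]
        weights.append(abs(np.linalg.det(pts[1:] - pts[0])))
        centroids.append(pts.mean(axis=0))
    return np.average(np.array(centroids), axis=0, weights=np.array(weights))

def dilated_hull(points, theta_hat):
    """Vertices of the dilated hull: Chat rescaled about its barycentre so that
    the resulting set has volume theta_hat."""
    hull = ConvexHull(points)
    verts = points[hull.vertices]
    d = points.shape[1]
    x0 = hull_barycentre(verts)
    scale = (theta_hat / hull.volume) ** (1.0 / d)
    return x0 + scale * (verts - x0)
```

Applied to the samples of Figure \[polygon\_ellipse\_figures\], this produces dilated polygons of the kind shown there as dashed lines.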
There are, of course, other enlargements of the convex hull conceivable like $\argmin_{B \in \C, |B| = {\widehat{{\vartheta}}}} d_H(B,\Ch)$, the convex set closest (in Hausdorff distance) to $\Ch$ with volume ${\widehat{{\vartheta}}}$. The intuition behind these estimators is based on the observation that once the volume of the true set is known, we can estimate the set itself faster (in the constant), and ${\widehat{{\vartheta}}}$ is a reasonable substitute for the true volume due to its fast rate of convergence. ![Monte Carlo error ratio for the convex hull and its dilation when the true set is a polygon.[]{data-label="error_ratio_plot"}](l1_distance_ratio.png "fig:"){width="\linewidth"}\ A detailed analysis is not pursued here, but in a small simulation study we investigate the behaviour of the new dilated hull estimator for the above polygon. The error ratio $\E[ {\lvert C {\varDelta}\Ch \rvert}] / \E [{\lvert C {\varDelta}{\widetilde{C}} \rvert} ] $ in terms of the symmetric difference $A{\varDelta}B= ( A \setminus B) \cup (B \setminus A)$ is approximated in $M = 500$ Monte Carlo iterations and shown in Figure \[error\_ratio\_plot\]. It turns out that the dilation significantly improves the convex hull as an estimator for $C$, especially for a small number of observations. Appendix ======== Proof of Theorem \[LemMeasCond\] -------------------------------- The proof is split into several statements, which might be of interest on their own. \[LemMeas\] The random variable $\N({\Kh})$ is measurable with respect to $\F_\Kh$ for any stopping set $\Kh$. The proof is just a generalisation of the analogous statement for time-indexed stochastic processes, see e.g. Proposition 2.18 in [@karatzas2012brownian]. For this, the notions are extended to the partial order $\subseteq$ and then the right-continuity of $(\N(K), K\in\K)$ (with respect to inclusion) implies its progressive measurability and thus in turn the measurability of $\N({\Kh})$. Next, observe that the set-indexed process $(\N(K), K \in \K)$ has independent increments, i.e. for $K_1,\ldots,K_m \in \K$ with $K_i \subseteq K_{i+1}$, $i=1,\ldots,m-1$, the random variables $\N(K_{i+1}) - \N(K_i)=\N(K_{i+1}\setminus K_i)$are independent (by the independence of the PPP on disjoint sets). In fact, we show in Proposition \[psm\] that the process $\N$ is even a strong Markov process. In addition, Proposition \[psm\] yields using that the closed complement ${\widehat{\Kh}}={\overline{{{\widehat{C}}}^c}}$ of the convex hull is a stopping set. \[psm\] The set-indexed process $(\N(K), K\in\K)$ is strong Markov at every stopping set $\Kh$. More precisely, conditionally on $\F_{\Kh}$ the process $(\N(K\setminus \Kh),K\in\K)$ is a Poisson point process with intensity $\lambda$ on $\Kh^c$. In particular, $\N(K\setminus \Kh ) \cond \F_{\Kh} \sim \text{Poiss}(\lambda|K\setminus \Kh|)$ holds for all $K \in \K$. The fact that the increments $\N(\Kh \cup K) - \N(\Kh)$ are independent of $\F_{\Kh}$ can be derived from a general theorem about the strong Markov property for random fields in Thm. 4 in [@rozanov1982markov]. See also [@zuyev2006strong] for a discussion of the strong Markov property and its applications in stochastic geometry. These statements, however, do not provide a distributional characterisation of the increments of the process. A set-indexed, $(\F_K)$-adapted integrable process $(X_K, K \in \K)$ is called a martingale if $\E[X_B| \F_A] = X_A$ holds for any $A, B \in \K$ with $A \subseteq B$. 
By the independence of increments, the process $M_K \eqdef \N(K) - \lambda|K|$, $K\in\K$, is clearly a martingale with respect to its natural filtration $(\F_K,K\in\K)$. Then also the process $${\widetilde{M}}_K \eqdef M_{K\cup \Kh}-M_{\Kh}=\N(K\setminus\Kh)-\lambda{\lvert K\setminus\Kh \rvert}$$ is a martingale with respect to the filtration ${\widetilde{\F}}_{K} \eqdef \F_K \vee \F_{\Kh} = \F_{K\cup\Kh}$ because for $K_1, K_2 \in \K$ with $K_1 \subseteq K_2$ the optional sampling theorem (see e.g. [@zuyev1999stopping]) yields $$\E[{\widetilde{M}}_{K_2}| {\widetilde{\F}}_{K_1}] = \E[M_{K_2\cup \Kh}-M_{\Kh}| \F_{K_1\cup \Kh}] = M_{K_1\cup \Kh}-M_{\Kh}={\widetilde{M}}_{K_1},$$ noting that $K_1\cup\Kh$ is again a stopping set. This implies that $\lambda|K\setminus\Kh|$, conditionally on $\Kh$, is the deterministic compensator of the process ${\widetilde{N}}_K = \N(K\setminus \Kh)$. Then, due to the martingale characterisation of the set-indexed Poisson process, see Thm. 3.1 in [@ivanoff1994martingale] (analogue of Watanabe’s characterisation for the Poisson process), the process ${\widetilde{N}}_K$, conditionally on $\F_{\Kh}$, is a Poisson point process with mean measure ${\widetilde{\mu}}(A)= \lambda|A \cap \Kh^c| $. The last statement of Theorem \[LemMeasCond\], that $\F_{{\widehat{\Kh}}} = \sigma(\Ch) $ is shown next. It can be seen as a generalisation of the interesting fact that for a time-indexed Poisson process the sigma-algebra $\sigma(\tau)$ associated with the first jump time $\tau$ coincides with the sigma-algebra of $\tau$-history $\F_\tau$. The sigma-algebra $\sigma(\Ch)$ coincides with the sigma-algebra $ \F_{{\widehat{\Kh}}}$ of ${\widehat{\Kh}}$-history, i.e. $\sigma(\Ch) = \F_{{\widehat{\Kh}}}$. Since ${\widehat{\Kh}}$ is $\F_{{\widehat{\Kh}}}$-measurable by Lemma 1 in [@zuyev1999stopping] and $\Ch=\overline{{\widehat{\Kh}}^c}$, it is evident that $\sigma(\Ch) \subseteq \F_{{\widehat{\Kh}}}$. The other direction is more involved. We use that the sigma-algebra $ \F_{{\widehat{\Kh}}}$ coincides with the sigma-algebra $\sigma(\{\N({\widehat{\Kh}}\cap K), K \in \K\})$ generated by the process stopped at ${\widehat{\Kh}}$. This statement can be derived from Thm. 6, Ch. 1 in [@shiryaev2007optimal]. Note that their assumption (1.11) is satisfied in our case, because for all $K \in \K$ and $\omega \in {\varOmega}$ there is $\omega^\prime$ such that $\N(U\cap K,\omega) = \N(U,\omega^\prime) $ for all $U \in \K$, which simply says that observing points in $K\in \K$ there might be no points outside $K$. Finally, observe that by definition of the convex hull $\N({\overline{\Ch^c}}\cap K)=\N((\partial\Ch)\cap K)$. Modulo null sets, $\N((\partial\Ch)\cap K)$ counts the number of vertices of ${\widehat{C}}$ in $K$ and is thus $\sigma({\widehat{C}})$-measurable. Using that the bias of the oracle estimator ${\widehat{{\vartheta}}} = | \Ch | + N_{\partial}/ (N_\circ + 1) | \Ch | $ is exponentially small, it remains to compare its expectation with the expectation of the plug-in estimator ${\widehat{{\vartheta}}}_{plugin}$ to show : & = & =\ && , \[Etvtvpl\] where in the last line we have used ${\lvert \Ch \rvert}>0$ only if $N_{\partial}{\geqslant}d+1$ and in this case $N_{\partial}^2-N_{\partial}{\geqslant}\frac{d}{d+1}N_{\partial}^2$. 
Using $\E[(N_\circ+1)^{-1}\,|\,{\widehat{C}}]=\lambda_\circ^{-1}(1-\ex^{-\lambda_\circ})$ from above, we obtain after writing ${\bf 1}(N{\leqslant}2\lambda{\lvert C \rvert})=1-{\bf 1}(N> 2\lambda{\lvert C \rvert})$ $$\begin{aligned} \E [{\widehat{{\vartheta}}} - {\widehat{{\vartheta}}}_{plugin}] &{\geqslant}\frac {d}{d+1}\Big(\E \biggl[ \frac{N_{\partial}^2 |\Ch|(1-\ex^{-\lambda_\circ})}{2\lambda_\circ \lambda{\lvert C \rvert}}\biggr] -\E \biggl[ \frac{N_{\partial}^2 |\Ch|}{2\lambda{\lvert C \rvert}}{\bf 1}(N> 2\lambda{\lvert C \rvert}) \biggr]\Big)\\ &{\geqslant}\frac {d}{d+1}\Big(\frac{\E[N_{\partial}^2(1-\ex^{-\lambda_\circ})]}{2\lambda^2{\lvert C \rvert}} -\frac{\E \bigl[ N^2{\bf 1}(N> 2\lambda{\lvert C \rvert}) \bigr]}{2\lambda}\Big).\end{aligned}$$ By Cauchy-Schwarz inequality and large deviations similarly to , the first term is bounded from below by a constant multiple of $\E[{\lvert C\setminus\Ch \rvert}]^2/{\lvert C \rvert}$ in view of $\E[N_{\partial}^2] {\geqslant}\lambda^2 \E[{\lvert C\setminus\Ch \rvert}]^2 $, see e.g. Section 2.3.2 in [@brunel:tel-01066977]. Because of $N\sim\operatorname{Poiss}(\lambda{\lvert C \rvert})$, the second term is of order $\lambda{\lvert C \rvert}^2\ex^{-\lambda{\lvert C \rvert}}$ and thus asymptotically of much smaller order. ![Monte Carlo RMSE estimates for the studied estimators for the volume of the unit cube $C = [0,1]^d$ in dimensions $d = 3,4,5,6$ .](d3456_new.png){width="\linewidth"} \[d3456\]
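For completeness, a compact Monte Carlo loop of the kind underlying the comparisons in Section \[numerical\_study\_section\] and Figure \[d3456\] is sketched below. It is an illustration in Python only (the figures in this paper were produced with the R packages cited above) and reports the RMSE, normalised by $|C|=1$, of the naive estimator ${\lvert \Ch \rvert}$, the plug-in estimator and ${\widehat{{\vartheta}}}$ on the unit cube in $d=3$.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)

def rmse_unit_cube(lam, d=3, n_mc=500):
    """Normalised RMSE of (|Chat|, plug-in, theta_hat) for C = [0,1]^d."""
    errors = np.zeros((n_mc, 3))
    for m in range(n_mc):
        n = max(rng.poisson(lam), d + 1)        # guard against degenerate hulls
        pts = rng.uniform(size=(n, d))
        hull = ConvexHull(pts)
        n_bd = len(hull.vertices)
        n_int = n - n_bd
        estimates = np.array([hull.volume,
                              hull.volume * (1 + n_bd / n),
                              hull.volume * (1 + n_bd / (n_int + 1))])
        errors[m] = estimates - 1.0             # true volume is |C| = 1
    return np.sqrt((errors ** 2).mean(axis=0))

for lam in (100, 1000, 10000):
    print(lam, rmse_unit_cube(lam))
```

The qualitative pattern of Figure \[d3456\], namely the downward bias of the bare hull volume and the improvement brought by the vertex-number correction, should already be visible at moderate values of $\lambda$.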
--- abstract: 'We investigate the constraints on the flavour violating parameters from the decay $B \rightarrow X_s \gamma$, taking into account the interplay of the various sources of flavour violation in the unconstrained MSSM. We present a systematic leading logarithmic QCD analysis of these model-independent constraints, including contributions from gluinos, neutralinos, charginos, charged Higgs bosons and interferences between them. We show that two simple combinations of elements of the down squark mass matrix are stringently bounded over large parts of the parameter space where only weak assumptions on the hierarchical structure of the squark mass matrices are made. We also briefly analyse up to which values SUSY contributions, compatible with $B \rightarrow X_s \gamma$, can enhance the Wilson coefficient $C_8(m_W)$, which plays an important role in the phenomenology of charmless hadronic $B$ decays.' address: | *a Institute for Theoretical Physics, University of Zurich, CH–8057 Zurich, Switzerland\ b Institute for Theoretical Physics, University of Berne, CH–3012 Berne, Switzerland\ c Theory Division, CERN, CH–1211 Geneva 23, Switzerland* author: - '[**Thomas  Besmer**]{} (a), [**Christoph  Greub**]{} (b), [**Tobias  Hurth**]{} (c)' title: | **Bounds on Flavour Violating Parameters\ in Supersymmetry[^1]** --- Introduction {#intro} ============ Today supersymmetric models are given priority in the search for new physics beyond the standard model (SM). This is primarily suggested by theoretical arguments related to the hierarchy problem. Supersymmetry eliminates the sensitivity for the highest scale in the theory and, thus, stabilizes the low energy theory. Flavour changing neutral current (FCNC) processes provide crucial guidelines for supersymmetry model building. In the so-called unconstrained minimal supersymmetric standard model (uMSSM) there are new sources for FCNC transitions. Besides the Cabibbo–Kobayashi–Maskawa (CKM)-induced contributions, there are generic supersymmetric contributions induced by flavour mixing in the squark mass matrices. The structure of the uMSSM does not explain the suppression of FCNC processes, which is observed in experiments; this is the crucial point of the well-known supersymmetric flavour problem. Within the framework of the MSSM there are at present three favoured supersymmetric models that solve the supersymmetric flavour problem by a specific mechanism through which the sector of supersymmetry breaking communicates with the sector accessible to experiments: in the minimal supergravity model (mSUGRA) [@MSUGRA], supergravity is the corresponding mediator; in the other two models, this is achieved by gauge interactions [@GMSB] and by anomalies [@AMSB]. Furthermore, there are other classes of models in which the flavour problem is solved by particular flavour symmetries [@FLAVOUR]. Flavour violation thus originates from the interplay between the dynamics of flavour and the mechanism of supersymmetry breaking. FCNC processes therefore yield important (indirect) information on the construction of supersymmetric extensions of the SM and can contribute to the question of which mechanism ultimately breaks supersymmetry. The experimental measurements of the rates for these processes, or the upper limits set on them, impose in general a reduction of the large number and size of parameters in the soft supersymmetry-breaking terms present in these models. 
Among these processes, those involving transitions between first- and second-generation quarks, namely FCNC processes in the $K$ system, are considered the most formidable tools to shape viable supersymmetric flavour models. Moreover, the tight experimental bounds on some flavour-diagonal transitions, such as the electric dipole moment of the electron and of the neutron, as well as $g-2$, help to constrain soft terms that induce chirality violations. Among neutral flavour transitions involving the third generation, the rare decay $B \rightarrow X_s \gamma$ is at present the most important one, as it is the only inclusive mode which is already measured [@BSGMEASURE]. The theoretical SM prediction for its branching ratio, known up to next-to-leading logarithmic (NLL) precision [@NLL], is in agreement with the experimental data. Although the experimental errors are still rather large, this decay mode already allows for theoretically clean and rather stringent constraints on the parameter space of various extensions of the SM (see for example [@NLLBEYOND]). Once more precise data from the B factories are available, this decay will undoubtedly become an even more efficient tool for selecting the viable regions of the parameter space in the above classes of models; it may also help to discriminate among the models that will be proposed by then. In this paper we present a model-independent analysis of the decay $B \rightarrow X_s\,\gamma$, based on a LL-QCD calculation, in which contributions from $W$-bosons, charged Higgs bosons, charginos, neutralinos and gluinos are systematically included. Former analyses in the unconstrained MSSM neglected QCD corrections and only used the gluino contribution to saturate the experimental bounds. Technically, the so-called mass insertion approximation (MIA) was used, in which the off-diagonal elements of the squark mass matrices are taken to be small and their higher powers are neglected. As a consequence of this single insertion approximation, no correlations between different sources of flavour violation were taken into account. In this way, one arrived at ’order-of-magnitude bounds’ on the soft parameters [@GGMS; @DNW; @HKT]. In [@BGHW], the sensitivity of the bounds on the down squark mass matrix to radiative QCD corrections was analysed, including the SM and the gluino contributions. The aim of the present paper is to extend this analysis to include the contributions from charged Higgs bosons, charginos and neutralinos and their interference effects and, even more importantly, the effects that result when several flavour violating parameters, i.e. several off-diagonal elements in the squark mass matrices, are switched on simultaneously. We anticipate that two simple combinations of matrix elements of the down squark mass matrix remain rather stringently bounded over large parts of the parameter space, in a general scenario where only relatively weak assumptions on the pattern of the squark mass matrices are made. Since there are different contributions to this decay, with different numerical impact on its rate, some of these flavour-violating terms may turn out to be poorly constrained. Thus, given the generality of such a calculation, it is convenient to rely on the mass eigenstate formalism, which remains valid even when some of the intergenerational mixing elements are large, and not to use the approximate mass insertion method, in which the off-diagonal squark mass matrix elements are taken to be small and their higher powers neglected.
In the latter approach the reliability of the approximation can only be checked a posteriori. Finally, we note that the off-diagonal elements of the squark mass matrices can get constraints on completely different grounds, namely from the requirement of the absence of charge and colour breaking minima as well as from directions in the scalar potential which are unbounded from below (see [@Casas] for a more detailed discussion). However, these bounds correspond to sufficient, but not necessary conditions for the stability of the standard vacuum, because it is possible that we live in a metastable vacuum, whose lifetime is longer than the age of the universe [@Kusenko]. The paper is organized as follows: In section 2, we discuss the framework for the calculation of the branching ratio for $B \rightarrow X_s \gamma$. In section 3, we briefly recall the sources of flavour violation, encoded in the squark mass matrices. In section 4, we present the phenomenological analysis on the bounds on the flavour violating parameters. In section 5, we briefly explore up to which values SUSY contributions, allowed by , can enhance the Wilson coefficient $C_8(m_W)$, which plays an important role in the phenomenology of charmless hadronic $B$ decays. In section 6 we give a short summary. In appendix A1, we state our conventions, while in appendix A2 we list the Wilson coefficients at the matching scale. Framework for calculating $B \rightarrow X_s \gamma$ {#framework} ============================================ Hamiltonians ------------ In the SM, rare $B$-meson decays are induced by one-loop diagrams in which $W$ bosons and up-type quarks propagate. The most important corrections to the decay amplitude for $b \to s \gamma$ are due to exchanges of gluons, which give rise to powers of the factor $L=\log(m_b^2/m_W^2)$. It turns out that each of these logarithms is accompanied by at least one factor of $\alpha_s$. Since the two scales $m_b$ and $m_W$ are far apart, $L$ is a large number and these terms need to be resummed: at the leading logarithmic (LL) order, powers of $\alpha_s L$ are resummed; at the next-to-leading (NLL) order, also the terms of the form $\alpha_s \, (\alpha_s L)^N$ are systematically resummed. Thus, the contributions to the decay amplitude are classified according to $$\quad (LL): \quad \quad G_F \, (\alpha_s L)^N, \quad \quad (NLL): \quad G_F \, \alpha_s (\alpha_s L)^N, \quad (N=0,1,...) \label{terms}$$ where $G_F$ is the Fermi constant. The resummation of these corrections is usually achieved by making use of the formalism of effective Hamiltonians, combined with renormalization group techniques. The effective Hamiltonian ${\cal H}_{eff}^{W}$, obtained by integrating out the top-quark and the $W$ boson, can be written as $${\cal H}_{eff}^{W} = - \frac{4 G_F}{\sqrt{2}} K_{tb}^{\phantom{\ast}} K_{ts}^\ast \sum_i C_i(\mu) {\cal O}_i(\mu) \,. \label{weffham}$$ The Wilson coefficients $C_i$ contain all dependence on the heavy degrees of freedom, whereas the operators ${\cal O}_{i}$ depend on light fields only. The operators relevant to $b \to s \gamma$ read $$\begin{array}{llll} {\cal O}_{1} \,= &\! (\bar{s} \gamma_\mu T^a P_L c)\, (\bar{c} \gamma^\mu T_a P_L b)\,, & \\[1.2ex] {\cal O}_{2} \,= &\! (\bar{s} \gamma_\mu P_L c)\, (\bar{c} \gamma^\mu P_L b)\,, & \\[1.2ex] {\cal O}_{3} \,= &\! (\bar{s} \gamma_\mu P_L b) \sum_q (\bar{q} \gamma^\mu q)\,, & \\[1.2ex] {\cal O}_{4} \,= &\! (\bar{s} \gamma_\mu T^a P_L b) \sum_q (\bar{q} \gamma^\mu T_a q)\,, & \\[1.2ex] {\cal O}_{5} \,= &\! 
(\bar{s} \gamma_\mu \gamma_\nu \gamma_\rho P_L b) \sum_q (\bar{q} \gamma^\mu \gamma^\nu \gamma^\rho q)\,, & \\[1.2ex] {\cal O}_{6} \,= &\! (\bar{s} \gamma_\mu \gamma_\nu \gamma_\rho T^a P_L b) \sum_q (\bar{q} \gamma^\mu \gamma^\nu \gamma^\rho T_a q)\,, & \\[1.2ex] {\cal O}_{7} \,= &\! \displaystyle{\frac{e}{16\pi^2}} \,{\overline m}_b(\mu) \, (\bar{s} \sigma^{\mu\nu} P_R b) \, F_{\mu\nu}\,, \\[2.0ex] {\cal O}_{8} \,= &\! \displaystyle{\frac{g_s}{16\pi^2}} \,{\overline m}_b(\mu) \, (\bar{s} \sigma^{\mu\nu} T^a P_R b) \, G^a_{\mu\nu}\,. \label{smbasis} \end{array}$$ The matrices $T^a$ ($a=1,...,8$) are $SU(3)$ colour generators and $P_{L,R}$ are left- and right-handed projection operators; $e$ and $g_s$ denote the electromagnetic and the strong coupling constants, respectively. Note that the $b$-quark mass is the relevant parameter that governs the chirality flip in the SM dipole operators ${\cal O}_{7} $ and ${\cal O}_{8} $. All eight operators are of dimension six. We anticipate that this is in contrast with the dipole operators induced by gluinos, where the helicity flip can be generated by the gluino mass instead of the $b$-quark mass, as we will see in more detail later. As a consequence, these dipole operators are effectively of dimension five. A consistent SM calculation for $B \to X_s \gamma$ at LL (or NLL) precision requires three steps: - a matching calculation of the full SM theory with the effective theory at the scale $\mu=\mu_W$ to order $\alpha_s^0$ (or $\alpha_s^1$) for the Wilson coefficients, where $\mu_W$ denotes a scale of order $m_W$ or $m_t$; - a renormalization group treatment of the Wilson coefficients using the anomalous-dimension matrix to order $\alpha_s^1$ (or $\alpha_s^2$); - a calculation of the operator matrix elements at the scale $\mu = \mu_b$ to order $\alpha_s^0$ (or $\alpha_s^1$), where $\mu_b$ denotes a scale of order $m_b$. In supersymmetric models there are further contributions to the FCNC processes studied in this paper, i.e. the exchange of charged Higgs bosons and up-type quarks; of charginos and up-type squarks; of neutralinos and down-type squarks; and of gluinos and down-type squark. They all lead to $|\Delta(B)|=|\Delta(S)|=1$ effective magnetic and chromomagnetic operators (of ${\cal O}_7$-type, ${\cal O}_8$-type) and also to new four-quark operators. Taking into account operators up to dimension six only, the effects of charginos, neutralinos and charged Higgs bosons can be matched onto the usual SM magnetic and chromomagnetic operators ${\cal O}_{7}$ and ${\cal O}_{8}$ and their primed counterparts $$\begin{array}{llll} {\cal O}_{7}^\prime \,= &\! \displaystyle{\frac{e}{16\pi^2}}\,{\overline m}_b(\mu) \,(\bar{s} \sigma^{\mu\nu} P_L b) \, F_{\mu\nu}\,, & \quad {\cal O}_{8}^\prime \,= &\! \displaystyle{\frac{g_s}{16\pi^2}} \,{\overline m}_b(\mu) \,(\bar{s} \sigma^{\mu\nu} T^a P_L b) \, G^a_{\mu\nu}\,. \label{smmagnopw2} \end{array}$$ We would like to stress that we do not work in the mSUGRA scenario. Therefore, a lot of the relations deduced in [@BBMR] do not hold anymore. The most important thing to notice is that in a general SUSY framework, there is no connection between the Yukawa couplings in the superpotential and the corresponding trilinear term in the soft potential. However, the contributions from charginos and charged Higgs bosons to the Wilson coefficients of the primed operators vanish also in the more general unconstrained MSSM if $m_s$ is put to zero (see eqs. (\[C7’\]) and (\[C8’\])). 
This implies that for physical values of $m_s$ the chargino- and the charged Higgs boson contributions to the primed operators are small in scenarios in which $\tan \beta$ does not take extreme values. The neutralino contributions to both, the primed and unprimed operators are also unimportant, because their Wilson coefficients involve those entries of the squark mixing matrices $\Gamma_{DL}$ and $\Gamma_{DR}$, which also induce gluino contributions; the latter, which are proportional to $g_s^2$ therefore dominate the neutralino contributions which are proportional to $g_2^2$. We also found numerically that the neutralino contibutions are indeed rather inessential. The fact that the operators generated by charged Higgs bosons, charginos and neutralinos can be matched onto the SM operators and their primed counterparts implies that the terms that get resummed at LL, show the same pattern as those listed in eq. (\[terms\]); the Fermi constant appearing there is obviously replaced by a specific supersymmetric parameter for the chargino and neutralino contributions. Matters are somewhat different for the gluino contribution ${\cal H}_{eff}^{\tilde{g}}$, as worked out in detail in [@BGHW]. At the amplitude level, terms of the form - $(LL)$: $\quad \quad \alpha_s \, (\alpha_s L)^N,$ $(NLL)$: $\quad \alpha_s \, \alpha_s (\alpha_s L)^N$, $(N=0,1,...)$ are resummed, respectively at the leading and next-to-leading order. While ${\cal H}_{eff}^{\tilde{g}}$ is unambiguous, it is a matter of convention whether the $\alpha_s$ factors should be put into the definition of operators or into the Wilson coefficients. We follow the framework developed in ref. [@BGHW], where the distribution of the $\alpha_s$ factors was done in such a way that the anomalous dimension matrix systematically starts at order $\alpha_s$. We write the effective Hamiltonian ${\cal H}_{eff}^{\tilde{g}}$ in the form $${\cal H}_{eff}^{\tilde{g}} = \sum_i C_{i,\tilde{g}}(\mu) {\cal O}_{i,\tilde{g}}(\mu) + \sum_i \sum_q C_{i,\tilde{g}}^q(\mu) {\cal O}_{i,\tilde{g}}^q(\mu) \,. \label{geffham}$$ The index $q$ runs over all light quarks $q=u,d,c,s,b$. Among the operators contributing to the first part, there are dipole operators in which the chirality flip is induced by the $b$-quark mass: $$\begin{array}{llll} {\cal O}_{7b,\tilde{g}} \,= &\! e \,g_s^2(\mu) \,{\overline m}_b(\mu) \, (\bar{s} \sigma^{\mu\nu} P_R b) \, F_{\mu\nu}\,, & %\quad {\cal O}_{7b, \tilde{g}}^{\prime} \,= &\! e \,g_s^2(\mu) \,{\overline m}_b(\mu) \, (\bar{s} \sigma^{\mu\nu} P_L b) \, F_{\mu\nu}\,, \\[2.0ex] % & \\ {\cal O}_{8b, \tilde{g}} \,= &\! g_s^3(\mu) \,{\overline m}_b(\mu) \, (\bar{s} \sigma^{\mu\nu} T^a P_R b) \, G^a_{\mu\nu}\,, & %\quad {\cal O}_{8b, \tilde{g}}^\prime \,= &\! g_s^3(\mu) \,{\overline m}_b(\mu) \, (\bar{s} \sigma^{\mu\nu} T^a P_L b) \, G^a_{\mu\nu}\,. \label{gmagnopb} \end{array}$$ As discussed in [@BGHW], there are also gluino-induced operators where the chirality violation is signalled by the charm quark mass (obtained by replacing ${\overline m}_b(\mu)$ by ${\overline m}_c(\mu)$) and operators where the chirality flip is induced by the gluino mass. The latter read $$\begin{array}{llll} {\cal O}_{7\tilde{g},\tilde{g}} \,= &\! e \,g_s^2(\mu) \, (\bar{s} \sigma^{\mu\nu} P_R b) \, F_{\mu\nu}\,, & \quad {\cal O}_{7\tilde{g},\tilde{g}}^\prime \,= &\! e \,g_s^2(\mu) \, (\bar{s} \sigma^{\mu\nu} P_L b) \, F_{\mu\nu}\,, \\ & \\[-1.3ex] {\cal O}_{8\tilde{g},\tilde{g}} \,= &\! 
g_s^3(\mu) \, (\bar{s} \sigma^{\mu\nu} T^a P_R b) \, G^a_{\mu\nu}\,, & \quad {\cal O}_{8\tilde{g},\tilde{g}}^\prime \,= &\! g_s^3(\mu) \, (\bar{s} \sigma^{\mu\nu} T^a P_L b) \, G^a_{\mu\nu}\,. \label{gmagnopg} \end{array}$$ At the LL-level, these operators could be absorbed into the operators given in eq. (\[gmagnopb\]), when neglecting the small mixings effects from the gluino-induced four-Fermi operators with scalar or tensor Lorentz structure. However, as it is useful to separate the contributions where the chirality flip is induced by $m_{\tilde{g}}$, we do not perform this absorption. Notice that the operators in eq. (\[gmagnopg\]) have dimension [*five*]{}, while the other operators are of dimension [*six*]{}. We also stress that unlike the other supersymmetric contributions, the primed gluino-induced operators are [*not*]{} suppressed compared with the unprimed ones. This is in strong contrast with the mSUGRA scenario, where the primed operators are stronlgy suppressed [@BBMR]. The contributions to the second part in eq. (\[geffham\]) are given by four-quark operators with vector, scalar and tensor Lorentz structure. As shown in ref. [@BGHW], the scalar and tensor operators mix at one loop into the six-dimensional magnetic and chromomagnetic ones. Therefore, they have to be included in principle in a LL calculation. As mentioned above, these mixings are numerically small and therefore not very important in practice. For completeness we recall all Wilson coefficients at the matching scale $\mu_W$ in appendix \[Wilson\]. The anomalous dimension matrix of the SM operators ${\cal O}_1$–${\cal O}_8$ and the evolution of the corresponding Wilson coefficients to the decay scale $\mu_b$ are well known and can be found in [@NLL]. The evolution of the gluino-induced Wilson coefficients $C_{i,\tilde{g}}$ is given in ref. [@BGHW]. Branching ratio --------------- The decay width for $B \rightarrow X_s \gamma$ to LL precision is given by $$\Gamma(B \to X_s \gamma) = \frac{m_b^5 \, G_F^2 \, |K_{tb} K_{ts}^*|^2 \, \alpha}{32 \pi^4} \ \left(\left\vert \hat{C}_7(\mu_b) \right\vert^2 + \left\vert \hat{C}'_7(\mu_b) \right\vert^2 \right) \,,$$ where the auxiliary quantities $\hat{C}_7(\mu_b)$ and $\hat{C}'_7(\mu_b)$ are defined as $$\begin{aligned} \hat{C}_7(\mu_b) & = & C_7^{\rm{eff}}(\mu_b) - \left[ C_{7b,\tilde{g}}(\mu_b) + \frac{1}{m_b} C_{7\tilde{g},\tilde{g}}(\mu_b) \right] \, \frac{16 \sqrt{2} \pi^3 \alpha_s(\mu_b)}{G_F \, K_{tb} K_{ts}^*} \, , \nonumber \\ \hat{C}'_7(\mu_b) & = & C_7'(\mu_b) \, - \left[ C'_{7b,\tilde{g}}(\mu_b) + \frac{1}{m_b} C'_{7\tilde{g},\tilde{g}}(\mu_b) \right] \, \frac{16 \sqrt{2} \pi^3 \alpha_s(\mu_b)}{G_F \, K_{tb} K_{ts}^*} \, , \label{c7hat}\end{aligned}$$ where $$C_7^{\rm{eff}}(\mu_b) = C_7(\mu_b) - \frac{1}{3} C_3(\mu_b) - \frac{4}{9} C_4(\mu_b) - \frac{20}{3} C_5(\mu_b) - \frac{80}{9} C_6(\mu_b) \, .$$ Note that we have neglected the small contributions from the operators ${\cal O}_{7c,\tilde{g}}$ and ${\cal O}_{7c,\tilde{g}}^\prime$. The branching ratio can be expressed as $${\rm BR}(B \to X_s\gamma) = \frac{\Gamma(B \to X_s \gamma)}{\Gamma_{SL}} \, {\rm BR}_{SL} \, , \label{bratio}$$ where ${\rm BR}_{SL}=(10.49 \pm 0.46)\%$ is the measured semileptonic branching ratio. To the relevant order in $\alpha_s$, the semileptonic decay width is given by: $$\Gamma_{SL} = \frac{m_b^5 \, G_F^2 \, |K_{cb}|^2}{192 \pi^3} \, g\left(\frac{m_c^2}{m_b^2}\right) \,,$$ where the phase-space function $g(z)$ is $g(z) = 1 - 8z + 8 z^3 - z^4 - 12 z^2 \log z$. 
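Since the factors $m_b^5 \, G_F^2$ cancel in the ratio $\Gamma(B \to X_s \gamma)/\Gamma_{SL}$, eq. (\[bratio\]) can equivalently be written as ${\rm BR}(B \to X_s\gamma) = \frac{6\,\alpha}{\pi\, g(m_c^2/m_b^2)}\,\bigl|K_{tb} K_{ts}^*/K_{cb}\bigr|^2 \bigl( |\hat{C}_7(\mu_b)|^2 + |\hat{C}'_7(\mu_b)|^2 \bigr)\,{\rm BR}_{SL}$. The short sketch below evaluates this expression; the quark masses, the value of $\alpha$, the CKM ratio and the SM-like LL value $\hat{C}_7(\mu_b)\approx -0.3$ are illustrative inputs chosen only for this example and are not taken from the analysis of this paper.

```python
import numpy as np

def g(z):
    """Phase-space function of the semileptonic width."""
    return 1.0 - 8.0 * z + 8.0 * z**3 - z**4 - 12.0 * z**2 * np.log(z)

def br_bsgamma_LL(c7hat, c7hat_prime, mc=1.4, mb=4.8,
                  alpha_em=1.0 / 137.0, ckm_ratio_sq=0.95, br_sl=0.1049):
    """LL branching ratio obtained from the ratio of the two widths quoted above."""
    z = (mc / mb) ** 2
    return (6.0 * alpha_em / (np.pi * g(z)) * ckm_ratio_sq
            * (abs(c7hat) ** 2 + abs(c7hat_prime) ** 2) * br_sl)

# SM-like illustration: unprimed coefficient of order -0.3, primed one negligible
print(br_bsgamma_LL(-0.31, 0.0))    # roughly 2.5e-4
```

All new-physics effects enter only through $\hat{C}_7(\mu_b)$ and $\hat{C}'_7(\mu_b)$, which is why the constraints derived below act directly on the Wilson coefficients of the dipole operators.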
Note that there is no systematic distinction between the pole mass $m_b$ and the corresponding running mass normalized at the scale $\mu_b$ in the LL approximation. To be specific, the mass parameters are always treated as pole masses in our numerical analysis. Squark Mass Matrices as New Sources of Flavour Violation {#SMM} ======================================================== As advocated in the introduction, the aim of this paper is to provide a phenomenological analysis of the constraints on the flavour violating parameters in supersymmetric models with the most general soft terms in the squark mass matrices. As explained there, we work in the mass eigenstate formalism, which remains valid (in contrast to the mass insertion approximation) when the intergenerational mixing elements are not small. A specification of the squark mass matrices usually starts in the super-CKM basis, in which the superfields are rotated in such a way that the mass matrices of the quark fields are diagonal. In this basis, the $6\times 6$ squared-mass matrix for the $d$-type squarks has the form $${\cal M}_d^2 \equiv \left( \begin{array}{cc} m^2_{\,d,\,LL} +F_{d\,LL} +D_{d\,LL} & \left(m_{\,d,\,LR}^2\right) + F_{d\,LR} \\[1.01ex] \left(m_{\,d,\,LR}^{2}\right)^{\dagger} + F_{d\,RL} & \ \ m^2_{\,d,\,RR} + F_{d\,RR} +D_{d\,RR} \end{array} \right) \,. \label{massmatrixd}$$ For the $u$-type squarks we have $${\cal M}_u^2 \equiv \left( \begin{array}{cc} m^2_{\,u,\,LL} +F_{u\,LL} +D_{u\,LL} & \left(m_{\,u,\,LR}^2\right) + F_{u\,LR} \\[1.01ex] \left(m_{\,u,\,LR}^{2}\right)^{\dagger} + F_{u\,RL} & \ \ m^2_{\,u,\,RR} + F_{u\,RR} +D_{u\,RR} \end{array} \right) \,. \label{massmatrixu}$$ In this basis, the $F$ terms (stemming from the superpotential) in the $6 \times 6$ mass matrices ${\cal M}^2_f$ ($f=u,d$) are diagonal $3 \times 3$ submatrices, reading $(F_{f\,RL}=F_{f\,LR}^\dagger)$ $$(F_{d\,LR})_{ij} = -\mu (m_{d,i} \tan \beta)\, {{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}}_{ij} \,, \,\,\, (F_{u\,LR})_{ij} = -\mu (m_{u,i} \cot \beta)\, {{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}}_{ij} \, , \label{FFterm}$$ $$(F_{d\,LL})_{ij} = m^2_{d\,i}\, {{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}}_{ij} \, , \,\,\, (F_{u\,LL})_{ij} = m^2_{u\,i}\, {{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}}_{ij} \, . \label{Fterm}$$ Also the $D$-term contributions $D_{f\,LL}$ and $D_{f\,RR}$ to the squared-mass matrix are diagonal in flavour space: $$D_{f\,LL,RR} = \cos 2\beta \, m_Z^2 \left(T_f^3 - Q_f \sin^2\theta_W \right) {{\mathchoice {\rm 1\mskip-4mu l} {\rm 1\mskip-4mu l} {\rm 1\mskip-4.5mu l} {\rm 1\mskip-5mu l}}}_3\,. \label{dterm}$$ Since present collider limits give indications that the squark masses are larger than those of the corresponding quarks, the largest entries in the squark mass matrices squared must come from the soft potential, directly linked to the mechanism of supersymmetry breaking. These contributions, denoted in (\[massmatrixd\]) and (\[massmatrixu\]) by $m^2_{\,f,\,LL}$, $m^2_{\,f,\,RR}$ and $m^2_{\,f,\,LR}$, are in general not diagonal in the super-CKM basis. Further comments are in order. Because of $SU(2)_L$ invariance, $m^2_{\,u,\,LL}$ and $m^2_{\,d,\,LL}$ are related. In the super-CKM basis this relation reads $m^2_{\,u,\,LL} = K m^2_{d,LL} K^{\dagger}$, where $K$ denotes the CKM matrix. 
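Extracting the physical squark masses from these matrices amounts to a numerical diagonalisation, as described next. A minimal sketch (illustrative numbers only, with the normalised $\delta$ parameters anticipating eqs. (\[deltadefa\],\[deltadefb\]) of Section \[Pheno\]) builds ${\cal M}_d^2$ from a common soft scale $m_{\tilde q}$ and a single off-diagonal $LR$ entry, and extracts the eigenvalues $m^2_{\tilde d_k}$ and the $6\times 3$ mixing blocks introduced just below, with the conventional choice $\Gamma_D \, {\cal M}_d^2 \, \Gamma_D^\dagger = \text{diag}(m^2_{\tilde d_1},\ldots,m^2_{\tilde d_6})$ and $\Gamma_D = (\Gamma_{DL}\;\Gamma_{DR})$.

```python
import numpy as np

msq = 500.0           # common soft squark mass scale m_qtilde in GeV (illustrative)
delta_d_LR_23 = 0.01  # one flavour-violating entry, normalised as in the text below

# 3x3 blocks of the down-squark mass matrix squared; the (small) F- and D-term
# pieces listed above are omitted to keep the illustration short
LL = msq**2 * np.eye(3)
RR = msq**2 * np.eye(3)
LR = np.zeros((3, 3))
LR[1, 2] = delta_d_LR_23 * msq**2        # (m^2_{d,LR})_{23}

M2 = np.block([[LL, LR], [LR.conj().T, RR]])   # hermitian 6x6 mass matrix squared

masses_sq, V = np.linalg.eigh(M2)        # eigenvalues in ascending order
Gamma_D = V.conj().T                     # Gamma_D M2 Gamma_D^dagger = diag(masses_sq)
Gamma_DL, Gamma_DR = Gamma_D[:, :3], Gamma_D[:, 3:]   # 6x3 mixing blocks

print(np.sqrt(masses_sq))                # down-squark masses in GeV
```

In this simple example the only effect of the single entry is a small splitting of two of the six eigenvalues; the gluino and neutralino vertices then pick up flavour violation through the off-diagonal structure of $\Gamma_{DL}$ and $\Gamma_{DR}$.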
The off-diagonal $3 \times 3$ block matrix $ m_{\,f,\,LR}^2$ equals $A^{\,\ast}_{d} v_d $ for down-type and $A^{\,\ast}_{u} v_u $ for up-type squarks (the two vacuum expectation values are chosen to be real). They arise from the trilinear terms in the soft potential, namely $A_{d,ij} H_d \,{\widetilde{D}}_i {\widetilde{D}}_j^{c}$ and $A_{u,ij} H_u \,{\widetilde{U}}_i {\widetilde{U}}_j^{c}$. We stress that we do [*not*]{} assume the proportionality of these trilinear terms to the Yukawa couplings, as is done in the mSUGRA model. Furthermore, differently from $ m^2_{\,f,\,LL}$ and $ m^2_{\,f,\,RR}$, the off-diagonal $3 \times 3$ matrix $m_{\,f,\,LR}^2$ is not hermitian. Because all neutral gaugino couplings are flavour diagonal in the super-CKM basis and the mixing in the charged gaugino coupling to quarks and squarks is governed by the conventional CKM matrix, the flavour change through squark mass mixing is parametrized by the off-diagonal elements of the soft terms $m^2_{f,LL}$, $m^2_{f,RR}$, $m^2_{f,LR}$ in this basis. The diagonalization of the two $6 \times 6$ squark mass matrices ${\cal M}^2_d$ and ${\cal M}^2_u$ yields the eigenvalues $m_{\tilde{d}_k}^2$ and $m_{\tilde{u}_k}^2$ ($k=1,...,6$). The corresponding mass eigenstates, $\tilde{u}_{k}$ and $\tilde{d}_{k}$ ($k=1,...,6$) are related to the fields in the super-CKM basis, $\tilde{u}_{Lj}$, $\tilde{u}_{Rj}$ and $\tilde{d}_{Lj}$, $\tilde{d}_{Rj}$, ($j=1,...,3$) as $$\tilde{u}_{L,R} = \Gamma^\dagger_{UL,R} \, \tilde{u} \,, \hspace*{1truecm} \tilde{d}_{L,R} = \Gamma^\dagger_{DL,R} \, \tilde{d} \,, \label{qdiag}$$ where the four matrices $\Gamma_{UL,R}$ and $\Gamma_{DL,R}$ are $6 \times 3$ mixing matrices. Phenomenological Analysis {#Pheno} ========================= Our phenomenological analysis is based on a complete LL QCD calculation within the unconstrained MSSM; it is done in two parts: - In the first part, we try to derive bounds on the off-diagonal elements of the squark mass matrices by switching on only [*one*]{} of these elements at a time. We include, however, all new physics contributions (chargino, neutralino, charged Higgs bosons, gluino) in the analysis. We show that only those parameters get stringently bounded by $B \to X_s \gamma$, which can generate contributions to the five-dimensional gluino-induced dipole operators ${\cal O}_{7\tilde{g},\tilde{g}}$ and ${\cal O}'_{7\tilde{g},\tilde{g}}$. - In the second part of our analysis we investigate whether the bounds obtained in the first part remain stable, if [*all*]{} off-diagonal elements, which induce the decay $B \rightarrow X_s \gamma$, are varied simultaneously. We anticipate that the bounds on the individual off-diagonal elements get lost, because in this case various combinations of off-diagonal elements can contribute (with opposite sign) to the Wilson coefficients of the five-dimensional dipole operators. In the scenarios we discuss below, it is, however, possible to constrain certain simple combinations of off-diagonal elements of the down squark mass matrices, provided $\tan \beta$ and $\mu$ are not very large. General comments ---------------- In order to analyse the implications of $B \to X_s \gamma$ on the flavour violating soft parameters in the squark mass matrices, we choose some specific scenarios that are characterized by the values of the parameters\ $$\mu, \quad M_{H^-}, \quad \tan \beta, \quad M_{\rm{susy}}, \quad m_{\tilde{g}}. 
\label{susypar}$$ We regard this as reasonable, because we expect that these input parameters, which are unrelated to flavour physics, will be fixed from flavour conserving observables in the next generations of high energy experiments (provided low energy SUSY exists). Note that the common SUSY scale, $M_{\rm{susy}}$, fixes in our scenarios the general soft squark mass scale $m_{\tilde{q}}$ (see eqs. (\[deltadefa\],\[deltadefb\])) and the first diagonal element of the chargino mass matrix $M_2$ (see eq. (\[Xmat\])). The parameters are chosen as follows: for $M_{\rm{susy}}$ we choose the three values $M_{\rm{susy}}=300, 500, 1000 \,GeV$, while for $\tan \beta$ we use the values $\tan \beta = 2, 10, 30, 50$. For the gluino mass, characterized by $x=m^2_{\tilde{g}}/M^2_{\rm{susy}}$, we take $x=0.3,0.5,1,2$. Unless otherwise stated, the $\mu$ parameter and the mass of the charged Higgs boson $M_{H^-}$ are fixed to be $\mu = 300 \,GeV$ and $M_{H^-} = 300 \,GeV$. While in the first part of our analysis we set, following ref.[@GGMS], all diagonal soft entries in $m^2_{\,d,\,LL}$, $m^2_{\,d,\,RR}$, and $m^2_{\,u,\,RR}$ equal to the common soft squark mass scale $m^2_{\tilde{q}}$, we relax this condition in the second part of our analysis. We point out that the present bound on the mass of the lightest neutral Higgs boson requires a non-vanishing mixing $(m^2_{u,LR})_{33}$ among the stop-squarks. For our choices of the parameters, the MSSM bound coincides with the bound on the SM Higgs boson. We note that there are two contributions to the stop-squark mixing, namely the ‘soft’ contribution $(m^2_{u,LR})_{33}$ and the $F$-term contribution ($-\mu \, m_t \, \cot \beta$). In a general unconstrained MSSM, the soft contribution does not scale with $m_t$. However, following common notation, we parametrize the stop mixing term in ${\cal M}_u^2$ (see eq. (\[massmatrixu\])) as $$X_t \, m_t = (m^2_{u,LR})_{33}\, -\mu m_t \cot \beta \, .$$ We fix the stop mixing parameter $X_t$ such that the mass of the lightest Higgs boson is at least $115 \, GeV$, to ensure that the present Higgs bound is fulfilled in our analysis. We use the program FeynHiggsFast [@heinemeyer], which determines the Higgs boson mass approximately, taking into account the complete one- and two-loop QCD corrections, the effects of the running top mass and of the Yukawa term for the light Higgs boson. The input parameters of the FeynHiggsFast program are $\tan \beta$, the diagonal entries of the stop and the sbottom squark mass matrix, $M^2_{\tilde{t}_L}$, $M^2_{\tilde{t}_R}$, $M^2_{\tilde{b}_L}$, and $M^2_{\tilde{b}_R}$, the stop and the sbottom mixing parameters, $X_t$ and $X_b$, the top mass $m_t$, the parameter $\mu$ and the charged Higgs boson mass. For $\tan \beta = 10$ we choose the following values of $X_t$, depending on the parameter $M_{{\rm susy}}$: $(M_{\rm{susy}},X_t) = (300 \,GeV, 470 \,GeV)$; $(500 \,GeV, 750 \,GeV)$; $(1000 \,GeV, 1200 \,GeV)$. (For $\mu = 300 \,GeV$ and $X_b =0$ we find Higgs boson masses of $115.2 \,GeV$, $119.9 \,GeV$ and $121.1 \,GeV$, respectively.) The dependence of the Higgs boson mass on the parameters $\mu$ or $X_b$ is rather small within our parameter scenarios. For convenience, we use the same values of $X_t$ also for the $\tan \beta = 30$ and $50$ scenarios. In these cases the chosen $X_t$ values imply slightly higher Higgs boson masses.
We also note that within our choice of parameters, the low $\tan \beta$ scenario, with $\tan \beta = 2$, already gets excluded by the bound on the Higgs boson mass. First part of analysis ---------------------- In this part of the analysis only [*one*]{} off-diagonal entry in the soft part of the squark mass matrices is different from zero. We further assume (as in ref. [@GGMS]) that all diagonal soft entries in $m^2_{\,d,\,LL}$, $m^2_{\,d,\,RR}$, and $m^2_{\,u,\,RR}$ are set to be equal to the common value $m_{\tilde{q}}^2=M^2_{\rm{susy}}$. Then we normalize the off-diagonal elements to $m_{\tilde{q}}^2$, $$\delta_{f,LL,ij} = \frac{(m^2_{\,f,\,LL})_{ij}}{m^2_{\tilde{q}}}\,, \hspace{1.0truecm} \delta_{f,RR,ij} = \frac{(m^2_{\,f,\,RR})_{ij}}{m^2_{\tilde{q}}}\,, \hspace{1.0truecm} (i \ne j) \label{deltadefa}$$ $$\delta_{f,LR,ij} = \frac{(m^2_{\,f,\,LR})_{ij}}{m^2_{\tilde{q}}}\,, \hspace{1.0truecm} \delta_{f,RL,ij} = \frac{(m^2_{\,f,\,LR})^\dagger_{ij}}{m^2_{\tilde{q}}}\,. \phantom{\hspace{1.0truecm} (i \ne j)} \label{deltadefb}$$ We recall that the matrix $m_{u,LL}$ cannot be specified independently; $SU(2)_L$ gauge invariance implies that $m_{u,LL} = K m_{d,LL} K^\dagger$, where $K$ is the CKM matrix. We also note that our $\delta$-quantities only include the soft parts of the matrix elements of the squark mass matrices, while in ref. [@GGMS] also the $F$-term contributions are included in the definition of the $\delta$-quantities. ![ Dependence of $BR(B\rightarrow X_s\gamma)$ on $\delta_{d,LR,23}$. In the upper frame, only SM and gluino contributions are considered. In the lower frame, the additional contributions from chargino, charged Higgs boson and neutralino are included. The horizontal lines denote the experimental limits. The different lines correspond to different values of $x=m^2_{\tilde{g}}/m^2_{\tilde{q}}$: 0.3 (short-dashed line), 0.5 (long-dashed line), 1 (solid line), and 2 (dot-dashed line). The other parameters are $\mu=300 \,GeV$, $\tan\beta=10$, $M_{H^-}=300 \,GeV$, $M_{\rm{susy}}=500 \,GeV$.[]{data-label="fig:adI"}](ad231pap.nb.epsi){height="8cm"} ![Same as in fig. \[fig:adI\] when $\delta_{d,RL,23}$ is the only non-zero off-diagonal squark mass entry.[]{data-label="fig:adII"}](ad321pap.nb.epsi){height="8cm"} In figs. \[fig:adI\] and \[fig:adII\], we show the dependence of the branching ratio of $B \rightarrow X_s\,\gamma$ on the flavour-violating parameters $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$, respectively. The upper frame in each figure is borrowed from [@BGHW], i.e. we consider only SM and gluino contributions. As $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ generate the five-dimensional dipole operators ${\cal O}_{7\tilde{g},\tilde{g}}$ and ${\cal O}'_{7\tilde{g},\tilde{g}}$, it is not surprising that they get stringently bounded. We should note that at this level of the analysis there is no dependence of these bounds on $\mu$ or $\tan \beta$. Such a dependence could result from the term $(F_{d,LR})_{33}$, but only when $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$ are turned on. We will discuss this point in more detail at the end of this section and in the second part of our analysis. In the lower frame of figs. \[fig:adI\] and \[fig:adII\], we also include the contributions from charginos, charged Higgs bosons and neutralinos. 
Comparing the branching ratio in the two frames at $\delta_{d,LR,23}=0$ and $\delta_{d,RL,23}=0$ (which corresponds to switching off the gluino contribution), one concludes that the combined contribution from charginos, neutralinos and charged Higgs bosons is of the same order as the SM contribution. A detailed investigation shows that the neutralino contribution is negligible, while the contributions from the chargino and charged Higgs boson are similar in magnitude; both interfere constructively with the SM contributions for the specific choice of parameters. However, as the gluino yields, intrinsically, the dominant contribution by far, the bounds $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ are only marginally modified by chargino, neutralino and charged Higgs boson contributions. A comment concerning the different shapes of the curves in figs. \[fig:adI\] and \[fig:adII\] is in order. In fig. \[fig:adII\], with non-vanishing $\delta_{d,RL,23}$, the gluino contribution is induced by the primed-type operator ${\cal O}'_{7\tilde{g},\tilde{g}}$ and therefore does not interfere with the contributions from the other particles, as these induce unprimed operators in the first place. In contrast, in fig. \[fig:adI\], which shows the case of non-zero $\delta_{d,LR,23}$, the gluino contribution is of the unprimed type and therefore interferes with the other contributions. We also tried to derive analogous bounds on $\delta_{d,LL,23}$, $\delta_{d,RR,23}$, $\delta_{u,LR,23}$, $\delta_{u,RL,23}$, $\delta_{u,RR,23}$ and also on $\delta_{u,LR,33}$ and $\delta_{u,LR,22}$. In the chargino sector the latter diagonal elements, together with the usual CKM mechanism, also can induce flavour violation. The parameters of the up-squark mass matrix give rise to chargino contributions that lead only to dimension six dipole operators, which inherently are not very large. For our choices of $\mu$, $M_{\rm{susy}}$ and $\tan \beta$, this was confirmed numerically. Therefore, no stringent bounds are obtained for the soft parameters in the up-squark mass matrix[^2]. The remaining parameters of the down-squark mass matrix, i.e. $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$, play an interesting role. They not only generate contributions to the six-dimensional operators in (\[gmagnopb\]), but, together with the chirality changing term $(F_{d,LR})_{33}$, they also induce contributions to the five-dimensional gluino operators in (\[gmagnopg\]). For the values of $\mu$ and $\tan \beta$ used in our analysis, the coefficients of the five-dimensional operators turn out to be rather small. Thus, no stringent bounds on $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$ are obtained. Summarizing the first part of our analysis, we conclude that $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ are the only parameters that get significantly constrained by the measurement of the branching ratio of $B \rightarrow X_s \gamma$. Second part of analysis ----------------------- We now explore the problem of whether the separate bounds on $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$, obtained in the first part, remain stable if the various soft parameters are varied simultaneously. The analysis is based on the assumption that the soft terms in the squark mass matrices have the hierarchical structure that the diagonal entries in $m^2_{\,d,\,LL}$, $m^2_{\,d,\,RR}$, and $m^2_{\,u,\,RR}$ are larger than the off-diagonal matrix elements (including $m^2_{\,d,\,LR}$ and $m^2_{\,u,\,LR}$). 
In contrast to the first part of the analysis, we will allow for a non-degeneracy of the diagonal elements in the matrices $m^2_{\,d,\,LL}$, $m^2_{\,d,\,RR}$, and $m^2_{\,u,\,RR}$. To implement this, we define $\delta$-quantities in addition to those in eqs. (\[deltadefa\]) and (\[deltadefb\]), which parametrize this non-degeneracy: $$\delta_{f,LL,ii} = \frac{(m^2_{\,f,\,LL})_{ii} - m^2_{\tilde{q}}}{m^2_{\tilde{q}}}\,, \hspace{1.0truecm} \delta_{f,RR,ii} = \frac{(m^2_{\,f,\,RR})_{ii} - m^2_{\tilde{q}}}{m^2_{\tilde{q}}}\,, \label{deltadefc}$$ Unless otherwise stated, the diagonal $\delta$-parameters (in eq. (\[deltadefc\])) are varied in the interval $[-0.2,0.2]$. On the other hand, the off-diagonal ones (in eqs. (\[deltadefa\]) and (\[deltadefb\])) are varied in the interval $[-0.5,0.5]$, by use of a Monte Carlo program. There are, however, two exceptions. First, we do not vary those off-diagonal $\delta$’s with an index $1$; the latter $\delta$’s we set to zero, since they are severely constrained by kaon decays (see for example [@GGMS]). Second, as mentioned earlier, also $(m^2_{u,LR})_{33}$ is not varied, but fixed such that the mass of the lightest neutral Higgs boson gets heavy enough to be compatible with experimental bounds. In our Monte Carlo analysis we plot those events, corresponding to $2.0\times 10^{-4}\leq BR(B \rightarrow X_s \gamma) \leq 4.5\times 10^{-4}$, which is the range allowed by the CLEO measurement. Note that we do not include recent preliminary data [@BSGMEASURE] in our analysis. Furthermore, we have made sure that our events correspond to squark masses that are real and lie above $150 \,GeV$. The dependence of the bounds on this specific choice is discussed below. We start with the following parameter set: $\mu=300\,GeV$, $M_{H^-}=300\, GeV$, $\tan\beta=10$, $M_{\rm{susy}}=500\,GeV$, $x= m_{\tilde{g}}^2 \, / \, M_{\rm{susy}}^2 = 1$ and $X_t=750\,GeV$. In fig. \[fig:ad23ad32\], we only consider SM and gluino contributions. In the left frame we present the constraints on $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ when these are the only flavour-violating soft parameters; the diagonal $\delta$-parameters defined in eq. (\[deltadefc\]) are also switched off. As expected from the first part of our analysis, stringent constraints are obtained. The hole inside the dotted area represents values of $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ for which the branching ratio is too small to be compatible with the measurements. In the right frame we investigate interference effects that arise when $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$ are switched on in addition to $\delta_{d,LR,23}$, $\delta_{d,RL,23}$. All of them are varied between $\pm 0.5$. From fig. \[fig:ad23ad32\] we find that the bounds on $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ cannot be softened significantly by non-zero values of $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$, although these $\delta$-parameters, which individually give rise to six-dimensional operators, generate five-dimensional operators through the interplay with the $F$-term $(F_{d,LR})_{33}$. As already discussed in the first part of the analysis, for moderate values of $\mu$ and $\tan \beta$, the contribution to the Wilson coefficient of the five-dimensional operator is rather small. ![Contours in the $\delta_{d,LR,23}$ - $\delta_{d,RL,23}$ plane, where $\delta_{d,LR,23}$, $\delta_{d,RL,23}$, $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$ are the only flavour-violating parameters; $\delta_{d,LR,22}$, $\delta_{d,LR,33}$ are also non-vanishing. 
We only include SM and gluino contributions; the other parameters are $\mu=300\,GeV$, $\tan\beta=10$, $M_{\rm{susy}}=500\,GeV$ and $x=1$.[]{data-label="fig:ad23ad32q23d23ad22ad33"}](bild4.epsi){height="6cm"} The full power of the interference effects from different sources of flavour violation is depicted in fig. \[fig:ad23ad32q23d23ad22ad33\], where we allow not only for non-zero $\delta_{d,LR,23}$, $\delta_{d,RL,23}$, $\delta_{d,LL,23}$ and $\delta_{d,RR,23}$ but also for non-vanishing $\delta_{d,LR,22}$, $\delta_{d,LR,33}$. All these parameters are varied between $\pm 0.5$. As can be seen, the bounds on $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$ get destroyed dramatically. The reason is that there are now new contributions to the five-dimensional dipole operators. As an example, the combined effect of $\delta_{d,LR,33}$ and $\delta_{d,LL,23}$ leads to a contribution to the Wilson coefficient of the operator ${\cal O}_{7\tilde{g},\tilde{g}}$. The sign of this contribution can be different from the one generated by $\delta_{d,LR,23}$. As a consequence, the bound on $\delta_{d,LR,23}$ gets weakened. To illustrate this more quantitatively, we assume for the moment that there are only these two sources that can generate ${\cal O}_{7\tilde{g},\tilde{g}}$, i.e. we switch off the other $\delta$-quantities. If $\delta_{d,LR,23}$ is larger than the individual bound from the first part of the analysis, it is necessary that the product of $\delta_{d,LR,33}$ and $\delta_{d,LL,23}$ is also relatively large; only in this case can the two sources lead to a branching ratio compatible with experiment. This feature is illustrated in fig. \[fig:linie\]; only values of $\delta_{d,LR,23}$ and values of $\delta_{d,LR,33} \cdot \delta_{d,LL,23}$ which are strongly correlated lead to an acceptable branching ratio. ![The parameters $\delta_{d,LR,23}$ and $\delta_{d,LR,33} \cdot \delta_{d,LL,23}$, which are compatible with the data on $B \to X_s \gamma$ are shown by dots. Values lying on the solid line lead to a vanishing contribution of the five-dimensional operator ${\cal O}_{7\tilde{g},\tilde{g}}$ in the MIA. See text. We only switch on SM and gluino contributions; the other parameters are $\mu=300\,GeV$, $\tan\beta=10$, $M_{\rm{susy}}=500\,GeV$ and $x=1$.[]{data-label="fig:linie"}](linie.epsi){height="6cm"} As clearly visible from fig. \[fig:linie\], the correlation between the two sources for ${\cal O}_{7\tilde{g},\tilde{g}}$, is essentially linear. This implies that the linear combination $$\delta_{d,LR,23} + f \delta_{d,LR,33} \cdot \delta_{d,LL,23} \label{combi}$$ gets constrained, if $f$ is chosen appropriately. Stated differently, the Wilson coefficient of the operator ${\cal O}_{7\tilde{g},\tilde{g}}$ is essentially proportional to the combination (\[combi\]). This implies in turn that for the values of the parameters we are using at the moment ($\mu=300\,GeV$, $M_{H^-}=300\,GeV$, $\tan\beta=10$ $M_{\rm{susy}}=500\,GeV$, $x= m_{\tilde{g}}^2 \, / \, M_{\rm{susy}}^2 = 1$, $X_t=750\,GeV$), the Wilson coefficient is well approximated by its double mass insertion expression. The coefficient $f$, which can be read off from this expression, depends on the parameter $x= m_{\tilde{g}}^2/M^2_{\rm{susy}}$ and reads $$f(x)= \frac{1 + 9 x - 9 x^2 - x^3 +6x(1+x)\log x}{ (1-x)[5x^2-4x-1-2x(x+2)\log x]} \, . \label{eq:f(x)}$$ The numerical values of $f(x)$ for some values of $x$ read $0.74$ for $x=0.3$, $0.68$ for $x=0.5$, $0.60$ for $x=1.0$ and $0.52$ for $x=2.0$, respectively. The solid line in fig. 
\[fig:linie\] represents pairs ($\delta_{d,LR,23}$, $\delta_{d,LR,33} \cdot \delta_{d,LL,23}$) for which the combination in eq. (\[combi\]) is zero. The points scattered around this line therefore represent Monte Carlo events for which this combination is small. We now turn back to the scenario of fig. \[fig:ad23ad32q23d23ad22ad33\] in which all the parameters $\delta_{d,LR,23}$, $\delta_{d,RL,23}$, $\delta_{d,LL,23}$, $\delta_{d,RR,23}$, $\delta_{d,LR,22}$, $\delta_{d,LR,33}$ are varied simultaneously. In this case, the linear combinations $$\begin{aligned} LC_1 & = & \delta_{d,RL,23}+f(x)\delta_{d,RR,23}\cdot\delta_{d,RL,33} +f(x)\delta_{d,RL,22}\cdot\delta_{d,LL,23},\nonumber\\ LC_2 & = & \delta_{d,LR,23}+f(x)\delta_{d,LR,22}\cdot\delta_{d,RR,23} +f(x)\delta_{d,LL,23}\cdot\delta_{d,LR,33}, \label{deflc1lc2}\end{aligned}$$ are expected to get constrained. In fig. \[fig:lc1lc2\] we show the allowed region for $LC_1$ and $LC_2$. There, we allow all non-diagonal $\delta$-parameters to vary between $\pm 0.5$. In addition, we also allow for non-equal diagonal soft entries, by varying the parameters $\delta_{f,LL,ii}$ and $\delta_{f,RR,ii}$ between $\pm 0.2$. With the latter choice we still guarantee the hierarchy between diagonal and off-diagonal entries, but we get rid of the unnatural assumption of degenerate diagonal entries. In the left frame, we include only SM and gluino contributions. We find that the linear combinations $LC_1$ and $LC_2$ indeed get stringently bounded. In the right frame of fig. \[fig:lc1lc2\] we test the stability of these bounds when the additional contributions (i.e., those from charginos, charged Higgs bosons and neutralinos) are turned on. In this case also $\delta_{u,LR,23}$, $\delta_{u,RL,23}$, $\delta_{u,LL,23}$, $\delta_{u,RR,23}$ and $\delta_{u,LR,22}$ are varied in the range $\pm 0.5$. We find that the bound on $LC_1$ remains unchanged, while the one on $LC_2$ gets somewhat weakened. This feature is expected, because charginos and charged Higgs bosons contribute to unprimed operators in the first place. At this point we should stress that these plots were obtained by choosing the renormalization scale $\mu_b=4.8\,GeV$ and by requiring all squark masses to be larger than $150 \,GeV$. We checked that the bounds on $LC_1$ and $LC_2$ remain practically unchanged when the renormalization scale is varied between $2.4\, GeV$ and $9.6\, GeV$; they are also insensitive to the value of the required minimal squark mass, as we found by changing $m_{\rm{squark\,min}}$ from $150\, GeV$ to $100\, GeV$ or $250\, GeV$. Moreover, we also checked whether the restriction to the $\mu=+300\,GeV$ scenario is too severe: we redid the complete analysis for $\mu=-300\,GeV$ and confirmed that there are no differences between the results of these two choices. Two remarks are in order: First, one might wonder why we did not include terms like $\delta_{d,RR,33}\cdot\delta_{d,LR,23}$ in $LC_1$ and $LC_2$, which would result in more complicated combinations. As we are allowing for non-equal diagonal soft entries, these terms give in principle additional contributions to the five-dimensional operators. However, as the diagonal $\delta$-parameters are only varied between $\pm 0.2$, their influence on the Wilson coefficients is numerically small. For this reason, the simpler combinations $LC_1$ and $LC_2$, defined in eqs. 
(\[deflc1lc2\]), are sufficiently constrained and we prefer to give bounds on these quantities.\ Second, if we got rid of the hierarchy of diagonal and off-diagonal entries in the squark mass matrices, stringent bounds on the simple combinations $LC_1$ and $LC_2$ certainly would no longer exist, simply because there would then be more contributions to the five-dimensional operators of similar magnitude. In this case, however, the $full$ Wilson coefficients of the five-dimensional operators still would be stringently constrained by the experimental data on $B \to X_s \gamma$. Unfortunately, in this case not much information can be extracted for the individual soft parameters or simple combinations thereof. Finally, we extend our analysis to other values of the input parameters. So far, we found that the combinations $LC_1$ and $LC_2$ (see eqs. (\[deflc1lc2\])) are stringently bounded in the scenario characterized by the input values $\mu=300 \,GeV$, $M_{H^-}=300\,GeV$, $\tan\beta=10$, $M_{\rm{susy}}=500\,GeV$, $x= m_{\tilde{g}}^2 \, / \, M_{\rm{susy}}^2 = 1$ and $X_t=750\,GeV$. It is conceivable that the bounds on $LC_1$ and $LC_2$ can get considerably weakened in other scenarios. Therefore, we analyse the bounds on the soft parameters within the following parameter sets: ($M_{\rm{susy}}, X_t$) $=$ $(300\,GeV, 470\, GeV)$, $(500\, GeV, 750\, GeV)$, $(1000\, GeV, 1200\, GeV)$. For $\tan\beta $ we explore the values: $\tan \beta = 10,\, 30,\, 50.$ Furthermore, the gluino mass $m_{\tilde{g}}$ is varied over the values $ x = m_{\tilde{g}}^2 \, / \, M_{\rm{susy}}^2 = 0.3\,, \, 0.5\,,\, 1\,,\, 2$. Surprisingly, the constraints on $LC_1$ and $LC_2$ are completely stable over large parts of the parameter space. Within the $\tan \beta = 10$ scenario the bounds are essentially unchanged if the other two parameters $M_{{\rm susy}}$ and $x$, are varied over the complete range of values given above. For example, the independence from the parameter $M_{{\rm susy}}$ within this scenario can be read off from the comparison of frames in the first vertical line in fig. \[allx1\]. However, fig. \[allx1\] also illustrates that the bounds get significantly weakened or even lost when $\tan \beta$ values as large as $30$ (second vertical line) or $50$ (third vertical line) are chosen. This effect gets enhanced when the general mass scale $m_{\tilde{q}}$ in the squark mass matrices decreases with the parameter $M_{\rm susy}$. There are two main reasons why the bounds get weakened in these scenarios. First, in the large $\tan \beta$ regime the term $(F_{d,LR})_{33}$ gets strongly enhanced because of its proportionality to $\tan \beta$ (see (\[FFterm\])). Particularly, for $\tan \beta = 50$ and $M_{{susy}} = 300\, GeV$, the term $(F_{d,LR})_{33}$ is of the same magnitude as the diagonal entries of the squark mass matrix. Thus, the contributions to the Wilson coefficients of the five-dimensional gluino operators (induced by $(F_{d,LR})_{33}$ in combination with $\delta_{d,LL,23}$ or $\delta_{d,RR,23}$) become important enough to weaken the bounds on $LC_1$ and $LC_2$ significantly. The relative importance of this $F$ term is of course increased if the general soft squark mass scale $M_{{\rm susy}}$ is decreased as can be read off from fig. \[allx1\]. Second, within the large $\tan \beta$ regime the contributions from charginos get enhanced and therefore also weaken the bounds on $LC_2$. These features are illustrated in more detail in fig. \[fig:tanbeta50\]. 
In the first frame we take over the specific scenario with $\tan\beta=50$ and $M_{{\rm susy}}=500\, GeV$ from fig. \[allx1\]. To show that the term $(F_{d,LR})_{33}$ is indeed one of the reasons for the weakening of the bounds, we present in the right frame of fig. \[fig:tanbeta50\] the corresponding scenario when $(F_{d,LR})_{33}$ is set to zero. We see that we regain better bounds on $LC_1$ and also on $LC_2$. However, we also see that the bound on $LC_2$ remains weak. This, and the resulting asymmetry, is due to a large chargino contribution for $\tan\beta=50$. We recall that there is no chargino contribution to the primed operator which could influence the bound on $LC_1$. We can also explore how the bounds behave if we vary the parameter $\mu$. Until now we used the value $\mu = 300\, GeV$. Because the parameter $(F_{d,LR})_{33}$ is actually proportional to the product of $\tan \beta$ and $\mu$ (see eq. (\[FFterm\])), we conclude from the findings above that the bound on $LC_1$ is unchanged if we increase the value of $\mu$ and decrease the value of $\tan \beta$ such that the product of both parameters is constant; the bound on $LC_2$ is then even stronger because the chargino contribution is smaller for increasing $\mu$. Consequently, one finds a smaller asymmetry in the corresponding plots (compare the left frame in fig. \[new150\] with the second frame in the second line of fig. \[allx1\]). On the contrary, if one decreases the value of $\mu$ to $\mu = 150\, GeV$, the bound on $LC_2$ is weakened and the asymmetry of the plot is increased as one can read off from the right frame in fig. \[new150\]. Summing up the second part of our analysis, the two simple combinations $LC_1$ and $LC_2$ (\[deflc1lc2\]), consisting of elements of the soft parts of the down squark mass matrices, stay stringently bounded over large parts of the supersymmetric parameter space, excluding the large $\tan \beta$ and the large $\mu$ regime. We note that these new bounds are in general one order of magnitude weaker than the bound on the single off-diagonal element $\delta_{d,LR,23}$, which was derived in previous work [@GGMS; @Masiero2001] by neglecting any kind of interference effects (see e.g. tab. 4 in [@Masiero2001] where the value $1.6 \cdot 10^{-2}$ is given as bound on $\delta_{d,LR,23}$ for $x=1$ and $M_{\rm{susy}}=500\, GeV$). Implications on $\hat{C}_8(\mu_W)$ and $\hat{C}_8'(\mu_W)$ ========================================================== As mentioned in section \[framework\], it is possible to absorb the various versions of gluonic dipole operators into the SM operator ${\cal O}_8$ and its primed counterpart. The resulting effective Wilson coefficients, denoted by $\hat{C}_8(\mu_W)$ and $\hat{C}_8'(\mu_W)$, read at the matching scale $\mu_W$: $$\begin{aligned} \label{combine} \hat{C}_8(\mu_W) &=& C_8(\mu_W) - \left( C_{8b,\tilde{g}}(\mu_W) + \frac{1}{m_b(\mu_W)} \, C_{8\tilde{g},\tilde{g}}(\mu_W) \right) \, \frac{16 \sqrt{2} \, \pi^3 \, \alpha_s(\mu_W)}{G_F \, K_{tb} K_{ts}^*} \nonumber \\ \hat{C}_8'(\mu_W) &=& C_8'(\mu_W) - \left( C_{8b,\tilde{g}}'(\mu_W) + \frac{1}{m_b(\mu_W)} \, C_{8\tilde{g},\tilde{g}}'(\mu_W) \right) \, \frac{16 \sqrt{2} \, \pi^3 \, \alpha_s(\mu_W)}{G_F \, K_{tb} K_{ts}^*} \quad .\end{aligned}$$ The coefficients on the r.h.s. of eq. (\[combine\]) are given explicitly in section \[Wilson\] (appendix \[ami\]). 
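As a purely illustrative aid, the bookkeeping of eq. (\[combine\]) can be written out in a few lines of code. All numerical inputs below are hypothetical placeholders (they are not the values used in our figures); in an actual evaluation the Wilson coefficients would be taken from the expressions in appendix \[ami\]:

```python
import math

# placeholder inputs -- in a real evaluation these come from eqs. (C8) and (glgl)
alpha_s  = 0.12          # alpha_s(mu_W), illustrative
G_F      = 1.16637e-5    # Fermi constant in GeV^-2
K_tb_Kts = -0.040        # K_tb * K_ts^*, illustrative real value
m_b      = 2.9           # running b-quark mass at mu_W in GeV, illustrative
C8       = -0.10         # C_8(mu_W), illustrative
C8b_gl   = 1.0e-8        # C_{8b,gluino}(mu_W) in GeV^-2, illustrative
C8gl_gl  = -5.0e-6       # C_{8 gluino,gluino}(mu_W) in GeV^-1, illustrative

# absorb the gluino dipole coefficients into the effective coefficient, eq. (combine)
prefactor = 16.0 * math.sqrt(2.0) * math.pi**3 * alpha_s / (G_F * K_tb_Kts)
C8_hat = C8 - (C8b_gl + C8gl_gl / m_b) * prefactor
print(C8_hat)
```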
We now investigate the implications for the possible values of the effective Wilson coefficients $\hat{C}_8(\mu_W)$ and $\hat{C}_8'(\mu_W)$ when taking into account the experimental constraints on $B \to X_s \gamma$. The result is shown in fig. \[fig:c08\], for $\mu=300\,GeV$, $M_{H^-} = 300\,GeV$, $M_{\rm{susy}} = 500\,GeV$, $X_t = 750\,GeV$, $\tan \beta = 10$ and $x=1$. The soft parameters, encoded in the $\delta$ quantities, are varied as in fig. \[fig:lc1lc2\]. From fig. \[fig:c08\] we conclude that large deviations from the SM values for $\hat{C}_8(\mu_W)$ and $\hat{C}_8'(\mu_W)$ are still possible. Scenarios in which these Wilson coefficients are enhanced with respect to the SM have gained a lot of attention in recent years. For a long time the theoretical predictions for both the inclusive semileptonic branching ratio ${\cal B}_{\rm{sl}}$ and the charm multiplicity $n_c$ in $B$-meson decays were considerably higher than the experimental values [@Bigi_Falk]. An attractive hypothesis, which would move the theoretical predictions for both observables in the direction favoured by the experiments, assumed the Wilson coefficients $\hat{C}_8(\mu_W)$ and $\hat{C}_8'(\mu_W)$ to be enhanced by new physics [@Kagan]. After the inclusion of the complete NLL corrections to the decay modes $b \to c \overline{u} q$ and $b \to c \overline{c} q$ ($q=d,s$) [@Bagan], the theoretical predictions for the central values of the semileptonic branching ratio and the charm multiplicity [@Sachrajda] are still somewhat higher than the present measurements [@Japantalk], but theory and experiment are in agreement within the errors. It should be stressed, however, that in the theoretical error estimate the renormalization scale was varied down to $m_b/4$. If one only considers variations down to $m_b/2$, the theoretical predictions have only a marginal overlap with the data. This implies that there is still room for enhanced $\hat{C}_8(\mu_W)$ and $\hat{C}_8'(\mu_W)$ [@Liniger]. Summary ======= In this paper we have chosen the rare decay $B \rightarrow X_s \gamma$ to analyse the importance of interference effects for the bounds on the parameters in the squark mass matrices within the unconstrained MSSM. Our analysis, based on a systematic leading logarithmic (LL) QCD treatment, mainly explored the interplay between the various sources of flavour violation and the interference effects of SM, gluino, chargino, neutralino, and charged Higgs boson contributions. Surprisingly, such an analysis had not been carried out before. Unlike previous work, which used the mass insertion approximation, we used in our analysis the mass eigenstate formalism, which remains valid even when some of the intergenerational mixing elements are large. In former analyses no correlations between the different sources of flavour violation were taken into account. Following that approach (i.e., switching on only one off-diagonal element at a time), we found only two $down$-type squark mass entries to be significantly constrained by the data on $B \rightarrow X_s \gamma$: $\delta_{d,LR,23}$ and $\delta_{d,RL,23}$. These entries are associated with the five-dimensional dipole operators, in which the chirality flip is induced by the gluino mass. We showed that these bounds get destroyed in scenarios in which certain off-diagonal elements of the squark mass matrices are switched on simultaneously. We then systematically explored the interference effects from all possible contributions and sources of flavour violation within the unconstrained MSSM. 
Accordingly, we switched on all off-diagonal elements $\delta_i$ of the squark mass matrices and varied them in the range $\pm 0.5$. In addition, we also varied the diagonal elements, but in smaller interval in order to preserve a certain hierarchy between the off-diagonal and the diagonal ones. In this general scenario we singled out two simple combinations of elements of the soft part of the down squark mass matrix, which stay stringently bounded over large parts of the supersymmetric parameter space, excluding the large $\tan \beta$ and the large $\mu$ regime. These new bounds are in general one order of magnitude weaker than the bound on the single off-diagonal element $\delta_{d,LR,23}$, which was derived in previous work [@GGMS; @Masiero2001] by neglecting any kind of interference effects. Finally, we briefly analysed up to which values SUSY contributions, compatible with $B \rightarrow X_s \gamma$, can enhance the Wilson coefficients $\hat{C}_8 (m_W)$ and $\hat{C}'_8 (m_W)$. We found that large deviations from the SM values are still possible in our general setting. Such scenarios are of particular interest within the phenomenology of inclusive charmless hadronic $B$ decays. We thank Sven Heinemeyer, Shaaban Khalil and Georg Weiglein for discussions. $~$ {#ami} === Mixing matrices, interacting Lagrangian {#am} --------------------------------------- In this appendix we present our conventions in the mass mixing matrices for the relevant particles and in the interacting Lagrangian. Besides some specific changes, we follow [@Andreas]: [**Charged Higgs bosons:**]{} If we denote the two $SU(2)$ Higgs boson doublets appearing in the superpotential by $$H_1=\left(\begin{array}{c}H_1^1\\ H_{1}^{2}\end{array} \right) \, , \hskip 2cm H_2=\left(\begin{array}{c}H_2^1\\ H_{2}^{2}\end{array} \right),$$ the corresponding mass eigenstates $H_1^{+\,-}$ and $H_2^{+\,-}$ of the charged Higgs bosons are given by (see [@Rosiek]) $$\left(\begin{array}{c}H_2^{1\,\ast}\\H_1^2\end{array}\right) =\underbrace{\left(\begin{array}{cc}\sin\beta & \cos\beta \\ -\cos\beta & \sin\beta \end{array}\right)}_{Z_E} \hskip 0.3cm \left(\begin{array}{c}H_2^{-}\\H_1^-\end{array}\right) \, ,$$ and similarly for $H_2^+=(H_2^-)^\ast$ and $H_1^+=(H_1^-)^\ast$.\ In the unitary (physical) gauge, the massless charged fields $H_2^{+\,-}$ are absorbed by the $W$ boson. One is left with two massive charged Higgs bosons of equal mass. [**Charginos:**]{} The charginos $\chi^{ch}_{1/2}$ are a mixture of charged gauginos $\lambda^{\pm}$ and Higgsinos $h_{1}^{-}$ and $h_{2}^{+}$. 
Defining $$\psi^{+}=\left(\begin{array}{c}-i\lambda^{+}\\ h_{2}^{+}\end{array} \right) \, , \hskip 2cm \psi^{-}=\left(\begin{array}{c}-i\lambda^{-}\\ h_{1}^{-}\end{array} \right) \, ,$$ the mass terms are then ${\cal L}_{m}^{ch}= -\frac{1}{2}(\psi^{+T}X^{T}\psi^{-}+\psi^{-T}X\psi^{+})+\mbox{h.c.}$, where $$\label{Xmat} X=\left(\begin{array}{cc}M_{2}&g_{2}v_{2}\\ g_{2}v_{1}&\mu \end{array}\right).$$ The two-component charginos $\chi^{\pm}_{i}\;(i=1,2)$ and the four-component charginos $\chi^{ch}_{1/2}$ are then defined as $$\renewcommand{\arraystretch}{1.5} \begin{array}{llllll} \chi_{i}^{+}&=&V_{ij}\psi_{j}^{+},&\chi_{i}^{-}&=&U_{ij}\psi_{j}^{-},\\ \chi^{ch}_{1}&=&\left(\begin{array}{c} \chi_{1}^{+}\\ \overline{\chi_{1}^{-}}\end{array}\right), &\chi_{2}^{ch}&=& \left(\begin{array}{c}\chi_{2}^{+}\\ \overline{\chi_{2}^{-}}\end{array}\right), \end{array} \label{M_2}$$ where the unitary matrices $U$ and $V$ diagonalize $X$: $M_{D}^{ch}=U^{*}XV^{-1}=VX^{\dagger}U^{T}$. ${\cal L}^{ch}_{m}$ then becomes ${\cal L}^{ch}_m=-M^{ch}_{D\,11}\:\overline{\chi_{1}^{ch}}\chi^{ch}_{1} -M^{ch}_{D\,22}\:\overline{\chi^{ch}_{2}}\chi^{ch}_{2}.$ $U$ and $V$ can be found by observing that\ $M^{ch\,2}_{D}=VX^{T}XV^{-1}=U^{*}XX^{T}U^{*-1}.$ They are not fixed completely by these conditions. The freedom can be used to arrange the elements of $M^{ch}_{D}$ to be positive: If the $i^{th}$ eigenvalue of $M^{ch}_{D}$ is negative, simply multiply the $i^{\rm th}$ row of $V$ with $-1$. [**Neutralinos:**]{} The neutralinos are linear combinations of the gauginos $\lambda'$ and $\lambda_{3}$ and the neutral Higgsinos $h_{1}^{0}$ and $h_{2}^{0}$. If we define $$\psi^{0}=\left(\begin{array}{c} -i\lambda'\\-i\lambda_{3}\\h_{1}^{0} \\h_{2}^{0}\end{array}\right) \, ,$$ the neutralino mass term reads ${\cal L}^{0}_{m}=-\frac{1}{2}\psi^{0T}Y\psi^{0}+\mbox{h.c.},$ where $$Y=\left(\begin{array}{cccc} M_{1}&0&-\frac{g_{1}v_{1}}{\sqrt{2}}&\frac{g_{1}v_{2}}{\sqrt{2}}\\ 0&M_{2}&\frac{g_{2}v_{1}}{\sqrt{2}}&-\frac{g_{2}v_{2}}{\sqrt{2}}\\ -\frac{g_{1}v_{1}}{\sqrt{2}}&\frac{g_{2}v_{1}}{\sqrt{2}}&0&-\mu\\ \frac{g_{1}v_{2}}{\sqrt{2}}&-\frac{g_{2}v_{2}}{\sqrt{2}}&-\mu&0 \end{array} \right).$$ Two- and four-component neutralinos must be defined as $$\renewcommand{\arraystretch}{1.5} \begin{array}{l} \tilde{\chi}^{0}_{i}=N_{ij}\psi^{0}_{j} \, , \hskip 1.5cm (i=1,\ldots ,4)\\ \chi^{0}_{i}=\left(\begin{array}{c}\tilde{\chi}^{0}_{i} \\ \overline{\tilde \chi^{0}_{i}} \end{array}\right) \, . \end{array}$$ To diagonalize the mass matrix, $N$ must obey $N_{D}=N^{*}YN^{-1}$, where $N_{D}$ is a diagonal matrix. $N$ can be found, using the property $N_{D}^{2}=NY^{\dagger}YN^{-1}.$ The eigenvalues and eigenvectors are found numerically. Possible negative entries in $N_{D}$ are turned positive by multiplying the corresponding row of $N$ by a factor of $i$. [**Quarks:**]{} The situation in the quark sector is in almost complete analogy to that of the SM. The quarks get their masses from the Yukawa potential when the Higgs bosons acquire a vacuum expectation value. We define the mass eigenstates by $$\begin{array}{ll} u_{Li}^{(m)}=U^{L}_{ij}u_{Lj},&u_{Ri}^{(m)}=U^{R}_{ij}u_{Rj},\\ d_{Li}^{(m)}=D^{L}_{ij}d_{Lj},&d_{Ri}^{(m)}=D^{R}_{ij}d_{Rj}. 
\end{array}$$ The mixing matrices must satisfy ($i=1,2,3$) $$\begin{array}{rlllll} D^{R}\lambda^{dT}D^{L\dagger}&=&\lambda^{d}_{D}&=&\mbox{diag}\left( \frac{m_{di}}{v_{1}}\right),&\\ U^{R}\lambda^{uT}U^{L\dagger}&=&\lambda^{u}_{D}&=&\mbox{diag}\left( \frac{m_{ui}}{v_{2}}\right),& \end{array}$$ where $$\begin{array}{lll} v_1&=&\sqrt{2}\frac{m_W}{g_2}\cos \beta,\\ v_2&=&\sqrt{2}\frac{m_W}{g_2}\sin \beta. \end{array}$$ As can be seen, the eigenvalues of $\lambda^{u}$ and $\lambda^{d}$ are fixed by the quark masses and the minimum of the Higgs potential. In the SM, the only observable effect of the mixing is encoded in the CKM matrix $K=U^{L}D^{L\dagger}$, appearing in the charged current. Therefore it is possible and convenient to set $D^{L}=D^{R}=U^{R}=\openone\,\, (\Rightarrow U^{L}=K)$. To be more precise, $\lambda^{d}$ and $\lambda^e$ are chosen to be diagonal and $\lambda^{u}=\mbox{diag}\left(\frac{m_{ui}}{v_{2}}\right) K^{T}$. Although in our theory the mixing matrices appear in all kinds of combinations, we adopt this convention here, emphasizing that it is a *choice* made just for convenience. An underlying theory should fix the values of $\lambda^{u}$ and $\lambda^{d}$ at some (high) scale. Note that in the main text we neglect the superscript $m$ for the mass eigenstates. [**Squarks:**]{} If supersymmetry were not broken, squarks would be rotated to their mass basis with the help of the same matrices as their fermionic partners. In a more realistic setting we need to introduce a further set of unitary rotation matrices. The notation must be set up carefully because the mass eigenstates of squarks and sleptons are linear combinations of the partners of the *left-* and *right-*handed components of the corresponding fermions. The exact form of the mass matrices and the notation for the corresponding diagonalization matrices can be found in section \[SMM\].\ [**Interaction Lagrangian:**]{} In order to fix further conventions we quote the relevant parts of the interaction Lagrangian: - [Charged Higgs boson-quark-quark]{} $$\begin{aligned} \label{Higgs} {\cal L}_{qqH}&=&(\lambda_D^d)_{m\ell}K_{mk}\overline{d_\ell}P_L u_{k}(Z_E)_{2\,n}H^-_i\, +\,(\lambda_D^d)_{m\ell}K_{mk}^\ast\overline{u_k}P_R d_{\ell}(Z_E)_{2\,n}^\ast(H^-_i)^\ast \nonumber \\ & + & (\lambda_D^u)_{m\ell}\overline{u_\ell}P_L d_{m}(Z_E)_{1\,n}^\ast(H^-_i)^\ast\, +\,(\lambda_D^u)_{m\ell}\overline{d_m}P_R u_{\ell}(Z_E)_{1\,n}H^-_i.\end{aligned}$$ Note that in our basis, the terms proportional to $\lambda_D^d$ always come together with the CKM matrix $K$, while the $\lambda_D^u$ terms do not. - [Squark-quark-chargino]{} $$\begin{aligned} \label{QSQCH} {\cal L}_{\tilde qq\chi^{ch}}&=& \tilde u_{j}\overline{d_{i}} \left[A^d_{ij\ell}P_L +B^d_{ij\ell}P_R \right]\chi^{ch\;c}_\ell +\tilde u_i^\dagger \overline{\chi^{ch\;c}_\ell} \left[A^{d\dagger}_{ij\ell}P_R +B^{d\dagger}_{ij\ell}P_L\right]d_j \, , \end{aligned}$$ where $A^d_{ij\ell}=(\lambda^d_D\Gamma^{\dagger}_{UL})_{ij}U^*_{\ell 2}$, $B^d_{ij\ell}=(K^\dagger\lambda^u_D\Gamma^{\dagger}_{UR})_{ij}V_{\ell 2} -g_2\Gamma^{\dagger}_{ULij} V_{\ell 1}$, $P_{L/R}=\frac{1}{2}(1\mp\gamma^5)$ and $\chi^{ch\;c}_\ell$ denotes the charge-conjugated field. 
- [Squark-quark-neutralino]{} $$\begin{aligned} \label{QSQNE} {\cal L}_{\tilde qq\chi^0}&=&-\tilde d_j\overline{d_i}\left[C^d_{ij\ell}P_L +D^d_{ij\ell}P_R\right]\chi^0_\ell -\tilde d^\dagger_i\overline{\chi^0_\ell}\left[C^{d\dagger}_{ij\ell}P_R +D^{d\dagger}_{ij\ell}P_L\right]d_j \, ,\end{aligned}$$ where $C^d_{ij\ell}=(\lambda^d_D\Gamma^{\dagger}_{DL})_{ij}N^*_{\ell 3} -\sqrt{2}g_1Q_d\Gamma^{\dagger}_{DRij}N^*_{\ell 1}$,\ $D^d_{ij\ell}=(\lambda^d_D\Gamma^{\dagger}_{DR})_{ij}N_{\ell 3}$ $+\frac{1}{\sqrt{2}}\Gamma^{\dagger}_{DLij} ((2Q_d+1)g_1N_{\ell 1}-g_2 N_{\ell 2})$. Wilson coefficients {#Wilson} ------------------- We recall the Wilson coefficients at the matching scale $\mu_W$. The non-vanishing Wilson coefficients for the SM are, at leading order in $\alpha_s$ ($x_{tw} \equiv m_t^2/m_W^2$) : $$\begin{aligned} C_{2SM}(\mu_W) & = & 1 \nonumber \\[1.5ex] C_{7SM}(\mu_W) & = & \frac{x_{tw}}{24\,(x_{tw}-1)^4} \, \left( {-8x_{tw}^3+3x_{tw}^2+12x_{tw}-7+(18x_{tw}^2-12x_{tw}) \ln x_{tw}} \right) \nonumber \\[1.5ex] C_{8SM}(\mu_W) & = & \frac{x_{tw}}{\ 8\,(x_{tw}-1)^4} \, \left( {-x_{tw}^3+6x_{tw}^2-3x_{tw}-2-6x_{tw} \ln x_{tw}} \right) \,. \label{wclosm} \end{aligned}$$ The contributions from charginos, neutralinos and charged Higgs bosons match onto the (chromo)magnetic operators of the SM and the corresponding primed operators, which differ from the SM ones only by their chirality structure. The corresponding Wilson coefficients become somewhat involved [@Andreas] as they include many mixing matrices, whose definitions were given in appendix A1. One gets (using the abbreviation $V \doteq (4 G_F \, K_{tb} K_{ts}^*)/\sqrt{2}$) $$\begin{aligned} C_{7}(\mu_W) & = & C_{7\mathit{SM}}(\mu_W) \nonumber \\ & - & \frac{1}{2} \, [\cot^2\beta\, x_{tH}(Q_{u}F_1(x_{tH})+F_2(x_{tH}))+ \nonumber \\ &&x_{tH}(Q_uF_3(x_{tH})+F_4(x_{tH}))] + \nonumber \\ & + &\frac{1}{2V}\frac{1}{m^2_{\tilde u_j}}B^d_{2j\ell}B^{d*}_{3j\ell}\left[F_1(x_{\chi^{ch}_\ell u_j})+ Q_{u}F_2(x_{\chi^{ch}_{\ell u_j}})\right] \nonumber\\ &+&\frac{1}{2V}\frac{1}{m^2_{\tilde u_j}}\frac{m_{\chi^{ch}_\ell}}{m_b}B^d_{2j\ell}A^{d*}_{3j\ell} \left[F_3(x_{\chi^{ch}_\ell u_j})+Q_u F_4(x_{\chi^{ch}_\ell u_j})\right] \nonumber\\ &+& \frac{Q_d}{2V}\frac{1}{m^2_{\tilde d_j}}\left[D^d_{2j\ell}D^{d*}_{3j\ell}F_2(x_{\chi^0_\ell \tilde d_j})+\frac{m_{\chi^0_\ell}}{m_b}D^d_{2j\ell} C^{d*}_{3j\ell}F_4(x_{\chi^0_\ell \tilde d_j})\right]\label{C7}\\ C_{8}(\mu_W) & = & C_{8\mathit{SM}}(\mu_W) \nonumber \\ & - & \frac{1}{2} \, [\cot^2\beta\, x_{tH}F_1(x_{tH})+x_{tH}F_3(x_{tH})] \nonumber \\ & + & \frac{1}{2V}\frac{1}{m^2_{\tilde u_j}}\left[B^d_{2j\ell}B^{d*}_{3j\ell}F_2(x_{\chi^{ch}_\ell u_j}) +\frac{m_{\chi^{ch}_\ell}}{m_b}B^d_{2j\ell}A^{d*}_{3j\ell} F_4(x_{\chi^{ch}_\ell u_j}) \right] \nonumber \\ & + & \frac{1}{2V}\frac{1}{m^2_{\tilde d_j}}\left[D^d_{2j\ell}D^{d*}_{3j\ell}F_2(x_{\chi^0_\ell d_j}) +\frac{m_{\chi^0}}{m_b}D^d_{2j\ell}C^{d*}_{3j\ell}F_4(x_{\chi^0_\ell d_j}) \right]\label{C8} \\ C_{7}^{\prime}(\mu_W) & = & -\frac{1}{2} \, \frac{m_sm_b}{m_t^2}\tan^2\beta\, x_{tH}(Q_{u}F_1(x_{tH})+F_2(x_{tH})) \nonumber \\ & + &\frac{1}{2V}\frac{1}{m^2_{\tilde u_j}}A^d_{2j\ell}A^{d*}_{3j\ell}\left[F_1(x_{\chi^{ch}_\ell u_j})+Q_{u}F_2(x_{\chi^{ch}_\ell u_j})\right] \nonumber\\ &+&\frac{1}{2V}\frac{1}{m^2_{\tilde u_j}} \frac{m_{\chi^{ch}_\ell}}{m_b}A^d_{2j\ell}B^{d*}_{3j\ell} \left[F_3(x_{\chi^{ch}_\ell u_j})+Q_u F_4(x_{\chi^{ch}_\ell u_j})\right] \nonumber \\ & + & \frac{Q_d}{2V}\frac{1}{m^2_{\tilde d_j}}\left[C^d_{2j\ell}C^{d*}_{3j\ell}F_2(x_{\chi^0_\ell d_j}) 
+\frac{m_{\chi^0_\ell}}{m_b}C^d_{2j\ell} D^{d*}_{3j\ell}F_4(x_{\chi^0_\ell d_j}) \right]\label{C7'} \\ C_{8}^{\prime}(\mu_W) & = & -\frac{1}{2} \, \frac{m_sm_b}{m_t^2}\tan^2\beta\, x_{tH}F_1(x_{tH}) \nonumber \\ & + & \frac{1}{2V}\frac{1}{m^2_{\tilde u_j}}\left[A^d_{2j\ell}A^{d*}_{3j\ell}F_2(x_{\chi^{ch}_\ell u_j}) +\frac{m_{\chi^{ch}_\ell}}{m_b}A^d_{2j\ell}B^{d*}_{3j\ell} F_4(x_{\chi^{ch}_\ell u_j}) \right] \nonumber \\ & + & \frac{1}{2V}\frac{1}{m^2_{\tilde d_j}}\left[C^d_{2j\ell}C^{d*}_{3j\ell}F_2(x_{\chi^0_\ell d_j}) +\frac{m_{\chi^0_\ell}}{m_b}C^d_{2j\ell}D^{d*}_{3j\ell} F_4(x_{\chi^0_\ell d_j}) \right] \, , \label{C8'}\end{aligned}$$ where $Q_u=2/3$ and $Q_d=-1/3$. We kept the charged Higgs boson contribution to the primed operators since they are proportional to $\tan^2 \beta$ which could compensate the $m_s/m_t$ suppression. The functions $F_i(x)$ are defined at the end of this section. Although the Wilson coefficients $C_7'(\mu_b)$ and $C_8'(\mu_b)$ of the primed operators are usually small, we retain them in our analysis. Among the coefficients arising from the virtual exchange of a gluino, the most important ones are those associated with the (chromo)magnetic operators: $$\begin{aligned} C_{7b,\tilde{g}}(\mu_W) & = & \ \ -\frac{Q_d}{16 \pi^2} \, \frac{4}{3} \sum_{k=1} ^6 \frac{1}{m_{\tilde{d}_k}^2} \left( \Gamma_{DL}^{kb} \, \Gamma_{DL}^{\ast\,ks} \right) F_2(x_{gd_k}) \, , \nonumber \\ C_{7\tilde{g},\tilde{g}}(\mu_W) & = & m_{\tilde g}\, \frac{Q_d}{16 \pi^2} \, \frac{4}{3} \sum_{k=1} ^6 \frac{1}{m_{\tilde{d}_k}^2} \left( \Gamma_{DR}^{kb} \, \Gamma_{DL}^{\ast\,ks} \right) F_4(x_{gd_k})\, , \nonumber \\ C_{8b,\tilde{g}}(\mu_W) & = & \ \ -\frac{1}{16 \pi^2} \sum_{k=1} ^6 \frac{1}{m_{\tilde{d}_k}^2} \left( \Gamma_{DL}^{kb} \, \Gamma_{DL}^{\ast\,ks} \right) \, \left[ - \frac{1}{6} F_2(x_{gd_k}) - \frac{3}{2} F_1(x_{gd_k}) \right] \, , \nonumber \\ C_{8\tilde{g},\tilde{g}}(\mu_W) & = & m_{\tilde g}\, \frac{1}{16 \pi^2} \sum_{k=1} ^6 \frac{1}{m_{\tilde{d}_k}^2} \left( \Gamma_{DR}^{kb} \, \Gamma_{DL}^{\ast\,ks} \right) \, \left[- \frac{1}{6} F_4(x_{gd_k}) - \frac{3}{2} F_3(x_{gd_k}) \right] \,. \label{glgl} \end{aligned}$$ Note that the coefficients $C_{7\tilde{g},\tilde{g}}(\mu_W)$ and $C_{8\tilde{g},\tilde{g}}(\mu_W)$ are of higher dimensionality to compensate the lower dimensionality of the corresponding operators. The ratios $x_{gd_k}$ are defined as $x_{gd_k} \equiv m_{\tilde g}^2/m_{\tilde{d}_k}^2$. The Wilson coefficients of the corresponding primed operators (which are not small numerically) are obtained through the interchange $\Gamma_{DR}^{ij} \leftrightarrow \Gamma_{DL}^{ij} $ in eqs. (\[glgl\]). For the Wilson coefficients of the scalar/tensorial four-quark operators we refer to [@BGHW]. Finally, we define the functions $F_i$ appearing in the Wilson coefficients listed above: \[functions\] $$\begin{aligned} F_1(x) &\quad = \quad & \frac{1}{ 12\, (\!x-1)^4} \left( x^3 -6x^2 +3x +2 +6x\log x\right) \, , \nonumber \\ & & \nonumber \\ F_2(x) & \quad = \quad & \frac{1}{ 12\, (\!x-1)^4} \left(2x^3 +3x^2 -6x +1 -6x^2\log x\right) \, , \nonumber \\ & & \nonumber \\ F_3(x) & \quad = \quad & \frac{1}{\phantom{1} 2\, (\!x-1)^3} \left( x^2 -4x +3 +2\log x\right) \, , \nonumber \\ & & \nonumber \\ F_4(x) & \quad = \quad & \frac{1}{ \phantom{1} 2\, (\!x-1)^3} \left( x^2 -1 -2x\log x\right) \, . \label{loopfunc}\end{aligned}$$ [100]{} A.H. Chamseddine, R. Arnowitt and P. Nath, Phys. Rev. Lett. [**49**]{} 970 (1982); R. Barbieri, S. Ferrara and C.A. Savoy, Phys. Lett. 
[**B119**]{} 343 (1982); L. J. Hall, J. Lykken and S. Weinberg, Phys. Rev. [**D27**]{} 2359 (1983). M. Dine, W. Fischler, and M. Srednicki, Nucl. Phys. [**B 189**]{} 575 (1981); S. Dimopoulos and S. Raby, Nucl. Phys. [**B 192**]{} 353 (1981); L. Alvarez-Gaumé, M. Claudson and M. Wise, Nucl. Phys. [**B 207**]{} 96 (1982); M. Dine and A.E. Nelson, Phys. Rev. [**D48**]{} 1277 (1993); M. Dine, A.E. Nelson, and Y. Shirman, Phys. Rev. [**D51**]{} 1362 (1995); M. Dine, A.E. Nelson, Y. Nir, and Y. Shirman, Phys. Rev. [**D53**]{} 2658 (1996). G.F. Giudice, M.A. Luty, H. Murayama and R. Rattazzi, JHEP [**9812**]{} 027 (1998); L. Randall and R. Sundrum, Nucl. Phys. B [**557**]{} 79 (1999). M. Dine, R.G. Leigh and A. Kagan, Phys. Rev. [**D48**]{} 4269 (1993); S. Dimopoulos and G.F. Giudice, Phys. Lett. [**B357**]{} 573 (1995); A. Pomarol and D. Tommasini, Nucl. Phys. [**B466**]{} 3 (1996); A.G. Cohen, D.B. Kaplan and A.E. Nelson, Phys. Lett. [**B388**]{} 588 (1996); R. Barbieri, G. Dvali and L.J. Hall, Phys. Lett. [**B377**]{} 76 (1996). CLEO Collaboration, M. S. Alam [ et al.]{}, Phys. Rev. Lett. [**74**]{} 2885 (1995); CLEO Collaboration, S. Ahmed [ et al.]{}, hep-ex/9908022; ALEPH Collaboration, R. Barate [ et al]{}, Phys. Lett. B [**429**]{} 169 (1998); BELLE Collaboration, talk by M. Nakao at ICHEP 2000, Osaka, July 2000; Preliminary new results presented by T. Taylor for the BELLE Collaboration and by F. Blanc for the CLEO Collaboration can be found at program homepage of the XXXVI Rencontres de Moriond, March 11-17-2001: $http://moriond.in2p3.fr/EW/2001/program.html$. A. Ali and C. Greub, Zeit. f. Phys. [**C60**]{} 433 (1993); N. Pott, Phys. Rev. [**D54**]{} 938 (1996); C. Greub, T. Hurth abd D. Wyler, Phys. Lett. [**B380**]{} 385 (1996); Phys. Rev. [**D54**]{} 3350 (1996); K. Adel and Y.P. Yao, Phys. Rev. [**D49**]{} 4945 (1994); C. Greub and T. Hurth, Phys. Rev. [**D56**]{} 2934 (1997); K. Chetyrkin, M. Misiak and M. M[ü]{}nz, Phys. Lett. [**B400**]{} 206 (1997); Erratum-ibid.  [**B425**]{}, 414 (1997). G. Degrassi, P. Gambino and G.F. Giudice, JHEP[**0012**]{} 009 (2000); M. Carena, D. Garcia, U. Nierste and C.E. Wagner, Phys. Lett. B [**499**]{}, 141 (2001); W. de Boer, M. Huber, A.V. Gladyshev and D.I. Kazakov, hep-ph/0102163. S. Bertolini, F. Borzumati, A. Masiero and G. Ridolfi, Nucl. Phys. [**B353**]{} 591 (1991). F. Gabbiani, E. Gabrielli, A. Masiero and L. Silvestrini, Nucl. Phys. [**B477**]{} 321 (1996). J.F. Donoghue, H.P. Nilles and D. Wyler, Phys. Lett. [**B128**]{} 55 (1983). J.S. Hagelin, S. Kelley and T. Tanaka, Nucl. Phys. [**B415**]{} 293 (1994). F. Borzumati, C. Greub, T. Hurth and D. Wyler, Phys. Rev. D [**62**]{} 075005 (2000); Nucl. Phys. Proc. Suppl. [**86**]{} 503 (2000); hep-ph/9912420. J. A. Casas and S. Dimopoulos, Phys. Lett. [**B387**]{} 107 (1996); H. Baer, M. Brhlik and D. Castano, Phys. Rev. D [**54**]{} 6944 (1996). M. Claudson, L.J. Hall and I. Hinchliffe, Nucl. Phys. B [**228**]{} 501 (1983); A. Kusenko and P. Langacker, Phys. Lett. B [**391**]{} 29 (1997); A. Kusenko, P. Langacker and G. Segre, Phys. Rev. D [**54**]{} 5824 (1996). Th. Besmer and A. Steffen, Phys. Rev. D [**63**]{} 55007 (2001). J. Rosiek, Phys. Rev. [**D41**]{} 3464 (1990). $http://lephiggs.web.cern.ch/LEPHIGGS/papers/osaka note.ps.$ S. Heinemeyer, W. Hollik and G. Weiglein, hep-ph/0002213. M.B. Causse and J. Orloff, hep-ph/0012113. A. Masiero and O. Vives, hep-ph/0104027. I. Bigi et al., Phys. Lett. [**B323**]{} 408 (1994);\ A. Falk, M.B. Wise and I. Dunietz, Phys. Rev. 
[**D51**]{} 1183 (1995);\ I. Dunietz et al., Eur. Phys. J. [**C1**]{} 211 (1998); H. Yamamoto, hep-ph/9912308. A. L. Kagan and J. Rathsman, hep-ph/9701300;\ A. L. Kagan, in: Proceedings of the 2nd International Conference on B Physics and CP Violation, Honolulu, Hawaii, USA, March 1997 and hep-ph/9806266. E. Bagan et al., Nucl. Phys. [**B432**]{} 3 (1994); Phys. Lett. [**B 342**]{} 362 (1995); Erratum:[**374**]{} 363 (1996); E. Bagan et al., Phys. Lett. [**B 351**]{} 546 (1995) M. Neubert and C.T. Sachrajda, Nucl. Phys. [**B483**]{} 339 (1997). A. Golutvin, plenary talk given at the XXXth International Conference on High Energy Physics, Osaka, Japan, July 2000. C. Greub and P. Liniger, Phys. Lett. [**B494**]{} 237 (2000); Phys. Rev. [**D63**]{} 054025 (2001). [^1]: Work partially supported by Schweizerischer Nationalfonds [^2]: In [@newpaper] the authors derived a rather stringent bound on a quantity proportional to $\delta_{u,LR,33}$ in the case of a small chargino mass of $100 \,GeV$. However, they include the small CKM factor $K_{ts}^* K_{tb} \approx 1/30$ in the definition of their quantity.
--- abstract: 'We investigate quasi-Monte Carlo integration using higher order digital nets in weighted Sobolev spaces of arbitrary fixed smoothness $\alpha \in {\mathbb{N}}$, $\alpha \ge 2$, defined over the $s$-dimensional unit cube. We prove that randomly digitally shifted order $\beta$ digital nets can achieve the convergence of the root mean square worst-case error of order $N^{-\alpha}(\log N)^{(s-1)/2}$ when $\beta \ge 2\alpha$. The exponent of the logarithmic term, i.e., $(s-1)/2$, is improved compared to the known result by Baldeaux and Dick, in which the exponent is $s\alpha /2$. Our result implies the existence of a digitally shifted order $\beta$ digital net achieving the convergence of the worst-case error of order $N^{-\alpha}(\log N)^{(s-1)/2}$, which matches a lower bound on the convergence rate of the worst-case error for any cubature rule using $N$ function evaluations and thus is best possible.' author: - 'Takashi Goda[^1], Kosuke Suzuki[^2], Takehito Yoshiki[^3]' title: 'Optimal order quasi-Monte Carlo integration in weighted Sobolev spaces of arbitrary smoothness [^4]' --- *Keywords*: Quasi-Monte Carlo, numerical integration, higher order digital nets, Sobolev space\ *MSC classifications*: 65C05, 65D30, 65D32 Introduction and the main result ================================ In this paper we investigate quasi-Monte Carlo (QMC) integration of functions defined over the $s$-dimensional unit cube. For an integrable function $f\colon [0,1)^s\to {\mathbb{R}}$, we denote the true integral of $f$ by $$\begin{aligned} I(f) := \int_{[0,1)^s}f({\boldsymbol{x}})\, {\mathrm{d}}{\boldsymbol{x}}.\end{aligned}$$ QMC integration of $f$ over a finite point set $P\subset [0,1)^s$ approximates $I(f)$ by $$\begin{aligned} I(f;P) := \frac{1}{|P|}\sum_{{\boldsymbol{x}}\in P}f({\boldsymbol{x}}) .\end{aligned}$$ Here $P$ is a multiset and so if an element occurs multiple times it is counted according to its multiplicity. The key ingredient for success of QMC integration is to construct good point sets depending on a function class to which $f$ belongs. In the classical QMC theory, for instance, a class of functions with bounded variation in the sense of Hardy and Krause has been often considered [@KNbook; @Nbook]. For this function class, the Koksma-Hlawka inequality states that the integration error is bounded by $$\begin{aligned} |I(f;P)-I(f)| \le V_{\mathrm{HK}}(f)D^*(P) ,\end{aligned}$$ where $V_{\mathrm{HK}}(f)$ denotes the variation of $f$ in the sense of Hardy and Krause, and $D^*(P)$ the star-discrepancy of $P$, see [@Nbook Chapter 3]. This inequality motivates construction of low-discrepancy point sets. We refer to [@DPbook Chapter 8] for several explicit constructions of point sets whose star-discrepancy is of order $N^{-1}(\log N)^{s-1}$, where $N$ denotes the number of points, i.e., $N=|P|$. One of the recent interest in the research community is to consider a function class consisting of smooth functions and to construct good point sets which achieve higher order convergence of the integration error in that function class, see for instance [@Dick08; @HMOT15; @Mark13; @Teml03]. A function space of our particular interest in this paper is a weighted Sobolev space ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ of arbitrary fixed smoothness $\alpha$ for a set of non-negative real numbers ${\boldsymbol{\gamma}}=(\gamma_u)_{u\subseteq \{1,\ldots,s\}}$. Here $\alpha\geq 2$ is a positive integer. 
(We shall give the precise definition of ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ in Subsection \[subsec:sobolev\].) This space has been studied, for instance, in [@BD09; @Dick09; @DPbook] in the context of QMC integration. The breakthrough in this research direction was made by Dick and his collaborators [@BD09; @BDP11; @Dick07; @Dick08; @Dick09; @DPbook], who provide us with an explicit construction of good point sets called *higher order digital nets* achieving almost optimal convergence of the integration error of order $N^{-\alpha}(\log N)^{c(s,\alpha)}$ for some $c(s,\alpha)>0$. (We shall give the definition of higher order digital nets in Subsection \[subsec:ho\_digital\_net\].) The above order of convergence $\alpha$ is best possible up to some power of a $\log N$ factor. A thorough analysis on the exponent $c(s,\alpha)$ has been recently done for periodic Sobolev spaces and periodic Nikol’skij-Besov spaces with dominating mixed smoothness in [@HMOT15]. They obtained $c(s,\alpha)=(s-1)/2$ for order 2 digital nets in the former spaces for instance. Although the result is best possible, there are restrictions that only periodic function spaces are taken into account and that the smoothness parameter $\alpha$, which equals $r$ in their notation and is considered to be a positive real number, should be less than 2. Thus, the question arises whether higher order digital nets can achieve the best possible convergence of the integration error in non-periodic function spaces of $\alpha \ge 2$. In this paper we give an affirmative answer to this question. To state the main result of this paper, we introduce some notation here. Let ${\mathbb{N}}$ be the set of positive integers and ${\mathbb{N}}_0:={\mathbb{N}}\cup \{0\}$. Let $b$ be a prime and ${\mathbb{F}}_b$ the finite field with $b$ elements, which is identified with the set $\{0,1,\ldots,b-1\}$ equipped with addition and multiplication modulo $b$. For $x\in [0,1)$, its $b$-adic expansion $x=\sum_{i=1}^{\infty}\xi_ib^{-i}$ with $\xi_i\in {\mathbb{F}}_b$ is understood to be unique in the sense that infinitely many of the $\xi_i$’s are different from $b-1$. The operator $\oplus$ denotes digitwise addition modulo $b$, that is, for $x=\sum_{i=1}^{\infty}\xi_ib^{-i}\in [0,1),x'=\sum_{i=1}^{\infty}\xi'_ib^{-i}\in [0,1)$, we define $$\begin{aligned} x\oplus x':= \sum_{i=1}^{\infty}\frac{\eta_i}{b^i}\quad \text{with}\quad \eta_i=\xi_i+\xi'_i \pmod b.\end{aligned}$$ Note that $x\oplus x'$ is not always defined via its unique $b$-adic expansion and even may equal $1\notin [0,1)$. Such an instance is given by setting $b=2$, $x=2^{-1}+2^{-3}+2^{-5}+\cdots$ and $x'=2^{-2}+2^{-4}+2^{-6}+\cdots$. However, if either $x$ or $x'$ can be written in a finite $b$-adic expansion, this situation never occurs. Moreover, let $V$ be a normed function space with norm $\lVert \cdot\rVert_V$. The worst-case error of QMC integration over $P$ in $V$ is defined as $$\begin{aligned} e^{{\mathrm{wor}}}(V;P) := \sup_{\substack{f\in V\\ \lVert f\rVert_V\le 1}}|I(f;P)-I(f)|.\end{aligned}$$ For ${\boldsymbol{\sigma}}\in [0,1)^s$, we write $P\oplus {\boldsymbol{\sigma}}:=\{{\boldsymbol{x}}\oplus {\boldsymbol{\sigma}}: {\boldsymbol{x}}\in P\}$, where $\oplus$ is applied componentwise. Since we shall only consider a point set $P$ whose each element ${\boldsymbol{x}}$ can be written in finite $b$-adic expansions in this paper, ${\boldsymbol{x}}\oplus {\boldsymbol{\sigma}}$ is always defined via unique $b$-adic expansions. 
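As a small self-contained illustration (our own sketch, not part of the constructions below), the digitwise addition $\oplus$ and the digital shift $P\oplus{\boldsymbol{\sigma}}$ can be realized for points with finite $b$-adic expansions as follows; here the shift $\sigma$ is simply truncated to $n$ digits:

```python
import random

def digits(x, b, n):
    """First n base-b digits of x in [0,1)."""
    d = []
    for _ in range(n):
        x *= b
        d.append(int(x))
        x -= int(x)
    return d

def shift(x, sigma, b, n):
    """Digitwise addition modulo b of x and sigma, using n digits."""
    dx, ds = digits(x, b, n), digits(sigma, b, n)
    return sum(((a + c) % b) / b**(i + 1) for i, (a, c) in enumerate(zip(dx, ds)))

# shift a one-dimensional point set P by a random sigma (b = 2, n = 16 digits)
b, n = 2, 16
P = [0.0, 0.5, 0.25, 0.75]
sigma = random.random()
P_shifted = [shift(x, sigma, b, n) for x in P]
print(P_shifted)
```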
Then the root mean square (RMS) worst-case error of QMC integration over $P\oplus {\boldsymbol{\sigma}}$ in $V$ with respect to a randomly chosen ${\boldsymbol{\sigma}}\in [0,1)^s$ is defined as $$\begin{aligned} e^{{\mathrm{rms}}\text{--}{\mathrm{wor}}}(V;P) := \left( \int_{[0,1)^s}\left( e^{{\mathrm{wor}}}(V;P\oplus {\boldsymbol{\sigma}}) \right)^2\, {\mathrm{d}}{\boldsymbol{\sigma}}\right)^{1/2}.\end{aligned}$$ Now the main result of this paper is given as follows. \[thm:main\_result\] For $\alpha,\beta,m\in {\mathbb{N}}$ and $t\in {\mathbb{N}}_0$ with $\alpha\ge 2$, $\beta \ge 2\alpha$ and $0\le t\le \beta m$, let $P$ be an order $\beta$ digital $(t,m,s)$-net over ${\mathbb{F}}_b$. Let ${\boldsymbol{\gamma}}=(\gamma_u)_{u\subseteq \{1,\ldots,s\}}$ be a set of the weights. The RMS worst-case error of QMC integration over $P\oplus {\boldsymbol{\sigma}}$ in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ with respect to a randomly chosen ${\boldsymbol{\sigma}}\in [0,1)^s$ is bounded by $$\begin{aligned} \label{eq:main_result} e^{{\mathrm{rms}}\text{--}{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P) \le \frac{1}{N^{\alpha}}\sum_{\emptyset \ne u\subseteq \{1,\ldots,s\}}\gamma_u^{1/2} C_{\alpha,\beta,b,t,u}(\log N)^{(|u|-1)/2},\end{aligned}$$ where $C_{\alpha,\beta,b,t,u}>0$ for all $\emptyset \ne u\subseteq \{1,\ldots,s\}$ and $N=|P|=b^m$. Note that the explicit form of $C_{\alpha,\beta,b,t,u}$ can be found later in (\[eq:constant\_full\]). This result directly implies the following. \[cor:existence\] For $\alpha,\beta,m\in {\mathbb{N}}$ and $t\in {\mathbb{N}}_0$ with $\alpha\ge 2$, $\beta \ge 2\alpha$ and $0\le t\le \beta m$, let $P$ be an order $\beta$ digital $(t,m,s)$-net over ${\mathbb{F}}_b$. Let ${\boldsymbol{\gamma}}=(\gamma_u)_{u\subseteq \{1,\ldots,s\}}$ be a set of the weights. There exists a ${\boldsymbol{\sigma}}\in [0,1)^s$ such that the worst-case error of QMC integration over $P\oplus {\boldsymbol{\sigma}}$ in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ is bounded by $$\begin{aligned} e^{{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P\oplus {\boldsymbol{\sigma}}) \le \frac{1}{N^{\alpha}}\sum_{\emptyset \ne u\subseteq \{1,\ldots,s\}}\gamma_u^{1/2} C_{\alpha,\beta,b,t,u}(\log N)^{(|u|-1)/2}.\end{aligned}$$ Although the $t$-value and thus the constants $C_{\alpha,\beta,b,t,u}$ may depend on $m$, it was shown by Dick [@Dick07; @Dick08] that for large $m$ we can explicitly construct an order $\beta$ digital $(t,m,s)$-net with its $t$-value independent of $m$, see also Remark \[rem:ho\_digital\_net\]. Therefore, the rate of convergence which we obtain in this paper is of order $N^{-\alpha}(\log N)^{(s-1)/2}$ unless $\gamma_{\{1,\ldots,s\}}=0$. This compares favorably with what was obtained by Baldeaux and Dick in [@BD09 Theorem 24], where they considered the case where $P$ is an order $\alpha$ digital net over ${\mathbb{F}}_b$, i.e., the case where the order of digital nets and the smoothness parameter coincide, and obtained a similar bound on the RMS worst-case error but with the exponent of the logarithmic term equal to $s\alpha/2$. Our result shows that the exponent of the logarithmic term is actually independent of $\alpha$. Here we note that the convergence of order $N^{-\alpha}(\log N)^{(s-1)/2}$ in a similar function space has been proven by using the Frolov cubature rule in conjunction with periodization strategy, see for instance [@Ullrich14], which is not a QMC integration rule though. 
Moreover, from the results of [@DNP14], we can see that the above results in Theorem \[thm:main\_result\] and Corollary \[cor:existence\] are best possible. Now let $P=\{{\boldsymbol{x}}_0,\ldots,{\boldsymbol{x}}_{N-1}\}\subset [0,1)^s$ be an $N$ element point set and ${\boldsymbol{w}}=\{w_0,\ldots,w_{N-1}\}$ be an arbitrary real tuple. The worst-case error of cubature rule with points in $P$ and weights ${\boldsymbol{w}}$ is defined as $$\begin{aligned} e^{{\mathrm{wor}}}(V;P,{\boldsymbol{w}}) := \sup_{\substack{f\in V\\ \lVert f\rVert_V\le 1}}\left| \sum_{n=0}^{N-1}w_nf({\boldsymbol{x}}_n)-I(f)\right|.\end{aligned}$$ A lower bound on $e^{{\mathrm{wor}}}(V;P,{\boldsymbol{w}})$ in the so-called half-period cosine space of smoothness $\alpha$ for the case of product weights, i.e., weights of the form $\gamma_u=\prod_{j\in u}\gamma_j$ for $\gamma_1,\ldots,\gamma_s>0$, was proven in [@DNP14 Theorem 4], and furthermore, it was shown in [@DNP14 Theorem 1] that the half-period cosine space is continuously embedded in the Sobolev space ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ which we consider in this paper. Combining these two results, we immediately have the following. Let $\alpha \ge 2$ be a positive integer and $\gamma_1,\ldots,\gamma_s>0$. For $u\subseteq \{1,\ldots,s\}$, let $\gamma_u=\prod_{j\in u}\gamma_j$ where the empty product equals 1. For any $N$ element point set $P\subset [0,1)^s$ and any real tuple ${\boldsymbol{w}}$ we have $$\begin{aligned} e^{{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P,{\boldsymbol{w}}) \ge c_{\alpha,{\boldsymbol{\gamma}},s}\frac{(\log N)^{(s-1)/2}}{N^{\alpha}},\end{aligned}$$ where $c_{\alpha,{\boldsymbol{\gamma}},s}$ is positive and independent of $P$ and ${\boldsymbol{w}}$. This implies that the exponent $c(s,\alpha)$ cannot be less than $(s-1)/2$ in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ for the case of product weights with $\gamma_1,\ldots,\gamma_s>0$. Since Corollary \[cor:existence\] shows the existence of point sets which achieve exactly this order, our result is best possible. However, as our result is again an existence result and thus is not fully constructive, it is interesting to study an explicit construction of deterministic point sets which achieve the best possible convergence of the worst-case error in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$. We leave it open for future work to address. In the next section, we shall introduce the necessary background and notation such as weighted Sobolev spaces of smoothness $\alpha$ and higher order digital nets. In Section \[sec:upper\], we shall prove Theorem \[thm:main\_result\], i.e., an upper bound on the RMS worst-case error of randomly digitally shifted order $\beta$ digital nets in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$. Preliminaries {#sec:pre} ============= Weighted Sobolev spaces {#subsec:sobolev} ----------------------- First let us consider the one-dimensional unweighted case. The Sobolev space which we consider is given by $$\begin{aligned} {\mathcal{H}}_{\alpha} & := \Big\{f \colon [0,1)\to {\mathbb{R}}\mid f^{(r)} \colon \\ & \qquad \quad \text{absolutely continuous for $r=0,\ldots,\alpha-1$}, f^{(\alpha)}\in L^2[0,1)\Big\},\end{aligned}$$ where $f^{(r)}$ denotes the $r$-th derivative of $f$. 
As in [@Wbook Section 10.2] this space is indeed a reproducing kernel Hilbert space with the reproducing kernel ${\mathcal{K}}_{\alpha}\colon [0,1)\times [0,1)\to {\mathbb{R}}$ and the inner product $\langle \cdot, \cdot \rangle_{\alpha}$ given as follows: $$\begin{aligned} {\mathcal{K}}_{\alpha}(x,y) = \sum_{r=0}^{\alpha}\frac{B_r(x)B_r(y)}{(r!)^2}+(-1)^{\alpha+1}\frac{B_{2\alpha}(|x-y|)}{(2\alpha)!} ,\end{aligned}$$ for $x,y\in [0,1)$, where $B_r$ denotes the Bernoulli polynomial of degree $r$, and $$\begin{aligned} \langle f, g \rangle_{\alpha} = \sum_{r=0}^{\alpha-1}\int_{0}^{1}f^{(r)}(x)\, {\mathrm{d}}x \int_{0}^{1}g^{(r)}(x)\, {\mathrm{d}}x + \int_{0}^{1}f^{(\alpha)}(x)g^{(\alpha)}(x)\, {\mathrm{d}}x,\end{aligned}$$ for $f,g\in {\mathcal{H}}_{\alpha}$. Let us move on to the $s$-dimensional weighted case. In the following we write $\{1:n\}:=\{1,\ldots,n\}$ for $n\in {\mathbb{N}}$. Let ${\boldsymbol{\gamma}}=(\gamma_u)_{u\subseteq \{1:s\}}$ be a set of non-negative real numbers which are called weights. Note that the weights moderate the importance of different variables or groups of variables in function spaces and play an important role in the study of tractability [@SW98]. However, such an investigation is out of the scope of this paper since we are interested in showing the optimal exponent of $\log N$ term in the error bound. We consider the weighted function space for the sake of completeness. Moreover, we shall use the following notation: For $v\subseteq \{1:s\}$ and ${\boldsymbol{x}}\in [0,1)^s$, let ${\boldsymbol{x}}_v=(x_j)_{j\in v}$. For $v\subseteq u\subseteq \{1:s\}$ and ${\boldsymbol{r}}_{u\setminus v}=(r_j)_{j\in u\setminus v}$, $({\boldsymbol{r}}_{u\setminus v},{\boldsymbol{\alpha}}_v,{\boldsymbol{0}})$ denotes the $s$-dimensional vector whose $j$-th component is $r_j$ if $j\in u\setminus v$, $\alpha$ if $j\in v$, and $0$ otherwise. 
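Since the weighted kernel given next is assembled from products of the same univariate Bernoulli-polynomial terms, the following minimal numerical sketch (our own illustration, for $\alpha=2$, using the closed forms $B_1(x)=x-1/2$, $B_2(x)=x^2-x+1/6$, $B_4(x)=x^4-2x^3+x^2-1/30$) of the kernel ${\mathcal{K}}_{\alpha}$ above may be helpful; it also checks the symmetry and positive semi-definiteness that any reproducing kernel must satisfy:

```python
import numpy as np

# Bernoulli polynomials needed for alpha = 2
B = {
    0: lambda x: np.ones_like(x),
    1: lambda x: x - 0.5,
    2: lambda x: x**2 - x + 1.0 / 6.0,
    4: lambda x: x**4 - 2.0 * x**3 + x**2 - 1.0 / 30.0,
}
fact = {0: 1.0, 1: 1.0, 2: 2.0}   # r! for r = 0, 1, 2

def K2(x, y):
    """Reproducing kernel K_alpha(x, y) for alpha = 2."""
    s = sum(B[r](x) * B[r](y) / fact[r]**2 for r in range(3))
    return s + (-1)**3 * B[4](np.abs(x - y)) / 24.0   # (2*alpha)! = 24

# the kernel matrix on a few nodes must be symmetric and positive semi-definite
x = np.linspace(0.0, 1.0, 9, endpoint=False)
G = K2(x[:, None], x[None, :])
print(np.allclose(G, G.T), np.min(np.linalg.eigvalsh(G)) > -1e-12)
```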
Now the weighted Sobolev space ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ which we consider is the reproducing kernel Hilbert space whose reproducing kernel ${\mathcal{K}}_{\alpha,{\boldsymbol{\gamma}}}\colon [0,1)^s\times [0,1)^s\to {\mathbb{R}}$ and inner product $\langle \cdot, \cdot \rangle_{\alpha,{\boldsymbol{\gamma}}}$ are given as follows [@BD09]: $$\begin{aligned} {\mathcal{K}}_{\alpha,{\boldsymbol{\gamma}}}({\boldsymbol{x}},{\boldsymbol{y}}) = \sum_{u\subseteq \{1:s\}}\gamma_u \prod_{j\in u}\left\{\sum_{r=1}^{\alpha} \frac{B_r(x_j)B_r(y_j)}{(r!)^2}+(-1)^{\alpha+1}\frac{B_{2\alpha}(|x_j-y_j|)}{(2\alpha)!}\right\} ,\end{aligned}$$ for ${\boldsymbol{x}}=(x_1,\ldots,x_s),{\boldsymbol{y}}=(y_1,\ldots,y_s)\in [0,1)^s$, where the empty product always equals $1$, and $$\begin{aligned} \langle f, g \rangle_{\alpha,{\boldsymbol{\gamma}}} & = \sum_{u\subseteq \{1:s\}}\gamma_u^{-1}\sum_{v\subseteq u}\sum_{{\boldsymbol{r}}_{u\setminus v}\in \{1:\alpha-1\}^{|u\setminus v|}} \\ & \qquad \times \int_{[0,1)^{|v|}}\left(\int_{[0,1)^{s-|v|}}f^{({\boldsymbol{r}}_{u\setminus v},{\boldsymbol{\alpha}}_v,{\boldsymbol{0}})}({\boldsymbol{x}})\, {\mathrm{d}}{\boldsymbol{x}}_{\{1:s\}\setminus v}\right) \\ & \qquad \quad \times \left(\int_{[0,1)^{s-|v|}} g^{({\boldsymbol{r}}_{u\setminus v},{\boldsymbol{\alpha}}_v,{\boldsymbol{0}})}({\boldsymbol{x}}) \, {\mathrm{d}}{\boldsymbol{x}}_{\{1:s\}\setminus v}\right) \, {\mathrm{d}}{\boldsymbol{x}}_v ,\end{aligned}$$ for $f,g\in {\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$, where for $u\subseteq \{1:s\}$ such that $\gamma_u=0$ we assume $$\begin{aligned} \sum_{v\subseteq u}\sum_{{\boldsymbol{r}}_{u\setminus v}\in \{1:\alpha-1\}^{|u\setminus v|}}& \int_{[0,1)^{|v|}} \left(\int_{[0,1)^{s-|v|}}f^{({\boldsymbol{r}}_{u\setminus v},{\boldsymbol{\alpha}}_v,{\boldsymbol{0}})}({\boldsymbol{x}})\, {\mathrm{d}}{\boldsymbol{x}}_{\{1:s\}\setminus v}\right) \\ & \quad \times \left(\int_{[0,1)^{s-|v|}} g^{({\boldsymbol{r}}_{u\setminus v},{\boldsymbol{\alpha}}_v,{\boldsymbol{0}})}({\boldsymbol{x}}) \, {\mathrm{d}}{\boldsymbol{x}}_{\{1:s\}\setminus v}\right) \, {\mathrm{d}}{\boldsymbol{x}}_v = 0.\end{aligned}$$ Note that an integral and sum over the empty set is the identity operator and we formally set $0/0:=0$. Higher order digital nets {#subsec:ho_digital_net} ------------------------- Here we start with the general digital construction scheme of point sets as introduced by Niederreiter [@Nbook]. For $m,n,s\in {\mathbb{N}}$, let $C_1,\ldots,C_s\in {\mathbb{F}}_b^{n\times m}$. Let $0\le h<b^m$ be an integer with its $b$-adic expansion $h=\sum_{i=0}^{m-1}\eta_i b^i$. For $1\le j\le s$, let us consider $$\begin{aligned} x_{h,j} = \frac{\xi_{1,h,j}}{b}+\frac{\xi_{2,h,j}}{b^2}+\cdots + \frac{\xi_{n,h,j}}{b^n} ,\end{aligned}$$ where $\xi_{1,h,j},\xi_{2,h,j},\ldots,\xi_{n,h,j}$ are given by $$\begin{aligned} (\xi_{1,h,j},\xi_{2,h,j},\ldots,\xi_{n,h,j})^{\top} = C_j (\eta_0,\eta_1,\ldots,\eta_{m-1})^{\top}.\end{aligned}$$ The set $P=\{{\boldsymbol{x}}_0,{\boldsymbol{x}}_1,\ldots,{\boldsymbol{x}}_{b^m-1}\}\subset [0,1)^s$ with ${\boldsymbol{x}}_h=(x_{h,1},\ldots,x_{h,s})$ is called a digital net over ${\mathbb{F}}_b$ (with generating matrices $C_1,\ldots,C_s$). The dual net of $P$, denoted by $P^{\perp}$, is defined as follows. For $m,n,s\in {\mathbb{N}}$ and $C_1,\ldots,C_s\in {\mathbb{F}}_b^{n\times m}$, let $P$ be a digital net over ${\mathbb{F}}_b$ with generating matrices $C_1,\ldots,C_s$. 
The dual net of $P$ is defined as $$\begin{aligned} P^{\perp} := \{{\boldsymbol{k}}=(k_1,\ldots,k_s)\in {\mathbb{N}}_0^s\colon C_1^{\top} \vec{k}_1\oplus \cdots \oplus C_s^{\top} \vec{k}_s = {\boldsymbol{0}}\in {\mathbb{F}}_b^m\},\end{aligned}$$ where we set $\vec{k}:=(\kappa_0,\ldots,\kappa_{n-1})$ for $k\in {\mathbb{N}}_0$ with its $b$-adic expansion $k=\kappa_0 +\kappa_1 b+\cdots $, which is actually a finite expansion. For $\alpha\in {\mathbb{N}}$, we define a metric function $\mu_{\alpha}$ as follows. Let $\alpha\in {\mathbb{N}}$. For $k\in {\mathbb{N}}$ with its $b$-adic expansion $k=\kappa_1b^{c_1-1}+\kappa_2b^{c_2-1}+\cdots+\kappa_vb^{c_v-1}$ such that $\kappa_1,\ldots,\kappa_v\in \{1,\ldots,b-1\}$ and $c_1>c_2>\cdots >c_v>0$. Then we define $$\begin{aligned} \mu_{\alpha}(k):=\sum_{i=1}^{\min(\alpha,v)}c_i ,\end{aligned}$$ and $\mu_{\alpha}(0):=0$. For ${\boldsymbol{k}}=(k_1,\ldots,k_s)\in {\mathbb{N}}_0^s$, we define $$\begin{aligned} \mu_{\alpha}({\boldsymbol{k}}):=\sum_{j=1}^{s}\mu_{\alpha}(k_j).\end{aligned}$$ Note that the above definition was originally given in [@Nied86; @RT97] for the case $\alpha=1$ and in [@Dick07; @Dick08] for $\alpha\ge 2$. We simply call $\mu_{\alpha}$ the *Dick metric function* for any $\alpha \ge 1$ throughout this paper. Now we define the minimum Dick metric of a digital net, which shall play a critical role in the subsequent analysis. \[def:minimum\_Dick\_metric\] Let $P$ be a digital net over ${\mathbb{F}}_b$ and $P^{\perp}$ its dual net. For $\alpha\in {\mathbb{N}}$, the minimum Dick metric of $P$ is defined as $$\begin{aligned} \delta_{\alpha}(P) := \min_{{\boldsymbol{k}}\in P^{\perp}\setminus \{{\boldsymbol{0}}\}}\mu_{\alpha}({\boldsymbol{k}}). \end{aligned}$$ Now we give the definition of higher order digital nets. \[def:ho\_digital\_net\] For $m,n,\alpha,s\in {\mathbb{N}}$ with $n\ge \alpha m$, let $P$ be a digital net over ${\mathbb{F}}_b$ with generating matrices $C_1,\ldots,C_s\in {\mathbb{F}}_b^{n\times m}$. For $1\le i\le n$ and $1\le j\le s$, we denote by ${\boldsymbol{c}}_{i,j}\in {\mathbb{F}}_b^m$ the $i$-th row vector of $C_j$. Let $t$ be an integer with $0\le t\le \alpha m$ which satisfies the following condition: For all $1\le i_{j,v_j}<\ldots < i_{j,1}\le n$ with $$\begin{aligned} \sum_{j=1}^{s}\sum_{l=1}^{\min(\alpha,v_j)}i_{j,l}\le \alpha m -t,\end{aligned}$$ the vectors ${\boldsymbol{c}}_{i_{1,v_1},1},\ldots,{\boldsymbol{c}}_{i_{1,1},1}, \ldots, {\boldsymbol{c}}_{i_{s,v_s},s},\ldots,{\boldsymbol{c}}_{i_{s,1},s}$ are linearly independent over ${\mathbb{F}}_b$. Then we call $P$ an order $\alpha$ digital $(t,m,s)$-net over ${\mathbb{F}}_b$. The following property of order $\alpha$ digital $(t,m,s)$-nets directly follows from the linear independence of the rows of the generating matrices, that is, for any order $\alpha$ digital $(t,m,s)$-net $P$ over ${\mathbb{F}}_b$, we have $$\begin{aligned} \delta_{\alpha}(P) > \alpha m -t. \end{aligned}$$ Moreover, the following lemma is an obvious adaptation of the result shown in [@Dick07 Theorem 3.3] and [@Dick08 Theorem 4.10], which states that any order $\alpha$ digital net is also an order $\alpha'$ digital net as long as $1\le \alpha' <\alpha$. \[lem:propagation\] For $\alpha\in {\mathbb{N}}$, let $P$ be an order $\alpha$ digital $(t,m,s)$-net over ${\mathbb{F}}_b$ with some integer $0\le t\le \alpha m$. 
Then, for any $\alpha' \in {\mathbb{N}}$ with $1\le \alpha' <\alpha$, $P$ is also an order $\alpha'$ digital $(t_{\alpha'},m,s)$-net over ${\mathbb{F}}_b$ with $t_{\alpha'}=\lceil t\alpha'/\alpha\rceil$. Dick [@Dick07; @Dick08] proposed the following digit interlacing composition to obtain explicit constructions of higher order digital nets over ${\mathbb{F}}_b$: For $m,s,\alpha\in {\mathbb{N}}$, let $Q\subset [0,1)^{\alpha s}$ be a digital net over ${\mathbb{F}}_b$ with generating matrices $C_1,\ldots,C_{\alpha s}\in {\mathbb{F}}_b^{m\times m}$. For $1\le i\le m$ and $1\le j\le \alpha s$, we denote by ${\boldsymbol{c}}_{i,j}$ the $i$-th row vector of $C_j$. We now construct a digital net $P\subset [0,1)^s$ over ${\mathbb{F}}_b$ with generating matrices $D_1,\ldots,D_s\in {\mathbb{F}}_b^{\alpha m\times m}$ such that the ($\alpha(h-1)+i$)-th row vector of $D_j$ equals ${\boldsymbol{c}}_{h,\alpha(j-1)+i}$ for all $1\le h\le m$, $1\le i\le \alpha$ and $1\le j\le s$. Regarding this construction algorithm, we have the following; see for instance [@BDP11 Corollary 3.4]. \[lem:ho\_digital\_net\] Let $Q$ be an order 1 digital $(t',m,\alpha s)$-net over ${\mathbb{F}}_b$ with $0\le t'\le m$. Then a digital net $P$ constructed as above is an order $\alpha$ digital $(t,m,s)$-net over ${\mathbb{F}}_b$ with $$\begin{aligned} \label{eq:order_alpha_t-value} t = \alpha \min \left\{m, t'+ \left\lfloor \frac{s(\alpha-1)}{2} \right\rfloor\right\} .\end{aligned}$$ Thus in order to obtain an order $\alpha$ digital $(t,m,s)$-net with small $t$-value, we need an order 1 digital $(t',m,\alpha s)$-net with small $t'$-value. Here we recall that many explicit constructions of order $1$ digital sequences (defined below) over ${\mathbb{F}}_b$ have been proposed in the literature for arbitrary dimension, so that we can construct order 1 digital $(t',m,\alpha s)$-nets with small $t'$-value. \[def:digital\_seq\] Let $C_1,\ldots,C_s\in {\mathbb{F}}_b^{{\mathbb{N}}\times {\mathbb{N}}}$ be ${\mathbb{N}}\times {\mathbb{N}}$ matrices over ${\mathbb{F}}_b$. For $C_j=(c_{j,k,l})_{k,l\in {\mathbb{N}}}$ we assume that there exists a function $K: {\mathbb{N}}\to {\mathbb{N}}$ such that $c_{j,k,l}=0$ when $k>K(l)$. Let $h$ be a non-negative integer with its $b$-adic expansion $h=\sum_{i=0}^{a-1}\eta_i b^i$ for some $a\in {\mathbb{N}}$. For $1\le j\le s$, let us consider $$\begin{aligned} x_{h,j} = \frac{\xi_{1,h,j}}{b}+\frac{\xi_{2,h,j}}{b^2}+\cdots ,\end{aligned}$$ where $\xi_{1,h,j},\xi_{2,h,j},\ldots$ are given by $$\begin{aligned} (\xi_{1,h,j},\xi_{2,h,j},\ldots)^{\top} = C_j (\eta_0,\eta_1,\ldots,\eta_{a-1},0,0,\ldots)^{\top}.\end{aligned}$$ The sequence $S=({\boldsymbol{x}}_0,{\boldsymbol{x}}_1,\ldots)$ with ${\boldsymbol{x}}_h=(x_{h,1},\ldots,x_{h,s})$ is called a digital sequence over ${\mathbb{F}}_b$ (with generating matrices $C_1,\ldots,C_s$). Moreover, let $t$ be a non-negative integer. For $m\in {\mathbb{N}}$, let $C_{j,m\times m}$ be the upper left $m\times m$ sub-matrix of $C_j$. If for all $m>t$ the matrices $C_{1,m\times m},\ldots,C_{s,m\times m}$ generate an order 1 digital $(t,m,s)$-net over ${\mathbb{F}}_b$, we call $S$ an order 1 digital $(t,s)$-sequence over ${\mathbb{F}}_b$. \[rem:ho\_digital\_net\] As mentioned above, there are many explicit constructions of order $1$ digital sequences over ${\mathbb{F}}_b$; see for instance [@Faure82; @Nied88; @NXbook; @Sobol67]. We refer to [@DPbook Chapter 6] for more information on this topic. 
For $\alpha,s\in {\mathbb{N}}$, let $S$ be an order 1 digital $(t',\alpha s)$-sequence over ${\mathbb{F}}_b$ with generating matrices $C_1,\ldots,C_{\alpha s}$ for some non-negative integer $t'$. Now let us define $m_0:=t'+ \lfloor s(\alpha-1)/2 \rfloor$. When $m\ge m_0$, by using the result of Lemma \[lem:ho\_digital\_net\], we see that the digital net $P\subset [0,1)^s$ constructed by the digit interlacing composition based on $C_{1,m\times m},\ldots,C_{\alpha s,m\times m}$ becomes an order $\alpha$ digital $(\alpha m_0, m, s)$-net. Here the value $\alpha m_0$ does not depend on $m$. Proof of Theorem \[thm:main\_result\] {#sec:upper} ===================================== Throughout this section, let $P$ be an order $\beta$ digital net over ${\mathbb{F}}_b$ for $\beta \in {\mathbb{N}}$. Here we prove Theorem \[thm:main\_result\], i.e., an upper bound on the RMS worst-case error of QMC integration over $P\oplus {\boldsymbol{\sigma}}$ in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ with respect to a randomly chosen ${\boldsymbol{\sigma}}\in [0,1)^s$ when $\beta\ge 2\alpha$. Interpolation of Dick metric functions -------------------------------------- In this subsection, we discuss an interpolation property of Dick metric functions, which shall become a crucial tool in the proof of an upper bound on the RMS worst-case error. \[lem:mertic\_inter\] Let $\alpha,\beta\in {\mathbb{N}}$ with $1< \alpha \le \beta$. For any ${\boldsymbol{k}}\in {\mathbb{N}}_0^s$, it holds that $$\begin{aligned} \mu_{\alpha}({\boldsymbol{k}}) \ge \frac{\alpha-1}{\beta-1} \mu_{\beta}({\boldsymbol{k}})+\frac{\beta-\alpha}{\beta-1} \mu_1({\boldsymbol{k}}) .\end{aligned}$$ For convenience, we shall in what follows write $$\begin{aligned} A_{\alpha\beta}= \frac{\alpha-1}{\beta-1}\quad \text{and}\quad B_{\alpha\beta}= \frac{\beta-\alpha}{\beta-1} .\end{aligned}$$ Since $\mu_{\alpha}({\boldsymbol{k}})=\sum_{j=1}^{s}\mu_{\alpha}(k_j)$ for any $\alpha\in {\mathbb{N}}$ and ${\boldsymbol{k}}=(k_1,\ldots,k_s)\in {\mathbb{N}}_0^s$, it suffices to prove that the inequality $$\begin{aligned} \mu_{\alpha}(k) \ge A_{\alpha\beta}\mu_{\beta}(k)+B_{\alpha\beta}\mu_1(k) ,\end{aligned}$$ holds for any $k\in {\mathbb{N}}_0$. As the result is trivial for $k=0$, we only consider the case $k\ge 1$ in the following. Let us denote the $b$-adic expansion of $k$ by $k=\kappa_1 b^{c_1-1}+\kappa_2 b^{c_2-1}+\cdots+\kappa_v b^{c_v-1}$ for some $v\ge 1$ such that $\kappa_1,\ldots,\kappa_v\in \{1,\ldots,b-1\}$ and $c_1>\cdots>c_v>0$. When $\beta > v$, we write $c_{v+1}=c_{v+2}=\cdots = c_{\beta}=0$. Then, we have $$\begin{aligned} \mu_{\alpha}(k) = \sum_{i=1}^{\alpha}c_i \ge c_1 + \sum_{i=2}^{\alpha}c_{\alpha} = \mu_1(k)+ (\alpha -1)c_{\alpha} ,\end{aligned}$$ as well as $$\begin{aligned} \mu_{\beta}(k) = \sum_{i=1}^{\beta}c_i \le \sum_{i=1}^{\alpha}c_i + \sum_{i=\alpha+1}^{\beta}c_{\alpha}=\mu_{\alpha}(k)+(\beta -\alpha)c_{\alpha}.\end{aligned}$$ By using the above two inequalities, we obtain $$\begin{aligned} \frac{\mu_{\beta}(k)-\mu_{\alpha}(k)}{\beta -\alpha} \le c_{\alpha}\le \frac{\mu_{\alpha}(k)-\mu_1(k)}{\alpha -1} ,\end{aligned}$$ from which we can easily see that the result follows. \[rem:mertic\_inter\] In the proof of the upper bound on the RMS worst-case error, which shall be given in the next subsection, the condition $B_{\alpha\beta}>1/2$ is necessary. This condition can be satisfied if and only if $\beta \ge 2\alpha$. This is why we assume $\beta \ge 2\alpha$ in Theorem \[thm:main\_result\] and Corollary \[cor:existence\]. 
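The interpolation inequality in Lemma \[lem:mertic\_inter\] is also easy to check numerically. The following Python sketch (ours, for illustration only) implements the Dick metric function $\mu_{\alpha}$ in base $b=2$ and verifies the bound on random integers; the choices $\alpha=2$, $\beta=4$ and the sample range are arbitrary.

```python
# A minimal sketch (illustrative only): the Dick metric function mu_alpha(k) and a
# numerical check of mu_alpha(k) >= A_{alpha,beta} mu_beta(k) + B_{alpha,beta} mu_1(k).
import random

def mu(alpha, k, b=2):
    """Sum of the alpha largest positions c_1 > c_2 > ... of nonzero b-adic digits of k."""
    positions, c = [], 1
    while k > 0:
        if k % b != 0:
            positions.append(c)
        k //= b
        c += 1
    positions.sort(reverse=True)
    return sum(positions[:alpha])

random.seed(1)
alpha, beta = 2, 4                      # beta >= 2 alpha, as assumed in Theorem [thm:main_result]
A = (alpha - 1) / (beta - 1)
B = (beta - alpha) / (beta - 1)
for _ in range(10000):
    k = random.randrange(1, 2 ** 20)
    assert mu(alpha, k) >= A * mu(beta, k) + B * mu(1, k) - 1e-9
print("interpolation inequality holds on all sampled k")
```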
Upper bound on the RMS worst-case error --------------------------------------- Finally, we prove an upper bound on the RMS worst-case error of QMC integration over $P\oplus {\boldsymbol{\sigma}}$ in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ with respect to a randomly chosen ${\boldsymbol{\sigma}}\in [0,1)^s$. The following lemma stems from the proof of [@BD09 Theorem 30]. \[lem:mse\_Baldeaux\_Dick\] Let $P$ be a digital net over ${\mathbb{F}}_b$ and $P^{\perp}$ its dual net. For $u\subseteq \{1:s\}$, we write $P_u^{\perp}=\{{\boldsymbol{k}}_u\in {\mathbb{N}}^{|u|}: ({\boldsymbol{k}}_u,{\boldsymbol{0}})\in P^{\perp}\}$. The mean square worst-case error of QMC integration over $P\oplus {\boldsymbol{\sigma}}$ in ${\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}}$ with respect to a randomly chosen ${\boldsymbol{\sigma}}\in [0,1)^s$ is bounded by $$\begin{aligned} \left(e^{{\mathrm{rms}}\text{--}{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P)\right)^2 \le \sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|}\sum_{{\boldsymbol{k}}_u\in P_u^{\perp}}b^{-2\mu_{\alpha}({\boldsymbol{k}}_u)} ,\end{aligned}$$ where we simply write $\mu_{\alpha}({\boldsymbol{k}}_u):=\sum_{j\in u}\mu_{\alpha}(k_j)$ for $\emptyset \ne u\subseteq \{1:s\}$ and ${\boldsymbol{k}}_u\in {\mathbb{N}}^{|u|}$, and $D_{\alpha,b}>0$ depends only on $\alpha$ and $b$ and is given by $$\begin{aligned} D_{\alpha,b} = \max_{1\leq v\leq \alpha}\left\{ \sum_{\tau=v}^{\alpha}\frac{(C_{\tau,b})^2}{b^{2(\tau-v)}}+\frac{2C_{2\alpha, b}}{b^{2(\alpha-v)}}\right\},\end{aligned}$$ with $$\begin{aligned} C_{1,b}=\frac{1}{2\sin(\pi/b)} \quad \text{and} \quad C_{\tau,b}=\frac{(1+1/b+1/(b(b+1)))^{\tau-2}}{(2\sin(\pi/b))^{\tau}}\quad \text{for $\tau\geq 2$}.\end{aligned}$$ In the subsequent analysis, we shall use the following inequality; see [@DPbook Lemma 13.24] for its proof. \[lem:binom\_sum\] For any real number $b>1$ and any $k,t_0\in {\mathbb{N}}$, we have $$\begin{aligned} \sum_{t=t_0}^{\infty}b^{-t}\binom {t+k-1}{k-1}\le b^{-t_0}\binom {t_0+k-1}{k-1}\left( 1-\frac{1}{b}\right)^{-k}.\end{aligned}$$ Now we are ready to prove Theorem \[thm:main\_result\]. Using Lemmas \[lem:mertic\_inter\] and \[lem:mse\_Baldeaux\_Dick\], we have $$\begin{aligned} \left(e^{{\mathrm{rms}}\text{--}{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P)\right)^2 & \le \sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|} \sum_{{\boldsymbol{k}}_u\in P_u^{\perp}}b^{-2A_{\alpha\beta}\mu_{\beta}({\boldsymbol{k}}_u)-2B_{\alpha\beta}\mu_1({\boldsymbol{k}}_u)} \\ & \le \sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|} \sum_{{\boldsymbol{k}}_u\in P_u^{\perp}}b^{-2A_{\alpha\beta}\delta_\beta(P)-2B_{\alpha\beta}\mu_1({\boldsymbol{k}}_u)} \\ & = \sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|}b^{-2A_{\alpha\beta}\delta_\beta(P)}W_u^{1,2B_{\alpha\beta}}(P) ,\end{aligned}$$ where $\delta_\beta(P)$ is defined as in Definition \[def:minimum\_Dick\_metric\] and we write $$\begin{aligned} W_u^{1,2B_{\alpha\beta}}(P) := \sum_{{\boldsymbol{k}}_u\in P_u^{\perp}}b^{-2B_{\alpha\beta}\mu_1({\boldsymbol{k}}_u)} ,\end{aligned}$$ for $\emptyset \ne u\subseteq \{1:s\}$. Since $\beta\geq 2\alpha$, we have $B_{\alpha\beta}>1/2$ as stated in Remark \[rem:mertic\_inter\]. In the following we focus on the term $W_u^{1,2B_{\alpha\beta}}(P)$. 
Since $\mu_1({\boldsymbol{k}}_u)$ is an integer no less than both $|u|$ and $\delta_1(P)$ for any ${\boldsymbol{k}}_u\in {\mathbb{N}}^{|u|}$, we have $$\begin{aligned} W_u^{1,2B_{\alpha\beta}}(P) & = \sum_{h=\max\{\delta_1(P),|u|\}}^{\infty}\sum_{\substack{{\boldsymbol{k}}_u\in P_u^{\perp}\\ \mu_1({\boldsymbol{k}}_u)=h}}b^{-2B_{\alpha\beta}h} \\ & = \sum_{h=\max\{\delta_1(P),|u|\}}^{\infty}b^{-2B_{\alpha\beta}h}\sum_{\substack{{\boldsymbol{k}}_u\in P_u^{\perp}\\ \mu_1({\boldsymbol{k}}_u)=h}}1 \\ & = \sum_{h=\max\{\delta_1(P),|u|\}}^{\infty}b^{-2B_{\alpha\beta}h}\sum_{\substack{{\boldsymbol{l}}_u\in {\mathbb{N}}^{|u|}\\ |{\boldsymbol{l}}_u|_1=h}}\sum_{\substack{{\boldsymbol{k}}_u\in P_u^{\perp}\\ \mu_1(k_j)=l_j, \forall j\in u}}1,\end{aligned}$$ where we denote $|{\boldsymbol{l}}_u|_1=\sum_{j\in u}l_j$. For the innermost sum in the last expression, it is known from [@DPbook Lemma 13.8][^5] that we have $$\begin{aligned} \sum_{\substack{{\boldsymbol{k}}_u\in P_u^{\perp}\\ \mu_1(k_j)=l_j, \forall j\in u}}1 \le \begin{cases} 0 & \text{if $|{\boldsymbol{l}}_u|_1< \delta_1(P)$,} \\ (b-1)^{|u|} & \text{if $\delta_1(P)\le |{\boldsymbol{l}}_u|_1< \delta_1(P)+|u|$,} \\ (b-1)^{|u|}b^{|{\boldsymbol{l}}_u|_1-(\delta_1(P)+|u|-1)} & \text{if $|{\boldsymbol{l}}_u|_1\ge \delta_1(P)+|u|$.} \end{cases}\end{aligned}$$ Thus $W_u^{1,2B_{\alpha\beta}}(P)$ can be bounded by $$\begin{aligned} \label{eq:W_u_bound} W_u^{1,2B_{\alpha\beta}}(P) & \le \sum_{h=\max\{\delta_1(P),|u|\}}^{\delta_1(P)+|u|-1}b^{-2B_{\alpha\beta}h}\sum_{\substack{{\boldsymbol{l}}_u\in {\mathbb{N}}^{|u|}\\ |{\boldsymbol{l}}_u|_1=h}}(b-1)^{|u|} \nonumber \\ & \qquad + \sum_{h=\delta_1(P)+|u|}^{\infty}b^{-2B_{\alpha\beta}h}\sum_{\substack{{\boldsymbol{l}}_u\in {\mathbb{N}}^{|u|}\\ |{\boldsymbol{l}}_u|_1=h}}(b-1)^{|u|}b^{|{\boldsymbol{l}}_u|_1-(\delta_1(P)+|u|-1)} \nonumber \\ & = (b-1)^{|u|}\Biggl[ \sum_{h=\max\{\delta_1(P),|u|\}}^{\delta_1(P)+|u|-1}b^{-2B_{\alpha\beta}h}\binom {h-1}{|u|-1} \nonumber \\ & \qquad + b^{-(\delta_1(P)+|u|-1)}\sum_{h=\delta_1(P)+|u|}^{\infty}b^{-(2B_{\alpha\beta}-1)h}\binom {h-1}{|u|-1}\Biggr].\end{aligned}$$ For the second sum in (\[eq:W\_u\_bound\]), we have $$\begin{aligned} & \quad \sum_{h=\delta_1(P)+|u|}^{\infty}b^{-(2B_{\alpha\beta}-1)h}\binom {h-1}{|u|-1} \\ & = \sum_{h=\delta_1(P)}^{\infty}b^{-(2B_{\alpha\beta}-1)(h+|u|)}\binom {h+|u|-1}{|u|-1} \\ & \le b^{-(2B_{\alpha\beta}-1)(\delta_1(P)+|u|)}\binom {\delta_1(P)+|u|-1}{|u|-1}\left( 1-b^{-(2B_{\alpha\beta}-1)}\right)^{-|u|} \\ & \le \left(\frac{1}{b^{2B_{\alpha\beta}-1}-1}\right)^{|u|}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{(2B_{\alpha\beta}-1)\delta_1(P)}},\end{aligned}$$ where we used Lemma \[lem:binom\_sum\] in the first inequality as we have $B_{\alpha\beta}>1/2$ by the assumption $\beta\ge 2\alpha$, and the second inequality stems from the inequality $$\begin{aligned} \binom {\delta_1(P)+|u|-1}{|u|-1} =\prod_{i=1}^{|u|-1}\frac{\delta_1(P)+|u|-i}{|u|-i}\le (\delta_1(P)+1)^{|u|-1}.\end{aligned}$$ For the first sum in (\[eq:W\_u\_bound\]), we have $$\begin{aligned} & \quad \sum_{h=\max\{\delta_1(P),|u|\}}^{\delta_1(P)+|u|-1}b^{-2B_{\alpha\beta}h}\binom {h-1}{|u|-1} \\ & \le \sum_{h=\max\{\delta_1(P),|u|\}}^{\infty}b^{-2B_{\alpha\beta}h}\binom {h-1}{|u|-1} \\ & = \sum_{h=\max\{\delta_1(P)-|u|,0 \}}^{\infty}b^{-2B_{\alpha\beta}(h+|u|)}\binom {h+|u|-1}{|u|-1} \\ & \le b^{-2B_{\alpha\beta}\max\{\delta_1(P),|u|\}}\binom {\max\{\delta_1(P),|u|\}-1}{|u|-1}\left( 1-b^{-2B_{\alpha\beta}}\right)^{-|u|} ,\end{aligned}$$ where we used Lemma \[lem:binom\_sum\] again in the 
last inequality. Now let us consider the case $\delta_1(P)\ge |u|$. The first sum in (\[eq:W\_u\_bound\]) is bounded by $$\begin{aligned} \sum_{h=\max\{\delta_1(P),|u|\}}^{\delta_1(P)+|u|-1}b^{-2B_{\alpha\beta}h}\binom {h-1}{|u|-1} & \le b^{-2B_{\alpha\beta}\delta_1(P)}\binom {\delta_1(P)-1}{|u|-1}\left( 1-b^{-2B_{\alpha\beta}}\right)^{-|u|} \\ & \le \left( \frac{b^{2B_{\alpha\beta}}}{b^{2B_{\alpha\beta}}-1}\right)^{|u|}\frac{(\delta_1(P)-1)^{|u|-1}}{b^{2B_{\alpha\beta}\delta_1(P)}} \\ & \le \left( \frac{b^{2B_{\alpha\beta}}}{b^{2B_{\alpha\beta}}-1}\right)^{|u|}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{2B_{\alpha\beta}\delta_1(P)}}.\end{aligned}$$ For the case $\delta_1(P)< |u|$, the first sum in (\[eq:W\_u\_bound\]) is bounded by $$\begin{aligned} \sum_{h=\max\{\delta_1(P),|u|\}}^{\delta_1(P)+|u|-1}b^{-2B_{\alpha\beta}h}\binom {h-1}{|u|-1} & \le b^{-2B_{\alpha\beta}|u|}\left( 1-b^{-2B_{\alpha\beta}}\right)^{-|u|} \\ & \le \left(\frac{b^{2B_{\alpha\beta}}}{b^{2B_{\alpha\beta}}-1}\right)^{|u|}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{2B_{\alpha\beta}\delta_1(P)}}.\end{aligned}$$ Thus, regardless of whether $\delta_1(P)\ge |u|$ or $\delta_1(P)< |u|$, we have the bound on the first sum in (\[eq:W\_u\_bound\]) as $$\begin{aligned} \sum_{h=\max\{\delta_1(P),|u|\}}^{\delta_1(P)+|u|-1}b^{-2B_{\alpha\beta}h}\binom {h-1}{|u|-1} \le \left(\frac{b^{2B_{\alpha\beta}}}{b^{2B_{\alpha\beta}}-1}\right)^{|u|}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{2B_{\alpha\beta}\delta_1(P)}}.\end{aligned}$$ Combining this result with the bound on the second sum, we have $$\begin{aligned} W_u^{1,2B_{\alpha\beta}}(P) & \le (b-1)^{|u|}\Biggl[ \left(\frac{b^{2B_{\alpha\beta}}}{b^{2B_{\alpha\beta}}-1}\right)^{|u|}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{2B_{\alpha\beta}\delta_1(P)}} \\ & \qquad + b^{-(\delta_1(P)+|u|-1)}\left(\frac{1}{b^{2B_{\alpha\beta}-1}-1}\right)^{|u|}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{(2B_{\alpha\beta}-1)\delta_1(P)}}\Biggr] \\ & = G_{\alpha,\beta,b,u}\frac{(\delta_1(P)+1)^{|u|-1}}{b^{2B_{\alpha\beta}\delta_1(P)}} ,\end{aligned}$$ where we set $$\begin{aligned} G_{\alpha,\beta,b,u} = (b-1)^{|u|}\left[ \left(\frac{b^{2B_{\alpha\beta}}}{b^{2B_{\alpha\beta}}-1}\right)^{|u|}+b\left( \frac{1}{b^{2B_{\alpha\beta}}-b}\right)^{|u|}\right].\end{aligned}$$ So far we have obtained $$\begin{aligned} & \quad \left(e^{{\mathrm{rms}}\text{--}{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P)\right)^2 \\ & \le \sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|}b^{-2A_{\alpha\beta}\delta_\beta(P)}W_u^{1,2B_{\alpha\beta}}(P) \\ & \le \frac{1}{b^{2A_{\alpha\beta}\delta_\beta(P)+2B_{\alpha\beta}\delta_1(P)}}\sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|}G_{\alpha,\beta,b,u}(\delta_1(P)+1)^{|u|-1}.\end{aligned}$$ Finally let us recall that $P$ is an order $\beta$ digital $(t,m,s)$-net over ${\mathbb{F}}_b$. From this fact and Lemma \[lem:propagation\], we have $$\begin{aligned} \delta_1(P) > m-t_1\quad \text{and}\quad \delta_\beta(P) > \beta m-t ,\end{aligned}$$ where $t_1=\lceil t/\beta\rceil$. Thus, we have $$\begin{aligned} 2A_{\alpha\beta}\delta_\beta(P)+2B_{\alpha\beta}\delta_1(P) & > 2A_{\alpha\beta}(\beta m-t) + 2B_{\alpha\beta}(m-t_1) \\ & = (2\beta A_{\alpha\beta}+2B_{\alpha\beta})m - 2A_{\alpha\beta}t - 2B_{\alpha\beta}t_1 .\end{aligned}$$ In the above, it holds that $$\begin{aligned} 2\beta A_{\alpha\beta}+2B_{\alpha\beta} =2\alpha.\end{aligned}$$ Since $t_1=0$ is best possible, we have $\delta_1(P)\le m+1$. 
Therefore, we get $$\begin{aligned} \left(e^{{\mathrm{rms}}\text{--}{\mathrm{wor}}}({\mathcal{H}}_{\alpha,{\boldsymbol{\gamma}}};P)\right)^2 & \leq \frac{b^{2A_{\alpha\beta}t + 2B_{\alpha\beta}t_1}}{b^{2\alpha m}}\sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|}G_{\alpha,\beta,b,u}(m+2)^{|u|-1} \\ & \leq \frac{b^{2A_{\alpha\beta}t + 2B_{\alpha\beta}t_1}}{b^{2\alpha m}}\sum_{\emptyset \ne u\subseteq \{1:s\}}\gamma_u D_{\alpha,b}^{|u|}G_{\alpha,\beta,b,u}(3m)^{|u|-1} ,\end{aligned}$$ which completes the proof by using the inequality $(\sum_i a_i)^{1/2}\leq \sum_i a_i^{1/2}$ for $a_i\geq 0$ and by choosing $$\begin{aligned} \label{eq:constant_full} C_{\alpha,\beta,b,u,t} = b^{A_{\alpha\beta}t + B_{\alpha\beta}t_1}D_{\alpha,b}^{|u|/2}G_{\alpha,\beta,b,u}^{1/2}\left( \frac{3}{\log b}\right)^{(|u|-1)/2}\end{aligned}$$ for all $\emptyset \ne u\subseteq \{1:s\}$ such that the bound (\[eq:main\_result\]) holds. [99]{} J. Baldeaux and J. Dick, QMC rules of arbitrary high order: Reproducing kernel Hilbert space approach, Constr. Approx., 30 (2009) 495–527. J. Baldeaux, J. Dick and F. Pillichshammer, Duality theory and propagation rules for higher order nets, Discrete Math., 311 (2011) 362–386. J. Dick, Explicit constructions of quasi-Monte Carlo rules for the numerical integration of high-dimensional periodic functions, SIAM J. Numer. Anal., 45 (2007) 2141–2176. J. Dick, Walsh spaces containing smooth functions and quasi-Monte Carlo rules of arbitrary high order, SIAM J. Numer. Anal., 46 (2008) 1519–1553. J. Dick, The decay of the Walsh coefficients of smooth functions, Bull. Aust. Math. Soc., 80 (2009) 430–453. J. Dick, D. Nuyens and F. Pillichshammer, Lattice rules for nonperiodic smooth integrands, Numer. Math., 126 (2014) 259–291. J. Dick and F. Pillichshammer, *Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration*, Cambridge University Press, Cambridge, 2010. H. Faure, Discrépances de suites associées à un système de numération (en dimension s). Acta Arith., 41 (1982) 337–351. A. Hinrichs, L. Markhasin, J. Oettershagen and T. Ullrich, Optimal quasi-Monte Carlo rules on higher order digital nets for the numerical integration of multivariate periodic functions, ArXiv preprint arXiv:1501.01800. L. Kuipers and H. Niederreiter, *Uniform Distribution of Sequences*, John Wiley, New York, 1974. L. Markhasin, Quasi-Monte Carlo methods for integration of functions with dominating mixed smoothness in arbitrary dimension, J. Complexity, 29 (2013) 370–388. H. Niederreiter, Low-discrepancy point sets, Monatsh. Math., 102 (1986) 155–167. H. Niederreiter, Low-discrepancy and low-dispersion sequences. J. Number Theory, 30 (1988) 51–70. H. Niederreiter, *Random Number Generation and Quasi-Monte Carlo Methods*, CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 63, SIAM, Philadelphia, 1992. H. Niederreiter and C. P. Xing, *Rational Points on Curves over Finite Fields: Theory and Applications*, London Mathematical Society Lecture Note Series, 285. Cambridge University Press, Cambridge, 2001. M. Yu. Rosenbloom and M. A. Tsfasman, Codes for the $m$-metric, Probl. Inf. Transm., 33 (1997) 55–63. I. H. Sloan and H. Woźniakowski, When are quasi-Monte Carlo algorithms efficient for high-dimensional integrals?, J. Complexity, 14 (1998) 1–33. I. M. Sobol’, The distribution of points in a cube and approximate evaluation of integrals. Zh. Vycisl. Mat. i Mat. Fiz., 7 (1967) 784–802. V. N. Temlyakov, Cubature formulas, discrepancy, and nonlinear approximation, J. 
Complexity, 19 (2003) 352–391. M. Ullrich, On “Upper error bounds for quadrature formulas on function classes” by K. K. Frolov, ArXiv preprint arXiv:1404.5457. G. Wahba, *Spline Models for Observational Data*, CBMS-NSF Regional Conference Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990. [^1]: Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan () [^2]: School of Mathematics and Statistics, The University of New South Wales, Sydney 2052, Australia ([[email protected]]{}) [^3]: School of Mathematics and Statistics, The University of New South Wales, Sydney 2052, Australia ([[email protected]]{}) [^4]: The work of T. G. is supported by JSPS Grant-in-Aid for Young Scientists No.15K20964. The work of K. S. and T. Y. is supported by the Program for Leading Graduate Schools, MEXT, Japan and Australian Research Council’s Discovery Projects funding scheme (project number DP150101770). The work of K. S. is also supported by Grant-in-Aid for JSPS Fellows No.15J05380. [^5]: Although [@DPbook Lemma 13.8] only consider the case where $P$ is a digital net over ${\mathbb{F}}_b$ with generating matrices of size $m\times m$, the proof still goes through even when $P$ is a digital net over ${\mathbb{F}}_b$ with generating matrices of size $n\times m$ as long as $n\ge m$.
--- abstract: 'We propose to use the steered quantum coherence (SQC) as a signature of quantum phase transitions (QPTs). By considering various spin chain models, including the transverse-field Ising model, *XY* model, and *XX* model with three-spin interaction, we showed that the SQC and its first-order derivative succeed in signaling different critical points of QPTs. In particular, the SQC method is effective for any spin pair chosen from the chain, and the strength of SQC, in contrast to entanglement and quantum discord, is insensitive to the distance (provided it is not very short) of the tested spins, which makes it convenient for practical use as there is no need for careful choice of two spins in the chain.' author: - 'Ming-Liang Hu' - 'Yun-Yue Gao' - Heng Fan title: Steered quantum coherence as a signature of quantum phase transitions in spin chains --- Introduction {#sec:1} ============ Quantum coherence plays a fundamental role in the fields of quantum optics [@Ficek] and thermodynamics [@ther5]. The resource theoretic framework for quantifying coherence formulated in 2014 stimulates further study of it from a quantitative perspective [@coher; @Plenio; @Hu]. In particular, it has been used to explain the quantum advantage of many emerging quantum computation tasks, including quantum state merging [@qsm], deterministic quantum computation with one qubit [@DQC1], Deutsch-Jozsa algorithm [@DJ], and Grover search algorithm [@Grover]. The resource theory of coherence also provides a basis for interpreting the wave nature of a quantum system [@path1; @path2] and the essence of quantum correlations such as quantum entanglement [@coher-ent; @convex3; @SQC; @naqc2; @naqc3; @Tan] and various discordlike quantum correlations [@DQC1; @Tan; @Yao; @Hufan; @Hux1; @Yuc; @Hux2]. Besides the fundamental position in physics, quantum coherence is also useful in studying critical behaviors of various spin chain systems. For instance, the relative entropy of coherence for one spin or two adjacent spins can detect quantum phase transitions (QPTs) in the spin-1/2 transverse-field Ising, *XX*, and Kitaev honeycomb models [@chenj], while critical behaviors of the *XY* model have been studied by virtue of the $l_1$ norm of coherence [@Qin]. Moreover, the relative entropy and $l_1$ norm of coherence for two neighboring spins detect successfully the Ising-type first-order QPT in the spin-1 *XXZ* model [@spin1]. The skew-information-based coherence measure [@skif], though it is not well defined [@Dubai], can also detect QPTs in certain spin chain models, including the spin-1/2 *XY* model either without [@Karpat] or with three-spin interaction [@Leisg; @Liyc] and the spin-1/2 *XYZ* model with Dzyaloshinsky-Moriya interaction [@Ywl]. In fact, other characterizations of quantumness in quantum information science have also been used to study QPTs. One of them is entanglement [@EoF]. Its role in exploring QPTs can be found in Refs. [@nature; @Osborne; @Gusj1; @Gusj2] and the review work [@Amico]. Another quantumness measure is entropic quantum discord [@QD; @QD2], which can detect QPTs in the *XXZ* model [@XXZ; @Sarandy], the transverse-field Ising model [@Ising; @Sarandy], the transverse-field *XY* model [@XY], and the *XY* model with three-spin [@XYthree] or Dzyaloshinsky-Moriya interaction [@XYDM]. Moreover, one can also use geometric quantum discord to explore QPTs in certain spin chain models [@Hu]. 
Nevertheless, although entanglement and quantum discord have been widely used to explore QPTs with great success, entanglement is short ranged [@Amico], so a careful choice of two spins at very short distance, or of the bipartition of the system, is required. As for quantum discord, though it can exist for two relatively long-distance spins, its computation is NP-complete [@qd-np] (there is no closed formula even for a general two-qubit state [@qdtwo]). These limitations restrict the scope of their applications in exploring QPTs. In this paper, we propose to use the steered quantum coherence (SQC) [@SQC] as a signature of QPTs. We consider a general *XY* model with a transverse magnetic field and three-spin interaction, and show that the SQC precisely signals all critical points of the QPTs. In particular, compared with entanglement and quantum discord, the SQC exists for any two spins in the chain, and its strength is insensitive to the distance of the two spins provided it is not very short. This remarkable property of the SQC relaxes the restriction on the distance of the spin pair selected for probing QPTs and may have important implications for experimental observation of QPTs as, in general, it is hard to measure a weak quantity in experiments. Moreover, unlike the quantum coherence of a state, which is basis dependent and may yield little useful information if the basis is chosen inappropriately, the SQC is analytically solvable for any two-spin state and its value is unambiguous. On the experimental side, the SQC can be estimated by local projective measurements and one-qubit tomography, which is also feasible with current techniques [@qcexp1; @qcexp2; @qcexp3]. All the aspects above show that the SQC may be a powerful tool to study QPTs in spin chain models. The structure of this paper is as follows. In Sec. \[sec:2\], we recall the definition of the SQC and the solution of the physical model. Then in Sec. \[sec:3\], we discuss critical behaviors of the SQC for the considered model and show that it signals the QPTs precisely. Finally, we summarize our main findings in Sec. \[sec:4\]. Preliminaries {#sec:2} ============= We first present the definition of the SQC. For a state $\rho_{AB}$ with the two qubits held, respectively, by Alice and Bob, the SQC is defined via Alice’s local measurements and classical communication between Alice and Bob. To be explicit, Alice carries out one of the pre-agreed measurements $\{\sigma^\mu\}_{\mu=x,y,z}$ ($\sigma^\mu$ being a Pauli operator) on qubit $A$ and communicates to Bob her choice $\sigma^\mu$. Then Bob’s system collapses to the ensemble of states $\{p_{\mu,a}, \rho_{B|\Pi_\mu^a}\}$, with $p_{\mu,a} ={\mathrm{tr}}(\Pi_\mu^a \rho_{AB})$ being the probability of Alice’s outcome $a\in\{0,1\}$, and $\rho_{B|\Pi_\mu^a}= {\mathrm{tr}}_A (\Pi_\mu^a\rho_{AB}) /p_{\mu,a}$ being Bob’s conditional state. Moreover, $\Pi_\mu^{a}= [\openone_2+(-1)^a \sigma^\mu]/2$ is the measurement operator and ${\openone}_2$ is the identity operator. For Alice’s chosen observable $\sigma^\mu$, Bob can measure the coherence of the ensemble $\{p_{\mu,a}, \rho_{B|\Pi_\mu^a}\}$ with respect to the eigenbasis of either one of the remaining two Pauli operators. 
After Alice’s all possible measurements $\{\Pi_\mu^a\}_{\mu=x,y,z}$ with equal probability, the SQC at Bob’s hand can be defined as the following averaged quantum coherence [@SQC] $$\label{eq2a-2} C^{na}(\rho_{AB})= \frac{1}{2}\sum_{\mu,\nu,a\atop \mu\neq\nu} p_{\mu,a} C^{\sigma^\nu}(\rho_{B|\Pi_\mu^a}),$$ where $C^{\sigma^\nu}(\rho_{B|\Pi_\mu^a})$ is the coherence of $\rho_{B|\Pi_\mu^a}$ defined in the reference basis spanned by the eigenbases of $\sigma^\nu$ [@coher]. In this paper, we use the $l_1$ norm of coherence and the relative entropy of coherence which are favored for their ease of calculation. By denoting $\{|\psi_i\rangle\}$ the eigenbases of $\sigma^\nu$, their analytical solutions are given, respectively, by [@coher] $$\label{eq2a-3} \begin{split} & C_{l_1}^{\sigma^\nu}(\rho)= \sum_{i\neq j}|\langle\psi_i|\rho|\psi_j\rangle|,\\ & C_{re}^{\sigma^\nu}(\rho)= -\sum_i \langle\psi_i|\rho|\psi_i\rangle \log_2 \langle\psi_i|\rho|\psi_i\rangle-S(\rho), \end{split}$$ with $S(\rho)=-{\mathrm{tr}}(\rho\log_2 \rho)$ denoting the von Neumann entropy. Based on these formulas, one can then obtain the corresponding SQC $C_{l_1}^{na}(\rho_{AB})$ and $C_{re}^{na} (\rho_{AB})$. Next, we introduce the *XY* model with a transverse magnetic field and three-spin interaction. The Hamiltonian for such a model can be written as $$\label{eq2b-1} \begin{split} \hat{H}= &-\sum\limits_{n=1}^N\left(\frac{1+\gamma}{2}{\sigma_n^x\sigma_{n+1}^x} +\frac{1-\gamma}{2}{\sigma_n^y\sigma_{n+1}^y}+\lambda\sigma_n^z\right) \\ &-\sum\limits_{n=1}^N\alpha({\sigma_{n-1}^x\sigma_n^z\sigma_{n+1}^x} +{\sigma_{n-1}^y\sigma_n^z\sigma_{n+1}^y}), \end{split}$$ where $\sigma_n^\mu$ ($\mu=x,y,z$) are the Pauli operators at site $n$, $\lambda$ is the transverse magnetic field, $\gamma$ denotes the anisotropy of the system arising from the nearest-neighbor interaction, and $\alpha$ denotes the strength of the three-spin interaction arising from the next-to-nearest-neighbor interaction [@three]. Moreover, $N$ is the number of spins in the chain, and we assume the periodic boundary conditions. The Hamiltonian $\hat{H}$ can be diagonalized by first using the Jordan-Wigner transformation [@QPTs2] $$\label{eq2b-2} \begin{split} & \sigma_n^x=\prod_{m<n}\left(1-2{c_m^\dag}c_m\right)\left(c_n+c_n^\dag\right), \\ & \sigma_n^y=-i\prod_{m<n}\left(1-2{c_m^\dag}c_m\right)\left(c_n-c_n^\dag\right),~ \sigma_n^z=1-2 c_n^\dag c_n, \end{split}$$ which maps the spins to spinless fermions with the creation (annihilation) operators $c_n^\dag$ ($c_n$). Then by virtue of the Fourier transformation $\tilde{c}_k=\sum_l c_l e^{-ilx_k}/\sqrt{N}$ ($x_k=2\pi k/N$) and the Bogoliubov transformation $d_k = \cos(\theta_k/2)\tilde{c}_k-i\sin(\theta_k/2)\tilde{c}_{-k}^\dag$, one can obtain [@epjb] $$\label{eq2b-3} \hat{H}=\sum_{k=-M}^M 2\varepsilon_k\left(d_k^\dag d_k-\frac{1}{2}\right),$$ where $M=(N-1)/2$, $\theta_k=\arcsin[-\gamma \sin (x_k)/\varepsilon_k]$, and the energy spectrum is given by $$\label{eq2b-4} \varepsilon_k=\sqrt{\epsilon_k^2+ \gamma^2\sin^2(x_k)},$$ with $\epsilon_k= \lambda-\cos(x_k)-2\alpha\cos(2x_k)$. To calculate the SQC, one needs to obtain the density operator $\rho_{i,i+r}$ for the spin pair $(i,i+r)$, with $r$ denoting the distance of two spins in units of the lattice constant. 
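Before constructing $\rho_{i,i+r}$ from the solution above, it may be helpful to see the averaged coherence defined earlier spelled out operationally. The following numpy sketch (ours, purely illustrative and not code from the original work) computes both the $l_1$-norm and relative-entropy SQC of an arbitrary two-qubit state by explicitly performing Alice's three projective measurements and averaging Bob's conditional coherences; for the two-qubit Bell state it returns the value 3 for both measures, since each of Bob's conditional states is then a Pauli eigenstate with unit coherence in both complementary bases.

```python
# A minimal sketch (illustrative only) of the steered quantum coherence of a
# two-qubit state rho_AB (qubit order A, B), following the definition above.
import numpy as np

I2 = np.eye(2, dtype=complex)
PAULI = {'x': np.array([[0, 1], [1, 0]], dtype=complex),
         'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
         'z': np.array([[1, 0], [0, -1]], dtype=complex)}

def shannon(p):
    p = np.clip(np.real(p), 1e-15, 1.0)
    return float(-(p * np.log2(p)).sum())

def coherences(rho, pauli):
    """(l1-norm, relative-entropy) coherence of a qubit state in the eigenbasis of `pauli`."""
    _, V = np.linalg.eigh(pauli)
    r = V.conj().T @ rho @ V                        # state written in the reference basis
    c_l1 = np.abs(r).sum() - np.abs(np.diag(r)).sum()
    c_re = shannon(np.diag(r).real) - shannon(np.linalg.eigvalsh(rho))
    return np.array([c_l1, c_re])

def sqc(rho_ab):
    """Averaged steered coherence [C_l1^na, C_re^na] at Bob's side."""
    total = np.zeros(2)
    for mu, s_mu in PAULI.items():
        for a in (0, 1):
            Pi = 0.5 * (I2 + (-1) ** a * s_mu)       # Alice's projector Pi_mu^a
            M = np.kron(Pi, I2) @ rho_ab
            p = np.trace(M).real                     # probability p_{mu,a}
            if p < 1e-12:
                continue
            rho_b = M.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2) / p   # tr_A(Pi rho)/p
            for nu in PAULI:
                if nu != mu:
                    total += 0.5 * p * coherences(rho_b, PAULI[nu])
    return total

# Example: the Bell state (|00> + |11>)/sqrt(2) gives [3, 3].
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(sqc(np.outer(psi, psi.conj())))
```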
In the Bloch representation, $\rho_{i,i+r}$ can always be decomposed as $$\label{eq2b-5} \rho_{i,i+r}=\frac{1}{4}\sum_{\mu,\nu} t_{\mu\nu}\sigma_i^\mu\otimes \sigma_{i+r}^\nu,$$ where $\mu,\nu\in \{0,x,y,z\}$, $t_{\mu\nu}= {\mathrm{tr}}(\rho_{i,i+r} \sigma_i^\mu\otimes \sigma_{i+r}^\nu)$, and $\sigma_i^0={\openone}_2$. Due to the translation invariance, $\rho_{i,i+r}$ will be independent of the position $i$ and depends only on the distance $r$ of two spins. Then one can obtain the nonzero $t_{\mu\nu}$ of $\rho_{i,i+r}$ as [@Wang; @Gusj] $$\label{eq2b-6} t_{z0}=t_{0z}=\langle\sigma^z\rangle,~ t_{\mu\mu}=\langle\sigma_{i}^\mu\sigma_{i+r}^\mu\rangle~(\mu\in\{x,y,z\}),$$ where $\langle\sigma^z\rangle$ is the magnetization intensity given by [@magnet] $$\label{eq2b-7} \langle\sigma^z\rangle= \frac{1}{N} \sum_k \frac{\epsilon_k\tanh(\beta\varepsilon_k)}{\varepsilon_k},$$ and $\beta=1/k_B T$, with $k_B$ being the Boltzmann constant. Moreover, the spin-spin correlation functions are given by [@xyt1] $$\label{eq2b-8} \begin{split} \langle\sigma_i^x\sigma_{i+r}^x\rangle&= \begin{vmatrix} G_{-1} & G_{-2} & \cdots & G_{-r} \\ G_{0} & G_{-1} & \cdots & G_{-r+1} \\ \vdots & \vdots & \ddots & \vdots \\ G_{r-2} & G_{r-3} & \cdots & G_{-1} \end{vmatrix},\\ \langle\sigma_i^y\sigma_{i+r}^y\rangle&= \begin{vmatrix} G_{1} & G_{0} & \cdots & G_{-r+2} \\ G_{2} & G_{1} & \cdots & G_{-r+3} \\ \vdots & \vdots & \ddots & \vdots \\ G_{r} & G_{r-1} & \cdots & G_{1} \end{vmatrix}, \end{split}$$ and $\langle\sigma_i^z\sigma_{i+r}^z\rangle= {\langle\sigma_i^z \rangle}^2 -G_r G_{-r}$, where $G_n$ ($-r\leqslant n\leqslant r$) is given by $$\label{eq2b-9} G_n= -\sum_k\frac{[\cos(n x_k)\epsilon_k+\gamma\sin(n x_k)\sin(x_k)] \tanh\left(\beta\varepsilon_k\right)}{N\varepsilon_k}.$$ For the two-spin density operator $\rho_{i,i+r}$ with its nonzero elements constrained by Eq. , the SQC can be obtained analytically as $$\begin{aligned} \label{eq2b-10} \begin{aligned} C_{l_1}^{na}(\rho_{i,i+r})= & t_{0z}+\frac{1}{2}\left(t_{xx}+t_{yy}+\sqrt{t_{0z}^2+t_{xx}^2} +\sqrt{t_{0z}^2+t_{yy}^2}\right), \\ C_{re}^{na}(\rho_{i,i+r})= & 2-H_2(\tau_1)-H_2(\tau_2)-\frac{(1+t_{z0})H_2(\tau_3)}{2} \\ & -\frac{(1-t_{z0})H_2(\tau_4)}{2}+H_2\left(\frac{1+t_{0z}}{2}\right), \end{aligned}\end{aligned}$$ where $H_2(\cdot)$ denotes the binary Shannon entropy function, and the parameters $\tau_i$ ($i=1,2,3,4$) are given by $$\label{eq2b-11} \begin{aligned} & \tau_1=\frac{1}{2}\left(1 + \sqrt{t_{0z}^2+t_{xx}^2}\right),~ \tau_2=\frac{1}{2}\left(1 + \sqrt{t_{0z}^2+t_{yy}^2}\right), \\ & \tau_3=\frac{1}{2}+ \frac{|t_{0z}+t_{zz}|}{2(1+t_{z0})},~ \tau_4=\frac{1}{2}+ \frac{|t_{0z}-t_{zz}|}{2(1-t_{z0})}. \end{aligned}$$ SQC and QPTs in spin chain models {#sec:3} ================================= Based on the above preliminaries, we discuss in this section critical behaviors of the spin chain described by Eq. by using the SQC. We show that the extreme points of the SQC for any two spins as well as the discontinuity of its first derivative are able to indicate QPTs in the considered model. Transverse-field Ising model {#sec:3a} ---------------------------- To begin with, we consider the transverse-field Ising model which corresponds to $\gamma=1$ and $\alpha=0$ in Eq. . For such a model, it is known that there is a second-order QPT at $\lambda_c=1$. At this point, the global phase flip symmetry breaks and the correlation length diverges [@Amico]. To reveal that the SQC can indicate QPTs in the Ising model, we show in Fig. 
\[fig:1\] the dependence of $C_{l_1}^{na} (\rho_{i,i+r})$ and its first derivative on $\lambda$ for different distances $r$ of the spin pair. For $r\leqslant 3$, $C_{l_1}^{na} (\rho_{i,i+r})$ increases monotonically with $\lambda$, and its first-order derivative with respect to $\lambda$ shows a discontinuity at $\lambda_c=1$. For tested spins with long distances ($r\geqslant 4$), as depicted in Fig. \[fig:1\](a), $C_{l_1}^{na} (\rho_{i,i+r})$ is not a monotonically increasing function of $\lambda$. Instead, there exists a pronounced cusp close to $\lambda_c=1$. A further numerical calculation shows that the critical point $\lambda_t$ for the minimum of this cusp approaches $\lambda_c$ monotonically as $r$ increases, e.g., $\lambda_t-\lambda_c \sim 10^{-6}$ when $r=1000$ and $N=2001$. It is therefore reasonable to conclude that for an infinite chain, the minimum of this cusp can precisely signal the QPT at $\lambda_c=1$ when $r$ is very large. Moreover, one can observe from Fig. \[fig:1\](b) that the discontinuity of $\mathrm{d}C_{l_1}^{na} (\rho_{i,i+r})/ \mathrm{d}\lambda$ indicates the QPT at $\lambda_c=1$ for tested spins at any distance. With the same system parameters as in Fig. \[fig:1\], we display in Fig. \[fig:2\] the dependence of $C_{re}^{na}(\rho_{i,i+r})$ and its first derivative on $\lambda$. One can see that with increasing strength of the transverse magnetic field $\lambda$, $C_{re}^{na} (\rho_{i,i+r})$ first decreases to a minimum and then increases gradually. For $\rho_{i,i+r}$ with large $r$, $C_{re}^{na}(\rho_{i,i+r})$ also shows a pronounced cusp in the neighborhood of $\lambda_c$, and with the increase of $r$, the critical point $\lambda_t$ for the minimum of this cusp approaches $\lambda_c$ more rapidly than that for $C_{l_1}^{na} (\rho_{i,i+r})$, e.g., $\lambda_{t}-\lambda_c \sim 10^{-8}$ for $r=1000$ and $N=2001$. This suggests that the cusp of the SQC can signal the QPT taking place at $\lambda_c$ for two long-distance tested spins. Moreover, the first-order derivative of $C_{re}^{na} (\rho_{i,i+r})$, as expected, also presents a discontinuity at the phase transition point $\lambda_c=1$ for two spins with different distances. All the above observations show clearly that the SQC and its first-order derivative for any two spins can indicate the QPT in the Ising model. In particular, one can see from Figs. \[fig:1\] and \[fig:2\] that beyond the adjacent region of $\lambda_c$, the curves of the SQC for two spins with different large $r$ nearly overlap; i.e., there is almost no decrease of the SQC for $\rho_{i,i+r}$ with different large $r$. Such a property can immediately be exploited to reduce the experimental demands for detecting QPTs, as one can choose two spins at any distance to achieve the same feat. We have also checked the efficiency of other signatures of QPTs. For entanglement and quantum discord, the discontinuities of their first derivatives can detect QPTs in the Ising chain [@Ising]. But the entanglement exists only for $r\leqslant 2$, and hence imposes a strict restriction on the distance of the tested spins, while the calculation of quantum discord is a hard task even when $\rho_{i,i+r}$ is available [@qdtwo]. Moreover, it can be seen from Eqs. and that the one-spin coherence is always zero. As for the two-spin coherence, its derivative shows a discontinuity at $\lambda_c$, but its estimation requires two-qubit state tomography. 
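The curves discussed above can be reproduced directly from the correlation functions quoted in the preceding section. The numpy sketch below (ours, purely illustrative; the values of $N$, $r$, and the $\lambda$ grid are arbitrary choices) assembles the zero-temperature correlators, evaluates the Toeplitz determinants for $\langle\sigma_i^x\sigma_{i+r}^x\rangle$ and $\langle\sigma_i^y\sigma_{i+r}^y\rangle$, and inserts them into the closed form of $C_{l_1}^{na}(\rho_{i,i+r})$; changing $\gamma$ and $\alpha$ gives the *XY* and three-spin *XX* cases discussed in the following subsections.

```python
# A minimal sketch (illustrative only): l1-norm SQC of the spin pair (i, i+r) for the
# transverse-field Ising chain (gamma = 1, alpha = 0) at zero temperature, built from
# the correlators given above with tanh(beta * eps_k) -> 1.
import numpy as np

def correlators(lam, gamma=1.0, alpha=0.0, N=401, r=10):
    M = (N - 1) // 2
    xk = 2 * np.pi * np.arange(-M, M + 1) / N
    eps = lam - np.cos(xk) - 2 * alpha * np.cos(2 * xk)
    Ek = np.sqrt(eps ** 2 + (gamma * np.sin(xk)) ** 2)
    sz = np.sum(eps / Ek) / N                                   # magnetization <sigma^z>
    G = {n: -np.sum((np.cos(n * xk) * eps
                     + gamma * np.sin(n * xk) * np.sin(xk)) / Ek) / N
         for n in range(-r, r + 1)}
    xx = np.linalg.det([[G[i - j - 1] for j in range(r)] for i in range(r)])
    yy = np.linalg.det([[G[i - j + 1] for j in range(r)] for i in range(r)])
    zz = sz ** 2 - G[r] * G[-r]
    return sz, xx, yy, zz

def sqc_l1(lam, **kw):
    t0z, txx, tyy, _ = correlators(lam, **kw)
    return t0z + 0.5 * (txx + tyy
                        + np.sqrt(t0z ** 2 + txx ** 2)
                        + np.sqrt(t0z ** 2 + tyy ** 2))

# The grid avoids lambda = 1 exactly, where the k = 0 mode makes eps_k / E_k ill defined
# at finite N; the sharpest feature of dC/dlambda should appear close to lambda_c = 1.
lams = np.linspace(0.5, 1.5, 200)
vals = np.array([sqc_l1(l) for l in lams])
dvals = np.gradient(vals, lams)
print(lams[np.argmax(np.abs(dvals))])
```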
Transverse-field *XY* model {#sec:3b} --------------------------- Next, we consider the transverse-field *XY* model, which corresponds to $\alpha=0$ in Eq. . There are two QPTs [@phase; @QPTxy]. The first one occurs at $\lambda_c= 1$. For $\lambda< \lambda_c$, the system is in the ferromagnetic ordered phase, while for $\lambda> \lambda_c$ it is in the paramagnetic quantum disordered phase. The second one occurs at $\gamma_c=0$ and $\lambda\in (0,1)$. It further separates the ferromagnetic ordered phase into two regions, i.e., the ferromagnet ordered along either the $x$ ($\gamma<0$) or the $y$ ($\gamma>0$) axis. In Fig. \[fig:3\], we show the dependence of $C_{l_1}^{na} (\rho_{i,i+r})$ and its first derivative on $\lambda$ for the *XY* model with $\gamma=0.5$. For two neighboring spins, the discontinuity of $\mathrm{d}C_{l_1}^{na}(\rho_{i,i+r})/ \mathrm{d} \lambda$ precisely signals the QPT at $\lambda_c$, and there exist two inflexions for it, which are not critical points of QPTs [@xyt1; @xyt2]. When $r$ is large, the curves of $C_{l_1}^{na}(\rho_{i,i+r})$ with different $r$ are nearly overlapped beyond the adjacent region of $\lambda_c$, and there exists an abrupt cusp in the neighborhood of $\lambda_c$. The critical point of $\lambda_t$ corresponds to the minimum of this cusp approaches asymptotically to $\lambda_c$ with the increase of $r$, e.g., $\lambda_t-\lambda_c \sim 10^{-7}$ when $r=1000$ and $N=2001$. Similar to the Ising model, the insensitivity of the SQC to the distance (provided it is not very short) of the tested spins in the *XY* chain also has important practical consequences for experimental characterization of QPTs. With regard to the first-order derivative of $C_{l_1}^{na}(\rho_{i,i+r})$, it shows a discontinuity at $\lambda_c$, irrespective of $r$. Hence, it is able to precisely detect the QPT for two spins at any distance. Similarly, we show in Fig. \[fig:4\] the capability of $C_{re}^{na} (\rho_{i,i+r})$ and its derivative in detecting QPT at $\lambda_c=1$. First, for two spins with long distances, the curves of $C_{re}^{na} (\rho_{i,i+r})$ are nearly overlapped for $\lambda$ deviating from the adjacent region of $\lambda_c$. On the contrary, there is a cusp close to $\lambda_c$, and the critical $\lambda_t$ related to the bottom of this cusp approaches rapidly to $\lambda_c$ with the increase of $r$, e.g., $\lambda_t-\lambda_c\sim 10^{-10}$ when $r=1000$ and $N=2001$. Second, the first derivative of $C_{re}^{na} (\rho_{i,i+r})$ shows a discontinuity at $\lambda_c$, irrespective of the distance of the spin pair in the chain. This indicates that the phase transition point in the *XY* model can also be signaled precisely by $\mathrm{d}C_{re}^{na} (\rho_{i,i+r})/ \mathrm{d}\lambda$. We have also examined QPTs of the *XY* model at $\gamma_c=0$ and $\lambda\in (0,1)$. For conciseness of this paper, we do not present the plots here. The numerical calculation shows that this QPT can be signaled precisely by the extremal behaviors of the SQC. To be explicit, $C_{l_1}^{na}(\rho_{i,i+r})$ is maximal for $r=1$ and minimal for $r\geqslant 2$ at $\gamma_c$, while $C_{re}^{na} (\rho_{i,i+r})$ always reaches to its minimum at $\gamma_c$. However, there is no extremal, discontinuous, or singular behavior being observed for the first-order derivative of the SQC with respect to the anisotropic parameter $\gamma$. 
As for concurrence of $\rho_{i,i+r}$, it is non-null for two spins with very short distance; e.g., for $\gamma= 0.5$, its first derivative detects the QPT at $\lambda_c$ only when $r\leqslant 3$. The critical point $\lambda_c$ can also be detected by the first derivative of quantum discord for two spins more distant than second neighbors [@XY], and similarly for the two-spin coherence. However, the strength of quantum discord and two-spin coherence decrease as we increase $r$, especially in the region of $\lambda> \lambda_c$, hence it is hard to detect them experimentally when $r$ is large. Transverse-field *XX* model with three-spin interaction {#sec:3c} ------------------------------------------------------- Now, we consider a more general case where only $\gamma=0$ is assumed in Eq. . The ground-state phase diagram consists of four sectors [@epjb]: the spin-saturated phase in the regions of $\lambda>\lambda_{c_1}$ and $\lambda<\lambda_{c_i}$ ($i= 2$ when $\alpha<1/8$ and $i=3$ otherwise), the spin liquid phase in the region of $\lambda\in(\lambda_{c_2},\lambda_{c_1})$, and the spin liquid phase in the region of $\lambda\in(\lambda_{c_3},\lambda_{c_2})$ and $\alpha>1/8$. Here, $\lambda_{c_1,c_2}= 2\alpha\pm 1$ and $\lambda_{c_3}= -(1+32\alpha^2)/16\alpha$. In Fig. \[fig:5\], we plot the SQC as functions of $\alpha$ and $\lambda$ for the three-spin interaction *XX* model with $N=2001$ and $r=100$. As can be seen from this figure, both $C_{l_1}^{na}(\rho_{i,i+r})$ and $C_{re}^{na}(\rho_{i,i+r})$ can signal the regions of different phases. To be explicit, when the system is in the spin-saturated phase, the two SQC measures take their values of about 2, while in the two spin liquid phases, one can observe a pronounced decrease of their values. The critical lines (i.e., $\lambda= \lambda_{c_1}$ and $\lambda= \lambda_{c_3}$) separating the spin-saturated phase from the spin liquid phase correspond to two inflexions of the SQC. For $\alpha> 1/8$, the boundary (i.e., $\lambda=\lambda_{c_2}$) between the spin liquid I and spin liquid II phases corresponds to another inflexion of the SQC. Besides the three critical lines, there is a critical line indicated by the minimum of the SQC, but as was shown in Ref. [@epjb], it is not a boundary of QPT. To gain more insight into the critical behaviors of SQC for the present model, we further plot in Fig. \[fig:6\] the dependence of $C_{l_1}^{na}(\rho_{i,i+r})$ and $C_{re}^{na} (\rho_{i,i+r})$ on $\lambda$ with different $\alpha$ and $r$. Besides those behaviors observed in Fig. \[fig:5\], one can observe that when $r=1$ and $\alpha< 1/8$, there are two cusplike minima which are pronounced for $C_{l_1}^{na}(\rho_{i,i+r})$ and are not obvious for $C_{re}^{na} (\rho_{i,i+r})$, but they are not critical points of QPTs [@epjb]. In this sense, the SQCs of long-distance spin pairs are more reliable than that of the neighboring spin pair in detecting QPTs of the three-spin interaction *XX* model. Looking at Fig. \[fig:6\], one can note that the curves of SQC for the spin pairs with different long distances are nearly overlapped; that is, the SQC in this model is also insensitive to the variation of the distance (provided it is not very short) of two spins. Such a property will be useful in the experimental detection of QPTs where other characterizations of quantumness are very weak and hence cannot be detected efficiently. 
As for concurrence of $\rho_{i,i+r}$, it is able to detect partial QPTs in the three-spin interaction model for the spin pair with small $r$ [@XYthree]. But when $r$ is large, its value becomes very small, and the regions of non-null concurrence shrink to the vicinity of $\lambda_{c_2}$ (if $\alpha< 1/8$) or $\lambda_{c_3}$ (if $\alpha> 1/8$). The quantum discord is a reliable indicator of QPTs when choosing two neighboring spins [@XYthree], and the two-spin coherence can detect the QPTs as well for small $r$. However, they also decrease with an increase in $r$, especially when $\alpha> 1/8$ and $r$ is large, they both oscillate rapidly with respect to $\lambda$ in the region of $\lambda\in (\lambda_{c_3}, \lambda_{c_2})$, with a large number of extreme points being observed. It is therefore hard to distinguish these points from the critical points of QPTs. Finally, we present an explanation for the underpinning of the observed phenomena in the above subsections, that is, the insensitivity of the SQC to the distance $r$ of two spins in the chain and the divergence in the derivative of the SQC with respect to the magnetic field $\lambda$. For brevity, we consider the Hamiltonian $\hat{H}$ without the three-spin interaction, and the general $\hat{H}$ of Eq. can be analyzed in a similar manner. First, we explain the insensitivity of the SQC to $r$. As $t_{0z}$ is independent of $r$, one only needs to consider the $r$ dependence of $t_{\mu\mu}$ which are determined by $\{G_n\}_{n=-r}^{r}$. From Eq. , one can obtain that for $\gamma=0$, $|G_{\pm 1}|$ is maximal among all $\{|G_n|\}$ if $\lambda \lesssim 0.6736$ and $|G_{0}|$ is maximal if $\lambda\gtrsim 0.6736$, while for $\gamma\in(0,1]$, $|G_{-1}|$ is maximal if $\lambda<\lambda_{0}$ and $|G_{0}|$ is maximal if $\lambda> \lambda_{0}$, with $\lambda_{0}$ increasing from 0.6736 to 1 when $\gamma$ increases from 0 to 1. Moreover, $|G_{\pm n}|$ with large $n$ are negligible compared with those with small $n$. For example, for the Ising model, we have $G_n=-2/[(2n+1)\pi]$ at $\lambda=\lambda_c$, $G_{-1}=1$ and $G_{n}=0$ ($n\neq -1$) at $\lambda=0$ in the thermodynamic limit ($N\rightarrow \infty$), while for the *XX* model, we have $G_0=2\theta_0/\pi-1$ and $G_n=2\sin(n\theta_0)/(n\pi)$ ($n\neq 0$), where $\theta_0=\arccos(\min\{\lambda,1\})$. Therefore, for the Ising model, $|G_n/G_{-1}|=1/(2n+1)$ at $\lambda=\lambda_c$, and such a ratio will be further decreased when $\lambda$ deviates from $\lambda_c$. Similarly, for the *XX* model, $|G_n/G_{\pm 1}|= |\sin(n\theta_0)|/(n\sin\theta_0)$ and $|G_n/G_0|=|\sin(n\theta_0)|/ [n(\pi -2\theta_0)]$. As a consequence, even when $r$ is very large, only those terms $G_{\pm n}$ with small $n$ dominate in $t_{xx}$ and $t_{yy}$, and this results in the insensitivity of $C_{l_1}^{na} (\rho_{i,i+r})$ to large $r$. Moreover, it is easy to see that $t_{zz}$ depends weakly on large $r$, thus $C_{re}^{na}(\rho_{i,i+r})$ is also insensitive to large $r$. Physically, the insensitivity of the SQC indicator to the distance between the tested spins can also be comprehended from the fact that the SQC is null only for $\rho_{AB}= \rho_A \otimes \openone_2/2$ as it takes into account the three mutually unbiased bases [@SQC]. That is, it characterizes a more general form of correlation and could exist in a parameter region in which there are no entanglement and quantum discord. 
In fact, the insensitivity of the SQC indicator to large $r$ also has its roots in the insensitivity of the elements of the reduced density matrices $\rho_{i,i+r}$ with large $r$. But for these $\rho_{i,i+r}$, the entanglement has already disappeared and the quantum discord is very weak. Moreover, some sudden change points of quantum discord may not correspond to QPTs as they are caused by the optimization procedure in its definition [@Karpat]. Second, we explain the divergence in the derivative of the SQC with respect to $\lambda$. Given that $T=0$, then from Eqs. and one can obtain $$\label{eq3-1} \begin{aligned} & \frac{\partial{t_{0z}}}{\partial{\lambda}}= \frac{\gamma^2}{N}\sum_k \frac{\sin^2(x_k)}{\varepsilon_k^3}, \\ & \frac{\partial{G_n}}{\partial{\lambda}}= \frac{\gamma}{N}\sum_k\frac{\epsilon_k \sin(nx_k)\sin(x_k) -\gamma\cos(nx_k)\sin^2(x_k)}{\varepsilon_k^3}, \end{aligned}$$ from which one can see that both $\partial{t_{0z}}/\partial{\lambda}$ and $\partial{G_n}/\partial{\lambda}$ are divergent at $\lambda= \lambda_c$ as the two fractions in the above equation approach infinity. For the *XX* model, one can see more specifically the divergence of $\partial{t_{0z}}/\partial{\lambda}$ and $\partial{G_n}/ \partial{\lambda}$. This is because in the thermodynamic limit, we have $\partial{t_{0z}}/\partial{\lambda}=-\partial{G_0}/\partial{\lambda}= 2/(\pi\sqrt{1-\lambda^2})$ and $\partial{G_n}/\partial{\lambda}= -2\cos(n\theta_0)/(\pi\sqrt{1-\lambda^2})$ ($n\neq 0$). Consequently, there is always a divergence in the derivatives of the SQC due to Eq. . Summary and discussion {#sec:4} ====================== To summarize, we have proposed to use the SQC as a signature of QPTs in the transverse-field *XY* model with three-spin interaction. The motivation for considering such a quantumness measure is that it is long ranged and exists in the parameter regions for which there are no quantum correlations. Compared with other signatures of QPTs such as entanglement and quantum discord, our method is powerful due to the following advantages: (+1) The SQC and its derivative succeed in detecting precisely all the QPTs in the considered models. (+2) The effectiveness of SQC in detecting QPTs is independent of the distance of two spins, which makes it convenient for practical use as one can choose any two spins other than the restricted short-distance spins. This also differentiates it from concurrence and quantum discord, which decrease rapidly with the increasing distance of two spins and disappear or become infinitesimal when the distance is long. (+3) The SQC is analytically solvable and could be estimated experimentally by local projective measurements and one-qubit tomography. Moreover, the advantage of the SQC method over the simple coherence method may originate from the fact that while quantum coherence reveals only the quantum nature of the whole system under a fixed basis, the SQC takes into account the three mutually unbiased bases and the local operation and classical communication between *A* and *B*. As a consequence, it captures a kind of correlation which contains more comprehensive information than that of coherence [@SQC; @naqc2; @naqc3], hence it is capable of distinguishing the subtle nature of a system and is more reliable in reflecting the quantum critical behaviors even when the coherence measures fail to do so. 
As the three-spin interaction Hamiltonian may be generated in optical lattices [@three], we expect our observation can be confirmed in future experiments with state-of-the-art techniques. One step further would be to use the SQC method to investigate QPTs of high-dimensional spin systems and exotic quantum phases in many-body systems such as topological phase transitions [@topo1; @topo2; @topo3; @topo4; @topo5]. Moreover, it is also appealing to study the dynamics of the SQC, which may provide an interesting scenario for understanding quantum criticality of many-body systems [@dyqc1; @dyqc2; @dyqc3]. ACKNOWLEDGMENTS {#acknowledgments .unnumbered} =============== This work was supported by National Natural Science Foundation of China (Grant Nos. 11675129, 11774406, and 11934018), National Key R & D Program of China (Grant Nos. 2016YFA0302104 and 2016YFA0300600), Strategic Priority Research Program of Chinese Academy of Sciences (Grant No. XDB28000000), Research Program of Beijing Academy of Quantum Information Sciences (Grant No. Y18G07), the New Star Team of XUPT, and the Innovation Fund for graduates (Grant No. CXJJLA2018007). [50]{} Z. Ficek and S. Swain, *Quantum Interference and Coherence: Theory and Experiments*, Springer Series in Optical Sciences (Springer, New York, 2005). G. Gour, M. P. Müller, V. Narasimhachar, R. W. Spekkens, and N. Y. Halpern, [Phys. Rep. ]{}[**583**]{}, 1 (2015). T. Baumgratz, M. Cramer, and M. B. Plenio, [Phys. Rev. Lett. ]{}[**113**]{}, 140401 (2014). A. Streltsov, G. Adesso, and M. B. Plenio, [Rev. Mod. Phys. ]{}[**89**]{}, 041003 (2017). M. L. Hu, X. Hu, J. C. Wang, Y. Peng, Y. R. Zhang, and H. Fan, [Phys. Rep. ]{}[**762-764**]{}, 1 (2018). A. Streltsov, E. Chitambar, S. Rana, M. N. Bera, A. Winter, and M. Lewenstein, [Phys. Rev. Lett. ]{}[**116**]{}, 240405 (2016). J. Ma, B. Yadin, D. Girolami, V. Vedral, and M. Gu, [Phys. Rev. Lett. ]{}[**116**]{}, 160407 (2016). M. Hillery, [Phys. Rev. A ]{}[**93**]{}, 012111 (2016). H. L. Shi, S. Y. Liu, X. H. Wang, W. L. Yang, Z. Y. Yang, and H. Fan, [Phys. Rev. A ]{}[**95**]{}, 032307 (2017). M. N. Bera, T. Qureshi, M. A. Siddiqui, and A. K. Pati, [Phys. Rev. A ]{}[**92**]{}, 012118 (2015). E. Bagan, J. A. Bergou, S. S. Cottrell, and M. Hillery, [Phys. Rev. Lett. ]{}[**116**]{}, 160406 (2016). A. Streltsov, U. Singh, H. S. Dhar, M. N. Bera, and G. Adesso, [Phys. Rev. Lett. ]{}[**115**]{}, 020403 (2015). X. Qi, T. Gao, and F. Yan, [J. Phys. A ]{}[**50**]{}, 285301 (2017). D. Mondal, T. Pramanik, and A. K. Pati, [Phys. Rev. A ]{}[**95**]{}, 010301(R) (2017). M. L. Hu and H. Fan, [Phys. Rev. A ]{}[**98**]{}, 022312 (2018). M. L. Hu, X. M. Wang, and H. Fan, [Phys. Rev. A ]{}[**98**]{}, 032317 (2018). K. C. Tan, H. Kwon, C. Y. Park, and H. Jeong, [Phys. Rev. A ]{}[**94**]{}, 022329 (2016). Y. Yao, X. Xiao, L. Ge, and C. P. Sun, [Phys. Rev. A ]{}[**92**]{}, 022112 (2015). M. L. Hu and H. Fan, [Phys. Rev. A ]{}[**95**]{}, 052106 (2017). X. Hu, A. Milne, B. Zhang, and H. Fan, [Sci. Rep. ]{}[**6**]{}, 19365 (2015). J. Zhang, S. R. Yang, Y. Zhang, and C. S. Yu, [Sci. Rep. ]{}[**7**]{}, 45598 (2017) X. Hu and H. Fan, [Sci. Rep. ]{}[**6**]{}, 34380 (2016). J. J. Chen, J. Cui, Y. R. Zhang, and H. Fan, [Phys. Rev. A ]{}[**94**]{}, 022112 (2016). M. Qin, Z. Ren, and X. Zhang, [Phys. Rev. A ]{}[**98**]{}, 012303 (2018). A. L. Malvezzi, G. Karpat, B. C. Çakmak, F. F. Fanchini, T. Debarba, and R. O. Vianna, [Phys. Rev. B ]{}[**93**]{}, 184428 (2016). D. Girolami, [Phys. Rev. Lett. ]{}[**113**]{}, 170401 (2014). S. Du and Z. Bai, [Ann. 
Phys. (N.Y.) ]{}[**359**]{}, 136 (2015). G. Karpat, B. Çakmak, and F. F. Fanchini, [Phys. Rev. B ]{}[**90**]{}, 104431 (2014). S. G. Lei and P. Q. Tong, [Quantum Inf. Process. ]{}[**15**]{}, 1811 (2016). Y. C. Li and H. Q. Lin, [Sci. Rep. ]{}[**6**]{}, 26365 (2016). T. C. Yi, W. L. You, N. Wu, and A. M. Oleś, [Phys. Rev. B ]{}[**100**]{}, 024423 (2019). W. K. Wootters, [Phys. Rev. Lett. ]{}[**80**]{}, 2245 (1998). A. Osterloh, L. Amico, G. Falci, and R. Fazio, Nature (Londan) [**416**]{}, 608 (2002). T. J. Osborne and M. A. Nielsen, [Phys. Rev. A ]{}[**66**]{}, 032110 (2002). S. J. Gu, H. Q. Lin, and Y. Q. Li, [Phys. Rev. A ]{}[**68**]{}, 042330 (2003). S. J. Gu, G. S. Tian, and H. Q. Lin, [Phys. Rev. A ]{}[**71**]{}, 052322 (2005). L. Amico, R. Fazio, A. Osterloh, and V. Vedral, [Rev. Mod. Phys. ]{}[**80**]{}, 517 (2008). H. Ollivier and W. H. Zurek, [Phys. Rev. Lett. ]{}[**88**]{}, 017901 (2001). L. Henderson and V. Vedral, [J. Phys. A ]{}[**34**]{}, 6899 (2001). T. Werlang, C. Trippe, G. A. P. Ribeiro, and G. Rigolin, [Phys. Rev. Lett. ]{}[**105**]{}, 095702 (2010). M. S. Sarandy, [Phys. Rev. A ]{}[**80**]{}, 022108 (2009). R. Dillenschneider, [Phys. Rev. B ]{}[**78**]{}, 224413 (2008). J. Maziero, H. C. Guzman, L. C. Céleri, M. S. Sarandy, and R. M. Serra, [Phys. Rev. A ]{}[**82**]{}, 012106 (2010). Y. C. Li and H. Q. Lin, [Phys. Rev. A ]{}[**83**]{}, 052323 (2011). B. Q. Liu, B. Shao, J. G. Li, J. Zou, and L. A. Wu, [Phys. Rev. A ]{}[**83**]{}, 052112 (2011). Y. Huang, [New J. Phys. ]{}[**16**]{}, 033027 (2014). D. Girolami and G. Adesso, [Phys. Rev. A ]{}[**83**]{}, 052108 (2011). Y. T. Wang, J. S. Tang, Z. Y. Wei, S. Yu, Z. J. Ke, X. Y. Xu, C. F. Li, and G. C. Guo, [Phys. Rev. Lett. ]{}[**118**]{}, 020403 (2017). D. J. Zhang, C. L. Liu, X. D. Yu, and D. M. Tong, [Phys. Rev. Lett. ]{}[**120**]{}, 170501 (2018). X. D. Yu and O. Gühne, [Phys. Rev. A ]{}[**99**]{}, 062310 (2019). J. K. Pachos and M. B. Plenio, [Phys. Rev. Lett. ]{}[**93**]{}, 056402 (2004). S. Sachdev, *Quantum Phase Transitions* (Cambridge University Press, Cambridge, England, 2000). I. Titvinidze and G. I. Japaridze, [Eur. Phys. J. B ]{}[**32**]{}, 383 (2003). X. G. Wang, [Phys. Lett. A ]{}[**331**]{}, 164 (2004). S. J. Gu, C. P. Sun, and H. Q. Lin, [J. Phys. A ]{}[**41**]{}, 025002 (2008). E. Barouch, B. M. McCoy, and M. Dresden, [Phys. Rev. A ]{}[**2**]{}, 1075 (1970). E. Barouch and B. McCoy, [Phys. Rev. A ]{}[**3**]{}, 786 (1971). P. Pfeuty, [Ann. Phys. (N.Y.) ]{}[**57**]{}, 79 (1970). M. Zhong and P. Tong, [J. Phys. A ]{}[**43**]{}, 505302 (2010). B. McCoy, E. Barouch, and D. Abraham, [Phys. Rev. A ]{}[**4**]{}, 2331 (1971). A. Kitaev and J. Preskill, [Phys. Rev. Lett. ]{}[**96**]{}, 110404 (2006). A. Hamma, W. Zhang, S. Haas, and D. A. Lidar, [Phys. Rev. B ]{}[**77**]{}, 155111 (2008). F. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, [Phys. Rev. B ]{}[**81**]{}, 064439 (2010). Y. X. Chen and S. W. Li, [Phys. Rev. A ]{}[**81**]{}, 032120 (2010). J. Cui, J. P. Cao, and H. Fan, [Phys. Rev. A ]{}[**82**]{}, 022319 (2010). H. T. Quan, Z. Song, X. F. Liu, P. Zanardi, and C. P. Sun, [Phys. Rev. Lett. ]{}[**96**]{}, 140604 (2006). D. Rossini, T. Calarco, V. Giovannetti, S. Montangero, and R. Fazio, [Phys. Rev. A ]{}[**75**]{}, 032333 (2007). Z. Sun, X. G. Wang, and C. P. Sun, [Phys. Rev. A ]{}[**75**]{}, 062312 (2007).
--- author: - 'Tomáš Masařík[^1]' - Tomáš Toufar bibliography: - 'src/lit.bib' title: 'Parameterized complexity of fair deletion problems.[^2]' --- [^1]: Author was supported by the project CE-ITI P202/12/G061. [^2]: Research was supported by the project GAUK 338216 and by the project SVV-2016-260332.
--- abstract: 'A gauge invariant regularization of quantum chromodynamics (QCD) is introduced, adjusted to model nonperturbative vacuum effects in QCD on the light front (LF) via the dynamics of the zero Fourier modes of the fields on the LF.' author: - | **M.Yu. Malyshev[^1], E.V. Prokhvatilov\ ** title: '**Gauge invariant regularization of QCD on the Light Front with the lattice in transverse space coordinates**' --- Introduction ============ Quantization of field theory on the light front (LF) [@dir], i.e. on the hyperplane $x^+=0$ in the coordinates $$x^{\pm}=(x^0\pm x^3)/\sqrt{2},\quad x^{{\bot}}=x^1,x^2,$$ where $x^0,x^1,x^2,x^3$ are Lorentz coordinates and $x^+$ plays the role of time, requires a special regularization of the theory. The LF momentum operator $P_-$ (the generator of translations along the $x^-$ axis) is nonnegative for states with nonnegative energy and mass: $$P_-=(P_0-P_3)/\sqrt{2}\geqslant 0\quad \text{for} \quad p_0\geqslant0,\, p^2\geqslant0.$$ The vicinity of its minimal eigenvalue, $p_-=0$, corresponds to both the ultraviolet and the infrared domains of momenta in Lorentz coordinates. Quantizing a field theory on the LF one finds singularities at $p_-\to 0$, and the regularization of these singularities may affect the description of both ultraviolet and infrared physics, in particular, the correct description of vacuum effects. The usual ways of regularizing the $p_-\to 0$ singularities are the following: [**(a)**]{} the cutoff in $|p_-|\;\; (|p_-|\geqslant{\varepsilon}>0)$, [**(b)**]{} the “DLCQ” regularization, i.e. the space cutoff in $x^-$, $|x^-|\leqslant L$, plus periodic boundary conditions on the fields in $x^-$, which leads to the discretization of the $P_-$ spectrum: $p_-=\frac{\pi n}{L}$, $n=0,1,2,\dots$ The Fourier mode of the field with $p_-=0$ (the “zero mode”) is separated here from the other modes. In the canonical formalism the zero mode turns out to be dependent on the other modes due to constraints (for gauge field theory see [@nov2; @nov2a]). Both regularizations break Lorentz symmetry, and the regularization (a) also violates gauge invariance in gauge field theories. This can lead to difficulties with the renormalization of the theory, and also to a nonequivalence of the results obtained with LF and with usual (“equal time”) quantization. In the framework of perturbation theory it was shown [@burlang; @tmf97] that to restore the symmetry and the above-mentioned equivalence it is necessary to add to the regularized LF Hamiltonian some special “counterterms”. However, for Quantum Chromodynamics (QCD) one can expect effects nonperturbative in the coupling constant, in particular, vacuum condensates. Applying a regularization of type (a), which excludes zero modes, one loses such condensates. The regularization (b) leads to canonically constrained, dynamically dependent zero modes. With these zero modes one again cannot correctly describe condensates [@yaf88; @yaf89]. The study of this problem in (1+1)-dimensional quantum electrodynamics suggested a way to introduce a correct description of condensates within the regularization (b), at least semiphenomenologically, by using the zero modes as independent variables [@yaf88; @yaf89]. In the present paper we briefly review our new parametrization of gauge fields on a lattice in the “transversal” space coordinates on the LF. 
This parametrization is convenient for a separate treatment of the zero modes of the fields on the LF and gives a way to introduce a gauge invariant regularization of the theory. Then we restrict ourselves to the QCD(2+1) model in coordinates close to the LF and perform the limiting transition to the LF Hamiltonian, keeping the dynamical independence of the zero modes of the fields. We apply this Hamiltonian to a simple example of a mass spectrum calculation. The definition of gauge fields on the “transverse” lattice ========================================================== The gluon part of the QCD Lagrangian in continuous space has the following form: [$$\displaylines{\refstepcounter{equation} \label{1}\hskip 1em minus 1em {\cal L}=-\frac{1}{2} Tr F_{{\mu}{\nu}}F^{{\mu}{\nu}}. {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} where $$F_{\mu\nu}= {\partial}_{\mu} A_{\nu}- {\partial}_{\nu }A_{\mu}-ig[A_{{\mu}},A_{{\nu}}]$$ and the gluon vector fields $A_{\mu}(x)$ are $N\times N$ Hermitian traceless matrices. Under SU(N) gauge transformations the $A_{\mu}(x)$ transform as follows: [$$\displaylines{\refstepcounter{equation} \label{2}\hskip 1em minus 1em A_{\mu}(x) \to \Omega(x)A_{\mu}(x)\Omega^+(x)+ \frac {i}{g} \Omega(x){\partial}_{\mu}\Omega^+(x). {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} Here the $\Omega (x)$ are $N\times N$ matrices corresponding to the SU(N) gauge transformation. In the LF Hamiltonian approach one uses continuous coordinates $x^+, x^-$ and introduces, as an ultraviolet regulator, a lattice in the transversal coordinates. Gauge invariance is maintained via an appropriate use of the Wilson lattice method [@wilson], describing gauge fields by matrices related to lattice links. If one uses unitary matrices for these link variables and constructs the Hamiltonian, one needs to apply the “transfer matrix” method described in [@creutz; @Grunewald]. However, this method is not accommodated to the LF and to the corresponding choice of the gauge $A_-=0$. To overcome this difficulty we propose a modification of these link variables, introducing nonunitary matrices of a special form, where only the zero modes are related to the links and the nonzero modes are related to the sites belonging to these links. Using these lattice variables we can represent the complete regularization of the theory in gauge invariant form. The gluon field components $A_+$ and $A_-$ are related to the lattice sites. Under gauge transformations they transform according to the previous formula (\[2\]). The transverse components are described by the following $N\times N$ complex matrices: [$$\displaylines{\refstepcounter{equation} \label{3}\hskip 1em minus 1em M_{{\mu}}(x)=(I+iga\tilde A_{{\mu}}(x))U_{{\mu}}(x), {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} where ${\mu}$ is the index of the transversal components, the $\tilde A_{{\mu}}(x)$ are Hermitian $N\times N$ matrices related to the corresponding lattice sites, the $U_{{\mu}}(x)$ are unitary $N\times N$ matrices related to the links $(x-ae_{{\mu}}, x)$, $a$ is the lattice parameter (the size of the link), $e_{{\mu}}$ is the unit vector along the $x^{{\mu}}$ axis, and $g$ is the QCD coupling constant. We define the transformation law under gauge transformations as follows: [$$\displaylines{\refstepcounter{equation} \label{4}\hskip 1em minus 1em \tilde A_{\mu}(x) \to \Omega(x)\tilde A_{\mu}(x)\Omega^+(x),\quad U_{{\mu}}(x)\to \Omega(x)U_{{\mu}}(x)\Omega^+(x-ae_{{\mu}}) . 
{\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} As a consequence, the matrices $M_{{\mu}}(x)$ transform like link variables [@lat; @lat1]: $$M_{{\mu}}(x)\to \Omega(x)M_{{\mu}}(x)\Omega^+(x-ae_{{\mu}}).$$ Let us remark that the Hermiticity of the matrices $\tilde A_{{\mu}}(x)$ is preserved under these gauge transformations. Let us introduce the operator $D_-$ by the following definitions: [$$\displaylines{\refstepcounter{equation} \label{5}\hskip 1em minus 1em D_-\tilde A_{{\mu}}(x)={\partial}_-\tilde A_{{\mu}}(x)-ig[A_-(x),\tilde A_{{\mu}}(x) ],{\hfil \hskip 1em minus 1em\phantom{(\theequation)} \hfilneg\cr\hfilneg\hskip 1em minus 1em\hfil}D_-U_{{\mu}}(x)={\partial}_-U_{{\mu}}(x)-igA_-(x) U_{{\mu}}(x)+igU_{{\mu}}(x) A_-(x-ae_{{\mu}}),{\hfil \hskip 1em minus 1em\phantom{(\theequation)} \hfilneg\cr\hfilneg\hskip 1em minus 1em\hfil}D_-M_{{\mu}}(x)={\partial}_-M_{{\mu}}(x)-igA_-(x) M_{{\mu}}(x)+igM_{{\mu}}(x) A_-(x-ae_{{\mu}}). {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} This definition of $D_-$ has a gauge invariant form under the gauge transformations defined above. Further, we impose on the $U_{{\mu}}(x)$ the condition [$$\displaylines{\refstepcounter{equation} \label{6}\hskip 1em minus 1em D_-U_{{\mu}}(x)=0, {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} while from the $\tilde A_{{\mu}}(x)$ we exclude the part which satisfies the equality $D_-\,\tilde A_{{\mu}}(x)=0$. In the gauge $A_-=0$ these conditions simply mean a separation of the zero ($U_{{\mu}}(x)$) and nonzero ($\tilde A_{{\mu}}(x)$) Fourier modes of the field in $x^-$. In general this provides a gauge invariant definition of the separation. Furthermore, we can introduce a gauge invariant cutoff in $p_-$ by using a cutoff in the eigenvalues $q_-$ of $D_-$: $|q_-|\leqslant \Lambda$. Now let us consider the naive continuous space limit $a\to 0$. We require the following relation in the fixed gauge $A_-=0$ at $a\to 0$: $$U_{{\mu}}(x)\to \exp{igaA_{{\mu}0}(x)}\to (I + igaA_{{\mu}0}(x)).$$ Here $A_{{\mu}0}(x)$ is the zero mode of the field $A_{{\mu}}(x)$ in continuous space. For the $\tilde A_{{\mu}}(x)$ we require that it tends to the nonzero-mode part of $A_{{\mu}}(x)$. Then, at nonzero $A_-$, we get for the matrix $M_{{\mu}}(x)$ the following relation: [$$\displaylines{\refstepcounter{equation} \label{7}\hskip 1em minus 1em M_{{\mu}}(x)\to (I+iga A_{{\mu}}(x)+ O((ag)^2)). {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} Indeed, at $a\to 0$ we have: [$$\displaylines{\refstepcounter{equation} \label{8}\hskip 1em minus 1em M_{{\mu}}(x)\to\Omega(x; A_-)(I+iga A_{{\mu}}(x))_{A_-=0} \Omega^+(x-ae_{{\mu}}; A_-)\to{\hfil \hskip 1em minus 1em\phantom{(\theequation)} \hfilneg\cr\hfilneg\hskip 1em minus 1em\hfil}\to \Omega(x; A_-)(I+iga A_{{\mu}}(x))_{A_-=0}\Omega^+(x; A_-)-a \Omega(x; A_-){\partial}_{{\mu}}\Omega^+(x; A_-){\hfil \hskip 1em minus 1em\phantom{(\theequation)} \hfilneg\cr\hfilneg\hskip 1em minus 1em\hfil}\to (I+iga A_{{\mu}}(x)), {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} where $\Omega (x; A_-)$ is the matrix of the gauge transformation which transforms the field in the gauge $A_-(x)=0$ to the field with a given $A_-(x)$. Let us introduce the lattice analog of the continuous space field strength $F_{{\mu}{\nu}}(x)$. 
With this aim we define the following quantities (${\mu}, {\nu}= 1,2$): [$$\displaylines{\refstepcounter{equation} \label{9}\hskip 1em minus 1em G_{{\mu}{\nu}}(x)=-\frac {1}{ga^2}[M_{{\mu}}(x)M_{{\nu}}(x-ae_{{\mu}})-M_{{\nu}}(x)M_{{\mu}}(x-ae_{{\nu}})], {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} [$$\displaylines{\refstepcounter{equation} \label{10}\hskip 1em minus 1em G_{+-}(x)=iF_{+-}(x),\quad G_{-{\mu}}= \frac{1}{ga}D_-M_{{\mu}},{\hfil \hskip 1em minus 1em\phantom{(\theequation)} \hfilneg\cr\hfilneg\hskip 1em minus 1em\hfil}G_{+{\mu}}(x)=\frac {1}{ga}[{\partial}_{+}M_{{\mu}}(x)-ig(A_{+}(x)M_{{\mu}}(x) -M_{{\mu}}(x)A_{+}(x-ae_{{\mu}}))]. {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} It is not difficult to show that at $a\to 0$ one gets $G_{{\mu}{\nu}}(x)\to iF_{{\mu}{\nu}}(x)$, and the analogous relations are true for the $G_{+{\mu}}$, $G_{-{\mu}}$. We get the following transformation law under the gauge transformations: [$$\displaylines{\refstepcounter{equation} \label{11}\hskip 1em minus 1em G_{\pm{\mu}}(x) \to \Omega(x)\,G_{\pm{\mu}}(x)\,\Omega^+(x-ae_{{\mu}}),{\hfil \hskip 1em minus 1em\phantom{(\theequation)} \hfilneg\cr\hfilneg\hskip 1em minus 1em\hfil}G_{{\mu}{\nu}}(x)\to \Omega(x)\,G_{{\mu}{\nu}}(x)\,\Omega^+(x-ae_{{\mu}}-ae_{{\nu}}). {\hfil\hskip 1em minus 1em (\theequation)}\hfilneg}$$]{} Having these quantities one can construct gauge-invariantly regularized action and the LF Hamiltonian of the QCD (similarly to the work [@lat1]). One can also apply the transfer matrix method of the paper [@creutz] to construct the Hamiltonian on the lattice even in the gauge $A_-=0$, because only zero modes are described by unitary matrices on links, and it is possible to find the necessary “transfer matrix” in $x^+$, in analogy with paper [@creutz]. 1em [**Acknowledgements.**]{} We thank V.A. Franke and S.A. Paston for useful discussions. [99]{} Forms of relativistic dynamics. Rev. Mod. Phys. 1949. V. 21. P. 392-398. On The Light Cone Formulation of Classical Nonabelian Gauge Theory. Lett. Math. Phys. 1981. V. 5. P. 239-245. On The Light Cone Quantization of Nonabelian Gauge Theory. Lett. Math. Phys. 1981. V. 5. P. 437-444. Hamiltonian formulation of (2+1)-dimensional QED on the light cone. Phys. Rev. D. 1991. V. 44(4). P. 1187-1197; Rotational invariance in light-cone quantization. Phys. Rev. D. 1991. V. 44(12), P. 3857-3867. Comparison of quantum field perturbation theory for the light front with the theory in Lorentz coordinates. Theor. Math. Phys. 112: 1117-1130, 1997, Teor. Mat. Fiz. 112: 399-416, 1997, arXiv:hep-th/9901110. Approximate description of QCD condensates in light cone coordinates. Sov. J. Nucl. Phys. 47: 559, 1988, Yad. Fiz. 47: 882-883, 1988. Limiting transition to lightlike coordinates in the field theory and QCD Hamiltonian. Sov. J. Nucl. Phys. 49: 688-692, 1989, Yad. Fiz. 49: 1109-1117, 1989. Quantization of Field Theory on the Light Front. In: Kovras, O. (ed.) Focus on quantum field theory, pp. 23-81. Nova science publishers, New York (2005), arXiv:hep-th/0404031. Gauge invariant regularization of quantum field theory on the light front. Theor. Math. Phys. 139: 807-822, 2004, Teor. Mat. Fiz. 139: 429-448, 2004, arXiv:hep-th/0303180. Confinement of Quarks. Phys. Rev. D. 1974. V. 10. P. 2445-2459. Gauge fixing, the transfer matrix, and confinement on a lattice. Phys. Rev. D. 1977. V. 15. P. 1128-1136. Formulating light cone QCD on the lattice. Phys. Rev. D 77, 014512 (2008), arXiv:0711.0620 \[hep-th\], 48 pp. [^1]: E-mail: [email protected]
--- author: - | **R. C. VERMA\ \ INDIA** title: '**$ SU(3)_{FLAVOR}$-ANALYSIS OF NONFACTORIZABLE CONTRIBUTIONS TO $ D \rightarrow PP$ DECAYS** ' --- We study charm $D$-meson decays to two pseudoscalar mesons in the Cabibbo-favored mode, employing SU(3)-flavor symmetry for the nonfactorizable matrix elements. Using $D\rightarrow \bar K \pi$ and $D_{s} \rightarrow \bar K K$ to fix the reduced matrix elements, we obtain a consistent fit for the $ \eta$- and $ \eta '$-emitting decays of $D$ and $ D_{s}$ mesons. It is now fairly established that the naive factorization model does not explain the data on weak hadronic decays of charm mesons. On the one hand, the large $ N_{c} \rightarrow \infty $ limit, which apparently was thought to be supported by D-meson phenomenology \[1,2\], has failed to explain B-meson decays, as the B-meson data clearly demand \[3\] a positive value of the $a_{2}$ parameter. On the other hand, even in D-meson decays, the two-body Cabibbo-favored decays of $D^{0}$ and $D_{s}^{+}$ involving $ \eta$ and $ \eta'$ in their final state have proven to be problematic for a universal choice of $a_{1}$ and $a_{2}$ \[4\]. Annihilation terms, if used to bridge the discrepancy between theory and experiment, require large form factors, particularly for the $D \rightarrow \bar K^0 + \eta / \eta'$ and $D^{0} \rightarrow \bar K^{*0} + \eta $ decays \[4\]. Further, factorization also fails to relate the $D_{s}^{+} \rightarrow \eta / \eta' + \pi^{+}/\rho^{+} $ decays with the semileptonic decays $D_{s}^{+} \rightarrow \eta / \eta' + e^{+} \nu $ \[4,5\] consistently. Recently, there has been a growing interest in studying nonfactorizable terms for weak hadronic decays of charm and bottom mesons \[6\]. In an earlier work \[7\], we searched for systematics in the nonfactorizable contributions to various decays of $D^{0}$ and $D^{+}$ mesons involving isospin 1/2 and 3/2 final states. We observed that the nonfactorizable isospin 1/2 and 3/2 amplitudes have nearly the same ratio for the $D \rightarrow \bar K \pi / \bar K \rho / \bar K^{*} \pi / \bar K a_{1} / \bar K^{*} \rho $ decay modes. In order to realize the full impact of isospin symmetry, and to relate $D_{s}^{+}$ decays with those of the nonstrange charm mesons, we generalize it to SU(3)-flavor symmetry. We analyze Cabibbo-favored decays of $D^{0}, D^{+} $ and $D_{s}^{+}$ mesons to two pseudoscalar mesons. Determining the SU(3) reduced matrix elements from $ D^{+} \rightarrow \bar K^{0} \pi^{+} $ and $ D_{s}^{+} \rightarrow \bar K^{0} K^{+}$, we obtain a consistent fit for the $D^{0} \rightarrow \bar K + \pi / \eta / \eta'$ and $ D_{s}^{+} \rightarrow \pi + \eta / \eta'$ decays. We start with the effective weak Hamiltonian $$H_{w} \hskip 0.5 cm = \hskip 0.5 cm \tilde{G}_{F} [c_{1} ( \bar u d)( \bar s c) + c_{2} ( \bar s d) ( \bar u c) ], \eqno(1)$$ where $ \tilde{G}_{F} = \frac {G_{F}} { \sqrt2} V_{ud} V_{cs}^{*} $, $ \bar q_{1} q_{2} \equiv \bar q_{1} \gamma_{\mu} (1 - \gamma_{5} ) q_{2} $ represents the color singlet $ V - A $ current, and the QCD coefficients at the charm mass scale are $$c_{1} = 1.26 \pm 0.04, \hskip 2.5 cm c_{2} = - 0.51 \pm 0.05. \eqno(2)$$ Separating the factorizable and nonfactorizable parts, the matrix element of the operator $(\bar u d)(\bar s c)$ in eq. 
(1) between initial and final states can be written as $$< P_{1} P_{2} |(\bar u d)(\bar s c) | D > \hskip 0.2cm = \hskip 0.2cm < P_{1}| (\bar u d)|0>< P_{2}|(\bar s c) | D >$$ $$\qquad + < P_{1} P_{2} |(\bar u d)(\bar s c) | D >_{nonfac}. \eqno(3)$$ Using the Fierz identity $$(\bar u d)(\bar s c) \hskip 0.5cm = \hskip 0.5cm \frac {1} {N_{c}} (\bar s d)(\bar u c) + \frac {1} {2} \sum_{a=1}^{8} ( \bar s \lambda ^{a} d ) ( \bar u \lambda ^{a} c ), \eqno(4)$$ where $ \bar q_{1} \lambda ^{a} q_{2} \equiv \bar q _{1} \gamma_{\mu} (1 - \gamma_{5} ) \lambda ^{a} q_{2} $ represents color octet current, the nonfactorizable part of the matrix element in eq.(3) can be expanded as $$< P_{1} P_{2} |(\bar u d)(\bar s c) | D >_{nonfac} \hskip 0.5cm = \hskip 0.5cm \frac {1} {N_c} < P_{2}| (\bar s d)|0>< P_{1}|(\bar u c) | D >$$ $$+ \frac {1} {2} < P_{1} P_{2} | \sum_{a=1}^{8} ( \bar s \lambda ^{a} d ) ( \bar u \lambda ^{a} c )| D >_{nonfac} + \frac {1} {N_c} < P_{1} P_{2} |(\bar s d)(\bar u c) | D >_{nonfac} . \eqno(5)$$ Performing a similar treatment to the other operator $(\bar s d)(\bar u c)$ in eq.(1), the decay amplitude becomes $$< P_{1} P_{2} | H_{w} | D > ~ = ~ \tilde{G}_{F} [ a_{1} < P_{1}| (\bar u d)|0>< P_{2}|(\bar s c) | D >$$ $$\quad + a_{2} < P_{2}| (\bar s d)|0>< P_{1}|(\bar u c) | D >$$ $$\quad + c_{2} ( < P_{1} P_{2} | H_{w}^{8} | D > + < P_{1} P_{2} | H_{w}^{1} | D > )_{nonfac}$$ $$\quad + c_{1} ( < P_{1} P_{2} | \tilde{H}_{w}^{8} | D > + < P_{1} P_{2} | \tilde{H}_{w}^{1} | D > )_{nonfac}~ ], \eqno(6)$$ where $$a_{1,2} = c_{1,2} + \frac {c_{2,1}} {N_{c}}, \eqno(7)$$ $$H^{8}_{w} ~~ = {}~~ \frac {1} {2} \sum_{a=1}^{8} ( \bar s \lambda ^{a} d ) ( \bar u \lambda ^{a} c ), \hskip 0.2 cm \tilde{H} ^ {8}_ {w} ~~ = ~~ \frac {1} {2} \sum_{a=1}^{8} ( \bar u \lambda ^{a} d ) ( \bar s\lambda ^{a} c );$$ $$H^{1}_{w} ~~ = ~~ \frac {1} {N_{c}} ( \bar s d ) (\bar u c ), \hskip 0.2 cm \tilde{H} ^ {1}_ {w} \hskip 0.5 cm = \hskip 0.5 cm \frac {1} {N_{c}} ( \bar u d ) ( \bar s c ). \eqno(8)$$ Thus nonfactorizable effects arise through the Hamiltonian made up of color-octet currents ($ H^{8}_{w} $ and $\tilde{H} ^ {8}_ {w}$ ) and also of color singlet currents ( $ H^{1}_{w} $ and $\tilde{H} ^ {1}_ {w}$ ). Matrix elements of the first and the second terms in eq. (6) can be calculated using the factorization scheme \[1\]. These are given in Table I. So long as one restricts to the color singlet intermediate states, remaining terms in eq.(6) are ignored and one usually treats $a_{1}$ and $a_{2}$ as input parameters in place of using $N_{c} = 3 $ in reality. It is generally believed \[1, 8\] that the $ D \rightarrow \bar K\pi$ decays favour $N_{c} \rightarrow \infty $ limit, i.e., $$a_{1} \approx 1.26, \hskip 0.5cm a_{2} \approx -0.51. \eqno(9)$$ However, it has been shown that this does not explain all the decay modes of charm mesons \[4,5\]. For instance, the observed $D^{0} \rightarrow \bar {K^{0}} \eta $ and $ D^{0} \rightarrow \bar {K^{0}} \eta'$ decay widths are considerably larger than those predicted in the spectator quark model. Also in $ D \rightarrow PV $ mode, measured branching ratios for $ D^{0} \rightarrow \bar {K^{*0}} \eta $, $ D_{s}^{+} \rightarrow \eta / \eta' + \rho^{+},$ are higher than those predicted by the spectator quark diagrams. 
For $ D_{s}^{+} \rightarrow \eta / \eta' + \pi^{+}$, though factorization can account for a substantial part of the measured branching ratios, it fails to relate them to the corresponding semileptonic decays $ D_{s}^{+} \rightarrow \eta / \eta' + e^{+} \nu $ consistently \[4,5\]. In addition to the spectator quark diagram, factorizable W-exchange or W-annihilation diagrams may contribute to the weak nonleptonic decays of D mesons. However, for $ D \rightarrow PP $ decays, such contributions are helicity suppressed \[1\]. For $D$ meson decays, they are further color-suppressed, as they involve the QCD coefficient $c_{2}$, whereas for $ D_{s}^{+} \rightarrow PP $ decays they vanish \[4\] due to the conserved-vector-current (CVC) nature of the isovector current $( \bar u d)$. Therefore, it is desirable to investigate nonfactorizable contributions more seriously. It is well known that nonfactorizable terms cannot be determined unambiguously without making some assumptions \[6\], as they involve nonperturbative effects arising from soft-gluon exchange. We thus employ SU(3)-flavor symmetry \[9\] to handle these matrix elements. In the SU(3) framework, the weak Hamiltonians $ {H}^{8}_{w}$, $ \tilde {H}^{8}_{w}$, $ {H}^{1}_{w}$ and $ \tilde {H}^{1}_{w}$ for the Cabibbo-enhanced mode behave like the $ {H}^{2}_{13}$ component of the $ 6^{*} $ and 15 representations of SU(3). Since $ {H}^{8}_{w}$ and $ \tilde {H}^{8}_{w}$ transform into each other under the interchange of $u$ and $s$ quarks, which forms the V-spin subgroup of SU(3), we assume the reduced amplitudes to satisfy $$< P_{1} P_{2} || \tilde {H}_{w}^{8} || D > = < P_{1} P_{2} || H_{w}^{8} || D >. \eqno(10)$$ Then, the matrix elements $< P_{1} P_{2} | H^{8}_{w} | D >$ can be considered as a $ weak ~spurion + D \rightarrow P + P $ scattering process, whose general structure can be written as $$< P_{1} P_{2} | H_{w}^{8} | D > \hskip 0.5cm = \hskip 0.5cm b_{1}(P^{m}_{a}P^{c}_{m}P^{b})H^{a}_{[b,c]} + d_{1}(P^{m}_{a}P^{c}_{m}P^{b})H^{a}_{(b,c)}$$ $$\hskip 3truecm + e_{1}(P^{b}_{m}P^{c}_{a}P^{m})H^{a}_{(b,c)} + f_{1}(P^{m}_{m}P^{b}_{a}P^{c})H^{a}_{(b,c)} \eqno(11)$$ where $ P^{a} $ denotes the triplet of D-mesons, $ P^{a} \equiv(D^{0},\hskip 0.1truecm D^{+},\hskip 0.1truecm D_{s}^{+})$, and $P_{b}^{a}$ denotes the $ 3 \bigotimes 3 $ matrix of uncharmed pseudoscalar mesons, $$P_{b}^{a} \hskip 0.5cm = \hskip 0.5cm \left(\matrix{ P^{1}_{1} &\pi^{+} &K^{+}\cr \pi^{-} &P^{2}_{2} &K^{0}\cr K^{-} &\bar K^{0} &P^{3}_{3}\cr}\right) \eqno(12)$$ with $$P^{1}_{1}\hskip 0.5cm = \hskip 0.5cm \frac {\pi^{0}} {\sqrt2} + \frac {\eta_{8}} {\sqrt6} + \frac {\eta_{0}} {\sqrt3},$$ $$P^{2}_{2}\hskip 0.5cm = \hskip 0.5cm - \frac {\pi^{0}} {\sqrt2} + \frac {\eta_{8}} {\sqrt6} + \frac {\eta_{0}} {\sqrt3},$$ $$P^{3}_{3}\hskip 0.5cm = \hskip 0.5 cm - \frac {2 \eta_{8}} {\sqrt6} + \frac {\eta_{0}} {\sqrt3}.$$ The Particle Data Group \[10\] defines the physical $ \eta - \eta'$ mixing as $$\eta = \eta_{8} \cos \phi - \eta_{0} \sin \phi,$$ $$\eta' = \eta_{8} \sin \phi + \eta_{0} \cos \phi, \eqno(13)$$ where $ \phi = -10^{0} $ and $ \phi = -19^{0} $ follow from the quadratic mass formula and the two-photon decay widths, respectively \[10\]. We employ the following basis \[4\]: $$\eta = \frac {1} {\sqrt2} ( u \bar u + d \bar d ) \sin \theta - ( s \bar s ) \cos \theta ,$$ $$\eta' = \frac {1} {\sqrt2} ( u \bar u + d \bar d ) \cos \theta + ( s \bar s ) \sin \theta, \eqno(14)$$ where $\theta $ is given by $$\theta \hskip 0.5cm = \hskip 0.5cm {\theta}_{ideal} - {\phi}. 
\eqno(15)$$ Performing a similar treatment for $ H_{w}^{1} $ and $ \tilde H_{w}^{1} $, i.e. $$< P_{1} P_{2} || \tilde H_{w}^{1} || D > \hskip 0.5cm = \hskip 0.5cm < P_{1} P_{2} || H_{w}^{1} || D >, \eqno(16)$$ the matrix elements $< P_{1} P_{2} | H_{w}^{1} | D >$ are obtained from $$< P_{1} P_{2} | H_{w}^{1} | D > \hskip 0.5cm = \hskip 0.5cm b_{2}(P^{m}_{a}P^{c}_{m}P^{b})H^{a}_{[b,c]} + d_{2}(P^{m}_{a}P^{c}_{m}P^{b})H^{a}_{(b,c)}$$ $$\hskip 3truecm + e_{2}(P^{b}_{m}P^{c}_{a}P^{m})H^{a}_{(b,c)} + f_{2}(P^{m}_{m}P^{b}_{a}P^{c})H^{a}_{(b,c)}. \eqno(17)$$ Since the C.G. coefficients appearing in eqs. (11) and (17) are the same, the unknown reduced amplitudes get combined as $$b = b_{1} + b_{2}, \hskip 0.2 cm d = d_{1} + d_{2}, \hskip 0.2 cm e = e_{1} + e_{2}, \hskip 0.2 cm f = f_{1} + f_{2}, \eqno(18)$$ when the matrix elements are substituted in eq. (6). There is a direct correspondence between the terms appearing in (11) and (17) and various quark-level processes. The first two terms, involving the coefficients $b's$ and $d's$, represent W-annihilation or W-exchange diagrams. Notice that, unlike the factorizable W-exchange or W-annihilation diagrams, these diagrams are not suppressed by helicity arguments, due to the involvement of gluons. The third term, with coefficients $e's$, represents a spectator-quark-like diagram where the uncharmed quark in the parent D-meson flows into one of the final-state mesons. The last term is like a hairpin diagram, where the $ q \bar q $ pair generated in the process hadronizes into one of the final-state mesons. The nonfactorizable contributions thus obtained for the various $ D \rightarrow PP$ decays are given in Table II. Now we proceed to determine the SU(3) reduced amplitudes $b$, $d$, $e$, $ f $. First, we calculate the factorizable contributions to the various decays using $ N_{c} = 3 $, which yields $$a_{1} = 1.09, ~~ a_{2} = -0.09. \eqno(19)$$ For the form factors, we use $$F^{DK}_{0} (0) = 0.76, ~~ F^{D \pi}_{0} (0) = 0.83, \eqno(20)$$ as guided by the semileptonic decays \[8, 12\], and $$F^{D \eta}_{0} (0) = 0.68, ~~ F^{D \eta'}_{0} (0) = 0.65,$$ $$F^{D_{s}\eta}_{0} (0) = 0.72,~~ F^{D_{s}\eta'}_{0} (0) = 0.70, \eqno(21)$$ from the BSW model \[1\]. Numerical values of the factorizable amplitudes are given in col. (iii) of Table I. $ D \rightarrow \bar K \pi $ decays involve elastic final state interactions (FSI), whereas the remaining decays are not affected by them. As a result, the isospin 1/2 and 3/2 amplitudes appearing in $ D \rightarrow \bar K \pi$ decays develop different phases: $$A(D^{0} \rightarrow K^{-} \pi^{+} ) {}~~ = ~~ \frac {1} { \sqrt3} [ A_{3/2} e^{i \delta_{3/2}} + \sqrt2 A_{1/2} e^{i \delta_{1/2}} ],$$ $$A(D^{0} \rightarrow \bar K^{0} \pi^{0} ) ~~ = ~~ \frac {1} { \sqrt3} [ \sqrt2 A_{3/2} e^{i \delta_{3/2}} - A_{1/2} e^{i \delta_{1/2}} ],$$ $$A(D^{+} \rightarrow \bar K^{0} \pi^{+} ) \hskip 0.5 cm = \hskip 0.5 cm \sqrt3 A_{3/2} e^{i \delta_{3/2}}, \eqno(22)$$ which yield the following phase-independent \[7,11\] expressions: $$| A(D^{0} \rightarrow K^{-} \pi^{+} ) |^{2} + | A ( D^{0} \rightarrow \bar K^{0} \pi^{0} )|^{2} {}~~ = ~~ | A_{1/2} |^{2} + | A_{3/2} |^{2},$$ $$|A(D^{+} \rightarrow \bar K^{0} \pi^{+} ) |^{2} ~~ = ~~ 3 | A_{3/2} |^{2}. \eqno(23)$$ These relations allow one to work without the phases. 
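As a quick sanity check, the phase-independent relations (23) follow from the decomposition (22) for arbitrary amplitudes and phases. The short Python sketch below verifies this numerically with randomly chosen values; it is purely illustrative, and the variable names are ours.

```python
import numpy as np

# Verify the phase-independent relations (23) from the isospin decomposition (22)
rng = np.random.default_rng(1)
A12, A32 = rng.normal(size=2) + 1j * rng.normal(size=2)   # A_{1/2}, A_{3/2}
d12, d32 = rng.uniform(0.0, 2.0 * np.pi, size=2)          # delta_{1/2}, delta_{3/2}

A_Km_pip = (A32 * np.exp(1j * d32) + np.sqrt(2) * A12 * np.exp(1j * d12)) / np.sqrt(3)
A_K0_pi0 = (np.sqrt(2) * A32 * np.exp(1j * d32) - A12 * np.exp(1j * d12)) / np.sqrt(3)
A_K0_pip = np.sqrt(3) * A32 * np.exp(1j * d32)

print(abs(A_Km_pip)**2 + abs(A_K0_pi0)**2, abs(A12)**2 + abs(A32)**2)  # equal
print(abs(A_K0_pip)**2, 3 * abs(A32)**2)                               # equal
```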
Writing the total decay amplitude as the sum of factorizable and nonfactorizable parts, $$A ( D \rightarrow \bar K \pi) = A^{f} ( D \rightarrow \bar K \pi ) + A^{nf} ( D \rightarrow \bar K \pi ), \eqno(24)$$ we obtain $$A_{1/2}^{nf} ~~ = ~~ \frac{1}{ \sqrt3} \{ \sqrt2 A^{nf} (D^{0} \rightarrow K^{-} \pi^{+} ) - A^{nf} (D^{0} \rightarrow \bar K^{0} \pi^{0}) \}, \eqno(25)$$ $$A_{3/2}^{nf} ~~ = ~~ \frac{1}{ \sqrt3} \{ A^{nf} (D^{0} \rightarrow K^{-} \pi^{+} ) + \sqrt2 A^{nf} (D^{0} \rightarrow \bar K^{0} \pi^{0}) \},$$ $$\hskip 0.2 cm = \hskip 0.2 cm \frac{1}{ \sqrt3} \{ A^{nf} (D^{+} \rightarrow \bar K^{0} \pi^{+} )\}. \eqno(26)$$ The last relation (26) leads to the following constraint: $$\frac {b + d} {e} ~~ = ~~ \frac {c_{1} + c_{2}} {c_{2} - c_{1}} ~~ = {}~~ -0.424 \pm 0.042. \eqno(27)$$ The experimental value $ B(D^{+} \rightarrow \bar K^{0} \pi^{+}) \hskip 0.2cm = \hskip 0.2cm 2.74 \pm 0.29\% $ yields, up to a scale factor $\tilde{G}_{F}$, $$e ~~=~~ -0.094 \pm 0.027 \hskip 0.1 truecm GeV^{3}. \eqno(28)$$ This in turn predicts the sum of the branching ratios of the $ D^{0} \rightarrow \bar K \pi$ decay modes, $$B(D^{0} \rightarrow K^{-}\pi^{+}) + B(D^{0} \rightarrow \bar K^{0}\pi^{0}) {}~ = ~ 6.30 \pm 0.67\% \quad (6.06 \pm 0.30 \% ~Expt.), \eqno(29)$$ in good agreement with experiment. Using the experimental value $ B(D_{s}^{+} \rightarrow \bar K^{0} K^{+}) ~~ = ~~3.5 \pm 0.7\%$, we find (in $GeV^{3}$) $$b \hskip 0.2cm = \hskip 0.2cm +0.080 \pm 0.026, \eqno(30)$$ $$d \hskip 0.2cm = \hskip 0.2cm -0.040 \pm 0.026. \eqno(31)$$ Note that the unknown reduced amplitude $ f$ appears only in decays involving $\eta$ and $\eta'$ in the final state. We find that the experimental values of these decay rates require (in $GeV^{3}$): $$f\hskip 0.2cm = \hskip 0.2cm -0.145 \pm 0.077 \hskip 0.5cm {\rm for} \hskip 0.5cm D^{0} \rightarrow \bar K^{0} \eta,$$ $$f \hskip 0.2cm = \hskip 0.2cm -0.115 \pm 0.012 \hskip 0.5cm {\rm for} \hskip 0.5cm D^{0} \rightarrow \bar K^{0} \eta',$$ $$f \hskip 0.2cm = \hskip 0.2cm -0.104 \pm 0.163 \hskip 0.5cm {\rm for} \hskip 0.5cm D_{s}^{+} \rightarrow \eta \pi^{+},$$ $$f \hskip 0.2cm = \hskip 0.2cm -0.081 \pm 0.073 \hskip 0.5cm {\rm for} \hskip 0.5cm D_{s}^{+} \rightarrow \eta' \pi^{+}. \eqno(32)$$ In Table III, we calculate the branching ratios for all four $ \eta/\eta'$-emitting decay modes for different choices of $ f $, for $ \phi = -10^{o}$ and $ -19^{o}$. It is clear that for $ f = -0.12$ and $\phi = -10^{o}$, all the branching ratios match well with experiment. For comparison with the factorizable terms, the nonfactorizable contributions to the various modes for $ f ~=~ -0.12 $ are given in column (iii) of Table II. [ **Acknowledgments**]{} The author thanks A.N. Kamal for providing support from a grant from NSERC, Canada, during his stay at the University of Alberta, Canada. He also thanks the Theoretical Physics Institute, Department of Physics, University of Alberta, where part of the work was done, for their hospitality. 
Table I: Spectator-quark decay amplitudes ($\times~\tilde{G}_{F}~GeV^{3}$)

| Process | Amplitude | $\phi = -10^{0}$ | $\phi = -19^{0}$ |
|---|---|---|---|
| $D^{+} \rightarrow \bar K^{0} \pi^{+}$ | $a_{1}f_{\pi}(m_{D}^{2}-m_{K}^{2})F_{0}^{DK}(m_{\pi}^{2}) + a_{2}f_{K}(m_{D}^{2}-m_{\pi}^{2})F_{0}^{D\pi}(m_{K}^{2})$ | $+0.311$ | $+0.311$ |
| $D^{0} \rightarrow K^{-} \pi^{+}$ | $a_{1}f_{\pi}(m_{D}^{2}-m_{K}^{2})F_{0}^{DK}(m_{\pi}^{2})$ | $+0.354$ | $+0.354$ |
| $D^{0} \rightarrow \bar K^{0} \pi^{0}$ | $\frac{1}{\sqrt2}\,a_{2}f_{K}(m_{D}^{2}-m_{\pi}^{2})F_{0}^{D\pi}(m_{K}^{2})$ | $-0.030$ | $-0.030$ |
| $D^{0} \rightarrow \bar K^{0} \eta$ | $\frac{1}{\sqrt2}\,a_{2}\sin\theta\, f_{K}(m_{D}^{2}-m_{\eta}^{2})F_{0}^{D\eta}(m_{K}^{2})$ | $-0.016$ | $-0.019$ |
| $D^{0} \rightarrow \bar K^{0} \eta'$ | $\frac{1}{\sqrt2}\,a_{2}\cos\theta\, f_{K}(m_{D}^{2}-m_{\eta'}^{2})F_{0}^{D\eta'}(m_{K}^{2})$ | $-0.013$ | $-0.010$ |
| $D^{+}_{s} \rightarrow \bar K^{0} K^{+}$ | $a_{2}f_{K}(m_{D_{s}}^{2}-m_{K}^{2})F_{0}^{D_{s}K}(m_{K}^{2})$ | $-0.035$ | $-0.035$ |
| $D^{+}_{s} \rightarrow \pi^{0} \pi^{+}$ | $0$ | $0$ | $0$ |
| $D^{+}_{s} \rightarrow \eta \pi^{+}$ | $-a_{1}\cos\theta\, f_{\pi}(m_{D_{s}}^{2}-m_{\eta}^{2})F_{0}^{D_{s}\eta}(m_{\pi}^{2})$ | $-0.261$ | $-0.216$ |
| $D^{+}_{s} \rightarrow \eta' \pi^{+}$ | $a_{1}\sin\theta\, f_{\pi}(m_{D_{s}}^{2}-m_{\eta'}^{2})F_{0}^{D_{s}\eta'}(m_{\pi}^{2})$ | $+0.213$ | $+0.243$ |

Table II: Nonfactorizable contributions to $D \rightarrow PP$ decays ($\times~\tilde{G}_{F}~GeV^{3}$)

| Process | Amplitude | $\phi = -10^{0}$ | $\phi = -19^{0}$ |
|---|---|---|---|
| $D^{+} \rightarrow \bar K^{0} \pi^{+}$ | $2(c_{1}+c_{2})\,e$ | $-0.141$ | $-0.141$ |
| $D^{0} \rightarrow K^{-} \pi^{+}$ | $c_{2}\,(b+d+e)$ | $+0.028$ | $+0.028$ |
| $D^{0} \rightarrow \bar K^{0} \pi^{0}$ | $\frac{1}{\sqrt2}\,c_{1}\,(-b-d+e)$ | $-0.119$ | $-0.119$ |
| $D^{0} \rightarrow \bar K^{0} \eta$ | $c_{1}\left[\frac{\sin\theta}{\sqrt2}(b+d+e+2f) - \cos\theta\,(b+d+f)\right]$ | $-0.115$ | $-0.154$ |
| $D^{0} \rightarrow \bar K^{0} \eta'$ | $c_{1}\left[\frac{\cos\theta}{\sqrt2}(b+d+e+2f) + \sin\theta\,(b+d+f)\right]$ | $-0.256$ | $-0.235$ |
| $D^{+}_{s} \rightarrow \bar K^{0} K^{+}$ | $c_{1}\,(-b+d+e)$ | $-0.268$ | $-0.268$ |
| $D^{+}_{s} \rightarrow \pi^{0} \pi^{+}$ | $0$ | $0$ | $0$ |
| $D^{+}_{s} \rightarrow \eta \pi^{+}$ | $c_{2}\left[\sqrt2\,\sin\theta\,(-b+d+f) - \cos\theta\,(e+f)\right]$ | $+0.046$ | $+0.076$ |
| $D^{+}_{s} \rightarrow \eta' \pi^{+}$ | $c_{2}\left[\sqrt2\,\cos\theta\,(-b+d+f) + \sin\theta\,(e+f)\right]$ | $+0.199$ | $+0.189$ |

Table III: Branching ratios (%) of $\eta/\eta'$-emitting decays including the nonfactorization terms, for $f = -0.10$, $-0.12$, $-0.14$ (in $GeV^{3}$)

| Decay | $\phi=-10^{0}$, $f=-0.10$ | $f=-0.12$ | $f=-0.14$ | $\phi=-19^{0}$, $f=-0.10$ | $f=-0.12$ | $f=-0.14$ | Expt. |
|---|---|---|---|---|---|---|---|
| $D^{0} \rightarrow \eta \bar K^{0}$ | 0.53 | 0.59 | 0.66 | 0.86 | 1.02 | 1.19 | 0.68$\pm$0.11 |
| $D^{0} \rightarrow \eta' \bar K^{0}$ | 1.28 | 1.81 | 2.43 | 1.04 | 1.51 | 2.06 | 1.66$\pm$0.29 |
| $D_{s}^{+} \rightarrow \eta \pi^{+}$ | 1.93 | 1.87 | 1.82 | 0.86 | 0.80 | 0.73 | 1.9$\pm$0.4 |
| $D_{s}^{+} \rightarrow \eta' \pi^{+}$ | 5.17 | 5.64 | 6.13 | 5.73 | 6.22 | 6.72 | 4.7$\pm$1.4 |

[99]{} M. Bauer, B. Stech and M. Wirbel, Z. Phys. C [**34**]{}, 103 (1987); M. Wirbel, B. Stech and M. Bauer, Z. Phys. C [**29**]{}, 637 (1985). N. Isgur, D. Scora, B. Grinstein and M. Wise, Phys. Rev. D [**39**]{}, 799 (1989). M. Gourdin, A. N. Kamal, Y. Y. Keum and X. Y. Pham, Phys. Lett. B [**333**]{}, 507 (1994); CLEO collaboration: M. S. Alam [*et al.*]{}, Phys. Rev. D [**50**]{}, 43 (1994); D. G. Cassel, ‘Physics from CLEO’, talk delivered at the Lake Louise Winter Institute on ‘Quarks and Colliders’, Feb. (1995). R. C. Verma, A. N. Kamal and M. P. Khanna, Z. Phys. C [**65**]{}, 255 (1995). R. C. Verma, ‘A Puzzle in $ D, D_{s} \rightarrow \eta / \eta' + P/V$’, talk delivered at the Lake Louise Winter Institute on ‘Quarks and Colliders’, Feb. (1995). H. Y. Cheng, Z. Phys. C [**32**]{}, 237 (1986); ‘Nonfactorizable contributions to nonleptonic Weak Decays of Heavy Mesons’, IP-ASTP- -94, June (1994); J. M. Soares, Phys. Rev. D [**51**]{}, 3518 (1995); A. N. Kamal and A. B. Santra, ‘Nonfactorization and color Suppressed $ B \rightarrow \psi ( \psi (2S)) + K ( K^{*})$ Decays’, University of Alberta preprint (1995); ‘Nonfactorization and the Decays $ D_{s}^{+} \rightarrow \phi \pi^{+}, \phi \rho^{+},$ and $ \phi e^{+} \nu_{e}$’, Alberta-Thy-1-95, Jan. (1995); A. N. Kamal, A. B. Santra, T. Uppal and R. C. Verma, ‘Nonfactorization in Hadronic two-body Cabibbo favored decays of $ D^{0}$ and $ D^{+}$’, Alberta-Thy-08-95, Feb. (1995). R. C. Verma, Z. Phys. C (1995), [*in press*]{}. L. L. Chau and H. Y. Cheng, Phys. Lett. B [**333**]{}, 514 (1994). R. C. Verma and A. N. Kamal, Phys. Rev. D [**35**]{}, 3515 (1987); Phys. Rev. D [**43**]{}, 829 (1990). L. Montanet [*et al.*]{} (Particle Data Group), Phys. Rev. D [**50**]{}, 3-I (1994). A. N. Kamal and T. N. Pham, Phys. Rev. D [**50**]{}, 6849 (1994). M. S. Witherall, International Symposium on Lepton and Photon Interactions at High Energies, Ithaca, N.Y. (1993), edited by P. Drell and D. Rubin, AIP Conf. Proc. No. 302 (AIP, New York), p. 198.
--- abstract: 'The Ensemble Kalman Filter method can be used as an iterative numerical scheme for parameter identification or nonlinear filtering problems. We study the limit of infinitely large ensemble size and derive the corresponding mean-field limit of the ensemble method. The solution of the inverse problem is provided by the expected value of the distribution of the ensembles, and the kinetic equation allows us, in simple cases, to analyze the stability of these solutions. Further, we present a slight modification of the method which improves stability and leads to a Fokker-Planck-type kinetic equation. The kinetic methods proposed here are able to solve the problem with a reduced computational complexity in the limit of a large ensemble size. We illustrate the properties and the ability of the kinetic model to provide solutions to inverse problems by using examples from the literature.' author: - | Michael Herty and Giuseppe Visconti\ [*Institut für Geometrie und Praktische Mathematik (IGPM)*]{}\ [*RWTH Aachen University*]{}\ [*Templergraben 55, 52062 Aachen, Germany* ]{} bibliography: - 'references.bib' title: Kinetic Methods for Inverse Problems --- #### Mathematics Subject Classification (2010) 35Q84, 65N21, 93E11, 65N75 #### Keywords Kinetic Partial Differential Equations, Nonlinear Filtering Methods, Inverse Problems Introduction ============== We are concerned with the following abstract inverse problem or parameter identification problem $$\label{eq:noisyProb} {\mathbf{y}} = {\mathcal{G}}({\mathbf{u}}) + {\boldsymbol{\eta}}$$ where ${\mathcal{G}}:X \to Y$ is the (possibly nonlinear) forward operator between finite dimensional Hilbert spaces $X={\mathbb{R}}^d$ and $Y={\mathbb{R}}^K$, with $d,K\in\mathbb{N}$, ${\mathbf{u}}\in X$ is the control, ${\mathbf{y}}\in Y$ is the observation and ${\boldsymbol{\eta}}$ is observational noise. Given noisy measurements or observations ${\mathbf{y}}$ and the known mathematical model ${\mathcal{G}}$, we are interested in finding the corresponding control ${\mathbf{u}}$. Typically, the observational noise ${\boldsymbol{\eta}}$ is not explicitly known; only information on its distribution is available. Inverse problems, in particular in view of a possible ill-posedness, have been discussed in a vast amount of literature, and we refer to  [@EnglHankeNeubauer1996] for an introduction and further references. In the following we will investigate a particular numerical method for solving problem , namely, the Ensemble Kalman Filter (EnKF). While this method was introduced more than ten years ago [@Evensen1994], recent theoretical progress [@schillingsstuart2017] is the starting point of this work. As in [@schillingsstuart2017] we aim to solve the inverse problem by minimizing the least squares functional $$\label{eq:leastSqFnc} \Phi({\mathbf{u}},{\mathbf{y}}) := \frac12 \left\| {\boldsymbol{\Gamma}}^{\frac12} ({\mathbf{y}} - {\mathcal{G}}({\mathbf{u}})) \right\|^2$$ where ${\boldsymbol{\Gamma}}^{-1}$, defined as the covariance of the noise ${\boldsymbol{\eta}}$, normalizes the so-called model-data misfit. Note that there is no regularization of the control ${\mathbf{u}}$ in the minimization problem of . See e.g. [@BianchiBucciniDonatelliSerra2015; @Groetsch1984; @Hansen1998; @KlannRamlau2008] for examples of Tikhonov and other regularization techniques. We briefly recall a Bayesian inversion formulation for problem . 
Following [@dashtistuart2017; @Stuart2010], a solution to the inverse problem is obtained by treating the unknown control ${\mathbf{u}}$, the data ${\mathbf{y}}$ and the noise ${\boldsymbol{\eta}}$ as random variables. Then, the conditional probability measure of the control ${\mathbf{u}}$ given the observation ${\mathbf{y}}$, called the posterior measure, is computed via Bayes' Theorem. Typically, there is an interest in moments of the posterior or, e.g., in choosing the point of maximal probability (MAP estimator). For further details concerning Bayesian inversion, e.g. the modeling of the unknown prior distributions and other choices of estimators, see [@Berger1985; @BurgerLucka2014; @dashtistuart2017; @ernstetal2015] and references therein. Before finally stating the aim of this work, we briefly recall some references on the EnKF method, without aiming to give a complete list. Iterative filtering methods have also been successfully applied to inverse problems for many years. A particularly successful method was originally proposed in [@Kalman1960] to estimate state variables, parameters, etc. of stochastic dynamical systems. This method was extended to the EnKF in [@Evensen1994]. The EnKF sequentially updates each member of an ensemble of random elements in the space $X$ by means of the Kalman update formula, using the knowledge of the model ${\mathcal{G}}$ and of given observational data ${\mathbf{y}}$. It is important to note that [*no*]{} information on the derivative of ${\mathcal{G}}$ is required. The EnKF provides satisfactory results even when used with a small number of ensembles, as proved by the accuracy analysis in [@MajdaTong2018]. Some examples in the mathematical literature of the application of the filtering method to inverse problems are given in the incomplete list [@Oliveretal; @schillingsetal2018; @SchillingsPreprint; @iglesias2015; @iglesiaslawstuart2013; @schillingsstuart2017; @schillingsstuart2018]. In particular, we refer to the books [@Evensen2009book; @OliverReynoldsLiu2008]. Our starting point is [@schillingsstuart2017], where the continuous time limit of the EnKF has been studied as a regularization technique for the minimization of the least squares functional  with a finite ensemble size. Recently, further study has been conducted in this direction [@ChadaStuartTong2019; @LangeStannat2019]. We also note that the EnKF can be formally derived within the Bayesian framework [@ernstetal2015; @iglesiaslawstuart2013b; @kwiatkowskimandel2015; @LawStuart2012; @leglandmonbettran2009]. In the cited references the ensemble size is fixed and, due to the possibly high associated computational cost, limited to a small number of ensembles. The analysis of the method in the limit of a large ensemble size has been investigated in [@DelMoralKurtzmannTugaut2017; @DelMoralTugaut2018; @ernstetal2015; @lawtembinetempone2016]. However, to the best of our knowledge, an evolution equation for the probability distribution of the unknown control has not been derived. We aim to provide a continuous representation of the EnKF method that also holds in the limit of infinitely many ensembles. We believe that the derivation of the kinetic equation leads to insights into the method that might not be easy to obtain otherwise. The main advantage of the derivation of a mean-field equation is twofold. 
First, it formally allows one to deal with the case of infinitely many ensembles, and in this regime numerical simulations show a better reconstruction of an estimator of the unknown control than with small ensemble sizes. Second, it allows one to study stability, at least in the simple case of a one-dimensional control, and it suggests a modification of the method which results in improved stability of the corresponding Fokker-Planck equation. We proceed as follows: We start from the continuous time limit formulation of the EnKF derived in [@schillingsstuart2017] and interpret it as an interacting particle system. Then, we study the mean-field limit for large ensemble sizes. From a mathematical point of view, this technique has been widely used to reduce the computational complexity and to analyze interacting particle models, e.g. in socio-economic dynamics or gas dynamics [@CarrilloFornasierToscaniVecil2010; @CarrilloPareschiZanella2019; @CristianiPiccoliTosin2014; @hatadmor2008; @HertyRinghofer2011; @PareschiToscaniBOOK; @Toscani2006; @TrimbornPareschiFrank]. The kinetic equation evolves the probability distribution of the control in time, and the solution to the inverse problem is shown to be the mean of this distribution. We analyze the linear stability of the EnKF. Further, we present suitable modifications of the method based on the kinetic formulation in order to improve the stability pattern. The kinetic model guarantees a computational gain in the numerical simulations using a Monte Carlo approach similar to [@AlbiPareschi2013; @BabovskyNeunzert1986; @FornasierEtAl2011; @Lemou1998; @MouhotPareschi2006; @PareschiRusso1999; @PareschiToscaniBOOK]. From the Ensemble Kalman Filter to the gradient descent equation {#sec:enkf} ================================================================ The Ensemble Kalman Filter (EnKF) was introduced in [@Evensen1994] as a discrete time method to estimate state variables, parameters, etc. of stochastic dynamical systems. The estimates are based on the system dynamics and on measurement data that are possibly perturbed by known noise. The EnKF is a generalization and an improved version of the classical Kalman Filter method [@Kalman1960]. In the following, we briefly review the definition of the EnKF, which is based on a sequential update of an ensemble of states and parameters. Then we recover the continuous time limit equation derived in the recent work [@schillingsstuart2017]. This will be the starting point for introducing and computing, in the next sections, a mean-field limit for infinitely many ensembles. The arising kinetic partial differential equation allows a subsequent analysis of the nature of the method. As in [@schillingsstuart2017] we consider a control ${\mathbf{u}}\in{\mathbb{R}}^d$ and a given state ${\mathbf{y}} \in {\mathbb{R}}^K$ coupled by the system dynamics $\mathcal{G}$ as stated by equation . The problem is to identify the unknown control ${\mathbf{u}}$ given possibly perturbed measurements of the state ${\mathbf{y}}$. Hence, the observation of the system dynamics $\mathcal{G}({\mathbf{u}})$ is perturbed by noise ${\boldsymbol{\eta}}\in{\mathbb{R}}^K$. The noise is assumed to be independent of the control ${\mathbf{u}}\in{\mathbb{R}}^d$ and normally distributed with zero mean and known covariance matrix ${\boldsymbol{\Gamma}}^{-1} \in {\mathbb{R}}^{K\times K}$, i.e. ${\boldsymbol{\eta}}\sim\mathcal{N}(0,{\boldsymbol{\Gamma}}^{-1})$. 
We consider a number $J$ of ensembles (realizations of the control) combined in ${\mathbf{U}}=\left\{{\mathbf{u}}^{j} \right\}_{j=1}^J$. The EnKF is originally posed as a discrete iteration on ${\mathbf{U}}$. The iteration index is denoted by $n$ and the collection of the ensembles by ${\mathbf{u}}^{j,n}\in{\mathbb{R}}^d$, $\forall\,j=1,\dots,J$ and $n\geq 0$. According to [@schillingsstuart2017], the EnKF updates each component of ${\mathbf{U}}^n$ at iteration $n+1$ as $$\label{eq:updateEnKF} \begin{aligned} {\mathbf{u}}^{j,n+1} &= {\mathbf{u}}^{j,n} + {\mathbf{C}}({\mathbf{U}}^n) \left( {\mathbf{D}}({\mathbf{U}}^n) + \frac1{\Delta t} {\boldsymbol{\Gamma}}^{-1} \right)^{-1} ({\mathbf{y}}^{{j,n+1}} - {\mathcal{G}}({\mathbf{u}}^{j,n}) ) \\ {{\mathbf{y}}^{j,n+1}} &= {\mathbf{y}} + {\boldsymbol{\xi}}^{j,n+1} \end{aligned}$$ for each $j=1,\dots,J$. Here, each observation or measurement ${{\mathbf{y}}^{j,n+1}}\in{\mathbb{R}}^K$ has been [perturbed by ${\boldsymbol{\xi}}^{j,n+1}\sim \mathcal{N}(0,\Delta t^{-1}{\boldsymbol{\Sigma}})$]{}, and $\Delta t\in{\mathbb{R}}^+$ is a parameter. As in [@schillingsstuart2017] two cases for the covariance ${\boldsymbol{\Sigma}}$ will be discussed: ${\boldsymbol{\Sigma}}=0$, corresponding to a problem where the measurement data ${\mathbf{y}}$ are unperturbed, and ${\boldsymbol{\Sigma}}={\boldsymbol{\Gamma}}^{-1}$, corresponding to the case where the ${\boldsymbol{\xi}}^{j,n+1}$ are realizations of the noise ${\boldsymbol{\eta}}$. Note that the update  of the ensembles requires the knowledge of the operators ${\mathbf{C}}({\mathbf{U}}^n)$ and ${\mathbf{D}}({\mathbf{U}}^n)$, which are covariance matrices depending on the ensemble set ${\mathbf{U}}^n$ at iteration $n$ and on ${\mathcal{G}}({\mathbf{U}}^n)$, i.e. the image of ${\mathbf{U}}^n$ at iteration $n$. More precisely, $$\label{eq:covariance} \begin{aligned} {\mathbf{C}}({\mathbf{U}}^n) &= \frac{1}{J} \sum_{k=1}^J \left({\mathbf{u}}^{k,n}-\overline{{\mathbf{u}}}^n\right) \otimes \left({\mathcal{G}}({\mathbf{u}}^{k,n})-\overline{{\mathcal{G}}}^n\right) \in {\mathbb{R}}^{d\times K} \\ {\mathbf{D}}({\mathbf{U}}^n) &= \frac{1}{J} \sum_{k=1}^J \left({\mathcal{G}}({\mathbf{u}}^{k,n})-\overline{{\mathcal{G}}}^n\right) \otimes \left({\mathcal{G}}({\mathbf{u}}^{k,n})-\overline{{\mathcal{G}}}^n\right) \in {\mathbb{R}}^{K\times K} \end{aligned}$$ where we denote by $\overline{{\mathbf{u}}}^n$ and $\overline{{\mathcal{G}}}^n$ the means of ${\mathbf{U}}^n$ and ${\mathcal{G}}({\mathbf{U}}^n)$, namely $$\overline{{\mathbf{u}}}^n = \frac{1}{J} \sum_{j=1}^J {\mathbf{u}}^{j,n}, \quad \overline{{\mathcal{G}}}^n = \frac{1}{J} \sum_{j=1}^J {\mathcal{G}}({\mathbf{u}}^{j,n}).$$ In recent years, the EnKF has also been studied as a technique to solve classical and Bayesian inverse problems; see, for instance, the works [@iglesiaslawstuart2013] and [@ernstetal2015], respectively, and the references therein. Here, we focus on this type of application. The method is proved to have an accuracy comparable with that of traditional least-squares approaches to inverse problems [@iglesiaslawstuart2013]. Moreover, it is known that the method provides an estimate of the unknown control ${\mathbf{u}}$ which lies in the subspace spanned by the initial ensemble set ${\mathbf{U}}^0$ [@iglesiaslawstuart2013]. We will see in this section that this property is still true at the continuous time level [@schillingsstuart2017]. 
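To make the update concrete, the following Python sketch implements one iteration of the scheme for a generic forward operator and either choice of ${\boldsymbol{\Sigma}}$. It is only a minimal illustrative transcription of the formulas above (the function name, the NumPy setting and the variable names are ours), not the implementation used for the numerical results of this paper.

```python
import numpy as np

def enkf_step(U, G, y, Gamma_inv, dt, Sigma=None, rng=None):
    """One EnKF iteration: a direct transcription of the update formula above.

    U         : (J, d) array, current ensemble {u^{j,n}}
    G         : callable forward operator R^d -> R^K
    y         : (K,) observation
    Gamma_inv : (K, K) noise covariance Gamma^{-1}
    dt        : artificial time step Delta t
    Sigma     : (K, K) covariance of the data perturbations, or None for Sigma = 0
    """
    rng = np.random.default_rng() if rng is None else rng
    GU = np.array([G(u) for u in U])                    # (J, K) images G(u^{j,n})
    u_bar, G_bar = U.mean(axis=0), GU.mean(axis=0)
    C = (U - u_bar).T @ (GU - G_bar) / len(U)           # C(U^n), (d, K)
    D = (GU - G_bar).T @ (GU - G_bar) / len(U)          # D(U^n), (K, K)
    # Gain M = C (D + Gamma^{-1}/dt)^{-1}, applied to every ensemble member
    M = np.linalg.solve((D + Gamma_inv / dt).T, C.T).T
    U_new = np.empty_like(U)
    for j in range(len(U)):
        y_j = y if Sigma is None else y + rng.multivariate_normal(np.zeros_like(y), Sigma / dt)
        U_new[j] = U[j] + M @ (y_j - GU[j])             # Kalman-type update
    return U_new
```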
Concerning Bayesian inverse problems, instead, the method is proved to approximate specific Bayes linear estimators but it is able to provide only an approximation of the posterior measure by a (possibly weighted) sum of Dirac masses. For a detailed discussion we refer to [@Apteetal2007; @ernstetal2015; @leglandmonbettran2009]. As showed in [@schillingsstuart2017], it is straightforward to compute the continuous time limit equation of the update  in the general case of a nonlinear model ${\mathcal{G}}$, even if the asymptotic analysis was performed in the easier linear setting. Consider the parameter $\Delta t$ as an artificial time step for the iteration in , i.e. we take $\Delta t \sim N_t^{-1}$ where $N_t$ is the maximum number of iterations. Assume then ${\mathbf{U}}^n \approx {\mathbf{U}}(n\Delta t)=\left\{{\mathbf{u}}^{j}(n\Delta t) \right\}_{j=1}^J$ for $n\geq 0$. Scaling by $\Delta t$ and computing the limit $\Delta t\to 0^+$, the continuous time limit equation of  reads $$\label{eq:continuousEnKF1} {\mathrm{d}} {\mathbf{u}}^j = {\mathbf{C}}({\mathbf{U}}) {\boldsymbol{\Gamma}} \left( {\mathbf{y}} - {\mathcal{G}}({\mathbf{u}}^j) \right) \, {\mathrm{dt}} {+ {\mathbf{C}}({\mathbf{U}}) {\boldsymbol{\Gamma}} \sqrt{{\boldsymbol{\Sigma}}} \; {\mathrm{d}{\mathbf{W}}^j}}$$ for $j=1,\dots,J$, initial condition ${\mathbf{U}}(0) = {\mathbf{U}}^0$ [and ${\mathrm{d}}{\mathbf{W}}^j$ are Brownian motions]{}. Using the definition of the operator ${\mathbf{C}}({\mathbf{U}})$, see , system  can be restated as $$\label{eq:continuousEnKF2} {\mathrm{d}} {\mathbf{u}}^j = \frac{1}{J} \sum_{k=1}^J \left\langle {\mathcal{G}}({\mathbf{u}}^k) - \overline{{\mathcal{G}}} , {\mathbf{y}} - {\mathcal{G}}({\mathbf{u}}^j) \right\rangle_{{\boldsymbol{\Gamma}}^{-1}} ({\mathbf{u}}^k - \overline{{\mathbf{u}}}) \, {\mathrm{dt}} {+ {\mathbf{C}}({\mathbf{U}}) {\boldsymbol{\Gamma}} \sqrt{{\boldsymbol{\Sigma}}} \; {\mathrm{d{\mathbf{W}}^j}}}$$ for $j=1,\dots,J$, where $\langle \cdot,\cdot \rangle_{{\boldsymbol{\Gamma}}^{-1}} = \langle {\boldsymbol{\Gamma}}^{\frac12} \cdot,{\boldsymbol{\Gamma}}^{\frac12} \cdot \rangle$ and $\langle \cdot,\cdot \rangle$ is the inner-product on ${\mathbb{R}}^K$. From  it is easy to observe that the invariant subspace property holds also at the continuous time level in the case ${\boldsymbol{\Sigma}} \equiv 0$ since the vector field is in the linear span of the ensemble itself. In [@schillingsstuart2017] the asymptotic behavior of the continuous time equation is analyzed in the linear setting with ${\boldsymbol{\Sigma}}\equiv 0$ so that  is written as gradient descent equation. In fact, let us consider the case of ${\mathcal{G}}$ linear, i.e. ${\mathcal{G}}({\mathbf{u}})=G {\mathbf{u}}$. Then the computation of the operator ${\mathbf{C}}({\mathbf{U}})$ is $ {\mathbf{C}}({\mathbf{U}}) = \frac{1}J \sum_{k=1}^J \left({\mathbf{u}}^k-\overline{{\mathbf{u}}}\right) \left({\mathbf{u}}^k-\overline{{\mathbf{u}}}\right)^T G^T. $ Further, note that the least squares functional  yields $$\label{eq:gradientGLinear} \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) = - G^T {\boldsymbol{\Gamma}} ( {\mathbf{y}} - G {\mathbf{u}} ).$$ Therefore, equation  is stated in terms of the gradient of $\Phi$ as $$\label{eq:gradientEq} \frac{\mathrm{d}}{\mathrm{d}t} {\mathbf{u}}^j = - \frac{1}J \sum_{k=1}^J ({\mathbf{u}}^k-\overline{{\mathbf{u}}}) \otimes ( {\mathbf{u}}^k - \overline{{\mathbf{u}}} ) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}^j,{\mathbf{y}})$$ for $j=1,\dots,J$. 
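A minimal sketch of an explicit Euler discretization of the preconditioned gradient flow above, in the linear setting with ${\boldsymbol{\Sigma}}\equiv 0$, is given below; the step size `dt`, the number of steps and the function name are illustrative assumptions, not part of the original derivation.

```python
import numpy as np

def gradient_flow(U0, G_mat, Gamma, y, dt=1e-2, n_steps=10_000):
    """Explicit Euler for du^j/dt = -C(U) grad Phi(u^j, y), with grad Phi(u, y) = -G^T Gamma (y - G u)."""
    U = U0.copy()                                    # ensemble, shape (J, d)
    for _ in range(n_steps):
        u_bar = U.mean(axis=0)
        C = (U - u_bar).T @ (U - u_bar) / len(U)     # preconditioner (1/J) sum_k (u^k - u_bar) x (u^k - u_bar)
        grad = -(y - U @ G_mat.T) @ Gamma @ G_mat    # row j equals grad Phi(u^j, y)
        U = U - dt * grad @ C                        # C is symmetric, so grad @ C is (C grad)^T row-wise
    return U
```

Monitoring the spread of the returned ensemble around its mean along such a run provides a direct check of the collapse rate recalled in the Lemma below.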
Equation  describes a preconditioned gradient descent equation for each ensemble. In fact, ${\mathbf{C}}({\mathbf{U}})$ is positive semi-definite and hence $$\frac{\mathrm{d}}{\mathrm{d}t} \Phi({\mathbf{u}}(t),{\mathbf{y}}) = \frac{\mathrm{d}}{\mathrm{d}t} \frac12 \left\| {\boldsymbol{\Gamma}}^{\frac12} \left({\mathbf{y}}-G{\mathbf{u}}\right) \right\|^2 \leq 0.$$ Observe that, although the forward operator is assumed to be linear, the gradient flow is nonlinear. For further details and properties of the gradient descent equation  we refer to [@schillingsstuart2017]. In particular, here we recall the important result on the velocity of the collapse of the ensembles towards their mean in the large time limit. \[lem:schillingsstuart\] Let ${\mathbf{U}}^0$ be the initial set of ensembles. Then the matrix ${\mathbf{R}}(t)$ whose entries are $$\left( {\mathbf{R}}(t) \right)_{ij} = \left\langle G({\mathbf{u}}^i-\overline{{\mathbf{u}}}),G({\mathbf{u}}^j-\overline{{\mathbf{u}}}) \right\rangle_{{\boldsymbol{\Gamma}}}$$ converges to $0$ for $t\to\infty$ and indeed $\left\| {\mathbf{R}}(t) \right\| = O(Jt^{-1})$. The previous Lemma also states that the collapse slows down linearly as the ensemble size increases. Later, this property is also obtained in the mean-field limit for a large ensemble size. We point out that the continuous time limit derivation suggests to stop at time $t = 1$. However, the study of the long-time analysis of the ODE system, such as the stability analysis in Section \[sec:unstableMoments\] and Section \[sec:stableAnalysis\], highlights possible improvements of the algorithm. Mean-field limit of the Ensemble Kalman Filter {#sec:meanfield} ============================================== Typically, the EnKF method is applied for a fixed and finite ensemble size. In fact, it is clear from  and  that the computational and memory cost of the method increases with the number of the ensembles. The analysis of the method was also studied in the large ensemble limit, see e.g. [@ernstetal2015; @kwiatkowskimandel2015; @lawtembinetempone2016; @leglandmonbettran2009]. However, to the best of our knowledge, the derivation of a kinetic equation that holds in the limit of a large number of ensembles has not yet been proposed. In this section, we derive the corresponding mean-field limit of the continuous time equation focusing on the case of a linear model $G$ and with ${\boldsymbol{\Sigma}}={\mathbf{0}}$ as in [@schillingsstuart2017]. We follow the classical formal derivation to formulate a mean-field equation of a particle system, see [@CarrilloFornasierToscaniVecil2010; @hatadmor2008; @PareschiToscaniBOOK; @Toscani2006]. 
Let us denote by $$\label{eq:kineticf} f = f(t,{\mathbf{u}}) : {\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}^+$$ the compactly supported on ${\mathbb{R}}^d$ probability density of ${\mathbf{u}}$ at time $t$ and introduce the first moment ${\mathbf{m}}\in{\mathbb{R}}^d$ and the second moment ${\mathbf{E}}\in{\mathbb{R}}^{d\times d}$ of $f$ at time $t$, respectively, as $$\label{eq:moments} {\mathbf{m}}(t) = \int_{{\mathbb{R}}^d} {\mathbf{u}} f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}}, \quad {\mathbf{E}}(t) = \int_{{\mathbb{R}}^d} {\mathbf{u}} \otimes {\mathbf{u}} f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}}.$$ Since ${\mathbf{u}}\in{\mathbb{R}}^d$, the corresponding discrete measure on the ensemble set ${\mathbf{U}} = \left\{ {\mathbf{u}}^j \right\}_{j=1}^J$ is therefore given by the empirical measure $$\label{eq:empiricalf} f(t,{\mathbf{u}}) = \frac{1}J \sum_{j=1}^J \delta({\mathbf{u}}^j - {\mathbf{u}}) = \frac{1}J \sum_{j=1}^J \prod_{i=1}^d \delta(u^j_i - u_i),$$ where $u^j_i\in{\mathbb{R}}$ is the component $i$ of the $j$-th ensemble. Let us define the operator $${\boldsymbol{{\mathcal{C}}}}({\mathbf{U}}) = \frac{1}J \sum_{k=1}^J ({\mathbf{u}}^k-\overline{{\mathbf{u}}}) \otimes ({\mathbf{u}}^k-\overline{{\mathbf{u}}})$$ with the corresponding entry $$\left( {\boldsymbol{{\mathcal{C}}}}({\mathbf{U}}) \right)_{\kappa,\ell} = \frac{1}J \sum_{k=1}^J u_\kappa^k u_\ell^k - \overline{u}_\kappa \frac{1}J \sum_{k=1}^J u_\ell^k - \overline{u}_\ell \frac{1}J \sum_{k=1}^J u_\kappa^k + \overline{u}_\kappa \overline{u}_\ell = \frac{1}J \sum_{k=1}^J u_\kappa^k u_\ell^k - \overline{u}_\kappa \overline{u}_\ell,$$ where $\overline{u}_i$ denotes the component $i$ of the mean $\overline{{\mathbf{u}}}$ of the ensembles. This formulation allows for a mean-field limit as $$\left({\boldsymbol{{\mathcal{C}}}}(t)\right)_{\kappa,\ell} = \int_{{\mathbb{R}}^d} u_\kappa u_\ell f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}} - \int_{{\mathbb{R}}^d} u_\kappa f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}} \int_{{\mathbb{R}}^d} u_\ell f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}}$$ and therefore ${\boldsymbol{{\mathcal{C}}}}({\mathbf{U}})$ can be written in terms of the moments  of the empirical measure only as $$\label{eq:covarianceMeanField} {\boldsymbol{{\mathcal{C}}}}(t) = {\mathbf{E}}(t) - {\mathbf{m}}(t) \otimes {\mathbf{m}}(t).$$ Let us denote $\varphi({\mathbf{u}}) \in C_0^1({\mathbb{R}}^d)$ a sufficiently smooth test function. 
We compute $$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} \left\langle f , \varphi \right\rangle &= \frac{\mathrm{d}}{\mathrm{d}t} \int_{{\mathbb{R}}^d} \frac{1}{J} \sum_{j=1}^J \delta({\mathbf{u}} - {\mathbf{u}}^j) \varphi({\mathbf{u}}) \mathrm{d}{\mathbf{u}} = - \frac{1}{J} \sum_{j=1}^J \nabla_{\mathbf{u}} \varphi({\mathbf{u}}^j) \cdot {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}^j,{\mathbf{y}}) \\ &= - \int_{{\mathbb{R}}^d} \nabla_{\mathbf{u}} \varphi({\mathbf{u}}) \cdot {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}} $$ which finally leads to the following strong form of the mean-field kinetic equation corresponding to the gradient descent equation : $$\label{eq:kineticFromEnKF} \partial_t f(t,{\mathbf{u}}) - \nabla_{\mathbf{u}} \cdot \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) f(t,{\mathbf{u}}) \right) = 0.$$ Equation  provides a closed formula for the evolution in time of the distribution $f$ of the unknown control ${\mathbf{u}}$ when the observations ${\mathbf{y}}$ and the linear model $G$ are given and when endowed with an initial guess $f^0({\mathbf{u}}) = f(t=0,{\mathbf{u}})$ for the unknown control. Moment equations and linear stability analysis {#sec:unstableMoments} ---------------------------------------------- As discussed in Section \[sec:enkf\], the EnKF computes a solution to the inverse problem as mean of the ensembles in the large time behavior. Since the kinetic equation  formally holds in the limit of a large number of ensembles, here we analyze approximations to the solution of the inverse problem provided by the first moment ${\mathbf{m}}(t)$ of the kinetic distribution, see . Due to definition , multiplying  by ${\mathbf{u}}$, integrating over ${\mathbb{R}}^d$ and integrating by parts the second term, we get the following evolution equation for the first moment: $$\frac{\mathrm{d}}{\mathrm{d}t} {\mathbf{m}}(t) + \int_{{\mathbb{R}}^d} {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}} = {\mathbf{0}}.$$ In particular, since we are assuming the simple setting of a linear model ${\mathcal{G}}({\mathbf{u}}) = G{\mathbf{u}}$, using , we can explicitly compute the integral and obtain $$\label{eq:linear1stMomEq} \frac{\mathrm{d}}{\mathrm{d}t} {\mathbf{m}}(t) + {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{{\mathbf{m}}}},{\mathbf{y}}) = {\mathbf{0}}.$$ Multiplying  by ${\mathbf{u}} \otimes {\mathbf{u}}$ and integrating over ${\mathbb{R}}^d$ we obtain the following evolution equation for the second moment: $$\label{eq:linear2ndMomEq} \frac{\mathrm{d}}{\mathrm{d}t} {\mathbf{E}}(t) + \sum_{k=1}^d \int_{{\mathbb{R}}^d} {\mathbf{T}}_k^{(1)}({\mathbf{u}}) \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) f(t,{\mathbf{u}}) \right)_k \mathrm{d}{\mathbf{u}} = {\mathbf{0}},\quad {\mathbf{T}}_k^{(1)}({\mathbf{u}}) = \frac{\partial}{\partial u_k} {\mathbf{u}} \otimes {\mathbf{u}}.$$ Hence, equation  and equation  provide a closed system of ordinary differential equations. \[rem:mSolution\] As in Bayesian approach to inverse problems, also equation  poses the problem of selecting a solution out of $f$ which only provides a distribution for the unknown control ${\mathbf{u}}$. 
As pointed out at the beginning of this subsection, since the kinetic equation is derived via a mean-field limit we choose, in accordance with the solution provided by the EnKF, the expected value ${\mathbf{m}}$ as an estimator of the unknown parameter ${\mathbf{u}}$. Observe that a steady-state ${\mathbf{m}}^\infty$ of equation  is given by $${\mathbf{m}}^\infty = \arg\min_{{\mathbf{u}}} \Phi({\mathbf{u}},{\mathbf{y}}),$$ corresponding to a control that minimizes the least squares functional $\Phi$. In the case of a linear model $G$, the above condition can also be stated as ${\mathbf{y}}-G{\mathbf{u}} \in \ker G^T.$ Neither ${\mathbf{u}}$ nor ${\mathbf{m}}^\infty$ need be unique. Equation  for the first moment ${\mathbf{m}}$ and  for the second moment ${\mathbf{E}}$ give rise to a coupled system of ordinary differential equations. In the following, we carry out a stability analysis of these equations in the simple case of a one-dimensional control in order to analyze the stability of the estimator ${\mathbf{m}}$. First, we observe that in the case of a scalar control the system of the moment equations reduces to $$\label{eq:system1D} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} m(t) &= G ( E(t) - m^2(t) )(y - G m(t)) \\ \frac{\mathrm{d}}{\mathrm{d}t} E(t) &= 2 G ( E(t) - m^2(t) ) (y m(t) - G E(t)) \end{aligned}$$ with $y\in{\mathbb{R}}$ and $G\in {\mathbb{R}}\setminus\{0\}$. The nullclines of the system of ODEs  are given by $$m = \frac{y}{G}, \quad E = \frac{y}{G} m, \quad E = m^2.$$ The equilibria, or fixed points, of  are the intersections of the nullclines and therefore we have the following three sets of points: $$F_0 = (0,0), \quad F_1 = (\frac{y}{G},\frac{y^2}{G^2}), \quad F_k=(k,k^2), \; k\in{\mathbb{R}},$$ i.e. all the fixed points lie on the parabola $E = m^2$ in the phase plane $(m,E)$. Given the Jacobian ${\mathbf{J}}\in{\mathbb{R}}^{2\times 2}$ of the ODE system  $$\label{eq:jacobianODE} {\mathbf{J}}(m,E) = \begin{bmatrix} 3 G^2 m^2 - 2 G y m - G^2 E & -G^2 m + G y \\ &\\ 2 G y E + 4 G^2 m E - 6 G y m^2 & -4 G^2 E + 2 G y m + 2 G^2 m^2 \end{bmatrix}$$ it follows that ${\mathbf{J}}(F_k)$ has eigenvalues $\mu_1 = \mu_2 = 0$. Clearly, the same holds for $F_0$ and $F_1$, since they are points of the type $F_k$. Therefore all the fixed points are non-hyperbolic and the stability must be analyzed directly. More precisely, since $\mu_1 = \mu_2 = 0$, the fixed points are Bogdanov-Takens-type equilibria and hence unstable, as we indeed show in the following analysis. The vector field of the system  can be easily analyzed on the nullclines and on the $m$- and $E$-axis of the phase plane. For the sake of simplicity, let us assume that $\frac{y}{G} > 0$. The analysis is equivalent in the opposite case. Let $m(0) = \frac{y}{G}$ so that $\frac{\mathrm{d}}{\mathrm{d}t} m=0$ for all $t$. We have that $$\frac{\mathrm{d}}{\mathrm{d}t} E = - \frac{2}{G^2} (y^2 - G^2 E )^2 < 0$$ and therefore $E$ is decreasing in time on the nullcline $m = \frac{y}{G}$, which in turn means that $F_1$ is an attractor only if $E(0) > \frac{y^2}{G^2}$. Let now $E(0) = \frac{y}{G} m(0)$, for some $m(0)$. Then we have $\frac{\mathrm{d}}{\mathrm{d}t} E = 0$ and $$\frac{\mathrm{d}}{\mathrm{d}t} m = m (y - G m)^2 = \begin{cases} > 0, & \text{if $m(0)>0$},\\ < 0, & \text{otherwise}. \end{cases}$$ Thus, since $m(0)>0$ is the only acceptable initial condition in order to guarantee that $E(0)>0$, the trajectories move on the right side of the phase plane along the nullcline $E = \frac{y}{G} m$.
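This observation can be checked numerically. The following sketch (our own, using `scipy` with the example values $(y,G)=(2,1)$ used later for the phase portrait) integrates the moment system  from two initial states on the nullcline $m = y/G$: starting above $E = y^2/G^2$ the energy relaxes towards the equilibrium, while starting below it drifts away.

```python
import numpy as np
from scipy.integrate import solve_ivp

def moments_1d(t, z, y=2.0, G=1.0):
    """Right-hand side of the one-dimensional moment system for (m, E) stated above."""
    m, E = z
    return [G * (E - m**2) * (y - G * m),
            2.0 * G * (E - m**2) * (y * m - G * E)]

# on the nullcline m = y/G = 2 the equilibrium value E = y^2/G^2 = 4 attracts only from above
for E0 in (4.1, 3.9):
    sol = solve_ivp(moments_1d, (0.0, 4.5), [2.0, E0], rtol=1e-8)
    print(E0, "->", sol.y[1, -1])   # E0 = 4.1 relaxes towards 4, E0 = 3.9 drifts away
```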
Finally, each trajectory starting on the nullcline $E = m^2$ is stationary in time, since $\frac{\mathrm{d}}{\mathrm{d}t} m = \frac{\mathrm{d}}{\mathrm{d}t} E = 0$ there. The nullclines and the complete vector field for the case $(y,G)=(2,1)$ are shown in the left panel of Figure \[fig:phaseplane\]. We immediately observe that the behavior around the equilibrium point $F_1$ is unstable, as shown also in the right panel of Figure \[fig:phaseplane\].

![Left: vector field of the ODE system  with $(y,G)=(2,1)$. Red lines are the nullclines. Right: trajectory behavior around the equilibrium $(\frac{y}{G},\frac{y^2}{G^2}) = (2,4)$.\[fig:phaseplane\]](phasePortrait.pdf "fig:"){width="49.00000%"} ![](behaviorEQ.pdf "fig:"){width="49.00000%"}

The previous considerations can also be derived by looking at the solutions of . Assuming that the initial conditions are such that $E(0) \neq m(0)^2$, we then get the following pairs of analytical solutions: $$\begin{gathered} m(t) = \frac{y}{G}, \quad E(t) = \frac{y^2}{G^2} + \frac{1}{2G^2(C+t)}\\ m(t) = \frac{y}{G} \pm \frac{1}{G\sqrt{-2 C_1 G t - 2 C_2 G}}, \quad E(t) = m^2 + \frac{\frac{\mathrm{d}}{\mathrm{d}t} m}{G(y - G m)}\end{gathered}$$ with $C, C_1, C_2 \in {\mathbb{R}}$ constants uniquely prescribed by the initial conditions. In particular, the first set of solutions is found by assuming that $m(0) = \frac{y}{G}$ and solving the following Riccati equation with constant coefficients $$\frac{\mathrm{d}}{\mathrm{d}t} E(t) = -2 G^2 E^2 + 4 y^2 E - 2 \frac{y^4}{G^2}.$$ In this case, letting $E(0)=E_0$, the constant is given by $C = \frac{1}{2\left(G^2 E_0 - y^2\right)}$, which is positive when $E_0 > \frac{y^2}{G^2}$ and negative otherwise. In this latter case we also observe that there exists a time $t$ at which the trajectory $E(t)$ has a vertical asymptote. From the above discussion we know that $E(t)$ is also decreasing. It is also simple to observe that in the second pair of solutions $E(t)$ can blow up, driving $m(t)$ away from the equilibrium. We observe that the linear stability analysis provided in this section is applied to system  without imposing the restriction that $E(t)-m^2(t)$ must be non-negative. Under this constraint, the region $E(t)<m^2(t)$ is not admissible and the unstable equilibrium $(\frac{y}{G},\frac{y^2}{G^2})$ lies on the boundary of this region.

Extension of the mean-field EnKF method {#sec:stabilization}
=======================================

The analysis of Section \[sec:unstableMoments\] shows that, at least in a one-dimensional setting, the system of moment equations  could lead to unconditionally unstable equilibria. This is due to the possible decay of the energy, which drives the expected value far from the equilibrium value. In the general case of a $d$-dimensional control ${\mathbf{u}}$, the situation may be even more complex. The instability of fixed points of  can be related to the loss of an $O(\Delta t)$ term in the derivation of the continuous time limit equation . In fact, instability could also occur for , whereas it is possible to show that the discrete equation  has stable equilibria. Next, we stabilize the system of the moment equations  by introducing additional uncertainty into the microscopic interactions. This leads to a diffusive term in the kinetic equation, avoiding the decay of kinetic energy and the appearance of unstable equilibria.
First, we write binary microscopic interactions corresponding to the mean-field kinetic equation . Then, we introduce noise in these interactions and we derive a Fokker-Planck-type equation. Finally, we study the stability of the resulting moment system. Let again $f=f(t,{\mathbf{u}}):{\mathbb{R}}^+ \times {\mathbb{R}}^d \to {\mathbb{R}}$ be the probability density of the control ${\mathbf{u}}\in{\mathbb{R}}^d$ at time $t>0$ as defined in . Let ${\mathbf{m}}\in{\mathbb{R}}^d$ and ${\mathbf{E}}\in{\mathbb{R}}^{d\times d}$ be the first and the second moment of $f$, respectively, as given in . We introduce the microscopic interaction rules: $$\label{eq:microInteraction} \begin{aligned} {\mathbf{u}} &= {\mathbf{u}}_* - \epsilon ({\mathbf{E}} - {\mathbf{m}} \otimes {\mathbf{m}}) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) + \sqrt{\epsilon} \, {\mathbf{K}}({\mathbf{u}}_*) {\boldsymbol{\xi}} \\ &= {\mathbf{u}}_* - \epsilon \, {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) + \sqrt{\epsilon} \, {\mathbf{K}}({\mathbf{u}}_*) {\boldsymbol{\xi}} \end{aligned}$$ where ${\mathbf{u}}$ is the post-interaction value of the ensemble member, ${\mathbf{u}}_*$ is its pre-interaction value and ${\boldsymbol{\xi}}\in{\mathbb{R}}^d$ is a random variable with given distribution $\theta({\boldsymbol{\xi}})$ having zero mean and covariance matrix ${\boldsymbol{\Lambda}}\in{\mathbb{R}}^{d\times d}$. Instead, ${\mathbf{K}}({\mathbf{u}}_*)\in{\mathbb{R}}^{d\times d}$ is an arbitrary function of ${\mathbf{u}}_*$. For ${\mathbf{K}} = {\mathbf{C}}$ we observe a similar structure as in equation . The quantity $\epsilon$ describes the strength of the interactions and it is a scattering rate. \[rem:probabilisticMicro\] Observe that  is in fact the microscopic interaction corresponding to the mean-field equation , that is in the case of a linear model $$\label{eq:probabInteraction} {\mathbf{u}} = \left( {\mathbf{1}} - \epsilon \, {\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G \right) {\mathbf{u}}_* + \epsilon \, {\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G G^{-1} {\mathbf{y}},$$ with an additional term representing the uncertainty in the interaction. The interaction  has a probabilistic interpretation [@AlbiPareschi2013] provided $$\epsilon \, \rho\left({\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G \right) \leq 1,$$ where $\rho(\cdot)$ is the spectral radius. The probability density $f$ satisfies the following (linear) Boltzmann equation in weak form $$\label{eq:wfBoltzmann} \frac{\mathrm{d}}{\mathrm{d}t} \int_{{\mathbb{R}}^d} f(t,{\mathbf{u}}) \varphi({\mathbf{u}}) \mathrm{d}{\mathbf{u}} = \left\langle \int_{{\mathbb{R}}^d} (\varphi({\mathbf{u}})-\varphi({\mathbf{u}}_*)) f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}} \right\rangle$$ where $\varphi \in C_c^\infty({\mathbb{R}}^d)$ is a test function and where the operator $\langle \cdot \rangle$ denotes the mean with respect to the distribution $\theta$, i.e. $ \langle g \rangle = \int_{{\mathbb{R}}^d} g({\boldsymbol{\xi}}) \theta({\boldsymbol{\xi}}) \mathrm{d}{\boldsymbol{\xi}}. $ Consider the time asymptotic scaling by setting $$\label{eq:scaling} \tau = t \epsilon, \quad f(t,{\mathbf{u}}) = \tilde{f}(\tau,{\mathbf{u}})$$ and allow $\epsilon \to 0^+$. This corresponds to large interaction frequencies and small interaction strengths, a situation similar to the so-called grazing collision limit  [@Desvillettes; @DiPernaLions; @PareschiToscaniVillani; @Villani1999]. 
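A sketch of the interaction rule  for the linear model, with a constant matrix ${\mathbf{K}}$ and the moments estimated from the current ensemble, could read as follows; the vectorized form and the default ${\mathbf{K}}={\mathbf{I}}$ are our own illustrative choices.

```python
import numpy as np

def noisy_interaction(U, G_mat, Gamma, y, eps, Lambda, K=None, rng=None):
    """One mean-field interaction with additive noise of covariance Lambda; K is a constant matrix."""
    rng = rng or np.random.default_rng()
    J, d = U.shape
    m = U.mean(axis=0)
    E = U.T @ U / J
    C = E - np.outer(m, m)                                    # C(t) = E - m (x) m
    grad = -(y - U @ G_mat.T) @ Gamma @ G_mat                 # rows: grad Phi(u_*, y) for the linear model
    xi = rng.multivariate_normal(np.zeros(d), Lambda, size=J) # zero-mean noise with covariance Lambda
    K = np.eye(d) if K is None else K
    return U - eps * grad @ C.T + np.sqrt(eps) * xi @ K.T     # post-interaction ensemble
```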
We denote the scaled quantities again by $f$ and $t,$ respectively. A second-order Taylor expansion yields the corresponding formal Fokker-Planck equation: $$\begin{aligned} \varphi({\mathbf{u}}) - \varphi({\mathbf{u}}_*) =& \nabla_{\mathbf{u}} \varphi({\mathbf{u}}_*) \cdot ({\mathbf{u}}-{\mathbf{u}}_*) + \frac12 ({\mathbf{u}}-{\mathbf{u}}_*)^T {\mathbf{H}}(\varphi({\mathbf{u}}_*)) ({\mathbf{u}}-{\mathbf{u}}_*) \\ &+ \frac12 ({\mathbf{u}}-{\mathbf{u}}_*)^T \widehat{{\mathbf{H}}}(\varphi;\tilde{{\mathbf{u}}},{\mathbf{u}}_*) ({\mathbf{u}}-{\mathbf{u}}_*)\end{aligned}$$ with $\tilde{{\mathbf{u}}} = \alpha {\mathbf{u}}_* + (1-\alpha) {\mathbf{u}}$, $\alpha \in (0,1)$ and where ${\mathbf{H}}\in{\mathbb{R}}^{d\times d}$ is the Hessian matrix and $ \widehat{{\mathbf{H}}}(\varphi;\tilde{{\mathbf{u}}},{\mathbf{u}}_*) = {\mathbf{H}}(\varphi(\tilde{{\mathbf{u}}})) - {\mathbf{H}}(\varphi({\mathbf{u}}_*)). $ Substituting this expression in equation  and using definition  of the microscopic interactions, we obtain $$\begin{aligned} & \frac{\mathrm{d}}{\mathrm{d}t} \int_{{\mathbb{R}}^d} f(t,{\mathbf{u}}) \varphi({\mathbf{u}}) \mathrm{d}{\mathbf{u}} = \frac{1}{\epsilon} \left( \mathcal{A} + \mathcal{B} + \mathcal{R} \right), \\ & \mathcal{A} = - \epsilon \left\langle \int_{{\mathbb{R}}^d} \nabla_{\mathbf{u}} \varphi({\mathbf{u}}_*) \cdot \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) \right) f(t,{\mathbf{u}}_*) \mathrm{d}{\mathbf{u}}_* \right\rangle + \sqrt{\epsilon} \left\langle \int_{{\mathbb{R}}^d} \nabla_{\mathbf{u}} \varphi({\mathbf{u}}_*) \cdot {\mathbf{K}}({\mathbf{u}}_*) {\boldsymbol{\xi}} f(t,{\mathbf{u}}_*) \mathrm{d}{\mathbf{u}}_* \right\rangle, \\ & \mathcal{B} = \frac{\epsilon^2}{2} \left\langle \int_{{\mathbb{R}}^d} \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) \right)^T {\mathbf{H}}(\varphi({\mathbf{u}}_*)) \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) \right) f(t,{\mathbf{u}}_*) \mathrm{d}{\mathbf{u}}_* \right\rangle \\ &- \epsilon \sqrt{\epsilon} \left\langle \int_{{\mathbb{R}}^d} \left({\mathbf{K}}({\mathbf{u}}_*){\boldsymbol{\xi}}\right)^T {\mathbf{H}}(\varphi({\mathbf{u}}_*)) \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) \right) f(t,{\mathbf{u}}_*) \mathrm{d}{\mathbf{u}}_* \right\rangle \\ &+ \frac{\epsilon}{2} \left\langle \int_{{\mathbb{R}}^d} \operatorname{Tr}\left( ({\boldsymbol{\xi}}\otimes{\boldsymbol{\xi}})^T {\mathbf{K}}({\mathbf{u}}_*)^T {\mathbf{H}}(\varphi({\mathbf{u}}_*)) {\mathbf{K}}({\mathbf{u}}_*) \right) f(t,{\mathbf{u}}_*) \mathrm{d}{\mathbf{u}}_* \right\rangle,\end{aligned}$$ where $\operatorname{Tr}(\cdot)$ is the matrix trace and $\mathcal{R}$ is the remaining term. One can easily prove that $\epsilon^{-1}\mathcal{R}$ vanishes in the asymptotic scaling . 
In order to show this, it suffices that $\varphi$ is sufficiently smooth, so that each second partial derivative is Lipschitz continuous and $\exists\,L>0$ such that $$\left| \frac{\partial^2 \varphi(\tilde{{\mathbf{u}}})}{\partial u_iu_j} - \frac{\partial^2 \varphi({\mathbf{u}}_*)}{\partial u_iu_j} \right| \leq L |\tilde{{\mathbf{u}}} - {\mathbf{u}}_*| < L |{\mathbf{u}} - {\mathbf{u}}_*| = L | \epsilon {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}_*,{\mathbf{y}}) + \sqrt{\epsilon} {\mathbf{K}}({\mathbf{u}}_*) {\boldsymbol{{\xi}}} | \xrightarrow{\epsilon\to 0^+} 0$$ for all $i,j=1,\dots,d$. For ${\mathbf{K}}({\mathbf{u}}_*)$ constant or depending on moments of the kinetic distribution $f$, the grazing limit in strong form is then obtained as $$\label{eq:kineticFP} \partial_t f(t,{\mathbf{u}}) = \nabla_{\mathbf{u}} \cdot \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) f(t,{\mathbf{u}}) \right) + \frac12 \nabla_{\mathbf{u}} \cdot \left( {\boldsymbol{\Lambda}} {\mathbf{K}}^T{\mathbf{K}} \nabla_{\mathbf{u}} f(t,{\mathbf{u}}) \right)$$ where we used the basic fact $$\operatorname{Tr}\left( {\boldsymbol{\Lambda}} {\mathbf{K}}^T{\mathbf{K}} {\mathbf{H}}(f(t,{\mathbf{u}})) \right) = \nabla_{\mathbf{u}} \cdot \left( {\boldsymbol{\Lambda}} {\mathbf{K}}^T{\mathbf{K}} \nabla_{\mathbf{u}} f(t,{\mathbf{u}}) \right).$$ Some remarks are in order. As expected, the Fokker-Planck-type equation  is consistent with the kinetic equation  in the limit of vanishing covariance ${\boldsymbol{\Lambda}}$. The introduction of the uncertainty in  allows for a different interpretation of the data perturbation in  and  within the kinetic model.

Moment equations and linear stability analysis {#sec:stableAnalysis}
----------------------------------------------

In the setting of [@schillingsstuart2017] we have ${\mathcal{G}}({\mathbf{u}})=G{\mathbf{u}}$ and, for ${\mathbf{K}}={\mathbf{I}}$ the identity matrix, a straightforward computation leads to the following moment equations based on the Fokker-Planck equation . $$\label{eq:stableMoments} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} {\mathbf{m}}(t) &= - {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{m}},{\mathbf{y}})\\ \frac{\mathrm{d}}{\mathrm{d}t} {\mathbf{E}}(t) &= - \sum_{k=1}^d \int_{{\mathbb{R}}^d} {\mathbf{T}}_k^{(1)}({\mathbf{u}}) \left( {\boldsymbol{{\mathcal{C}}}}(t) \nabla_{\mathbf{u}} \Phi({\mathbf{u}},{\mathbf{y}}) f(t,{\mathbf{u}}) \right)_k \mathrm{d}{\mathbf{u}} + \frac12 \sum_{i,j=1}^d \Lambda_{ij}^2 \int_{{\mathbb{R}}^d} {\mathbf{T}}_{ij}^{(2)}({\mathbf{u}}) f(t,{\mathbf{u}}) \mathrm{d}{\mathbf{u}}, \end{aligned}$$ where $ {\mathbf{m}}$ and ${\mathbf{E}}$ are defined as before and where we have $${\mathbf{T}}_k^{(1)}({\mathbf{u}}) = \frac{\partial}{\partial u_k} {\mathbf{u}} \otimes {\mathbf{u}}, \quad {\mathbf{T}}_{ij}^{(2)}({\mathbf{u}}) = \frac{\partial^2}{\partial u_i \partial u_j} {\mathbf{u}} \otimes {\mathbf{u}}.$$ Comparing equations  and , we observe that they are equivalent. This implies that the equation for ${\mathbf{m}}$ still provides a solution according to Remark \[rem:mSolution\]. Instead, the equation for the second moment ${\mathbf{E}}$ has an additional term that stabilizes the equilibria of . We analyze the linear stability of  in the case of a one-dimensional control.
In this particular case the moment equations are $$\label{eq:systemNoise1D} \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}t} m(t) &= G ( E(t) - m(t)^2 ) (y - G m(t)) \\ \frac{\mathrm{d}}{\mathrm{d}t} E(t) &= 2 G ( E(t) - m(t)^2 ) (y m(t) - G E(t)) + \lambda^2 \end{aligned}$$ with $y\in{\mathbb{R}}$, $G\in{\mathbb{R}}\setminus\{0\}$ and where now $\lambda^2\in{\mathbb{R}}$ represents the variance of the univariate noise $\xi$. Following the same analysis performed in Section \[sec:unstableMoments\], we compute the nullclines of the ODE system  and they are given by $$m = \frac{y}{G}, \quad E = m^2, \quad E = \frac{m(y+Gm) \pm \sqrt{m^2(y-Gm)^2 + 2 \lambda^2}}{2G}.$$ We are interested in the behavior around the equilibrium with $m=\frac{y}{G}$ which is obtained as intersection of the first and the third nullcline: $$\tilde{F}_1^\pm = (\frac{y}{G},\frac{y^2}{G^2} \pm \frac{\sqrt{2\lambda^2}}{2G}).$$ Observe that this equilibrium point is in fact the equilibrium point $F_1$ given in Section \[sec:unstableMoments\] when $\lambda \to 0^+$. For simplicity, in the following we consider $y,G>0$. Similar considerations can be done in the other cases. Letting $m(0)=\frac{y}{G}$ so that $\frac{\mathrm{d}}{\mathrm{d}t} m = 0$ for all $t$, we have $$\frac{\mathrm{d}}{\mathrm{d}t} E = -2G^2 \left( E - \frac{y^2}{G^2} \right)^2 + \lambda^2$$ where the right-hand side represents a parabola in $E$ with negative leading coefficient. Therefore, using classical arguments of stability theory for ODEs, we can state that the greater root $\tilde{F}_1^+$ is the stable equilibrium and the smaller root $\tilde{F}_1^-$ is the unstable equilibrium. This result can be also obtained by looking at the eigenvalues of the Jacobian matrix of the system  which is equivalent to . In fact, computing the eigenvalues $\mu^\pm_{1,2}$ of ${\mathbf{J}}(m,E)$ evaluated in $\tilde{F}_1^\pm$ we have $$\mu_1^\pm = \mp \frac{G}{2} \sqrt{2\lambda^2}, \quad \mu_2^\pm = \mp 2G\sqrt{2\lambda^2}$$ and therefore the equilibrium $\tilde{F}_1^+$ corresponding to the two negative eigenvalues is stable. Moreover, we stress the fact that the in the case of  the equilibria are no longer non-hyperbolic as in the case of . However, the variance $\lambda^2$ plays the role of a bifurcation parameter since for $\lambda^2 \to 0^+$ we recover the Bogdanov-Takens-type equilibria and thus $\lambda^2$ changes the stability of the equilibrium point. In view of this consideration we wish to avoid $\lambda^2$ going to zero and, furthermore, we can apply a control on it in order to guarantee that the unstable equilibrium $\tilde{F}_1^-$ is always negative and thus not admissible. More precisely, the standard deviation should satisfy $$\lambda > \frac{y^2\sqrt{2}}{G}.$$ Then, the solutions of  are given by $$m(t) = \frac{y}{G}, \quad E(t) = \frac{\left(\tanh(\sqrt{2\lambda^2} G C + \sqrt{2\lambda^2} G t \right) \sqrt{2\lambda^2} G + 2 y^2}{2 G^2}$$ and $$\begin{aligned} m(t) &= \frac{\pm e^{2 \sqrt{2\lambda^2} G t} C_1 y \mp C_2 y + \sqrt{\sqrt{2\lambda^2} C_1 e^{3\sqrt{2\lambda^2} G t} - C_2 \sqrt{2\lambda^2} e^{2\sqrt{2\lambda^2} G t}}}{(C_1 e^{2\sqrt{2\lambda^2} G t} - C_2) G}, \\ E(t) &= \frac{m(t)^3 G^2 - m(t)^2 G y - \frac{\mathrm{d}}{\mathrm{d}t} m(t)}{m(t) G^2 - G y}\end{aligned}$$ with $C, C_1, C_2 \in {\mathbb{R}}$ constants uniquely prescribed by the initial conditions. We observe that, in the large time behavior, $m \to \frac{y}{G}$ unconditionally, as in the case of . 
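The stabilizing effect can again be verified numerically; the sketch below (our own) integrates the system  with $(y,G)=(2,1)$ and a noise standard deviation chosen so that $\lambda > y^2\sqrt{2}/G$, and the trajectory settles on the stable equilibrium $\tilde{F}_1^+$.

```python
import numpy as np
from scipy.integrate import solve_ivp

def moments_1d_noise(t, z, y=2.0, G=1.0, lam=6.0):
    """Right-hand side of the stabilized moment system; lam**2 is the variance of the univariate noise."""
    m, E = z
    return [G * (E - m**2) * (y - G * m),
            2.0 * G * (E - m**2) * (y * m - G * E) + lam**2]

# lam = 6 > sqrt(2) * y^2 / G, so the unstable root is negative and never reached;
# the trajectory approaches F1+ = (y/G, y^2/G^2 + lam/sqrt(2)) = (2, 4 + 3 sqrt(2))
sol = solve_ivp(moments_1d_noise, (0.0, 50.0), [0.5, 1.0], rtol=1e-8)
print(sol.y[:, -1])   # approximately (2.00, 8.24)
```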
The large time behavior of $E$, instead, is changed and shifted by a quantity depending on $\lambda^2$, which rules out the decay of the energy that drives $m$ away from the expected equilibrium value. In Figure \[fig:phaseplaneNoise\] we show the nullclines and the complete vector field for the case $(y,G)=(2,1)$ (left panel) and the behavior around the stable equilibrium point $\tilde{F}_1^+ = (2,8)$ (right panel).

![Left: vector field of the ODE system  with $(y,G)=(2,1)$. Red lines are the nullclines. Right: trajectory behavior around the equilibrium $\tilde{F}_1^+ = (2,8)$.\[fig:phaseplaneNoise\]](phasePortraitStable.pdf "fig:"){width="49.00000%"} ![](behaviorEQStable.pdf "fig:"){width="49.00000%"}

Lemma \[lem:schillingsstuart\] shows that the collapse of the ensembles towards their mean slows down linearly as the ensemble size increases. The kinetic equation  holds in the limit of a large ensemble size, and the energy ${\mathbf{E}}$ gives information on the concentration of the distribution $f$ of the control ${\mathbf{u}}$ around its mean. The previous analysis also shows that, in fact, the result of Lemma \[lem:schillingsstuart\] holds at the kinetic level, since ${\mathbf{E}}$ does not decay to zero as $t\to\infty$.

Numerical simulation results {#sec:numericalTest}
============================

The simulations are performed using a standard Monte Carlo approach [@Caflisch1998] to solve the kinetic equation . More precisely, we use a simple modification of the mean-field interaction algorithm given in [@AlbiPareschi2013], which is a direct simulation Monte Carlo method based on the mean-field microscopic dynamics described by  giving rise to the corresponding kinetic equation . For further details on the method we refer to [@BabovskyNeunzert1986; @FornasierEtAl2011; @Lemou1998; @MouhotPareschi2006; @PareschiRusso1999; @PareschiToscaniBOOK]. The algorithmic details are as follows. In each example we consider a sampling of $J$ controls $\{{\mathbf{u}}^j\}_{j=1}^J$ from the prior or initial distribution $f_0({\mathbf{u}})$. Then, each sample is updated according to the mean-field microscopic rule  by selecting $M \leq J$ interacting particles uniformly distributed without repetition. The parameter $\epsilon$ in  is closely related to the concept of a time step and is taken such that stability of the discrete method is guaranteed [@AlbiPareschi2013]. In particular, for the kinetic model  we require that $$\label{eq:stabilitycondition} \epsilon \leq \frac{1}{\max_{i}\left|\Re(\mu_i)\right|}$$ where the $\mu_i$'s are the eigenvalues of ${\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G$, cf. Remark \[rem:probabilisticMicro\]. Since ${\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G$ has a large spectral radius at the initial time which decreases over time, we choose an adaptive $\epsilon$ by recomputing it at each iteration. As already pointed out in Section \[sec:stabilization\], the microscopic interactions  are closely related to a time discretization of the gradient descent equation . However, a deterministic numerical method for  requires $O(J^2)$ operations due to the direct evaluation of the sum for $J$ ensembles.
The numerical discretization of the kinetic equation by means of a Monte Carlo approach allows the microscopic dynamics to be computed at a cost directly proportional to the number $J$ of ensembles. Information on the simulation results is presented in terms of the following norms: $$\label{eq:normErr} \begin{aligned} v &= \frac{1}{J} \sum_{j=1}^J \| {\mathbf{v}}^j \|_2^2, \quad r = \frac{1}{J} \sum_{j=1}^J \| {\mathbf{r}}^j \|_2^2\\ V &= \frac{1}{J} \sum_{j=1}^J | {\mathbf{V}}_{jj} |^2, \quad R = \frac{1}{J} \sum_{j=1}^J | {\mathbf{R}}_{jj} |^2 \end{aligned}$$ which are computed at each iteration and where $$\label{eq:singleErr} \begin{aligned} {\mathbf{v}}^j &= {\mathbf{u}}^j - \overline{{\mathbf{u}}}, \quad {\mathbf{r}}^j = {\mathbf{u}}^j - {\mathbf{u}}^\dagger \\ {\mathbf{V}}_{ij} &= \left\langle G {\mathbf{v}}^i, G {\mathbf{v}}^j \right\rangle_{{\boldsymbol{\Gamma}}^{-1}}, \quad {\mathbf{R}}_{ij} = \left\langle G {\mathbf{r}}^i, G {\mathbf{r}}^j \right\rangle_{{\boldsymbol{\Gamma}}^{-1}}. \end{aligned}$$ The quantity ${\mathbf{v}}^j$ measures the deviation of the $j$-th sample from the mean $\overline{{\mathbf{u}}}$ of the distribution approximated by the samples, and ${\mathbf{r}}^j$ measures the deviation of the $j$-th sample from the true solution ${\mathbf{u}}^\dagger$. The quantities ${\mathbf{V}}$ and ${\mathbf{R}}$ give information on the deviation of ${\mathbf{v}}^j$ and ${\mathbf{r}}^j$ under application of the model $G$. Another important quantity is the misfit, which measures the quality of the solution at each iteration. The misfit for the $j$-th sample is defined as $$\label{eq:singleMisfit} {\boldsymbol{\vartheta}}^j = G {\mathbf{r}}^j - {\boldsymbol{\eta}}.$$ By using  we finally look at $$\label{eq:misfit} \vartheta = \frac{1}{J} \sum_{j=1}^J \| {\boldsymbol{\vartheta}}^j \|_{{\boldsymbol{\Gamma^{-1}}}}^2.$$ Driving this quantity to zero leads to over-fitting of the solution. For this reason, it is usually advisable to introduce a stopping criterion which avoids this effect. In the following we consider the discrepancy principle, which stops the simulation as soon as the condition $\vartheta \leq \| {\boldsymbol{\eta}} \|_2^2$ is satisfied. The algorithm employed in the experiments is summarized by the steps of Algorithm \[alg:meanfield\]:

1.  Given $J$ samples ${\mathbf{u}}^{j,0}$, with $j=1,\dots,J$, computed from the initial distribution $f_0({\mathbf{u}})$, and $M\leq J$, set $n=0$, $t^0=0$ and a final time $T_\text{fin}$.

2.  Compute the misfit $\vartheta$ as in ; stop if $\vartheta \leq \| {\boldsymbol{\eta}} \|_2^2$ or $t^n \geq T_\text{fin}$.

3.  Compute $\epsilon = \frac{1}{\max_{i}\left|\Re(\mu_i)\right|}$, where the $\mu_i$'s are the eigenvalues of ${\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G$; if $t^n+\epsilon > T_\text{fin}$, set $\epsilon = T_\text{fin}-t^n$.

4.  Sample $M$ data $j_1,\dots,j_M$ uniformly without repetition among all data.

5.  Compute $${\mathbf{m}}_M^n = \frac{1}{M} \sum_{k=1}^M {\mathbf{u}}^{j_k,n}, \quad {\mathbf{E}}_M^n = \frac{1}{M} \sum_{k=1}^M {\mathbf{u}}^{j_k,n} \otimes {\mathbf{u}}^{j_k,n}.$$

6.  Sample ${\boldsymbol{\xi}}$ from a zero mean distribution $\theta({\boldsymbol{\xi}})$ having given covariance matrix ${\boldsymbol{\Lambda}}$.

7.  Compute the data change $${\mathbf{u}}^{j,n+1} = {\mathbf{u}}^{j,n} - \epsilon ({\mathbf{E}}_M^n - {\mathbf{m}}_M^n \otimes {\mathbf{m}}_M^n) \nabla_{\mathbf{u}} \Phi({\mathbf{u}}^{j,n},{\mathbf{y}}) + \sqrt{\epsilon} \, {\boldsymbol{\xi}}.$$

8.  Set $n=n+1$ and $t^{n+1}=t^n+\epsilon$, and return to step 2.
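A compact Python sketch of these steps, restricted to the linear model and with ${\mathbf{K}}={\mathbf{I}}$, could read as follows; the function name, the threshold argument `eta_norm2`, the iteration cap and the vectorized update are illustrative choices, not the authors' code.

```python
import numpy as np

def mean_field_enkf(U0, G_mat, Gamma, y, eta_norm2, Lambda, M, T_fin=1.0, max_iter=10_000, rng=None):
    """Monte Carlo mean-field update: subsample M members, adaptive eps, discrepancy-principle stop."""
    rng = rng or np.random.default_rng()
    U = U0.copy()                                        # ensemble, shape (J, d)
    J, d = U.shape
    t, n = 0.0, 0
    while t < T_fin and n < max_iter:
        residual = y - U @ G_mat.T                       # rows: y - G u^j, i.e. -theta^j = -(G r^j - eta)
        misfit = np.mean(np.einsum('jk,kl,jl->j', residual, Gamma, residual))
        if misfit <= eta_norm2:                          # discrepancy principle
            break
        idx = rng.choice(J, size=M, replace=False)       # M interacting samples, uniform without repetition
        m = U[idx].mean(axis=0)
        E = U[idx].T @ U[idx] / M
        C = E - np.outer(m, m)
        eps = 1.0 / max(np.abs(np.linalg.eigvals(C @ G_mat.T @ Gamma @ G_mat).real).max(), 1e-12)
        eps = min(eps, T_fin - t)                        # adaptive step, capped at the final time
        grad = -residual @ Gamma @ G_mat                 # rows: grad Phi(u^j, y)
        xi = rng.multivariate_normal(np.zeros(d), Lambda, size=J)
        U = U - eps * grad @ C.T + np.sqrt(eps) * xi
        t, n = t + eps, n + 1
    return U
```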
Linear elliptic problem {#sec:elliptic}
-----------------------

A test proposed e.g. in [@iglesiaslawstuart2013; @schillingsstuart2017] is the ill-posed inverse problem of finding the force function of an elliptic equation in one spatial dimension, assuming that noisy observations of the solution to the problem are available. This problem is widely used since it is explicitly solvable due to the linearity of the model. The problem is prescribed by the following one-dimensional elliptic equation $$-\frac{\mathrm{d}^2}{\mathrm{d}x^2} p(x) + p(x) = u(x), \quad x\in[0,\pi]$$ endowed with boundary conditions $p(0) = p(\pi) = 0$. The linear model is thus defined as $$A = \left( -\frac{\mathrm{d}^2}{\mathrm{d}x^2} + 1 \right)^{-1}$$ which can be discretized, for instance, by a finite difference method, or evaluated through the explicit solution $$p(x) = A \, u(x) = \exp(x) \left( C_1 - \frac12 \int_0^x \exp(-y) u(y) \mathrm{d}y \right) + \exp(-x) \left( C_2 + \frac12 \int_0^x \exp(y) u(y) \mathrm{d}y \right)$$ where the constants $C_1$ and $C_2$ can be uniquely determined by the boundary conditions. We assign a continuous control $u(x)$ and then introduce a uniform mesh consisting of $d=K=2^8$ equidistant points in the interval $[0,\pi]$. Let ${\mathbf{u}}^\dagger\in{\mathbb{R}}^d$ be the vector of the evaluations of the control function $u(x)$ on the mesh. We simulate noisy observations ${\mathbf{y}}\in{\mathbb{R}}^K$ as $${\mathbf{y}} = {\mathbf{p}} + {\boldsymbol{\eta}} = G {\mathbf{u}}^\dagger + {\boldsymbol{\eta}},$$ where $G$ is the finite difference discretization of the continuous operator $A$. For simplicity we assume that ${\boldsymbol{\eta}}$ is Gaussian white noise, more precisely ${\boldsymbol{\eta}}\sim \mathcal{N}(0,\gamma^2 {\mathbf{I}})$ with $\gamma \in {\mathbb{R}}^+$ and ${\mathbf{I}} \in {\mathbb{R}}^{d\times d}$ the identity matrix. We are interested in recovering the control ${\mathbf{u}}^\dagger \in {\mathbb{R}}^d$ from the noisy observations ${\mathbf{y}}\in{\mathbb{R}}^K$ only. The initial ensemble of particles is sampled from an initial distribution $f_0({\mathbf{u}}) = \mathcal{N}(0,{\mathbf{C}}_0)$. The choice of $f_0({\mathbf{u}})$ is related to the choice of the prior distribution in Bayesian problems. In this case $f_0({\mathbf{u}})$ represents a Brownian bridge as in [@schillingsstuart2017].
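Before turning to the test cases, we sketch how this discrete setup could look in Python; the finite difference matrix, the approximate Brownian bridge sampling and the parameter values below are illustrative assumptions consistent with the description above, not the authors' code.

```python
import numpy as np

d = K = 2**8
x = np.linspace(0.0, np.pi, d)
h = x[1] - x[0]

# finite difference discretization of A = (-d^2/dx^2 + 1)^(-1), homogeneous Dirichlet conditions assumed
L_op = (np.diag(2.0 * np.ones(d)) - np.diag(np.ones(d - 1), 1) - np.diag(np.ones(d - 1), -1)) / h**2
G_mat = np.linalg.inv(L_op + np.eye(d))

# synthetic data y = G u^dagger + eta, with eta ~ N(0, gamma^2 I)
gamma = 0.01
rng = np.random.default_rng(0)
u_true = np.ones(d)                                  # control of the first test case below, u(x) = 1
y = G_mat @ u_true + gamma * rng.standard_normal(K)

# initial ensemble: approximate Brownian bridge paths on [0, pi]
J = 1000
W = np.cumsum(np.sqrt(h) * rng.standard_normal((J, d)), axis=1)
U0 = W - np.outer(W[:, -1], x / np.pi)               # pins each path to zero at x = pi
```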
#### Test case 1.

![Elliptic problem - Test case 1 with $\gamma=0.01$. Top row: plots of the residual $r$, the projected residual $R$ and the misfit $\vartheta$ for $M=250,500,1000$. Bottom row: plots of the noisy data, the reconstruction of $p(x)$ and the reconstruction of the control $u(x)$ at final iteration for $M=250,500,1000$.\[fig:test1gamma001\]](ResidualsTest1gamma001 "fig:"){width="49.00000%"} ![](MisfitTest1gamma001 "fig:"){width="49.00000%"}\
![](NoisyDataTest1gamma001 "fig:"){width="32.00000%"} ![](ReconstructionSolutionTest1gamma001 "fig:"){width="32.00000%"} ![](ReconstructionControlTest1gamma001 "fig:"){width="32.00000%"}

![Elliptic problem - Test case 1 with $\gamma=0.1$. Top row: plots of the residual $r$, the projected residual $R$ and the misfit $\vartheta$ for $M=250,500,1000$. Bottom row: plots of the noisy data, the reconstruction of $p(x)$ and the reconstruction of the control $u(x)$ at final iteration for $M=250,500,1000$.\[fig:test1gamma01\]](ResidualsTest1gamma01 "fig:"){width="49.00000%"} ![](MisfitTest1gamma01 "fig:"){width="49.00000%"}\
![](NoisyDataTest1gamma01 "fig:"){width="32.00000%"} ![](ReconstructionSolutionTest1gamma01 "fig:"){width="32.00000%"} ![](ReconstructionControlTest1gamma01 "fig:"){width="32.00000%"}

![Elliptic problem - Test case 1. Left: spectrum of ${\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G$ for the initial data with $\gamma=0.01$ and $\gamma=0.1$. Right: adaptive $\epsilon$ and spectral radius of ${\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G$ over iterations with $\gamma=0.01$ and $\gamma=0.1$.\[fig:test1spectral\]](SpectrumTest1 "fig:"){width="49.00000%"} ![](EpsilonRhoTest1 "fig:"){width="49.00000%"}

Let us consider $u(x) = 1$, $\forall\,x\in[0,\pi]$. We solve the inverse problem by the proposed method for different values $M$ of the interacting samples. We observe that taking $M < J$ does not strongly influence the results of the simulation, while it allows for a computational gain. We consider two values of the noise level, $\gamma=0.01$, see Figure \[fig:test1gamma001\], and $\gamma=0.1$, see Figure \[fig:test1gamma01\]. In both figures, the top panels show the residual (left) and misfit (right) decrease over the number of iterations. Due to the discrepancy principle, the simulation is automatically stopped when the misfit reaches the noise level $\| {\boldsymbol{\eta}} \|_2^2$. The final residual values are obviously larger in the case of $\gamma=0.1$ due to the larger noise level present in the initial observations. The bottom panels show, from left to right, the initial noisy data, which are spread around the exact solution $p(x)$ of the problem, the reconstruction of $p(x)$ and the reconstruction of the control $u(x)$ obtained by using the mean of the samples as estimator of the solution. We observe that the different values of $M$ do not give significantly different results. In the left panel of Figure \[fig:test1spectral\] we show the spectrum of ${\boldsymbol{{\mathcal{C}}}}(t) G^T {\boldsymbol{\Gamma}} G$ at the initial time for $\gamma=0.01$ and $\gamma=0.1$. We observe that the ratio between the largest and smallest eigenvalues is very large, reflecting the ill-posedness of the problem and the need of using a small $\epsilon$ in  in order to guarantee stability. However, we consider an adaptive $\epsilon$, since the spectral radius is observed to decrease quickly over the iterations; see the red lines in the right panel of Figure \[fig:test1spectral\], where the blue lines show the corresponding values of $\epsilon$ which preserve stability.

#### Test case 2.

![Elliptic problem - Test case 2 with $\gamma=0.01$. Top row: plots of the residual $r$, the projected residual $R$ and the misfit $\vartheta$ for $J=25,25\cdot2^9$. Middle row: plots of the noisy data and of the reconstruction of $p(x)$ at final iteration for $J=25,25\cdot2^9$. Bottom row: plots of the reconstruction of the control $u(x)$ at final iteration for $J=25,25\cdot2^9$ and behavior of the relative error $\frac{\|\overline{{\mathbf{u}}}-{\mathbf{u}}\|_2^2}{\|\overline{{\mathbf{u}}}\|_2^2}$.\[fig:test2\]](ResidualsTest2 "fig:"){width="49.00000%"} ![](MisfitTest2 "fig:"){width="49.00000%"}\
![](NoisyDataTest2 "fig:"){width="49.00000%"} ![](ReconstructionSolutionTest2 "fig:"){width="49.00000%"}\
![](ReconstructionControlTest2 "fig:"){width="49.00000%"} ![](ErrTest2 "fig:"){width="49.00000%"}
Let us consider $u(x) = \sin(8x)$, $\forall\,x\in[0,\pi]$, and a fixed value of the noise level $\gamma=0.01$. We show that the method performs well also in cases where the control function has a high-frequency profile. In Figure \[fig:test2\] we consider the results obtained with $J=25$ and $J=25\cdot2^9=12800$ samples from the initial distribution $f_0({\mathbf{u}})$. In order to measure the quality of the solution to the inverse problem, we again compare the residual $r$ and the projected residual $R$ (top left plot) and the misfit $\vartheta$ (top right plot) for the two values of $J$. The misfit reaches the noise level in a very small number of iterations for both $J$'s, but the residual $r$ for $J=12800$ reaches a smaller value than the residual computed with $J=25$. This result is also visible in the middle right plot and in the bottom left plot, where we compare the reconstruction of $p(x)$ and of the exact control $u(x)$ at the final iteration with the two values of $J$, using the mean as estimator of the solution. It is very clear that the case with $J=12800$ provides a better resolution. Finally, in the bottom right plot we show the relative error $\frac{\|\overline{{\mathbf{u}}}-{\mathbf{u}}\|_2^2}{\|\overline{{\mathbf{u}}}\|_2^2}$ as a function of the increasing value of $J$, which exhibits a decreasing behavior.

Nonlinear elliptic problem {#sec:nonlinear}
--------------------------
![Nonlinear problem. Top row: plots of the density estimation of the initial samples (left) and position of the samples at final iteration (right). Middle row: marginals of $u_1$ (left) and $u_2$ (right) as relative frequency plots. Bottom row: residual errors $r$ and $R$ (left) and misfit error (right).\[fig:nonlinearExample\]](InitialSamplesNonlinear "fig:"){width="49.00000%"} ![](FinalSamplesNonlinear "fig:"){width="49.00000%"}\
![](Marginal1Nonlinear "fig:"){width="49.00000%"} ![](Marginal2Nonlinear "fig:"){width="49.00000%"}\
![](ResidualsNonlinear "fig:"){width="49.00000%"} ![](MisfitNonlinear "fig:"){width="49.00000%"}

The second numerical experiment is a slightly modified version of an example proposed in [@ernstetal2015]. We consider a one-dimensional elliptic boundary value problem given by $$-\frac{\mathrm{d}}{\mathrm{d}x} \left( \exp(u_1) \frac{\mathrm{d}}{\mathrm{d}x} p(x) \right) = f(x), \quad x\in[0,1]$$ with boundary conditions $p(0)=p_0$ and $p(1)=u_2$, where ${\mathbf{u}}=(u_1,u_2)$ is the unknown control. The exact solution of this problem is given by $$p(x) = p_0 + (u_2-p_0) x + \exp(-u_1) \left( -S_x(F) + S_1(F)x \right)$$ where $S_x(g)=\int_0^x g(y)\mathrm{d}y$ and $F(x)=S_x(f)=\int_0^x f(y)\mathrm{d}y$. In the following example we consider $f(x)=1$, $\forall\,x\in[0,1]$, and $p_0=0$, so that the explicit solution is given by $$p(x) = u_2 x + \exp(-u_1) \left( -\frac{x^2}{2} + \frac{x}{2} \right).$$ We assume noisy measurements of $p$ to be available at the points $x_1 = \frac14$ and $x_2=\frac34$, with value ${\mathbf{y}} = (27.5,79.7)$. The goal is to seek the control ${\mathbf{u}}$ based on the knowledge of ${\mathbf{y}}$, of the prior $f_0({\mathbf{u}})$ and of the noise model. More precisely, we consider a prior given by ${\mathbf{u}} \sim \mathcal{N}(0,1) \otimes \mathcal{U}(90,110)$ and Gaussian white noise ${\boldsymbol{\eta}} \sim \mathcal{N}({\mathbf{0}},\gamma^2 {\mathbf{I}})$, with $\gamma = 0.1$ and ${\mathbf{I}}\in{\mathbb{R}}^{2\times 2}$ being the identity matrix.
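A minimal sketch of this forward map (our own, based on the explicit solution with $f=1$ and $p_0=0$) could be:

```python
import numpy as np

def forward_map(u, x_obs=(0.25, 0.75)):
    """Nonlinear model G(u) = (p(x_1), p(x_2)) from the explicit solution with f = 1 and p_0 = 0."""
    u1, u2 = u
    x = np.asarray(x_obs)
    return u2 * x + np.exp(-u1) * (x - x**2) / 2.0

# for reference, the posterior mean reported below maps close to the data:
# forward_map((-2.65, 104.5)) is approximately (27.45, 79.70), to be compared with y = (27.5, 79.7)
```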
Thus, as in Section \[sec:elliptic\], noisy observations are simulated by $${\mathbf{y}} = {\mathbf{p}} + {\boldsymbol{\eta}} = {\mathcal{G}}({\mathbf{u}}^\dagger) + {\boldsymbol{\eta}}$$ where the forward model is defined as $$\begin{aligned} {\mathcal{G}}\colon {\mathbf{u}} \in {\mathbb{R}}^2 \mapsto {\mathbf{p}}=(p(x_1),p(x_2))\in{\mathbb{R}}^2.\end{aligned}$$ The control has dimension $d=2$, which makes it possible to compare the solution to the inverse problem provided by the kinetic method with the one provided by Bayes' formula. In particular, it is possible to analyze the approximation of the mean estimator and of the posterior distribution computed by the kinetic equation . However, observe that  is derived by assuming a linear forward operator $G$, while in this example the model ${\mathcal{G}}$ is nonlinear. Thus, inspired by , we consider a small modification of the microscopic interaction rule  with ${\mathbf{K}} = {\mathbf{I}}$ the identity matrix, given by $${\mathbf{u}} = {\mathbf{u}}_* + \epsilon \, {\mathbf{C}}({\mathbf{U}}_*) {\boldsymbol{\Gamma}} ({\mathbf{y}}-{\mathcal{G}}({\mathbf{u}}_*)) + \sqrt{\epsilon} \, {\boldsymbol{\xi}}$$ in order to perform the simulations for the nonlinear model. For this example, the true posterior mean is computed in [@ernstetal2015] by means of Bayes' formula and is given by $(-2.65,104.5)$. In Figure \[fig:nonlinearExample\] we show the results provided by the kinetic model. The top row shows the density estimate of the $J=10^5$ samples drawn from the initial distribution (left plot) and the positions of the samples at the final iteration (right plot). Again, we use the discrepancy principle as stopping criterion. The middle row shows the marginals of $u_1$ and $u_2$ as relative frequency plots. The solution computed as the mean estimator of the kinetic distribution, ${\mathbf{u}} = (-2.56, 104.77)$, is very close to the true posterior mean, as confirmed also by the plot of the residuals in the bottom left panel of Figure \[fig:nonlinearExample\]. The application of the original EnKF method provides ${\mathbf{u}} = (-2.92, 105.14)$, which is less accurate, see also [@ernstetal2015].

Conclusions {#sec:conclusion}
===========

In this paper we have introduced a kinetic model for the solution to inverse problems. The kinetic equation has been derived as the mean-field limit of the Ensemble Kalman Filter method for an infinitely large ensemble. The introduction of a continuous equation describing the evolution of the probability distribution of the unknown control guarantees several advantages: information on statistical quantities of the solution, implicit regularization modeled by the initial distribution, and analysis of the properties of the solution. The derivation of the kinetic equation also has the advantage of providing a different interpretation of the method and a possibly different scheme based on binary collisions, with a consequent computational gain in the numerical simulations. This leads both to a different scheme and to the modified scheme introduced in the paper. A linear stability analysis for the simple setting of a one-dimensional control has shown that the modified method has only stable solutions.
Numerical simulations have been performed to investigate the performance of the kinetic equation in providing solutions to inverse problems.

Acknowledgments {#acknowledgments .unnumbered}
===============

The authors would like to thank the German Research Foundation DFG for the kind support within the Cluster of Excellence Internet of Production (IoP).\ The authors also acknowledge support by DFG HE5386/14,15.\ Giuseppe Visconti is a member of the “National Group for Scientific Computation (GNCS-INDAM)”.
---
author:
- Viet Hung Nguyen and Fabio Massacci
title: 'A Systematically Empirical Evaluation of Vulnerability Discovery Models: a Study on Browsers’ Vulnerabilities'
---

Introduction {#sec:intro}
============

Vulnerability discovery models (VDMs) operate on known vulnerability data to estimate the total number of vulnerabilities that will be reported after the release of a software product. Successful models can be useful instruments for both software vendors and users to understand security trends, plan patching schedules, decide on updates, and forecast security investments in general. A VDM is a parametric mathematical function counting the cumulative number of vulnerabilities of a software product at an arbitrary time $t$. For example, if $\Omega(t)$ is the cumulative number of vulnerabilities at time $t$, the function of the linear model (LN) is $\Omega(t) = At + B$, where $A, B$ are the two parameters of LN, calculated from the historical vulnerability data. sketches a taxonomy of the major VDMs. It includes Anderson’s Thermodynamic (AT) model [@ANDE-02-OSS], Rescorla’s Quadratic (RQ) and Rescorla’s Exponential (RE) models [@RESC-05-SP], Alhazmi & Malaiya’s Logistic (AML) model [@ALHA-MALA-05-ISSRE], AML for Multi-version [@KIM-etal-07-HASE], the Weibull model (JW) [@JOH-etal-08-ISSRE], and the Folded model (YF) [@YOUNIS-etal-11-SAM].

The *goodness-of-fit* of these models, i.e., how well a model fits the numbers of discovered vulnerabilities, is normally evaluated in each paper on a specific vulnerability data set, except for AML, which has been validated for different types of applications (operating systems [@ALHA-etal-05-DAS; @ALHA-MALA-08-TR], browsers [@WOO-ALHAZMI-MALAIYA-06-SEA], web servers [@ALHA-MALA-06-ISSRE; @WOO-etal-11-CS]). Yet, no independent validation by somebody other than the authors exists. Furthermore, a number of issues might bias the results of previous studies.

- Firstly, many studies did not clearly define what a vulnerability is. Indeed, different definitions of vulnerability might lead to different counted numbers of vulnerabilities and, consequently, different conclusions.
- Secondly, all versions of a software product were considered as a single “entity". Even though versions share a large amount of code, they still differ by a non-negligible amount of code.
- Thirdly, the goodness-of-fit of the models was often evaluated at a single point in time (when the papers were written) and not used as a predictor to forecast data for, e.g., the next quarter.

A detailed discussion of these issues is given later in section .

![Taxonomy of Vulnerability Discovery Models.[]{data-label="fig:VDM:taxonomy"}](figures/vdm-taxonomy.pdf){width="\columnwidth" height="11\baselineskip"}

In this paper we want to address these shortcomings and derive a methodology that can answer two basic questions concerning VDMs: *“Are VDMs adequate to capture the discovery process of vulnerabilities?"* and *“Which VDM is the best?"*.

Contributions of This Paper
---------------------------

The contributions of this work are detailed below:

- We proposed an experimental methodology to assess the performance of a VDM based on its *goodness-of-fit quality* and *predictability*.
- We demonstrated the methodology by conducting an experiment analyzing eight VDMs, namely AML, AT, JW, RQ, RE, LP, LN, and YF, on 30 major releases of four popular browsers: Internet Explorer (IE), Firefox (FF), Chrome, and Safari.
- We presented empirical evidence for the adequacy of the VDMs in terms of quality and predictability.
The AT and RQ models are not adequate; whereas all other models may be adequate when software is young (12 months). However only s-shape models (AML, JW, YF) should be considered when software is middle age (36 months) or older. - We compared these VDMs (except AT and RQ) in different usage scenarios in terms of predictability and quality. The simplest model, LN, is more appropriate than other complex models when software is young and the prediction time span is not too long (12 months or less). Otherwise, the AML model is superior. These results are summarized in . The rest of the paper is organized as follows. Section presents terminology in our work. The research questions are presented in section and the proposed methodology is described in section . Next, we apply the methodology to analyze the empirical performance of VDMs in all data sets in section . Then in section , we discuss the threats to the validity of our work. Finally, we review related work in section , and conclude in section . [cX]{} Model & Performance\ AT, RQ & should be rejected due to low quality.\ LN & is *the best model* for first 12 months$^{(*)}$.\ AML & is *the best model* for $13^{th}$ to $36^{th}$ month $^{(*)}$.\ RE, LP & may be adequate for first 12 months $^{(**)}$.\ JW, YF & may be adequate for $13^{th}$ to $36^{th}$ month$^{(*)}$.\ Terminology {#sec:terms} =========== - [*A vulnerability* ]{} is “an instance of a \[human\] mistake in the specification, development, or configuration of software such that its execution can \[implicitly or explicitly\] violate the security policy"[@KRSU-98-PHD], later revised by [@OZMEN-07-QoP]. The definition covers all aspects of vulnerabilities discussed in [@ARBA-etal-00-IEEE; @AVIZ-etal-04-TDSC; @DOWD-etal-07; @SCHNEIDER-91-NAP], see also [@OZMEN-07-QoP] for a discussion. - [*A data set*]{} is a collection of vulnerability data extracted from one or more data sources. - [*A release*]{} refers to a particular version of an application Firefox . - [*A horizon*]{} is a specific time interval sample. It is measured by the number of months since the released date, from month $1$ to $12$ months after the release. - [*An observed vulnerability sample*]{} (or observed sample, for short) is a time series of monthly cumulative vulnerabilities of a major release since the first month after release to a particular horizon. - [*An evaluated sample*]{} is a tuple of an observed sample, a VDM model fitted to this sample (or another observed sample), and the goodness-of-fit of this model to this sample. Research Questions and Methodology Overview {#sec:questions} =========================================== In this work, we address the following two questions: \[rq:applicability\] \[rq:comparison\] We propose a methodology to answer these questions. The proposed methodology identifies data collection steps and mathematical analyses to empirically assess different performance aspects of a VDM . The methodology is summarized in . [&gt;rX]{}\ desc. & Identify the vulnerability data sources, and the way to count vulnerabilities. If possible, different vulnerability sources should be used to select the most robust one. Observed samples then can be extracted from collected vulnerability data.\ input & Vulnerability data sources.\ output & Set of observed samples.\ criteria & *Collection of observed samples* - Vulnerabilities should be counted for individual releases (possibly by different sources). - Each observable sample should have at least $5$ data points. \ \ desc. 
& Estimate the parameters of the VDM formula to fit observed samples as much as possible. The  goodness-of-fit test is employed to assess the goodness-of-fit of the fitted model based on criteria \[cr:gof\].\ input & Set of observed samples.\ output & Set of evaluated samples.\ criteria & - **Good Fit**: $\textit{p-value} \in [0.80, 1.0]$, a good evidence to accept the model. We have more than $80\%$ chances of generating the observed sample from the fitted model. - **Not Fit**: $\textit{p-value} \in [0, 0.05)$, a strong evidence to reject the model. It means less than $5\%$ chances that the fitted model would generate the observed sample. - **Inconclusive Fit**: $\textit{p-value} \in [0.05, 0.80)$, there is not enough evidence to neither reject nor accept the fitted model. \ \ desc. & Analyze the goodness-of-fit quality of the fitted model by using the temporal quality metric which is the weighted ratio between fitted evaluated samples (both  and ) and total evaluated samples.\ input & Set of evaluated samples.\ output & Temporal quality metric.\ criteria & A VDM is rejected if it has a temporal quality lower than 0.5 even by counting samples as positive (with weight 0.5). Different periods of software lifetime could be considered: - $12$ months (young software) - $36$ months (middle-age software) - $72$ months (old software) \ \ desc. & Analyze the predictability of the fitted model by using the predictability metric. Depending on different usage scenarios, we have different observation periods and time spans that the fitted model supposes to be able to predict. This is described in \[cr:timespan\].\ input & Set of evaluated samples.\ output & Predictability metric.\ criteria & *The observation period and prediction time spans based on some possible usage scenarios.*\ \ desc. & Compare the quality of the VDM with other VDMs by comparing their temporal quality and predictability metrics.\ input & Temporal quality and predictability measurements of models in comparison.\ output & Ranks of models.\ criteria & A VDM is better than a VDM if: - either the predictability of is significantly greater than that of , - or there is no significant difference between the predictability of and , but the temporal quality of is significantly greater than that of . The temporal quality and predictability should have their horizons and prediction time spans in accordance to criteria \[cr:quality\] and \[cr:timespan\]. Furthermore, a controlling procedure for multiple comparisons should be considered.\ In order to satisfactorily answer the questions above, we must address some biases that potentially affected the validity of previous studies. The *vulnerability definition* bias may affect the vulnerability data collection process. Indeed all previous studies reported their data sources, but none clearly mentioned what a vulnerability is, and how to count it. A vulnerability could be either an advisory reported by a software vendor such as Mozilla Foundation Security Advisory – MFSA, or a security bug causing software to be exploited (reported in Mozilla Bugzilla), or an entry in third-party vulnerability databases (National Vulnerability Database – NVD). Some entries may be classified differently by different entities: a third-party database might report vulnerabilities, but the security bulletin of vendors may not classify them as such. Consequently, the counted number of vulnerabilities could be widely different depending on the different definitions. exemplifies this issue. 
A security flaw concerning the buffer overflow and use-after-free of Firefox is reported in three databases with different number of entries: one MFSA entry (<span style="font-variant:small-caps;">mfsa-2012-40</span>), three Bugzilla entries (<span style="font-variant:small-caps;">744541, 747688</span>, and 750066), and three NVD entries (<span style="font-variant:small-caps;">cve-2012-1947, cve-2012-1940</span>, and <span style="font-variant:small-caps;">cve-2012-1941</span>). The cross references among these entries are illustrated as directional connections. This figure raises a question “how many vulnerabilities should we count in this case?". ![The problem of counting vulnerabilities in Firefox.[]{data-label="fig:issue:counting"}](figures/issue-counting.pdf){width="0.7\columnwidth"} The *multi-version software* bias affects the count of vulnerability across releases. Some studies ([@WOO-ALHAZMI-MALAIYA-06-SEA; @WOO-etal-11-CS]) considered all versions of software as a single entity, and counted vulnerabilities for this entity. Our previous study [@MASS-etal-11-ESSOS] has shown that each Firefox version has its own code base, which may differ by $30\%$ or more from the immediately preceding one. Therefore, as time goes by, we can no longer claim that we are counting the vulnerabilities of the same application. \[ex:bias:multiversions\] visualizes this problem in a plot of the cumulative vulnerabilities of Firefox , Firefox , and Firefox as a single entity. Clearly, the function of the “global" version should be different from the functions of the individual versions. The *overfitting* bias, as name suggested, concerns the ability of a VDM to explain history in hindsight. Previous studies took a snapshot of vulnerability data, and fitted this entire snapshot to a VDM. This made a brittle claim of fitness: the claim was only valid at the time vulnerabilities were collected. It explained history but did not tell us anything about the future. Meanwhile, we are interested in the ability of a VDM to be a good “law of nature" that is valid across releases and time and to some predict extent the future. ![Global vs individual vulnerability trends for Firefox.[]{data-label="fig:firefox:firework"}](figures/ChromeFireworks.pdf){width="0.7\columnwidth"} Methodology Details {#sec:expdesign} =================== This section discusses the details of our methodology to evaluate the performance of a VDM. \[step:data\]: Acquire the Vulnerability Data {#sec:step:data} --------------------------------------------- The acquisition of vulnerability data consists of two sub steps: *Data set collection*, and *Data sample extraction*. During *Data set collection*, we identify the data sources to be used for the study (as they may not equally fit to the task). We can classify them as follows: - *Third-party advisory* (): is a vulnerability database maintained by a third-party organization (not software vendors) , Open Source Vulnerability Database (). - *Vendor advisory* (): is a vulnerability database maintained by a software vendor, MFSA, Microsoft Security Bulletin. Vulnerability information in this database could be announced from third-party, but it is always validated before being announced as an advisory. - *Vendor bug tracker* (): is a bug-tracking database, usually maintained by vendors. For our purposes, the following features of a vulnerability are interesting and must be provided: - *Identifier* (): is the identifier of a vulnerability within a data source. 
- *Disclosure date* (): refers to the date when a vulnerability is reported to the database[^1]. - *Vulnerable Releases* (): is a list of releases affected by a vulnerability. - *References* (): is a list of reference links to other data sources. Not every feature is available from all data sources. To obtain missing features, we can use and to link across data sources and extract the expected features from secondary data sources. Vulnerabilities of Firefox are reported in three data sources: NVD[^2], MFSA, and Mozilla Bugzilla. Neither MFSA nor Bugzilla provides the *Vulnerable Releases* feature, but NVD does. Each MFSA entry has one or more links to NVD and Bugzilla. Therefore, we could to combine MFSA and NVD, Bugzilla and NVD to obtain the missing data. We address the *vulnerability definition* bias by taking into account different definitions of vulnerability. Particularly, we collected different vulnerability data sets with respect to these definitions. We also address the *multi-version* issue by collecting vulnerability data for individual releases. shows different data sets that we have considered in our study. They are combinations of three types of data sources : third-party (NVD as a representative), vendor advisory, and vendor bug tracker. The English descriptions of these data sets for a *release* $r$ are as follows: - : a set of NVD entries which claim $r$ is vulnerable. - : a set of NVD entries which are confirmed by at least a vendor bug report, and claim $r$ is vulnerable. - : a set of NVD entries which are confirmed by at least a vendor advisory, and claim $r$ is vulnerable. Notice that the advisory report might *not* mention $r$, but later releases. - : a set of vendor bug reports confirmed by NVD, and $r$ is claimed vulnerable by NVD. - : a set of bug reports mentioned in an advisory report of a vendor. The advisory report also refers to at least an entry that claims $r$ is vulnerable. For *Data sample extraction*, we extract observed samples from collected data sets. An *observed sample* is a time series of (monthly) cumulative vulnerabilities of a release. It starts from the first month since release to the end month, called *horizon*. A month is an appropriate granularity for sampling because week and day are too short intervals and are subject to random fluctuation. Additionally, this granularity was used in the literature. Let  be the set of analyzed releases and $DS$ be the set of data sets, an observed sample (denoted as $\odp$) is a time series defined as follows: $$\odp = \series{\release, ds, \horizon}$$ where: - $\release \in \Release$ is a release in the evaluation; - $ds \in DS$ is the data set where samples are extracted; - $\horizon \in \Horizon_\release = \left[\horizon^\release_{min}, \horizon^\release_{max}\right]$ is the horizon of the observed sample, in which $\Horizon_\release$ is the *horizon range of release* . In the horizon range of release , the minimum value of horizon $\horizon^\release_{min}$ of  depends on the starting time of the first observed sample of . Here we choose $\horizon^\release_{min}=6$ for all releases so that all observed samples have enough data points for fitting all VDMs. The maximum value of horizon $\horizon^\release_{max}$ depends on how long the data collection period is for each release. IE was released in September, $1997$[^3]. The first  was on $31$ October, $1997$. The first observed sample of IE is a time series of $6$ numbers of cumulative vulnerabilities for the $1^{st},2^{nd},\ldots,6^{th}$ months. 
Since the data collection date is 01 July 2012, IE has been released for 182 months and therefore has 177 observed samples. Hence the maximum value of the horizon ($\horizon^\ds{IE\ver{4.0}}_{max}$) is $182$.

\[step:model-fit\]: Fit a VDM to Observed Samples {#sec:step:modelfit}
-------------------------------------------------

We estimate the parameters of the VDM formula by a regression method so that the VDM curve fits an observed sample as closely as possible. We denote the fitted curve (or fitted model) as: $$\curve{\series{\release, ds,\horizon}} \label{eq:model}$$ where $vdm$ is the VDM being fitted; $\odp=\series{\release, ds, \horizon}$ is an observed sample from which the ’s parameters are estimated. could be written for short as . Fitting the AML model to the NVD data set of Firefox at the $30^{th}$ , the observed sample $\odp=\series{\ds{FF3.0},\ds{NVD}, 30}$, generates the curve: $$\curve[AML]{\series{\ds{FF3.0}, \ds{NVD}, 30}} = \frac{183}{183\cdot0.078\cdot e^{-0.001 \cdot 183 \cdot t} + 1}$$ illustrates the plots of three curves , where $r$ is , and . The X-axis is the number of months since release, and the Y-axis is the cumulative number of vulnerabilities. Circles represent observed vulnerabilities. The solid line indicates the fitted AML curve.

![image](figures/vdm-fit.pdf){width="90.00000%"}

In , the distances of the circles from the curve are used to estimate the goodness-of-fit of the model. The goodness-of-fit is measured by Pearson’s Chi-Square ($\chi^2$) test, which is a common test in the literature. In this test, we measure the  statistic of the curve by using the following formula: $$\chisq = \sum_{t=1}^{\horizon}\frac{(O_t - E_t)^2}{E_t} \label{eq:chisq}$$ where $O_t$ is the observed cumulative number of vulnerabilities at time $t$ (the $t^\textnormal{th}$ value of the observed sample); $E_t$ denotes the expected cumulative number of vulnerabilities, which is the value of the curve at time $t$. The  value is proportional to the differences between the observed and expected values. Hence, the larger the  value, the worse the goodness-of-fit. If the  value is large enough, we can safely reject the model. In other words, the model statistically does not fit the observed data set. The  test requires all expected values to be at least $5$ to ensure the validity of the test [@NIST-StatBook-12 Chap. 1]. If any expected value is less than 5, we need to combine some of the first months, i.e., increase the starting value of $t$ in , until $E_t \ge 5$. The conclusion about whether a VDM curve statistically fits an observed sample relies on the  of the test, which is derived from the  value and the degrees of freedom (the number of months minus one). Semantically, the  is the probability that we falsely reject the *null hypothesis* when it is true (Type I error: false positive). The null hypothesis here is: *“there is no statistical difference between observed and expected values”*, which means that the model fits the observed sample. Therefore, if the  is less than the significance level $\alpha$ of $0.05$, we can reject a VDM because there is less than a $5\%$ chance that this fitted model would generate the observed sample. In contrast, to accept a VDM, we exploit the power of the  test, which is the probability of rejecting the null hypothesis when it is false. Normally, ‘an $80\%$ power is considered desirable’ [@MCKILLUP-BOOK Chap. 8]. Hence we accept a VDM if the  is greater than or equal to $0.80$.
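To make the fitting-and-testing procedure concrete, the sketch below (Python with NumPy/SciPy; the study itself used R's `nls()`, and the observed sample here is synthetic rather than taken from the browser data sets) fits the three-parameter AML curve and computes the $\chi^2$ statistic and its p-value. For simplicity it drops, rather than merges, the early months whose expected value is below 5.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

def aml(t, A, B, C):
    # Alhazmi-Malaiya Logistic model: cumulative vulnerabilities at month t
    return B / (B * C * np.exp(-A * B * t) + 1.0)

# Synthetic observed sample (monthly cumulative counts), for illustration only.
rng = np.random.default_rng(1)
months = np.arange(1, 31)
observed = np.round(aml(months, 0.001, 183.0, 0.078) + rng.normal(0, 3, months.size))

# Fit the model (curve_fit is the SciPy analogue of R's nls()) and evaluate the curve.
params, _ = curve_fit(aml, months, observed, p0=(0.001, 200.0, 0.1), maxfev=10000)
expected = aml(months, *params)

# Pearson chi-square statistic, keeping only months with expected value >= 5
# (a simplification of the month-merging rule described above).
ok = expected >= 5
chisq = np.sum((observed[ok] - expected[ok]) ** 2 / expected[ok])
p_value = chi2.sf(chisq, ok.sum() - 1)
print(params, chisq, p_value)   # Good Fit if p_value >= 0.80, Not Fit if p_value < 0.05
```

On data this close to the model the test typically lands in the Good Fit range; the interesting behaviour in the study comes from real observed samples at many horizons.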
We have more than $80\%$ chances of generating the observed sample from the fitted curve. In all other cases, we should neither accept nor reject the model (inconclusive fit). The criteria \[cr:gof\] in summarizes the way by which we assess the goodness-of-fit of a fitted model based on the  of the  test. In the sequel, we use the term *evaluated sample* to denote the triplet composed by an observed sample, a fitted model, and the  of the test. In , the first plot shows the AML model with a *Good Fit* ($\pvalue = 0.993 > 0.80$), the second plot exhibits the AML model with an *Inconclusive Fit* ($0.05 < \pvalue = 0.417 < 0.80$), and the last one denotes the AML model with a *Not Fit* ($\pvalue=0.0001 < 0.05$). There are also other statistic tests for goodness-of-fit, for instance the Kolmogorov-Smirnov (K-S) test, and the Anderson-Darling (A-D) test. The K-S test is an exact test; it, however, only applies to continuous distributions. An important assumption is that the parameters of the distribution cannot be estimated from the data. Hence, we cannot apply it to perform the goodness-of-fit test for a VDM. The A-D test is a modification of the K-S test that works for some distributions [@NIST-StatBook-12 Chap. 1] (normal, log-normal, exponential, Weibull, extreme value type I, and logistic distribution), but some VDMs violate this assumption. \[step:quality\]: Perform Goodness-of-Fit Quality Analysis {#sec:step:quality} ---------------------------------------------------------- To address the *overfitting* bias, we introduce the *goodness-of-fit quality* (or *quality*, for short) that measures the overall number of *Good Fit*s and *Inconclusive Fit*s among different samples. In contrast, previous studies considered only one observed sample which is the one with the largest horizon in their experiment. Let $\DP = \Set{\series{\release, ds, \horizon}|\release \in R \wedge ds \in DS \wedge \horizon \in \Horizon_\release}$ be the set of observed samples, the *overall quality* of a model  is defined as the weighted ratio of the number of *Good Fit* and *Inconclusive Fit* evaluated samples over the total ones, as shown bellow: $$Q_\omega = \frac{|\GAP| + \omega \cdot |\IAP|}{|\AP|} \label{eq:quality:global}$$ where: - $\AP = \Set{\Seq{\odp, \curve{\odp}, p}|\odp \in \DP}$ is the set of evaluated samples generated by fitting  to observed samples; - $\GAP = \Set{\Seq{\odp, \curve{\odp}, p} \in \AP|p \ge 0.80}$ is the set of *Good Fit* evaluated samples; - $\IAP = \Set{\Seq{\odp, \curve{\odp}, p} \in \AP| 0.05 \le p < 0.80}$ is the set of *Inconclusive Fit* evaluated samples; - $\omega \in [0..1]$ is the *inconclusiveness contribution* factor denoting that an *Inconclusive Fit* is $\omega$ times less important than a *Good Fit*. If we fit the AML model to $3,895$ observed samples of the four browsers IE, Firefox, Chrome, and Safari. For $1,526$ times AML is a , and for $1,463$ times AML is an . The overall quality of AML is: $$\begin{aligned} Q_{\omega=0} &= \frac{1,526}{3,895} = 0.39 \\ Q_{\omega=1} &= \frac{1,526 + 1,463}{3,895} = 0.77 \\ Q_{\omega=0.5} &= \frac{1,526 + 0.5 \cdot 1,463}{3,895} = 0.58\end{aligned}$$ To calculate the  test we refit the model each and every time. So this means that we have $1,526$ different parameters A, B and C for each good fit curve (see ). The overall quality metric ranges between 0 and 1. The quality of 0 indicates a completely inappropriate model, whereas the quality of 1 indicates a perfect one. 
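As a quick check of the worked example above (our own snippet, not the authors' code), the overall quality for the three choices of $\omega$ follows directly from the counts $1{,}526$, $1{,}463$, and $3{,}895$:

```python
def overall_quality(n_good, n_inconclusive, n_total, omega=0.5):
    # Q_omega = (|Good Fit| + omega * |Inconclusive Fit|) / |all evaluated samples|
    return (n_good + omega * n_inconclusive) / n_total

for omega in (0.0, 1.0, 0.5):
    print(omega, round(overall_quality(1526, 1463, 3895, omega), 2))
# -> 0.39, 0.77, and 0.58, matching the values reported above for AML
```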
This metric is a very optimistic measure as we are essentially “refitting” the model as more data become available. Hence, it is an upper bound on the VDM quality. The factor $\omega$ denotes the contribution of an inconclusive fit to the overall quality. A skeptical analyst would expect $\omega=0$, which means only *Good Fits* are meaningful. Meanwhile, an optimistic analyst would set $\omega=1$, which means an *Inconclusive Fit* is as good as a *Good Fit*. The optimistic choice $\omega=1$ is usually the one adopted by the proposers of each model in previous studies in the field when assessing VDM quality. The effect of the $\omega$ factor on the overall quality metric is illustrated in , which shows the variation of the overall quality of the two models AML and AT with respect to $\omega$. We do not know whether an  is good or not because the observed samples do not provide enough evidence. Hence, the choice of $\omega = 0.5$ may be considered a good balance. During our analysis we use $\omega = 0.5$; any exception will be explicitly noted.

![The variation of the overall quality $Q_\omega$ with respect to the $\omega$ factor.[]{data-label="fig:quality:omega"}](figures/quality-omega-factor.pdf){width="0.7\columnwidth"}

The overall quality metric can also mask brittle performance over time. A VDM could produce a lot of *Good Fit* evaluated samples for the first $6$ months, but almost none at other horizons. Unfortunately, the overall metric does not capture this phenomenon. To avoid this unwanted effect, we introduce the *temporal quality* metric, which represents the evolution of the overall quality over time. The temporal quality $Q_\omega(\horizon)$ is the weighted ratio of the *Good Fit* and *Inconclusive Fit* evaluated samples over the total ones at the particular horizon . The temporal quality is formulated in the following equation: $$Q_\omega(\horizon) = \frac{|\GAP(\horizon)| + \omega \cdot |\IAP(\horizon)|}{|\AP(\horizon)|} \label{eq:quality:horizon}$$ where:

- $\horizon \in \Horizon$ is the horizon at which we observe samples, in which $\Horizon \subseteq \bigcup_{\release \in \Release}\Horizon_\release$ is a subset of the union of the horizon ranges of all releases  in the evaluation;
- $\AP(\horizon) = \Set{\Seq{\odp,\curve{\odp},p}|\odp \in \DP(\horizon)}$ is the set of evaluated samples at the horizon $\horizon$, where () is the set of observed samples at the horizon  of all releases;
- $\GAP(\horizon) \subseteq \AP(\horizon)$ is the set of *Good Fit* evaluated samples at the horizon ;
- $\IAP(\horizon) \subseteq \AP(\horizon)$ is the set of *Inconclusive Fit* evaluated samples at the horizon ;
- $\omega$ is the same as for the overall quality $Q_\omega$.

To study the trend of the temporal quality $Q_\omega(\horizon)$, we employ the *moving average* technique, which is commonly used in time series analysis to smooth out short-term fluctuations and highlight longer-term trends. Intuitively, each point in the moving average is the average of some adjacent points in the original series. The moving average of the temporal quality is defined as follows: $$\textit{MA}_k^{Q_\omega}(\horizon) = \frac{1}{k}\sum_{i=1}^k Q_\omega(\horizon - i + 1)\label{eq:ma:q}$$ where $k$ is the *window size*. The choice of $k$ determines the smoothing effect: the higher $k$, the smoother the curve. Additionally, $k$ should be an odd number so that variations in the mean are aligned with variations in the data rather than being shifted in time.
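The temporal quality per horizon and its moving average can be computed in the same spirit; the sketch below (ours, with hypothetical per-horizon counts) mirrors the two equations above:

```python
import numpy as np

# Hypothetical per-horizon counts of Good Fit, Inconclusive Fit, and total evaluated samples
good         = np.array([12, 14, 15, 13, 16, 18, 17, 15])
inconclusive = np.array([ 6,  5,  7,  6,  4,  5,  6,  7])
total        = np.full(8, 30)

omega = 0.5
q = (good + omega * inconclusive) / total      # Q_omega(tau), one value per horizon

def moving_average(q, k=5):
    """MA_k(tau): mean of Q_omega over horizons tau-k+1, ..., tau."""
    q = np.asarray(q, dtype=float)
    return np.array([q[i - k + 1 : i + 1].mean() for i in range(k - 1, len(q))])

print(moving_average(q, k=5))                  # smoothed trend, as plotted in the next figure
```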
![An example of the moving average of the temporal quality of the AML and AT models. []{data-label="fig:quality:ma:example"}](figures/ma-quality-AML-AT.pdf){width="1\columnwidth"}

depicts the moving average of the temporal quality of the AML and AT models. In this example, we choose a window size $k=5$ because the minimum horizon is six ($\horizon^\release_{min} = 6$), so $k$ should be less than this horizon ($k < \horizon^\release_{min}$); and $k=3$ is too small to smooth out the spikes.

\[step:predictability\]: Perform Predictability Analysis {#sec:step:predictability}
--------------------------------------------------------

The predictability of a VDM measures its capability of predicting future vulnerability trends. This is essentially what makes a VDM applicable in practice. The calculation of the predictability of a VDM has two phases, the *learning phase* and the *prediction phase*. In the learning phase, we fit a VDM to an observed sample at a certain horizon. In the prediction phase, we evaluate the quality of the fitted model on observed samples at future horizons. We extend  to calculate the prediction quality. Let  be a fitted model at horizon . The prediction quality of this model in the next  months is calculated as follows: $$Q^*_\omega(\horizon,\timespan) = \frac{|\GAP*(\horizon,\timespan)| + \omega \cdot |\IAP*(\horizon,\timespan)|}{|\AP*(\horizon,\timespan)|} \label{eq:quality:predict}$$ where:

- $\AP*(\horizon,\timespan) = \Set{\Seq{\series{\release,ds,\horizon+\timespan}, \curve{\series{\release,ds,\horizon}}, p}}$ is the set of evaluated samples at the horizon $\horizon+\timespan$, in which we evaluate the quality of the model fitted at horizon () on observed samples at the future horizon $\horizon + \timespan$. We refer to $\AP*(\horizon,\timespan)$ as the set of evaluated samples of prediction;
- $\GAP*(\horizon,\timespan) \subseteq \AP*(\horizon,\timespan)$ is the set of *Good Fit* evaluated samples of prediction at the horizon $\horizon + \timespan$;
- $\IAP*(\horizon,\timespan) \subseteq \AP*(\horizon,\timespan)$ is the set of *Inconclusive Fit* evaluated samples of prediction at the horizon $\horizon + \timespan$;
- $\omega$ is the same as for the overall quality $Q_\omega$.

illustrates the prediction qualities of the two models AML and AT starting from the horizon of the $12^{th}$ month ($\horizon=12$, left) and the $24^{th}$ month ($\horizon=24$, right), and predicting the values for the next $12$ months ($\timespan=0 \ldots 12$). White circles are prediction qualities of AML, and red (gray) circles are those of AT. As we can see from , the ability of a model to predict data decreases with time. It is therefore useful to identify some interesting prediction time spans (such as the next $6$ months) that can be used for pairwise comparisons between VDMs. To this end, we identify different scenarios that specify the duration of data observation and the prediction time span. Other scenarios may be identified depending on the application or the readers’ interest:

- *Plan for short-term support*: the data observation period may vary from $6$ months to the whole lifetime. We are looking for the ability to predict the trend in the next quarter ($3$ months) to plan short-term support activities, i.e., allocating resources for fixing vulnerabilities.
- *Plan for long-term support*: we would like to predict a realistic expectation for bug reports in the next year to plan long-term activities.
- *Upgrade or keep*: the data observation period is short (from 6 to 12 months).
We are looking on what is going to happen in next 6 months. For example to decide whether to keep the current system or to go over the hassle of updating it. - *Historic analysis*: the data observation period is long (2 to 3 years), we are considering what happens for extra support in the next 1 year. ![The prediction qualities of the AML and AT model at some horizons.[]{data-label="fig:predict:quality"}](figures/p-quality-AML-AT.pdf){width="0.98\columnwidth"} We should assess the predictability of a VDM not only along the prediction time span, but also along the horizon to ensure that the VDM is able to consistently predict the vulnerability data in an expected period. To facilitate such assessment we introduce the *predictability* metric which is the average of prediction qualities at a certain horizon. The predictability of the curve at the horizon  in a time span of $\Timespan$ months is defined as the average of the prediction quality of at the horizon  and its $\Timespan$ consecutive horizons $\horizon + 1, \horizon+2,..., \horizon+\Timespan$, as the following equation shows: $$\begin{aligned} \predict(\horizon, \Timespan) &= \sqrt[\Timespan+1]{\prod_{\timespan=0}^\Timespan Q^*_\omega(\horizon,\timespan)} \label{eq:predict}\end{aligned}$$ where  is the prediction time span. In , we use the geometric mean instead of the arithmetic mean. The temporal quality is a normalized measure so using the arithmetic mean to average such values might produce a meaningless result, whereas the geometric mean behaves correctly [@FLEM-WALL-86-CACM]. \[step:comparison\]: Compare VDM -------------------------------- This section addresses the second research question \[rq:comparison\]. The comparison is based on the quality and the predictability of VDMs. The base line for the comparison is that: *the better model is a better one in forecasting changes*. Hereafter, we discuss how to compare VDM: Given two models and , the comparison between and could be done as below: We compare the predictability of and that of . Let $\rho_1, \rho_2$ be the predictability of and , respectively. $$\label{eq:compare:predict} \begin{aligned} \rho_1 &= \Set{\predict[\omega=0.5](\horizon, \Timespan)|\horizon = 6..\horizon_{max}, \vdm[1]} \\ \rho_2 &= \Set{\predict[\omega=0.5](\horizon, \Timespan)|\horizon = 6..\horizon_{max}, \vdm[2]} \end{aligned}$$ where the prediction time span  could follow the criteria \[cr:timespan\]; $\horizon_{max} = \min(72, \max_{\release \in \Release}\horizon^\release_{max})$. We employ the one-sided Wilcoxon rank-sum test to compare $\rho_1, \rho_2$. If the returned  is less than the significance level $\alpha = 0.05$, the predictability of is stochastically greater than that of . It also means that is better than . If $\pvalue \ge 1-\alpha$, we conclude the opposite is better than . Otherwise we have not enough evidence either way. If the previous comparison is inconclusive, we retry the comparison using the value of temporal quality of the VDMs instead of the predictability. We just replace $Q_{\omega=0.5}(\horizon)$ for (, ) in the equation , and repeat the above activities. When we compare models we run several hypothesis tests, we should pay attention on the familywise error rate which is the probability of making one or more type I errors. To avoid such problem, we should apply an appropriate controlling procedure such as the Bonferroni correction. In the case above, the significance level by which we conclude a model is better than another one is divided by the number of tests performed. 
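The pairwise test is straightforward to reproduce with standard statistical routines; the following sketch (our illustration, with made-up predictability values) performs the one-sided Wilcoxon rank-sum test at a Bonferroni-corrected level, matching the decision rule above and the corrected level quoted next.

```python
import numpy as np
from scipy.stats import mannwhitneyu   # one-sided Wilcoxon rank-sum (Mann-Whitney U) test

def compare(rho1, rho2, n_tests=7, alpha=0.05):
    """Decide whether vdm1 beats vdm2 on predictability, per the rule above."""
    level = alpha / n_tests                       # Bonferroni-corrected significance level
    p = mannwhitneyu(rho1, rho2, alternative="greater").pvalue
    if p < level:
        return "vdm1 is better"                   # rho1 stochastically greater than rho2
    if p >= 1 - level:
        return "vdm2 is better"
    return "inconclusive (fall back to temporal quality)"

# Hypothetical predictability values over horizons 6..36 for two models
rng = np.random.default_rng(3)
rho_aml = rng.uniform(0.5, 0.9, 31)
rho_ln = rng.uniform(0.3, 0.7, 31)
print(compare(rho_aml, rho_ln))
```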
When we compare one model against other seven models, the Bonferroni-corrected significance level is: $\alpha = ^{0.05}/_7 \approx 0.007$. The above comparison activities are summarized in the criteria \[cr:compare\] (see ). An Assessment on Existing VDMs {#sec:exp} ============================== We apply the above methodology to assess the performance of eight existing VDMs (see also ). The experiment evaluates these VDMs on $30$ releases of the four popular web browsers: IE, Firefox, Chrome, and Safari. Here, only the formulae of these models are provided. More detail discussion about these models as well as the meaning of their parameters are referred to their corresponding original work. \[tbl:vdm\] \[tbl:datasets\] Data Acquisition ---------------- presents the availability of vulnerability data sources for the browsers in our study. For each data source, the table reports the name, the category (see also ), and the browser that the data source maintains vulnerability data. We use as a representative third-party data source due to its popularity in past studies. This makes our work comparable with previous ones. reports data sets collected for this experiment (see also for the classification). In total, we collected 96 data sets for 30 major releases. In the table, we use the bullet () to indicate the availability of data sets. In these collected data sets, we extracted a total of $4,063$ observed samples. The Applicability of VDMs {#sec:exp:applicability} ------------------------- We ran model fitting algorithms for these observed samples by using R . Model fitting took about $82$ minutes on a dual-core 2.73GHz Windows machine with 6GB of RAM yielding $32,504$ curves in total. ### Goodness-of-Fit Analysis for VDMs ![image](figures/quality-evolution.pdf){width="95.00000%" height="17\baselineskip"} reports the goodness-of-fit of existing VDMs on the largest horizons of browser releases, using the data sets. In other words, we use following observed samples to evaluate all models: $$\DP_{\ds{NVD}} = \Set{\series{\release, \ds{NVD}, \horizon^\release_{max}}|\release \in \Release}$$ where  is the set of all releases mentioned in . provides a view that previous studies often used to report the goodness-of-fit for their proposed models. To improve readability, we report the categorized goodness-of-fit based on the (see \[cr:gof\]) instead of the raw s. In this table, we use a check mark (), a blank, and a cross ($\times$) to respectively indicate a , an , and a . Cells are shaded accordingly to improve the visualization effect. The table shows that two models AT and RQ have a very high ratio of  ($0.9$ and $0.7$, respectively); whereas, all other models have their ratio of  less than . We should observe that this is a very large time interval and some systems have long gone into retirement. For example, FF vulnerabilities are no longer sought by researchers. They are a byproduct of research on later versions. To have a more realistic picture, we also study the temporal quality. The inconclusiveness contribution factor $\omega$ is set to $0.5$ as described in \[cr:quality\]. exhibits the moving average (windows size $k=5$) of the $Q_{\omega}(\horizon)$. The dotted vertical lines marks horizon $12$ (when software is young), and $36$ (when software is middle-age). We cut the temporal quality at horizon $72$ though we have more data for some systems (IE , FF ). 
This is because after 6 years the software is very old; the vulnerability data reported for such releases might not be reliable and might overfit the VDMs. The dotted horizontal line at $0.5$ is used as a base line to assess VDMs. Clearly, from the temporal quality trends in , both the AT and RQ models should be rejected since their temporal quality always sinks below the base line. Other models may be adequate when software is young (before 12 months). The AML and LN models look better than the other models in this respect. When software is middle-aged (between 12 and 36 months), the AML model is still relatively good. JW and YF improve when approaching the $36^{th}$ month, though JW gets worse after the $12^{th}$ month. The quality of both LN and LP worsens after the $12^{th}$ month and sinks below the base line when approaching the $36^{th}$ month. RE is almost always below the base line after the $15^{th}$ month. Hence, in the middle-age period, the AML, JW, and YF models may turn out to be adequate; LN and LP are deteriorating but might still be considered adequate; whereas RE should clearly be rejected. When software is old (36+ months), AML, JW, and YF deteriorate and go below the base line at approximately the $48^{th}$ month; the other models also collapse below the base line.

summarizes the distribution of the VDM temporal quality in three periods: software is young (before 12 months), software is middle-aged (13 to 36 months), and software is old (37 to 72 months). The red horizontal line at $0.5$ is the base line. We additionally colour these box plots according to the comparison between the corresponding distribution and the base line, as follows:

- white: the distribution is significantly greater than the base line;
- dark gray: the distribution is significantly less than the base line (we should reject the models outright);
- gray: the distribution is not statistically different from the base line.

The box plots clearly confirm our observation in . Both the AT and RE models are significantly below the base line. The AML, JW, and YF models are significantly above the base line when software is young and middle-aged, and not statistically different from the base line when software is old. The LN and LP models are significantly greater than the base line when software is young, but they deteriorate for middle-aged software and collapse significantly below the base line for old software. In summary, our quality analysis shows that:

- The AT and RQ models should be rejected.
- All other models may be adequate when software is young. Only s-shaped models (AML, JW, YF) might be adequate when software is middle-aged.
- No model is good when the software is too old.

![The temporal quality distribution of each VDM in different periods of the software lifetime.[]{data-label="fig:quality:distribution"}](figures/quality-bwplot.pdf){width="1.05\columnwidth" height="10\baselineskip"}

### Predictability Analysis for VDMs

From the previous quality analysis, the AT and RQ models are of low quality and should not be considered in any period of the software lifetime. Hence, we exclude these models from the predictability analysis. Furthermore, since no model is good when software is too old, we analyze the predictability of the remaining models only for the first $36$ months since the release of the software. This period is still long if we consider that most recent releases live less than a year.

reports the moving average (window size $5$) for the trends of the VDMs’ predictability along horizons in different prediction time spans.
The dotted horizonal line at value of $0.5$ is the base line to assess the predictability of VDMs (as same as the temporal quality of VDMs). ![The predictability of VDM in different prediction time spans ().[]{data-label="fig:predict"}](figures/predictability.pdf){width="1.02\columnwidth"} When the prediction time span is short ($3$ months), the predictability of LN, AML, JW, and LP models is above the base line for young software ($12$ months). When software is approaching month $24^{th}$, though decreasing the predictability of LN is still above the base line, but goes below the base line after month $24^{th}$. The LP model is no different with the base line before month $24^{th}$, but then also goes below the base line. In contrast, the predictability of AML, YF and JW are improving with age. They are all above the base line until the end of the study period (month $36^{th}$. Therefore, only s-shape models (AML, YF, and JW) may be adequate for middle-age software. For the medium prediction time span of 6 months, only the LN model may be adequate (above the base line) when software is young, but becomes inadequate (below the base line) after month $24^{th}$. In the meanwhile S-shape models are inadequate for young software, but are improving quickly later. They may be all adequate after month $18^{th}$ and keep this performance until the end of the study period. When the prediction time span is long (12 months), all models (except LN) sink below the base line for young software. The LN model is not significantly different from the base line. In other words, no model could be adequate for young software in this prediction time span. After month $18^{th}$, the AML model goes above the base line, and after month $24^{th}$, all s-shape models are above the base line. Hence they may be all adequate. Their performances are somewhat unchanged for the remain period. When the prediction time span is very long (24 months) no model is good enough as all models sink below the base line. In summary, our predictability analysis shows that: - For a short prediction time span (3 months), the predictability of LN, AML, and LP models may be adequate for young software. Hence they could be considered for the scenario *Plan for short-term support*. When software is approaching middle-age, s-shape models (AML, JW, YF) are better than others. - For a medium (6 months) and long (12 month) prediction time spans, only the predictability of the LN model may be adequate for young software. And therefore this model could be appropriate the purpose of the scenarios *Upgrade of keep* and *Plan for long-term support*. When software is approaching middle-age, only s-shape models (AML, JW, YF) may be adequate and might be considered for planning the long-term support and for studying historical trends (scenario *Historic analysis*). - For a very long prediction time span (24 months), no model has a good enough predictability. Comparison of Existing VDMs --------------------------- The comparison between VDMs follows \[step:comparison\]. Instead of reporting tables of s, we visualize the comparison result in terms of directed graphs where nodes represent models, and connections represent the order relationship between models. summarizes the comparison results between models in different settings of horizons () and prediction time spans (). A directed connection from two models determines that the source model is better than the target model in terms of either predictability, or quality, or both. 
The line style of the connection depended on the following rules: - *Solid line*: the predictability and quality of the source is significantly better than the target’s. - *Dashed line*: the predictability of the source is significantly better than the target. - *Dotted line*: the quality of the source is significantly better than the target. By term *significantly*, we means the  of the corresponding one-side Wilcoxon rank-sum test is less than the significance level. We apply the Bonferroni correction to control the multi comparison problem, hence the significance level is: $\alpha = 0.05/5 = 0.01$. According to the figure, suggests model(s) for different usage scenario described in the criteria \[cr:timespan\] (see ). In short, *when software is young, the LN model is the most appropriate choice. This is because the vulnerability discovery is linear. When software is approaching middle-age, the AML model becomes superior*. Threats to Validity {#sec:validity} =================== [**Construct validity**]{} includes threats affecting the way we collect vulnerability data and the way we generate VDM curves with respect to the collected data. Following threats in this category are identified: - [**Bugs in data collector.**]{} Most of vulnerability data are available in HTML pages. We have developed a web crawler to extract interesting feature from HTML page, and also XML data. The employed technique is as same as the one discussed in [@MASS-NGUY-10-METRISEC]. The crawler might be buggy and could generate errors in data collection. To minimize such impact, we have tested the crawler many times before collecting the data. Then by randomly checking the collected data, when an error is found we corrected the corresponding bug in the crawler and recollected the data. - [**Bias in *bug-to-nvd* linking scheme.**]{} While collecting data for , we apply some heuristic rules to link a to an entry based on the relative position in the MFSA report. We manually checked many links for the relevant connection between bug reports and NVD entries. All checked links were found to be consistent. Some errors might still creep in this case. - [**Bias in *bug-affects-version* identification.**]{} We do not have a complete assurance that a security bug affects to which versions. Consequently, we assume that a bug affects all versions mentioned in the linked . This might overestimate the number of bugs in each version. To mitigate the problem, we estimate the latest release that a bug might impact, and filter all vulnerable releases after this latest. Such estimation is done thank to the mining technique discussed in [@SLIW-05-MSR]. We further discuss these types of errors in NVD in [@NGUY-MASS-13-ASIACCS]. These errors only affect the fitness of models over the long term so only valuations after the $24$ or $36$ months might be affected. - [**Error in curve fitting.**]{} From the collected vulnerability, we estimate the parameters of VDMs by using the Nonlinear Least-Square technique implemented in R (`nls()` function). This might not produce the most optimal solution and may impact the goodness-of-fit of VDMs. To mitigate this issue, we additionally employed a commercial tool CurveExpert Pro[^4] to cross check the goodness-of-fit in many cases. The results have shown that there is no difference between R and CurveExpert. [**Internal validity**]{} concerns the causal relationship between the collected data and the conclusion drawn in our study. 
Here, we have identified the following threats that might bias our conclusion. - [**Bias in statistics tests.**]{} Our conclusions are based on statistics tests. These tests have their own assumptions. Choosing tests whose assumptions are violated might end up with wrong conclusions. To reduce the risk we carefully analyzed the assumptions of the tests to make sure no unwarranted assumption was present. We did not apply any tests with normality assumptions since the distribution of vulnerabilities is not normal. [**External validity**]{} is the extent to which our conclusion could be generalized to other scenarios. Our experiment is based on the vulnerability data of some major releases of the four most popular browsers covering almost all market shares. Therefore we can be quite confident about our conclusion for browsers in general. However, it does not mean that our conclusion is valid for other types of application such as operating systems. Such validity requires extra experiments. Related Work {#sec:relatedwork} ============ Anderson [@ANDE-02-OSS] proposed a VDM (a.k.a. Anderson Thermodynamic, AT) based on reliability growth models, in which the probability of a security failure at time $t$, when $n$ bugs have been removed, is in inverse ratio to $t$ for alpha testers. This probability is even harder for beta testers, $\lambda$ times more than alpha testers. However, he did not conduct any experiment to validate the proposed model. Our results show that this model is not appropriate. This is a first evidence that reliability and security obey different laws. Rescorla [@RESC-05-SP] also proposed two mathematical models, called *Linear model* (a.k.a Rescorla Quadratic, RQ) and *Exponential model* (a.k.a Rescorla Exponential, RE). He has performed an experiment on four versions of different operation systems (Windows NT 4.0, Solaris 2.5.1, FreeBSD 4.0 and RedHat 7.0). In all cases, the goodness-of-fit of these two models were inconclusive since their  ranged from $0.167$ to $0.589$. Rescorla discussed many shortcomings of NVD, but his study heavily relied on it nonetheless. Alhazmi and Malaiya [@ALHA-MALA-05-ISSRE] proposed another VDM inspired by s-shape logistic model, called *Alhazmi Malaiya Logistic* (AML). The intuition behind the model is to divide the discovery process into three phases: *learning phase, linear phase* and *saturation phase*. In the first phase, people need some time to study the software, so less vulnerabilities are discovered. In the second phase, when people get deeper knowledge of the software, much more vulnerabilities are found. In the final phase, since the software is out of date, people may lose interest in finding new vulnerabilities. So cumulative vulnerabilities tend to stable. In [@ALHA-MALA-05-ISSRE], the authors validated their proposal against several versions of Windows (Win 95/98/NT4.0/2K) and Linux (RedHat Linux 6.1, 7.1). Their model fitted Win 95 very well (*p-value* $\approx 1$), and Win NT4.0 (*p-value* = $0.923$). For other versions, their own validation showed that the AML model was inconclusive (the *p-value* ranged from $0.054$ to $0.317$). In another work, Alhazmi and Malaiya [@ALHA-MALA-08-TR] compared their proposed model with Rescorla’s [@RESC-05-SP] (RE, RQ) and Anderson’s [@ANDE-02-OSS] (AT) on Windows 95/XP and Linux RedHat Linux 6.2, Fedora. The result shows that their logistic model has a better goodness-of-fit than others. 
For Windows 95 and Linux 6.2, as the vulnerabilities distribute along s-shape-like curves, only AML is able to fit it (*p-value*=1), whereas all other models fail to match the data (*p-value* $\le 0.05$). For Windows XP, the story is different. RQ turns to be the best one with *p-value*$=0.97$, while AML poorly match the data (*p-value*=$0.147$). Woo [@WOO-ALHAZMI-MALAIYA-06-SEA] carried out an experiment with AML model on three browsers IE, Firefox and Mozilla. However, it is unclear which versions of these browsers were analyzed. Most likely, they did not distinguish between versions. As discussed in section (\[ex:bias:multiversions\]), this could largely bias their final result. In their experiment, IE has not been fitted, Firefox was fairly fitted, and Mozilla was good fitted. From this result, we could not conclude any thing about the performance of AML. In another experiment, Woo [@WOO-etal-11-CS] validated AML against two web servers: Apache and IIS. Also, they did not distinguish between versions of Apache and IIS. In this experiment, AML has demonstrated a very good performance on vulnerability data (*p-value* $=1$). Kim [@KIM-etal-07-HASE] introduced the Multiple-Version Discovery Model (MVDM) which is the generalization of AML. The MVDM separated the cumulative vulnerabilities of a version into several fragments where the first fragment captured the vulnerabilities affecting this version and past versions, and the other fragments are the shared vulnerabilities of this version and future versions. The MVDM basically is the weighted aggregation of individual AML model in these fragments. The weights are determined by the ratios of shared code between this version and future ones. The goodness-of-fit of MVDM has been compared with AML in two versions of Apache and two version of MySQL. As the result, both AML and MVDM were well fitted against the data ($\pvalue \ge 0.99$). MVDM might be better but the difference was quite negligible. Joh [@JOH-etal-08-ISSRE] proposed a VDM based on the Weibull distribution. The proposed model was also compared with the AML model in two versions of Windows (XP, Server 2007) and two versions of Linux (RedHat Linux and RedHat Enterprise Linux). In that evaluation, the goodness-of-fit of the proposed model was compared with the AML model. Younis [@YOUNIS-etal-11-SAM] exploited the Folded distribution to model the discovery of vulnerabilities. The authors also compared the proposed model with the AML model in different types of application (Windows 7, OSX 5.0, Apache 2.0.x, and IE8). The reported results showed that the new model is better than the AML in the cases when the learning phase is not present. Conclusion {#sec:conclusion} ========== Vulnerability discovery models have the potential to help us in predicting future vulnerability trends. Such predictions could help individuals and companies to adapt their software upgrade and patching schedule. However, we have not seen any method to systematically assess these models. Hence, in this work we have proposed an empirical methodology for VDM validation. The methodology is built upon the analyses on the goodness-of-fit, and the predictability of VDM at several time points during the software lifetime. These analyses rely on two quantitative metrics: *quality* and *predictability*. 
We have applied this methodology to conduct an empirical experiment to assess eight VDMs (AML, AT, LN, JW, LP, RE, RQ, and YF) based on the vulnerability data of 30 major releases of four web browsers: IE, Firefox, Chrome, and Safari. Our experiment has revealed that: - AT and RQ models should be rejected since their quality is not good enough. - For young software, the quality of all other models may be adequate. Only the predictability of LN is good enough for short (3 months and medium (6 months) prediction time spans, other models however is not good enough for latter time span. - For middle-age software, only s-shape models (AML, JW, and YF) may be adequate in terms of both quality and predictability. - For old software, no model is good enough. - No model is good enough for predicting results for a very long period (24 months in the future). In conclusion, *for young releases of browsers ($6$ – $12$ months old) it is better to use a linear model to estimate the vulnerabilities in the next $3$ – $6$ months. For middle age browsers ($12$ – $24$ months) it is better to use an s-shape logistic model.* In future, it is interesting to replicate our experiment in other kinds of software, for instance operating systems and server-side applications. Based on that, a more comprehensive assessment about the VDMs will be more solid. [10]{} Omar Alhazmi and Yashwant Malaiya. Modeling the vulnerability discovery process. In [*Proceedings of the 16th IEEE International Symposium on Software Reliability Engineering (ISSRE’05)*]{}, pages 129–138, 2005. Omar Alhazmi and Yashwant Malaiya. Measuring and enhancing prediction capabilities of vulnerability discovery models for [A]{}pache and [IIS HTTP]{} servers. In [*Proceedings of the 17th IEEE International Symposium on Software Reliability Engineering (ISSRE’06)*]{}, pages 343–352, 2006. Omar Alhazmi and Yashwant Malaiya. Application of vulnerability discovery models to major operating systems. , 57(1):14–22, 2008. Omar Alhazmi, Yashwant Malaiya, and Indrajit Ray. Security vulnerabilities in software systems: A quantitative perspective. In Sushil Jajodia and Duminda Wijesekera, editors, [*Data and Applications Security XIX*]{}, volume 3654 of [*LNCS*]{}, pages 281–294. 2005. Ross Anderson. Security in open versus closed systems - the dance of [Boltzmann, Coase and Moore]{}. In [*Proceedings of Open Source Software: Economics, Law and Policy*]{}, 2002. William A. Arbaugh, William L. Fithen, and John McHugh. Windows of vulnerability: A case study analysis. , 33(12):52–59, 2000. Algirdas Avizienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. Basic concepts and taxonomy of dependable and secure computing. , 1(1):11–33, 2004. Mark Dowd, John McDonald, and Justin Schuh. The art of software security assessment. Addision-Wesley publications, 2007. Philip J. Fleming and John J. Wallace. How not to lie with statistics: the correct way to summarize benchmark results. , 29(3):218–221, 1986. HyunChul Joh, Jinyoo Kim, and Yashwant Malaiya. Vulnerability discovery modeling using [W]{}eibull distribution. In [*Proceedings of the 19th IEEE International Symposium on Software Reliability Engineering (ISSRE’08)*]{}, pages 299–300, 2008. Jinyoo Kim, Yashwant Malaiya, and Indrajit Ray. Vulnerability discovery in multi-version software systems. In [*Proceeding of the 10th IEEE International Symposium on High Assurance Systems Engineering*]{}, pages 141–148, 2007. Ivan Victor Krsul. . PhD thesis, Purdue University, 1998. 
Fabio Massacci, Stephan Neuhaus, and Viet Hung Nguyen. After-life vulnerabilities: A study on firefox evolution, its vulnerabilities and fixes. In [*Proceedings of the 2011 Engineering Secure Software and Systems Conference (ESSoS’11)*]{}, 2011. Fabio Massacci and Viet Hung Nguyen. Which is the right source for vulnerabilities studies? an empirical analysis on mozilla firefox. In [*Proceedings of the International ACM Workshop on Security Measurement and Metrics (MetriSec’10)*]{}, 2010. Steve McKillup. . Cambridge University Press, 2005. Viet Hung Nguyen and Fabio Massacci. The (un) reliability of nvd vulnerable versions data: an empirical experiment on google chrome vulnerabilities. In [*Proceeding of the 8th ACM Symposium on Information, Computer and Communications Security (ASIACCS’13)*]{}, 2013. . , 2012. http://www.itl.nist.gov/div898/handbook/. Andy Ozment. Improving vulnerability discovery models: Problems with definitions and assumptions. In [*Proceedings of the 3rd Workshop on Quality of Protection*]{}, 2007. Eric Rescorla. Is finding security holes a good idea? , 3(1):14–19, 2005. Fred B. Schneider. Trust in cyberspace. , 1991. Jacek Sliwerski, Thomas Zimmermann, and Andreas Zeller. When do changes induce fixes? In [*Proceedings of the 2nd International Working Conference on Mining Software Repositories MSR(’05)*]{}, pages 24–28, May 2005. Sung-Whan Woo, Omar Alhazmi, and Yashwant Malaiya. An analysis of the vulnerability discovery process in web browsers. In [*Proceedings of the 10th IASTED International Conferences Software Engineering and Applications*]{}, 2006. Sung-Whan Woo, HyunChul Joh, Omar Alhazmi, and Yashwant Malaiya. Modeling vulnerability discovery process in [A]{}pache and [IIS HTTP]{} servers. , 30(1):50 – 62, 2011. Awad Younis, HyunChul Joh, and Yashwant Malaiya. Modeling learningless vulnerability discovery using a folded distribution. In [*Proceeding of the Internaltional Conference Security and Management (SAM’11)*]{}, pages 617–623, 2011. A replication guide of this work could be found online at <https://wiki.science.unitn.it/security/doku.php?id=vulnerability_discovery_models>. Also, you can find all required materials (tools, scripts, and data) to rerun the experiment. [Viet Hung Nguyen]{} He is a PhD student in computer science at University of Trento, Italy under the supervision of professor Fabio Massacci since November 2009. He received his MSc and BEng in computer science and computer engineering in 2007 and 2003. Currently, his main interest is the correlation of vulnerability evolution and software code base evolution. [Fabio Massacci]{} [^1]: The actual discovery date might be significantly earlier than that. [^2]: Other third party data sources (OSVDB, Bugtraq, IBM XForce) also report Firefox’s vulnerabilities, but most of them refer to NVD by the CVE-ID. Therefore, we consider NVD as a representative of third-party data sources. [^3]: Wikipedia, <http://en.wikipedia.org/wiki/Internet_Explorer>, visited on 24 June 2012. [^4]: <http://www.curveexpert.net/>, site visited on 16 Sep, 2011
--- author: - Jonathan Liu and Michael Whitmeyer bibliography: - 'proj-references.bib' date: 'May 12, 2019' title: Algorithmic Discrepancy Minimization --- Abstract ======== This report will be a literature review on a result in algorithmic discrepancy theory. We will begin by providing a quick overview on discrepancy theory and some major results in the field, and then focus on an important result by Shachar Lovett and Raghu Meka in [@Lovett]. We restate the main algorithm and ideas of the paper, and rewrite proofs for some of the major results in the paper. Introduction ============ The discrepancy problem is as follows: given a finite family of finite sets of points, our goal is to color the underlying points (contained in the union of all the sets in the family) red and blue, such that each set has a roughly equal number of red and blue points. Formally, it is described as follows: Given a universe $[n] = \{1, ... ,n\}$ and a collection of sets in the universe $S = \{S_1,...,S_m \subseteq [n]\}$, We wish to find an assignment $\chi: [n] \rightarrow \{-1, 1\}$ such that ${\mathsf{disc}}(\chi)$ is minimized, where ${\mathsf{disc}}(\chi)$ is defined as $${\mathsf{disc}}(\chi) = \max_{S_i \in S} \left|\sum_{i \in S_i} \chi(i)\right|.$$ Perhaps two of the most major results in Discrepancy Theory came in the 1980’s, when two papers published proofs of the existence of assignments with surprisingly strong lower bounds. First, in 1981, József Beck and Tibor Fiala showed that given an upper limit on the number of sets that each point is included in, we can find an assignment with discrepancy linear in that limit. We start with the assumption that for each $x \in [n]$, it appears in at most $t$ sets. More formally, we have the constraint that $\forall i \in [n]$, $$|\{j; i\in S_j\}| \leq t$$ Then, one can find an assignment $\chi$ such that ${\mathsf{disc}}(\chi) \leq 2t-1$. We provide a proof of the Beck-Fiala theorem in the appendix in \[Beck-Fiala\], using only arguments from linear algebra. The other groundbreaking result in Discrepancy Theory is called Spencer’s Six Standard Deviations and is given here: \[Spencer\] Given any system of $n$ sets on a universe of $n$ points, there exists a coloring $\chi$ such that ${\mathsf{disc}}(\chi) \leq 6\sqrt{n}$. Both of these results remain cornerstones of Discrepancy Theory. Yet, despite their significance, they were both proven using nonconstructive methods, so we had no way to achieve them algorithmically. For some time, it was even conjectured that no algorithm could be provided. This question remained open until [@Bansal], where Nikhil Bansal provides a constructive randomized algorithm for discrepancy minimization based on an SDP relaxation. Later, Lovett and Meka propose a new constructive algorithm using only linear algebra [@Lovett]. We will be focusing on this paper. Overview ======== The paper provides a constructive algorithm for minimizing discrepancy, and uses it to prove that their bounds match the bounds given by the previously mentioned theorems. First, they demonstrate a result matching Spencer’s Six Standard Deviations. \[result\] For any system of $m$ sets on a universe of $n$ points, there exists a randomized algorithm that, in polynomial time and with at least $1/2$ chance, computes a coloring $\chi: [n] \to \{-1, 1\}$ such that ${\mathsf{disc}}(\chi) < K\sqrt{n\log_2(m/n)}$ for some universal constant $K$. We note that for $m = n$, as in the case of Theorem \[Spencer\], we reach the same asymptotic bound as Spencer provided. 
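To make the objective concrete before moving on, here is a small illustration (our own toy example, not from [@Lovett]) that evaluates ${\mathsf{disc}}(\chi)$ for a hypothetical set system and finds the optimal coloring by brute force.

```python
# Toy illustration of the discrepancy objective disc(chi) = max_i |sum_{j in S_i} chi(j)|.
# The set system and coloring here are hypothetical; they only serve to show the definition.
import itertools

n = 6
sets = [{0, 1, 2}, {1, 3, 4}, {0, 2, 4, 5}, {3, 5}]

def disc(chi, sets):
    """Discrepancy of a coloring chi (a tuple of +/-1 values) over a family of sets."""
    return max(abs(sum(chi[j] for j in s)) for s in sets)

# Brute force over all 2^n colorings (only feasible for tiny n).
best = min(itertools.product([-1, 1], repeat=n), key=lambda chi: disc(chi, sets))
print("optimal coloring:", best, "discrepancy:", disc(best, sets))
```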
[@Lovett] then provides a similar result for the “Beck-Fiala case”, where the number of sets containing each point is upper bounded.

For a system of $m$ sets on a universe of $n$ points where each point is contained in at most $t$ sets, there exists a randomized algorithm that, in polynomial time and with at least $1/2$ chance, computes a coloring $\chi: [n] \to \{-1, 1\}$ such that ${\mathsf{disc}}(\chi) < K\sqrt{t}\log n$ for some universal constant $K$.

In this review we will focus on Theorem 3. The main idea behind the algorithm will be to first create a *partial coloring*. Given a set-system $(V,S)$, where $V=\{1,..., n\}$ and $|S| = m$, we assume that $m \geq n$ (the other case can easily be reduced to this one by adding some empty sets to $S$). A partial coloring is a map $\chi : V \rightarrow [-1,1]$ such that:

1. For all $S_i \in S$, $|\chi(S_i)| = O(\sqrt{n\log(m/n)})$;

2. $|\{i : |\chi(i)| = 1\}| \geq cn$ for a constant $c>0$.

The idea is that if we are provided a good algorithm for finding a partial coloring, we can repeatedly apply this algorithm to the variables not yet “colored” by the partial coloring, while holding the colored ones constant. This will eventually converge to a full coloring, since the number of points colored in each round follows a geometric series with ratio $\sqrt{1-c}$, and the total discrepancy can be bounded by $O(\sqrt{n\log(m/n)})$.

Achieving a Partial Coloring
============================

The first and most important step is actually achieving a good partial coloring. We start with a convenient construction: we let $v_1,...,v_m \in \mathbb{R}^n$ be the indicator vectors of our subsets $S_1,...,S_m$ respectively. Then, the discrepancy of our collection $S$ can be described simply as $${\mathsf{disc}}(S) = \max_{i \in [m]} |\langle \chi, v_i \rangle|$$

\[Coloring\] Let $v_1,...,v_m \in \mathbb{R}^n$ be vectors, and $x_0 \in [-1,1]^n$ be a “starting point”. Further let $c_1,...,c_m \geq 0$ be thresholds such that $\sum_{j=1}^m\exp(-c_j^2/16) \leq n/16$, and let $\delta>0$ be an approximation parameter. Then there exists an efficient randomized algorithm which with probability $\geq 0.1$ finds $x \in [-1,1]^n$ such that

1. discrepancy constraints: $|\langle x-x_0,v_j\rangle| \leq c_j \|v_j\|_2$;

2. variable constraints: $|x_i| \geq 1-\delta$ for at least $n/2$ indices $i \in [n]$.

Moreover, the algorithm runs in time $O((m+n)^3\delta^{-2}\log (nm/\delta))$.

The reason for the constraint on the $c_j$’s will become apparent later, but for now we note that the smaller the $c_j$ are, the stronger the theorem is. In other words, we want them to be small, but they cannot be too small, otherwise the theorem will not hold, hence the constraint. We also note that we can increase the probability of success by simply running the algorithm multiple times.

4.1. The Algorithm. We begin with a general idea, before going into the details of the algorithm. We also assume, without changing the problem, that the $v_i$’s have all been normalized (we can simply adjust our $c_j$’s to account for this): $\|v_i\|_2 = 1, \forall i$. Consider the following polytope, which describes the legal values that $x\in \mathbb{R}^n$ can take on: $$\mathcal{P} = \{x\in \mathbb{R}^n: |x_i|\leq 1 \ \forall i \in [n],\ |\langle x-x_0,v_j\rangle| \leq c_j\}.$$ Then the above theorem says we can find an $x\in \mathbb{R}^n$ such that at least $n/2$ of the variable constraints are satisfied with (virtually) no slack, and it works with good probability as long as we have $\sum_j \exp(-c_j^2/16) \ll n$.
The idea is to take very small, discrete Gaussian steps (Brownian motion) starting from $x_0$. Intuitively, we want to use these steps to find an $x$ that is as far away from the starting point $x_0$ as possible, as this implies that more of the constraints are satisfied with no slack. We are now ready to present the constructive algorithm that serves as a proof of Theorem \[Coloring\]. Let $\gamma >0$ be a small step size such that $\delta=O(\gamma \sqrt{\log(nm/\gamma)})$. The correctness of the algorithm will not be affected by the choice of $\gamma$, only the runtime. Further let $T=K_1/\gamma^2$, where $K_1 = 16/3$, and assume $\delta < 0.1$. The algorithm then produces $X_0 = x_0, X_1,...,X_T \in \mathbb{R}^n$ according to the following algorithm. When we say that $U \sim \mathcal{N}(\mathcal{V}_t)$, we are referring to the standard multi-dimensional Gaussian distribution on the subspace $\mathcal{V}_t$: $U = U_1v_1 + ... + U_dv_d$ where $\{v_1,...,v_d\}$ is an orthonormal basis for $\mathcal{V}_t$ and $U_1, ...,U_d \sim \mathcal{N}(0,1)$ are all independent.

4.2. Analysis Outline. We seek to prove the following:

\[coloring-analysis\] Theorem \[Coloring\] holds for $X_T$ in the above algorithm, and with probability at least $0.1$, $X_0,...,X_T \in \mathcal{P}$.

We begin with a useful claim regarding the behavior of the random walk.

\[Ortho\] For all $t=0,...,T-1$ we have that $C_t^{var} \subseteq C_{t+1}^{var}$ and similarly $C_t^{{\mathsf{disc}}} \subseteq C_{t+1}^{{\mathsf{disc}}}$. This further implies that $\dim(\mathcal{V}_t) \geq \dim(\mathcal{V}_{t+1})$.

Intuitively, we are taking Gaussian steps orthogonal to the constraints in $C_t$, so at each step we should never be able to remove any elements of $C_t^{var}$ or $C_t^{{\mathsf{disc}}}$. Formally, let $i \in C_t^{var}$. Then $U_t \in \mathcal{V}_t$, which implies $(U_t)_i = 0$. This implies that $(X_t)_i = (X_{t-1})_i$ and $i \in C_{t+1}^{var}$, as desired. The argument is very similar for the discrepancy constraints.

Now, we can begin to look at the results of the algorithm. First, we can prove that with good probability, our Brownian motion will not leave the polytope. The “nearly hit” constraints serve this purpose; we select a step size $\gamma$ small enough that whenever the walk approaches a constraint, it is more likely to fall into the $\delta$-band of the constraint than it is to break out of the polytope. Once it falls into this band, Claim \[Ortho\] implies that it will never break the polytope. This can be shown formally using Gaussian tail bounds. Next, we argue that with high probability the algorithm nearly hits many variable constraints but few discrepancy constraints. Using our bound on the discrepancy thresholds as well as Gaussian tail bounds, we can demonstrate that the number of discrepancy constraints that are easy to nearly hit is small, and that it is unlikely that many of the others are ever nearly hit. With this in mind, note that at any time $t$ there are two scenarios for $C_t^{var}$: if it is large then we are done, and if it is small then our Brownian motion is less constrained, so we expect to take steps of larger magnitude. Thus, we argue that by time $T$ it is likely that we “nearly hit” many variable constraints. Finally, we look at the computational complexity of the algorithm, which is claimed to be $O((n+m)^3\delta^{-2}\log(nm/\delta))$. The paper does not provide a full justification of this runtime, and we believe it to be inaccurate.
Computing $C_t^{var}$ and $C_t^{{\mathsf{disc}}}$ given $X_{t-1}$ takes $O(nm)$ time, since computing $C_t^{{\mathsf{disc}}}$ requires the computation of $m$ dot products in $\mathbb{R}^n$. We can sample from $\mathcal{N}(\mathcal{V}_t)$ by constructing an orthonormal basis for $\mathcal{V}_t$. We do this by constructing an orthogonal basis using our constraints, and using the completion theorem to find a basis of $\mathcal{V}_t$. Finding a basis from $n+m$ constraint vectors requires Gaussian elimination, so it takes $O((n+m)^3)$ time. Now, we have to repeat this for $T$ rounds, so the runtime should be expressible as $O((n+m)^3 T)$. Note that $T = O(1/\gamma^2)$, so the runtime described by [@Lovett] holds in the case where $$\begin{aligned} \frac{1}{\gamma^2} &= O(\delta^{-2}\log(nm/\delta)) \\ \frac{1}{\gamma} &= O\left(\frac{1}{\delta}\sqrt{\log(nm/\delta)}\right) \\ \delta &= O(\gamma\sqrt{\log(nm/\delta)}).\end{aligned}$$ However, in the paper, $\gamma$ is selected under the condition $\delta = O(\gamma\sqrt{nm/\gamma})$. Of course, this ends up being a small distinction for $nm \gg \delta$, but it is still worth noting. A full proof is provided in the appendix at section \[full-proof\]. The Discrepancy Minimizer ========================= For the purposes of brevity, we only provide a proof for Theorem \[result\]. To find our full coloring, we will simply repeatedly use Theorem \[Coloring\]. For $m$ sets on a universe of size $n$, we’ll select $\delta = 1/(8 \log m)$ and $c_1, ..., c_m = 8\sqrt{\log(m/n)}$, and denote by $v_i ... v_m$ the indicator vectors for the sets. We’ll use the partial coloring algorithm starting with vector $\vec{x}_0 = 0^n$ to find some vector $\vec{x}_1$ where $|\langle v_j, x_1 \rangle| < \sqrt{n}(8\sqrt{\log(m/n)})$ for all $j$ and where more than half of the points have values within the “nearly-hit" bound. By Theorem \[Coloring\], this has probability of at least $0.1$, which we can boost by repeating as needed. Applying this iteratively to the vectors that haven’t yet been assigned a partial coloring, we find that within $t = O(\log n)$ iterations every value in $x$ will be within $\delta$ of an assignment. When this occurs, for any $j \in [m]$, we note that $n_i < \frac{n}{2^i}$, so we have $$\begin{aligned} |\langle v_j, x \rangle| &< \sum_{i=0}^t |\langle v_j, x_t \rangle| \\ &< \sum \sqrt{n_i}8\sqrt{\log(m/n_i)} \\ &< 8\sqrt{n} \sum_{i=1}^{\infty} \sqrt{\frac{i + \log(m/n)}{2^i}} \\ &< C\sqrt{n\log(m/n)}\end{aligned}$$ for some constant $C$. We then use this candidate solution and round it to an actual coloring. Knowing that each variable is within $\delta$ of either $1$ or $-1$, we’ll set each variable to the one it is closer to with probability $(1+|x_i|)/2$, which means that $\mathbb{E}[\chi_i] = x_i$. Denoting $Y := \chi - x$ we have that the discrepancy for any set $j$ follows $$|\langle \chi, v_j \rangle| \leq |\langle x, v_j \rangle| + |\langle Y, v_j \rangle|$$ due to triangle inequality. What’s left, then, is to find an upper bound for $|\langle Y, v_j \rangle|$. Noting that $|Y_i| \leq 2$, $\mathbb{E}[Y_i]=0$, $\sigma^2(Y_i) \leq \delta$ (which the paper claims but we are only able to show this is true for $2\delta$), and $||v_j||_2 \leq \sqrt{n}$, the fact that $||v_j||_{\infty} \leq 1$ allows us to use a Chernoff bound and get $$\begin{aligned} \Pr[|\langle Y, v_j \rangle| > 2 \sqrt{2\log m} \sqrt{n\delta}] &\leq 2\exp(-2\log m) \\ &\leq 2/m^2 \\ &\leq 1/2m\end{aligned}$$ for $m > 2$. 
Note that $\delta = 1/(8 \log m)$, so $2 \sqrt{2\log m} \sqrt{n\delta} = \sqrt{n}$, which means that across all $j$ we have $\Pr[|\langle Y, v_j \rangle| > \sqrt{n}] < 1/2$. Therefore, with probability at least $1/2$, ${\mathsf{disc}}(\chi) \leq C\sqrt{n\log(m/n)} + \sqrt{n} < K\sqrt{n\log(m/n)}$, as desired. References {#references .unnumbered} ========== Appendix ======== A proof of the Beck-Fiala Theorem {#Beck-Fiala} --------------------------------- We present a proof of the Beck-Fiala theorem using only arguments from linear algebra. [@Chazelle] We start by initializing all $\chi(i) = 0, \forall i \in [n]$, and we call all of these variables *undecided*. We also call a set *stable* if it has less than or equal to $t$ undecided elements. We also note that due to the constraint, there must be less $n$ sets that contain strictly more than $t$ elements to start off with (all of which are undecided upon initialization). If we impose the constraints that all of the elements in each unstable set must be zero, we get a system of less than $n$ equations, and $n$ variables. This tells us that there is at least one nontrivial solution to the system of equations, that changes only undecided variables and maintains that the discrepancy of all unstable sets remains zero. We can normalize this solution until at least one of the undecided variables is $\pm 1$. Then, this variable is decided, and we have a partial coloring. We now have at most $n-1$ undecided variables, and each undecided variable is in $(-1, 1)$. By the same argument from above, we have that the number of unstable sets is strictly less than the number of undecided variables, so we can repeat the procedure to find another nontrivial solution to our system of equations. We continue in the manner until all the sets are stable. Then we note that until a set is declared stable, its discrepancy is 0. Then, when it is declared stable, it has at most $t$ undecided variables, all of which are in $(-1,1)$. Then the process of deciding those variables changes the discrepancy of the set by strictly less than $2t$. And since the final discrepancy must be integral, we get the result. A Full Proof of Lemma \[coloring-analysis\] {#full-proof} ------------------------------------------- We have already argued about the runtime of the algorithm. Here, we must show that the solution is unlikely to leave the polytope, that few discrepancy constraints are met, and that many variable constraints are met. \[Polytope\] For $\gamma \leq \delta / \sqrt{c\log (mn/\gamma)}$ and $c$ a sufficiently large constant, with probability at least $1-1/(mn)^{c-2}$ we have that $X_0,...,X_T \in \mathcal{P}$ To prove the above claim, we will need to use a Gaussian tail bound: \[tailbound\] For any $\lambda >0$, $P(|G| \geq \lambda ) \leq 2\exp(-\lambda^2/2)$, where $G \sim \mathcal{N}(0,1)$ We have that $$P(|G|>\lambda) = 2P(G > \lambda) = 2 \int_{\lambda}^{\infty} \frac{1}{\sqrt{2\pi}}\exp(-t^2/2)dt \leq 2 \int_{\lambda}^{\infty} \frac{t}{\lambda}\frac{1}{\sqrt{2\pi}}\exp(-t^2/2)dt = \frac{2\exp(-\lambda^2/2)}{\sqrt{2\pi}\lambda}$$ From here, it is easy to see that for $\lambda \geq 1/\sqrt{2\pi}$ we have that $\frac{2\exp(-\lambda^2/2)}{\sqrt{2\pi}\lambda} \leq 2\exp(-\lambda^2/2) $ as desired. For the case when $\lambda \leq 1/\sqrt{2\pi}$, it is easy to see that $2\exp(-\lambda^2/2) > 1$ so the bound is trivial. Clearly $X_0 = x_0 \in \mathcal{P}$. 
We further let $E_t := \{X_t \not\in \mathcal{P} \,|\, X_0,...,X_{t-1} \in \mathcal{P}\}$ denote the event that $X_t$ is the first element of the sequence not in $\mathcal{P}$. We then have $$Pr(X_0,...,X_T \in \mathcal{P}) = 1-\sum_{t=1}^T Pr(E_t).$$ The next step of the proof is to calculate $Pr(E_t)$. In order for $E_t$ to happen, it must be the case that either a variable constraint or a discrepancy constraint was violated. Let us first look at the variable constraint case: say $(X_t)_i>1$. Since $X_{t-1} \in \mathcal{P}$, we must have that $(X_{t-1})_i \leq 1$. Yet, if $(X_{t-1})_i \geq 1-\delta,$ then $i \in C_t^{var}$, so $(X_{t-1})_i = (X_{t})_i$. Thus, for the constraint to be violated, we must have had that $(X_{t-1})_i < 1-\delta$. Then, in order for $(X_t)_i$ to be greater than 1, and since $X_t = X_{t-1} + \gamma U_t$, we must have that $|(U_t)_i| \geq \delta/\gamma$. Now, let us look at what must happen in order for $X_t$ to violate a discrepancy constraint. First we define $W := \{e_1, ..., e_n, v_1,...,v_m\}$. By our construction of $W$, we conclude that if $E_t$ holds then we must have that $|\langle X_t - X_{t-1} ,w\rangle| \geq \delta$ for some $w \in W$. This is equivalent to saying that $|\langle U_t, w \rangle| \geq \delta/\gamma$ for that same $w$. We note here that, once again by construction of $W$, if $|(U_t)_i| \geq \delta/\gamma$ holds, then $|\langle U_t, w \rangle| \geq \delta/\gamma$ holds for some $w$; in particular it holds if we pick $w=e_i$. However, the reverse does not hold. Therefore, since the violation of any constraint, variable or discrepancy, is contained in the event that $|\langle U_t, w \rangle| \geq \delta/\gamma$ for some $w \in W$, it suffices to bound $Pr[|\langle U_t, w \rangle| \geq \delta/\gamma]$ for a fixed $w$ and take a union bound over $W$. In order to bound this, we need the following claim:

\[subspace\] Let $V \subseteq \mathbb{R}^n$ be a subspace and let $G \sim \mathcal{N}(V)$. Then for all $u \in \mathbb{R}^n$, $\langle G, u \rangle \sim \mathcal{N}(0, \sigma^2)$, where $\sigma^2 \leq \|u\|_2^2$.

We have that $G = G_1v_1 + ... + G_dv_d$, where $\{v_1,...,v_d\}$ is an arbitrary orthonormal basis for $V$ and $G_1, ..., G_d$ are independent standard normals. Then $\langle G, u \rangle = \sum_{i=1}^d \langle v_i, u \rangle G_i$. This is a Gaussian RV with mean zero and variance $\sum_{i=1}^d \langle v_i, u \rangle^2$. This variance is simply $\|{\mathsf{Proj}}_V u\|_2^2$, the norm squared of the projection of $u$ onto $V$. Therefore, we have that $\sum_{i=1}^d \langle v_i, u \rangle^2 \leq \|u\|_2^2$, and we are done.

Now we can use the above claim, and we have that $\langle U_t, w\rangle$ is Gaussian with mean 0 and variance at most 1. Then, by Claim \[tailbound\], we have that: $$Pr[|\langle U_t, w \rangle| \geq \delta/\gamma] \leq 2\exp(-(\delta/\gamma)^2/2).$$ Now, by our choices of variables, we have that $\delta/\gamma = \sqrt{C\log (nm/\gamma)}$ and $T = O(1/\gamma^2)$. Therefore, we have $$Pr[X_t \not\in \mathcal{P} \text{ for some } t\leq T] = \sum_{t=1}^T Pr[E_t]$$ which, by a union bound, $$\leq \sum_{t=1}^T \sum_{w \in W} Pr[|\langle U_t, w \rangle| \geq \delta/\gamma] \leq T (n+m)\cdot 2\exp\left(-\left(\sqrt{C\log (nm/\gamma)}\right)^2/2\right) = T(n+m)\cdot 2\left (\frac{\gamma}{nm}\right )^C$$ $$\leq T(nm)\frac{\gamma^2}{(nm)^C} \leq \frac{1}{(nm)^{C-2}},$$ where the last inequalities hold for large enough $C$, since $\gamma <1$ and $nm >1$. We are now well on our way to proving Lemma \[coloring-analysis\]. The intuition behind the remaining steps is as follows.
We will use the constraint on our discrepancy thresholds $c_j, j \in [m]$ to argue first that $\mathbb{E}[|C_T^{{\mathsf{disc}}}|] \ll n$. This will be useful because it means that $\dim (\mathcal{V}_{t-1})$ will be larger, which in turn means that $\mathbb{E}[\|X_t\|^2]$ should increase more appreciably compared to the previous timestep. At any given timestep, either $|C_t^{var}|$ is large and we are done, or $|C_t^{var}|$ is small and once again $\dim (\mathcal{V}_{t-1})$ is large and we will be taking bigger steps. We note also that in order to prove the lemma, we really only need to show that $\mathbb{E}[|C_T^{var}|] = \Omega(n)$, since if we achieve this, then we can use it along with the fact that $|C_T^{var}|$ is upper bounded by $n$ to show that $Pr[|C_T^{var}| < n/2] < 0.9$.

We first show that $\mathbb{E}[|C_T^{{\mathsf{disc}}}|]$ is small; that is, on average very few discrepancy constraints are ever nearly hit.

$\mathbb{E}[|C_T^{{\mathsf{disc}}}|] < n/4$

We let $J := \{j: c_j \leq 10\delta \}$. In order to bound the size of $J$, we have from our constraints that $$n/16 \geq \sum_{j \in J} \exp (-c_j^2/16) \geq |J| \cdot \exp(-100\delta^2/16) \geq |J| \cdot \exp(-1/16) > 9|J|/10,$$ since $\delta <0.1$. So we then have that $|J| \leq 1.2n/16 < 2n/16$. Now we consider the $j \not \in J$. If $j \in C_T^{{\mathsf{disc}}}$, then $|\langle X_T-x_0, v_j\rangle| \geq c_j - \delta \geq 0.9c_j$. We want to bound the probability that this occurs. Via our update formula, we have that $X_T = x_0 + \gamma(U_1+...+U_T)$. We then define $Y_i = \langle U_i, v_j \rangle$. We then have that for $j \not\in J$ $$Pr[j \in C_T^{{\mathsf{disc}}}] = Pr[|Y_1 + ... + Y_T| \geq 0.9c_j/\gamma].$$ We will also need the following Lemma:

Let $X_1,...,X_T$ be random variables, and let $Y_1,...,Y_T$ be RVs where each $Y_i$ is a function of $X_i$. Suppose that for all $1 \leq i \leq T$, $Y_i|X_1, ..., X_{i-1}$ is Gaussian with mean zero and variance at most one. Then for any $\lambda >0$: $$Pr[|Y_1 + ...+ Y_T| \geq \lambda \sqrt{T}] \leq 2 \exp(-\lambda^2/2).$$

The proof of the above lemma is a generalization of the proof of Claim \[tailbound\], and is omitted. We note that $Y_i = \langle U_i, v_j \rangle \sim \mathcal{N}(0, \sigma^2)$, where $\sigma^2 \leq \|v_j\|^2 = 1$ by our assumption at the beginning of the problem that we had normalized all the $v_i$. We can apply the above lemma to our $Y_i$’s, since $Y_i$ is a function of $U_i$ and $Y_i|U_1,...,U_{i-1}$ is Gaussian with mean zero and variance at most one, to get that $$Pr[j \in C_T^{{\mathsf{disc}}}] \leq 2\exp(-(0.9c_j)^2/(2\gamma^2T))$$ which, since $T = K_1/\gamma^2$, $$= 2\exp(-(0.9c_j)^2/(2K_1)) < 2\exp(-c_j^2/16).$$ We therefore have that $$\mathbb{E}[|C_T^{{\mathsf{disc}}}|] \leq |J| + \sum_{j \not\in J} Pr[j \in C_T^{{\mathsf{disc}}}] < 2n/16+2n/16 = n/4,$$ where above we have used conditional expectations, assumed in the worst case that every element of $J$ is in $C_T^{{\mathsf{disc}}}$, and used the constraint $\sum_{j=1}^m\exp(-c_j^2/16) \leq n/16$.

\[bound-x-t\] $\mathbb{E}[\|X_T\|_2^2] \leq n$

We start by noting that it suffices to show that $\mathbb{E}[(X_T)_i^2] \leq 1$ for all $i \in [n]$, since $\mathbb{E}[\|X_T\|_2^2] = \sum_i \mathbb{E}[(X_T)_i^2]$. We have that $$\mathbb{E}[(X_T)_i^2] = \mathbb{E}[(X_T)_i^2|i \not \in C_T^{var}]Pr[i \not\in C_T^{var}] + \sum _{t=1}^T \mathbb{E}[(X_T)_i^2|i \in C_t^{var} \setminus C_{t-1}^{var}]Pr[i \in C_t^{var} \setminus C_{t-1}^{var}]$$ Now, we clearly have that $\mathbb{E}[(X_T)_i^2|i \not \in C_T^{var}] \leq 1$.
For the rest of the terms, we have: $$\mathbb{E}[(X_T)_i^2|i \in C_t^{var} \setminus C_{t-1}^{var}] = \mathbb{E}[(X_t)_i^2|i \in C_t^{var} \setminus C_{t-1}^{var}]$$ $$= \mathbb{E}[((X_{t-1})_i+\gamma(U_t)_i)^2|i \in C_t^{var} \setminus C_{t-1}^{var}] \leq \mathbb{E}[(1-\delta+\gamma(U_t)_i)^2|i \in C_t^{var} \setminus C_{t-1}^{var}]$$ $$= (1-\delta)^2 + 2(1-\delta)\gamma\mathbb{E}[(U_t)_i] + \gamma^2 \mathbb{E}[(U_t)_i^2]$$ Here, we note that $\mathbb{E}[(U_t)_i^2] = 1$ and $\mathbb{E}[(U_t)_i] = 0$, so we have $$= (1-\delta)^2 + \gamma^2 \leq 1-\delta + \gamma < 1$$ by our construction of $\gamma$. $\mathbb{E}[|C^{var}_T|] \geq 0.56n$. We will use the high average norm of $X_t$ and low number of discrepancy constraints broken to demonstrate that the number of variable constraints broken is high with high probability. Note that $$\begin{aligned} \mathbb{E}[||X_t||^2_2] &= \mathbb{E}[||X_{t-1} + \gamma U_t||_2^2] \\ &= \mathbb{E}[||X_{t-1}||^2_2] + 2\mathbb{E}[||X_{t-1} \cdot \gamma U_t||_2] + \mathbb{E}[||\gamma U_t||_2^2] \\ &= \mathbb{E}[||X_{t-1}||^2_2] + \mathbb{E}[||\gamma U_t||_2^2] \\ &= \mathbb{E}[||X_{t-1}||^2_2] + \gamma^2\mathbb{E}[\text{dim}(\mathcal{V}_t)].\end{aligned}$$ We use the fact that $\mathbb{E}[U_t]=0$, and we use Claim \[subspace\] as well. Then, by Claim \[bound-x-t\], we have $$\begin{aligned} n &\geq \mathbb{E}[||X_T||_2^2] \\ n &\geq \gamma^2 \sum_{t=1}^T \mathbb{E}[\text{dim}(\mathcal{V}_t)] \\ n &\geq \gamma^2 |T|\mathbb{E}[\text{dim}(\mathcal{V}_T)] \\ n &\geq K_1 \mathbb{E}[n - |C_T^{var}| - |C_T^{{\mathsf{disc}}}|] \\ n/K_1 &\geq \mathbb{E}[n] - \mathbb{E}[|C_T^{var}|] - \mathbb{E}[|C_T^{{\mathsf{disc}}}|] \\ \mathbb{E}[|C_T^{var}|] &\geq n (1 - 1/K_1) - n/4 \\ \mathbb{E}[|C_T^{var}|] &\geq n (1 - 3/16 - 1/4) \\ \mathbb{E}[|C_T^{var}|] &\geq 0.5625n.\end{aligned}$$ Now we can fully prove Lemma \[coloring-analysis\]. From Claim 14 and the fact that $|C_T^{var}| \leq n$, in the worst case we have that with probability $0.88$, $|C_T^{var}| = n/2$, and with probability $0.12$, $|C_T^{var}| = n$. This maximizes the number of instances where $|C_T^{var}| \leq n/2$, while still maintaining the fact that $\mathbb{E}[|C_T^{var}|] \geq 0.56$. Therefore we have that $Pr[|C_T^{var}| > n/2] \geq 0.12$. Combining this with claim 8 tells us that with probability at least $0.12 - 1/poly(m,n)$ we achieve the partial coloring, and we are done.
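To close this review, here is a simplified numerical sketch of the partial-coloring walk of Theorem \[Coloring\]; the step size, thresholds, the clipping to $[-1,1]^n$, and the projection-based sampling of $\mathcal{N}(\mathcal{V}_t)$ are our own illustrative choices and not the implementation of [@Lovett].

```python
# Simplified sketch of the Gaussian partial-coloring walk (illustrative parameters).
import numpy as np

def partial_coloring(V, x0, c, gamma=0.02, delta=0.05, T=None):
    """One run of the walk.  V: m x n matrix of normalized set vectors, c: thresholds."""
    m, n = V.shape
    x = x0.astype(float).copy()
    T = T or int((16 / 3) / gamma**2)                 # T = K_1 / gamma^2 with K_1 = 16/3
    for _ in range(T):
        frozen_vars = [i for i in range(n) if abs(x[i]) >= 1 - delta]
        hit_discs = [j for j in range(m) if abs(V[j] @ (x - x0)) >= c[j] - delta]
        # Directions the walk must stay orthogonal to (nearly-hit constraints).
        A = np.vstack([np.eye(n)[frozen_vars], V[hit_discs]]) if frozen_vars or hit_discs \
            else np.zeros((0, n))
        g = np.random.randn(n)
        if A.shape[0] > 0:
            Q, _ = np.linalg.qr(A.T)                  # orthonormal basis of constraint span
            g = g - Q @ (Q.T @ g)                     # project onto the complement V_t
        x = np.clip(x + gamma * g, -1.0, 1.0)         # clipping replaces the delta-band argument
    return x

# Tiny usage example with a hypothetical set system.
rng = np.random.default_rng(0)
V = rng.integers(0, 2, size=(16, 64)).astype(float)
V /= np.maximum(np.linalg.norm(V, axis=1, keepdims=True), 1)   # normalize ||v_j||_2 = 1
x = partial_coloring(V, np.zeros(64), c=np.full(16, 5.0))
print("fraction of nearly-colored variables:", np.mean(np.abs(x) >= 0.95))
```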
--- abstract: 'We prove the unique existence and exponential decay of global-in-time classical solutions to the special relativistic Boltzmann equation without any angular cut-off assumptions, with initial perturbations in some weighted Sobolev spaces. We consider perturbations of the relativistic Maxwellian equilibrium states. We work in the case of a spatially periodic box. We consider the general conditions on the collision kernel from Dudyński and Ekiel-Jeźewska (Commun Math Phys **115**(4):607–629, 1985). Additionally, we prove sharp constructive upper and coercive lower bounds for the linearized relativistic Boltzmann collision operator in terms of a geometric fractional Sobolev norm; this shows that a spectral gap exists and that this behavior is similar to that of the non-relativistic case as shown by Gressman and Strain (Journal of AMS **24**(3), 771–847, 2011). Lastly, we derive the relativistic analogue of the Carleman dual representation of the Boltzmann collision operator. This is the first global existence and stability result for the relativistic Boltzmann equation without angular cutoff, and it resolves the open question of perturbative global existence for the relativistic kinetic theory without Grad’s angular cut-off assumption.' author: - Jin Woo Jang date: 'Completed: August 11, 2015, Revised: ' title: 'Global classical solutions to the relativistic Boltzmann equation without angular cut-off' ---

Introduction
============

In 1872, Boltzmann [@Boltzmann] derived an equation which mathematically models the dynamics of a gas represented as a collection of molecules. This was a model for the collisions between non-relativistic particles. For the collisions between relativistic particles whose speed is comparable to the speed of light, Lichnerowicz and Marrot [@LM] derived the relativistic Boltzmann equation in 1940. This is a fundamental model for fast moving particles. Understanding the nature of relativistic particles is crucial in describing many astrophysical and cosmological processes [@Kremer]. Although the classical non-relativistic Boltzmann kinetic theory has been widely and heavily studied, the relativistic kinetic theory has received relatively less attention because of its complicated structure and the computational difficulty in dealing with relativistic post-collisional momentums. The relativistic Boltzmann equation is written as $$\label{RBE} p^\mu\partial_\mu f={p^0}\partial_tf+cp\cdot\nabla_xf= C(f,f),$$ where the collision operator $C(f,f)$ can be written as $$\label{Colop} C(f,h)=\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{\mathbb{R}^3}\frac{dq'\hspace{1mm}}{{q'^0}}\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}} W(p,q|p',q')[f(p')h(q')-f(p)h(q)].$$ Here, the transition rate $W(p,q|p',q')$ is $$W(p,q|p',q')=\frac{c}{2}s\sigma(g,\theta)\delta^{(4)}(p^\mu +q^\mu -p'^\mu -q'^\mu),$$ where $\sigma(g,\theta)$ is the scattering kernel measuring the interactions between particles and the Dirac $\delta$ function expresses the conservation of energy and momentum.

Notation
--------

The relativistic momentum of a particle is denoted by a 4-vector representation $p^\mu$ where $\mu=0,1,2,3$. Without loss of generality we normalize the mass of each particle to $m=1$. We raise and lower the indices with the Minkowski metric $p_\mu=g_{\mu\nu}p^\nu$, where the metric is defined as $g_{\mu\nu}=\text{diag}(-1, 1, 1, 1).$ The signature of the metric throughout this paper is $(-+++)$.
With $p\in {{\mathbb{R}^3}}$, we write $p^\mu=({p^0},p)$ where ${p^0}$ which is the energy of a relativistic particle with momentum $p$ is defined as ${p^0}=\sqrt{c^2+|p|^2}$. The product between the 4-vectors with raised and lowered indices is the Lorentz inner product which is given by $$p^\mu q_\mu=-{p^0}{q^0}+\sum^3_{i=1} p_iq_i.$$ Note that the momentum for each particle satisfies the mass shell condition $p^\mu p_\mu=-c^2$ with ${p^0}>0$. Also, the product $p^\mu q_\mu$ is Lorentz invariant. By expanding the relativistic Boltzmann equation and dividing both sides by ${p^0}$ we write the relativistic Boltzmann equation as $$\partial_t F+\hat{p}\cdot \nabla_x F= Q(F,F)$$ where $Q(F,F)=C(F,F)/{p^0}$ and the normalized velocity of a particle $\hat{p}$ is given by $$\hat{p}=c\frac{p}{{p^0}}=\frac{p}{\sqrt{1+|p|^2/c^2}}.$$ We also define the quantities $s$ and $g$ which respectively stand for the square of the energy and the relative momentum in the *center-of-momentum* system, $p+q=0$, as $$s=s(p^\mu,q^\mu)=-(p^\mu+q^\mu)(p_\mu+q_\mu)=2(-p^\mu q_\mu+1)\geq 0,$$ and $$g=g(p^\mu,q^\mu)=\sqrt{(p^\mu-q^\mu)(p_\mu-q_\mu)}=\sqrt{2(-p^\mu q_\mu-1)}.$$ Note that $s=g^2+4c^2$. Conservation of energy and momentum for elastic collisions is described as $$\label{conservation} p^\mu+q^\mu=p'^\mu+q'^\mu.$$ The scattering angle $\theta$ is defined by $$\cos\theta=\frac{(p^\mu-q^\mu)(p'_\mu-q'_\mu)}{g^2}.$$ Together with the conservation of energy and momentum as above, it can be shown that the angle and $\cos\theta$ are well-defined [@Glassey]. Here we would like to introduce the relativistic Maxwellian which models the steady state solutions or equilibrium solutions also known as Jüttner solutions. These are characterized as a particle distribution which maximizes the entropy subject to constant mass, momentum, and energy. They are given by $$J(p)=\frac{e^{-\frac{c{p^0}}{k_BT}}}{4\pi ck_BTK_2(\frac{c^2}{k_BT})},$$ where $k_B$ is Boltzmann constant, $T$ is the temperature, and $K_2$ stands for the Bessel function $K_2(z)=\frac{z^2}{2}\int_1^\infty dt e^{-zt}(t^2-1)^\frac{3}{2}.$ Throughout this paper, we normalize all physical constants to 1, including the speed of light $c=1$. Then we obtain that the relativistic Maxwellian is given by $$J(p)=\frac{e^{-{p^0}}}{4\pi}.$$ We now consider the *center-of-momentum* expression for the relativistic collision operator as below. Note that this expression has appeared in the physics literature; see [@deGroot]. For other representations of the operator such as Glassey-Strauss coordinate expression, see [@Andreasson], [@GS1], and [@GS2]. Also, see [@Strain1] for the relationship between those two representations of the collision operator. 
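As a quick consistency check (our own remark, not part of the original exposition) of the identity $s=g^2+4c^2$ stated above, the covariant definitions of $s$ and $g$ together with the mass shell condition $p^\mu p_\mu=q^\mu q_\mu=-c^2$ give
$$s-g^2=-(p^\mu+q^\mu)(p_\mu+q_\mu)-(p^\mu-q^\mu)(p_\mu-q_\mu)=-2\,p^\mu p_\mu-2\,q^\mu q_\mu=2c^2+2c^2=4c^2.$$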
As in [@StrainPHD] and [@deGroot], one can reduce the collision operator (\[Colop\]) using Lorentz transformations and get $$\label{omegaint} Q(f,h)=\int_{{\mathbb{R}^3}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm} \hspace{1mm} v_\phi\sigma(g,\theta)[f(p')h(q')-f(p)h(q)],$$ where $v_\phi=v_\phi(p,q)$ is the M$\phi$ller velocity given by $$v_\phi(p,q)=\sqrt{\Big|\frac{p}{{p^0}}-\frac{q}{{q^0}}\Big|^2-\Big|\frac{p}{{p^0}}\times\frac{q}{{q^0}}\Big|^2}=\frac{g\sqrt{s}}{{p^0}{q^0}}.$$ Comparing with the reduced version of collision operator in [@Andreasson], [@GS1], and [@GS2], we can notice that one of the advantages of this *center-of-momentum* expression of the collision operator is that the reduced integral (\[omegaint\]) is written in relatively simple terms which only contains the M$\phi$ller velocity, scattering kernel, and the cancellation between gain and loss terms. The post-collisional momentums in the *center-of-momentum* expression are written as $$\label{p'} p'=\frac{p+q}{2}+\frac{g}{2}\Big(\omega+(\gamma-1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^2}\Big),$$ and $$\label{q'} q'=\frac{p+q}{2}-\frac{g}{2}\Big(\omega+(\gamma-1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^2}\Big).$$ The energy of the post-collisional momentums are then written as $${p'^0}=\frac{{p^0}+{q^0}}{2}+\frac{g}{2\sqrt{s}}(p+q)\cdot\omega,$$ and $${q'^0}=\frac{{p^0}+{q^0}}{2}-\frac{g}{2\sqrt{s}}(p+q)\cdot\omega.$$ These can be derived by using the conservation of energy and momentum (\[conservation\]); see [@Strain2]. As in (\[kw\]) in the Appendix, we can show that the angle can be written as $\cos\theta=k\cdot \omega$ with $k=k(p,q)$ and $|k|=1$. For $f,g$ smooth and small at infinity, it turns out [@Glassey] that the collision operator satisfies $$\int Q(f,g)dp=\int pQ(f,g)dp=\int {p^0}Q(f,g)dp=0$$ and $$\label{entropy} \int Q(f,f)(1+\log f)dp\hspace{1mm}\leq 0.$$ Using (\[entropy\]), we can prove the famous Boltzmann H-theorem that the entropy of the system $ -\int f\log f dp\hspace{1mm}dx $ is a non-decreasing function of $t$. The expression $-f\log f$ is called the *entropy density*. A brief history of previous results in relativistic kinetic theory ------------------------------------------------------------------ The full relativistic Boltzmann equation appeared first in the paper by Lichnerowicz and Marrot [@LM] in 1940. In 1967, Bichteler [@Bich] showed the local existence of the solutions to the relativistic Boltzmann equation. In 1989, Dudynski and Ekiel-Jezewska [@Dudynski5] showed that there exist unique $L^2$ solutions to the linearized equation. Afterwards, Dudynski [@Dudynski1] studied the long time and small-mean-free-path limits of these solutions. Regarding large data global in time weak solutions, Dudynski and Ekiel-Jezewska [@Dudynski6] in 1992 extended DiPerma-Lions renormalized solutions [@DiPerna] to the relativistic Boltzmann equation using their causality results from 1985 [@Dudynski3]. Here we would like to mention the work by Alexandre and Villani [@AlexandreV] on renormalized weak solutions with non-negative defect measure to non-cutoff non-relativistic Boltzmann equation. In 1996, Andreasson [@Andreasson] studied the regularity of the gain term and the strong $L^1$ convergence of the solutions to the Jüttner equilibrium which were generalizations of Lions’ results [@Lions1; @Lions2] in the non-relativistic case. He showed that the gain term is regularizing. In 1997, Wennberg [@Wennberg] showed the regularity of the gain term in both non-relativistic and relativistic cases. 
Regarding the Newtonian limit for the Boltzmann equation, we have a local result by Cercignani [@Cercignani] and a global result by Strain [@Strain1]. Also, Andreasson, Calogero and Illner [@And2] proved that there is a blow-up if only with gain-term in 2004. Then, in 2009, Ha, Lee, Yang, and Yun [@Ha] provided uniform $L^2$-stability estimates for the relativistic Boltzmann equation. In 2011, Speck and Strain [@Speck] connected the relativistic Boltzmann equation to the relativistic Euler equation via the Hilbert expansions. Regarding problems with the initial data nearby the relativistic Maxwellian, Glassey and Strauss [@GS2] first proved there exist unique global smooth solutions to the equation on the torus $\mathbb{T}^3$ for the hard potentials in 1993. Also, in the same paper they have shown that the convergence rate to the relativistic Maxwellian is exponential. Note that their assumptions on the differential cross-section covered the case of hard potentials. In 1995 [@GS5], they extended their results to the whole space and have shown that the convergence rate to the equilibrium solution is polynomial. Under reduced restrictions on the cross-sections, Hsiao and Yu [@Hsiao] gave results on the asymptotic stability of Boltzmann equation using energy methods in 2006. Recently, in 2010, Strain [@Strain3] showed that unique global-in-time solutions to the relativistic Boltzmann equation exist for the soft potentials which contains more singular kernel and decay with any polynomial rate towards their steady state relativistic Maxwellian under the conditions that the initial data starts out sufficiently close in $L^\infty$. In addition, we would like to mention that Glassey and Strauss [@GS1] in 1991 computed the Jacobian determinant of the relativistic collision map. Also, we notice that there are results by Guo and Strain [@Guo2; @Guo] on global existence of unique smooth solutions which are initially close to the relativistic Maxwellian for the relativistic Landau-Maxwell system in 2004 and for the relativistic Landau equation in 2006. In 2009, Yu [@Yu] proved the smoothing effects for relativistic Landau-Maxwell system. In 2010, Yang and Yu [@Yang] proved time decay rates in the whole space for the relativistic Boltzmann equation with hard potentials and for the relativistic Landau equation. Statement of the Main Results and Remarks ========================================= Linearization and reformulation of the Boltzmann equation --------------------------------------------------------- We will consider the linearization of the collision operator and perturbation around the relativistic Jüttner equilibrium state $$\label{pert} F(t,x,p)=J(p)+\sqrt{J(p)}f(t,x,p).$$ Without loss of generality, we suppose that the mass, momentum, energy conservation laws for the perturbation $f(t,x,p)$ holds for all $t\geq 0$ as $$\label{zero} \int_{{\mathbb{R}^3}}dp\hspace{1mm}\int_{\mathbb{T}^3} dx\hspace{1mm} \left(\begin{array}{c} 1\\ p\\ {p^0}\end{array}\right) \sqrt{J(p)}f(t,x,p)=0.$$ We linearize the relativistic Boltzmann equation around the relativistic Maxwellian equilibrium state (\[pert\]). 
By expanding the equation, we obtain that $$\label{Linearized B} \partial_t f+\hat{p}\cdot\nabla_x f+L(f)=\Gamma(f,f), \hspace{10mm} f(0,x,v)=f_0(x,v),$$ where the linearized relativistic Boltzmann operator $L$ is given by $$\begin{split} L(f)\eqdef&-J^{-1/2}Q(J,\sqrt{J}f)-J^{-1/2}Q(\sqrt{J}f,J)\\ =&\int_{{{\mathbb{R}^3}}}dq \int_{\mathbb{S}^2} d\omega\hspace{1mm}v_\phi \sigma(g,\omega)\Big(f(q)\sqrt{J(p)}\\ &+f(p)\sqrt{J(q)}-f(q')\sqrt{J(p')}-f(p')\sqrt{J(q')}\Big)\sqrt{J(q)}, \end{split}$$ and the bilinear operator $\Gamma$ is given by $$\label{Gamma1} \begin{split} \Gamma(f,h)&{\overset{\mbox{\tiny{def}}}{=}}J^{-1/2}Q(\sqrt{J}f,\sqrt{J}h)\\ &=\int_{{{\mathbb{R}^3}}}dq \int_{\mathbb{S}^2} d\omega\hspace{1mm}v_\phi \sigma(g,\theta)\sqrt{J(q)}(f(q')h(p')-f(q)h(p)). \end{split}$$ Then notice that we have $$L(f)=-\Gamma(f,\sqrt{J})-\Gamma(\sqrt{J},f).$$ We further decompose $L=N+K$. We would call $N$ as norm part and $K$ as compact part. First, we define the weight function $\tilde{\zeta}=\zeta+\zeta_K$ such that $$\Gamma(\sqrt{J},f)=\left(\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))\sqrt{J(q')}\sqrt{J(q)}\right)-\tilde{\zeta}(p)f(p),$$ where $$\begin{split} \tilde{\zeta}(p)&=\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(\sqrt{J(q)}-\sqrt{J(q')})\sqrt{J(q)}\\ &=\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(\sqrt{J(q)}-\sqrt{J(q')})^2\\ &\hspace{15mm}+\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(\sqrt{J(q)}-\sqrt{J(q')})\sqrt{J(q')}\\ &=\zeta(p)+\zeta_K(p). \end{split}$$ Then the first piece in $\Gamma$ above contains a crucial Hilbert space structure and this is a similar phenomenon to the non-relativistic case as mentioned in Gressman and Strain [@GS]. To see this, we take a pre-post collisional change of variables $(p,q) \rightarrow (p',q')$ as $$\label{Hilbert} \begin{split} -&\int_{{\mathbb{R}^3}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))h(p)\sqrt{J(q')}\sqrt{J(q)}\\ =&-\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))h(p)\sqrt{J(q')}\sqrt{J(q)}\\ &-\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(f(p)-f(p'))h(p')\sqrt{J(q)}\sqrt{J(q')}\\ =&\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))(h(p')-h(p))\sqrt{J(q')}\sqrt{J(q)}. \end{split}$$ Then, we define the compact part $K$ of the lineaerized Boltzmann operator $L$ as $$\begin{split} Kf&=\zeta_K(p)f-\Gamma(f,\sqrt{J})\\ &=\zeta_K(p)f-\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \sigma(g,\theta)\sqrt{J(q)}(f(q')\sqrt{J(p')}-f(q)\sqrt{J(p)}). \end{split}$$ Then, the rest of $L$ which we call as the norm part $N$ is defined as $$\begin{split} Nf&=-\Gamma(\sqrt{J},f)-\zeta_K(p)f\\ &=-\int_{{{\mathbb{R}^3}}}dq \int_{\mathbb{S}^2} d\omega\hspace{1mm}v_\phi \sigma(g,\omega)(f(p')-f(p))\sqrt{J(q')}\sqrt{J(q)}+\zeta(p)f(p). \end{split}$$ Then, as in (\[Hilbert\]), this norm piece satisfies that $$\begin{split} \langle Nf,f\rangle =\frac{1}{2}&\int_{{{\mathbb{R}^3}}}dp \int_{{{\mathbb{R}^3}}}dq \int_{\mathbb{S}^2} d\omega\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))^2\sqrt{J(q')}\sqrt{J(q)}\\ &+\int_{{{\mathbb{R}^3}}} dp \hspace{1mm}\zeta(p)|f(p)|^2. 
\end{split}$$ Thus, we define a fractional semi-norm as $$|f|^2_B{\overset{\mbox{\tiny{def}}}{=}}\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi \hspace{1mm}\sigma(g,\theta)(f(p')-f(p))^2\sqrt{J(q)J(q')}.$$ This norm will appear in the process of linearization of the collision operator. For the second part of the norm piece, we recall $|f|_{L^2_{\frac{a+\gamma}{2}}}$ by Pao’s estimates in [@Pao] that $$\zeta(p)\approx {(p^0)}^{\frac{a+\gamma}{2}}\hspace{5mm}\text{and}\hspace{5mm} |\zeta_K(p)|\lesssim {(p^0)}^{\frac{a}{2}}.$$ This completes our main splitting of the linearized relativistic Boltzmann collision operator. We can also think of the spatial derivative of $\Gamma $ which will be useful later. Recall that the linearization of the collision operator is given by (\[Gamma1\]) and that the post-collisional variables $p'$ and $q'$ satisfies (\[p’\]) and (\[q’\]). Then, we can define the spatial derivatives of the bilinear collision operator $\Gamma$ as $$\partial^\alpha\Gamma(f,h)=\sum_{\alpha_1\leq\alpha} C_{\alpha,\alpha_1}\Gamma(\partial^{\alpha-\alpha_1}f,\partial^{\alpha_1}h),$$ where $C_{\alpha,\alpha_1}$ is a non-negative constant. Main Hypothesis on the collision kernel $\sigma$ ------------------------------------------------ The Boltzmann collision kernel $\sigma(g,\theta)$ is a non-negative function which only depends on the relative velocity $g$ and the scattering angle $\theta$. Without loss of generality, we may assume that the collision kernel $\sigma$ is supported only when $\cos\theta\geq 0$ throught this paper; i.e., $0\leq \theta \leq \frac{\pi}{2}$. Otherwise, the following *symmetrization* [@Glassey] will reduce the case: $$\bar{\sigma}(g,\theta)=[\sigma(g,\theta)+\sigma(g,-\theta)]1_{\cos\theta\geq 0},$$ where $1_A$ is the indicator function of the set $A$. Throughout this paper we assume the collision kernel satisfies the following growth/decay estimates: $$\label{hard} \begin{split} &\sigma(g,\theta) \lesssim (g^a+g^{-b})\sigma_0(\theta)\\ &\sigma(g,\theta) \gtrsim (\frac{g}{\sqrt{s}})g^{a}\sigma_0(\theta) \end{split}$$ Additionally, the angular function $\theta \mapsto \sigma_0(\theta)$ is not locally integrable; for $c>0$, it satisfies $$\frac{c}{\theta^{1+\gamma}} \leq \sin\theta\cdot\sigma_0(\theta) \leq \frac{1}{c\theta^{1+\gamma}}, \hspace*{5mm}\gamma \in (0,2), \hspace*{5mm}\forall \theta \in (0,\frac{\pi}{2}].$$ Here we have that $a+\gamma \geq 0$ and $\gamma < b < \frac{3}{2}+\gamma$. Note that we do not assume any cut-off condition on the angular function. The assumptions on our collision kernel have been motivated from many important physical interactions; the Boltzmann cross-sections which satisfy the assumptions above can describe many interactions such as short range interactions [@short1; @short2] which describe the relativistic analogue of hard-sphere collisions, M$\phi$ller scattering [@deGroot] which describes elctron-elctron scattering, Compton scattering [@deGroot] which is an approximation of photon-electron scattering, neutrino gas interactions [@neutrino], and the interactions of Israel particles [@Israel] which are the relativistic analogue of the interactions of Maxwell molecules. Some of the collision cross-sections of those important physical interactions have high angular singularities, so the non-cutoff assumptions on the angular kernel are needed. 
Spaces ------ As will be seen, our solutions depend heavily on the following weighted geometric fractional Sobolev space: $$I^{a,\gamma}{\overset{\mbox{\tiny{def}}}{=}}\{f\in L^2(\mathbb{R}^3_p):|f|_{I^{a,\gamma}}<\infty\},$$ where the norm is described as $$|f|^2_{I^{a,\gamma}}{\overset{\mbox{\tiny{def}}}{=}}|f|^2_{L^2_{\frac{a+\gamma}{2}}}+\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\frac{(f(p')-f(p))^2}{\bar{g}^{3+\gamma}}({p'^0}{p^0})^{\frac{a+\gamma}{4}}1_{\bar{g}\leq 1}$$ where $\bar{g}$ is the relative momentum between $p'^\mu$ and $p^\mu$ in the *center-of-momentum* system and is defined as $$\label{gbar} \bar{g}=g(p'^\mu,p^\mu)=\sqrt{(p'^\mu-p^\mu)(p'_\mu-p_\mu)}=\sqrt{2(-p'^\mu p_\mu-1)}.$$ Here, we also define another relative momentum between $p'^\mu$ and $q^\mu$ as $$\tilde{g}=g(p'^\mu,q^\mu)=\sqrt{(p'^\mu-q^\mu)(p'_\mu-q_\mu)}=\sqrt{2(-p'^\mu q_\mu-1)}.$$ Note that this space $I^{a,\gamma}$ is included in the following weighted $L^2$ space given by $$|f|^2_{L^2_{\frac{a+\gamma}{2}}}{\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}dp\hspace{1mm} {(p^0)}^{\frac{a+\gamma}{2}}|f(p)|^2.$$ The notation on the norm $|\cdot|$ refers to function space norms acting on $\mathbb{R}^3_p$ only. The analogous norm acting on $\mathbb{T}^3_x\times \mathbb{R}^3_p$ is denoted by $||\cdot||$. So, we have $$||f||^2_{I^{a,\gamma}}{\overset{\mbox{\tiny{def}}}{=}}||\hspace{1mm}|f|_{I^{a,\gamma}}\hspace{1mm}||^2_{L^2(\mathbb{T}^3)}.$$ The multi-indices $\alpha=(\alpha^1,\alpha^2,\alpha^3)$ will be used to record spatial derivatives. For example, we write $$\partial^\alpha=\partial^{\alpha^1}_{x_1}\partial^{\alpha^2}_{x_2}\partial^{\alpha^3}_{x_3}.$$ If each component of $\alpha$ is not greater than that of $\alpha_1$, we write $\alpha\leq\alpha_1$. Also, $\alpha<\alpha_1$ means $\alpha\leq\alpha_1$ and $|\alpha|<|\alpha_1|$ where $|\alpha|=\alpha_1+\alpha_2+\alpha_3$. We define the space $H^N=H^N(\mathbb{T}^3\times{{\mathbb{R}^3}})$ with integer $N\geq 0$ spatial derivatives as $$||f||^2_{H^N}=||f||^2_{H^N(\mathbb{T}^3\times{{\mathbb{R}^3}})}=\sum_{|\alpha|\leq N}||\partial^\alpha f||^2_{L^2(\mathbb{T}^3\times{{\mathbb{R}^3}})}.$$ We sometimes denote the norm $||f||^2_{H^{N} }$ as $||f||^2_H$ for simplicity. We also define the derivative space $I^{a,\gamma}_{N}(\mathbb{T}^3\times{{\mathbb{R}^3}})$ whose norm is given by $$||f||^2_{I^{a,\gamma}_{N}}=||f||^2_{I^{a,\gamma}_{N}(\mathbb{T}^3\times{{\mathbb{R}^3}})}=\sum_{|\alpha|< N}||\partial^\alpha f||^2_{I^{a,\gamma}(\mathbb{T}^3\times{{\mathbb{R}^3}})}.$$ Lastly, we would like to mention that we denote $B_R\subset {{\mathbb{R}^3}}$ to be the Euclidean ball of radius $R$ centered at the origin. The space $L^2(B_R)$ is the space $L^2$ on this ball and similarly for other spaces. Now, we state our main result as follows: (Main Theorem) \[MAIN\] Fix $N\geq 2$, the total number of spatial derivatives. Choose $f_0=f_0(x,p)\in H^N(\mathbb{T}^3\times{{\mathbb{R}^3}})$ in (\[pert\]) which satisfies (\[zero\]). There is an $\eta_0>0$ such that if $||f_0||_{H^N(\mathbb{T}^3\times{{\mathbb{R}^3}})} \leq \eta_0$, then there exists a unique global strong solution to the relativistic Boltzmann equation (\[RBE\]), in the form (\[pert\]), which satisfies $$f(t,x,p)\in L^\infty_t([0,\infty);H^N(\mathbb{T}^3\times{{\mathbb{R}^3}}))\cap L^2_t((0,\infty);I^{a,\gamma}_{N}(\mathbb{T}^3\times{{\mathbb{R}^3}})).$$ Furthermore, we have exponential decay to equilibrium. 
For some fixed $\lambda >0$, $$||f(t)||_{H^N(\mathbb{T}^3\times{{\mathbb{R}^3}}))}\lesssim e^{-\lambda t}||f_0||_{H^N(\mathbb{T}^3\times{{\mathbb{R}^3}}))}.$$ We also have positivity; $F=J+\sqrt{J}f\geq 0$ if $F_0=J+\sqrt{J}f_0\geq 0$. Remarks and possibilities for the future ---------------------------------------- Our main theorem assumes that the initial function has at least $N$ spatial derivatives. The minimum number of spatial derivatives $N\geq 2$ is needed to use the Sobolev embedding theorems that $ L^\infty(\mathbb{T}^3_x)\supset H^2(\mathbb{T}^3_x) $. Note that if the number of spatial derivatives is $N> 4$, the strong solutions in the existence theorem are indeed classical solutions by the Sobolev lemma [@Folland] that if $N>1+\frac{6}{2}$ then $H^N(\mathbb{T}^3\times {{\mathbb{R}^3}})\subset C^1(\mathbb{T}^3\times {{\mathbb{R}^3}})$. For the lowest number of spatial derivatives, $N\geq 2$, we obtain that the equation is satisfied in the weak sense; however, the weak solution is also a strong solution to the equation because we show that the solution is unique.\ **Cancellation estimates.** Here we want to record one of the main computational and technical difficulties which arise in dealing with relativistic particles. While one of the usual techniques to deal with the cancellation estimates which contains $|f(p)-f(p')|$ is to use the fundamental theorem of calculus and the change of variables in the non-relativistic settings, this method does not give a favorable output in the relativistic theory because the momentum derivative on the post-collisional variables (\[p’\]) and (\[q’\]) creates additional high singularities which are tough to control in the relativistic settings. Even with the other different representation of post-collisional variables as in [@GS2], it is known in much earlier work [@GS1] that the growth of momentum derivatives is large enough and this high growth prevents us from using known the non-relativistic method from [@Guo3]. It is also worth it to mention that the Jacobian which arises in taking the change of variables from $p$ to $u=\theta p+(1-\theta)p'$ for some $\theta\in(0,1)$ has a bad singularity at some $\theta=\theta(p,p')$. Even if we take a non-linear path from $p$ to $p'$, the author has computed that the Jacobian always blows up at a point on the path and has concluded that there exists a 2-dimensional hypersurface between the momentums $p$ and $p'$ on which the Jacobian blows up. This difficulty led the author to deal with the cancellation estimate by avoiding the change of variables technique; see Section \[Cancellation\].\ **Non-cutoff results.** Regarding non-relativistic results with non-cutoff assumptions, we would like to mention the work by Alexandre and Villani [@AlexandreV] from 2002 on renormalized weak solutions with non-negative defect measure. Also, we would like to record the work by Gressman and Strain [@GS7; @GS] in 2010-2011. We also want to mention that Alexandre, Morimoto, Ukai, Xu, and Yang [@AMUXYglobal; @AMUXYR; @AMUXY1; @AMUXY2; @AMUXY] obtained a proof, using different methods, of the global existence of solutions with non-cutoff assumptions in 2010-2012. Lastly, we would like to mention the recent work by the same group of Alexandre, Morimoto, Ukai, Xu, and Yang [@AMUXYnew] from 2013 on the local existence with mild regularity for the non-cutoff Boltzmann equation where they work with an improved initial condition and do not assume that the initial data is close to a global equilibrium. 
We also want to remark that Theorem \[MAIN\] is the first global existence and stability proof in the relativistic kinetic theory without angular cutoff conditions. This solves an open problem regarding global existence and stability for the relativistic Boltzmann equations without cutoff assumption.\ **Future possibilities:** We believe that our method can be useful for making further progress on the non-cutoff relativistic kinetic theory. Note that our kernel assumes the hard potential interaction. We can use the similar methods to prove another open problem on the global stability of the relativistic Boltzmann equations for the soft potentials without angular cutoff. We will soon address in a future work the generalization to the soft potential interaction which assumes $-b+\gamma<0$ and $-\frac{3}{2}<-b+\gamma$ in a subsequent paper [@Jang]. For more singular soft potentials $-b+\gamma\leq -\frac{3}{2}$, we need to take the velocity-derivatives on the bilinear collision operator $\partial_\beta \Gamma$ which is written in the language of the derivatives of the post-collision maps of (\[p’\]) and (\[q’\]) and the estimates on those terms need some clever choices of splittings of kernels so that we reduce the complexity of the derivatives. This difficulty on the derivatives is known and expected in the relativistic kinetic theory, for the representations of the post-collisional momentums in the *center-of-momentum* expression in (\[p’\]) and (\[q’\]) contain many non-linear terms. Furthermore, we expect to generalize our result to the whole space case $\mathbb{R}^3_x$ by combining our estimates with the existing cut-off technology in the whole space. It is also possible that our methods could help to prove the global existences and stabilities for other relativistic PDEs such as relativistic Vlasov-Maxwell-Boltzmann system for hard potentials without angular cut-off. Outline of the article ---------------------- In the following subsection, we first introduce the main lemmas and theorems that are needed to prove the local existence in Section \[global exist\]. In Section \[main estimates\], some simple size estimates on single decomposed pieces will be introduced. We first start by introducing our dyadic decomposition method of the angular singularity and start making an upper bound estimate on each decomposed piece. Some proofs will be based on the relativistic Carleman-type dual representation which is introduced in the Appendix. Note that some proofs on the dual representation require the use of some new Lorentz frames. In Section \[Cancellation\], we estimate the upper bounds of the difference of the decomposed gain and loss pieces for the $k\geq 0$ case. In Section \[LP decomp\], we develop the Littlewood-Paley decomposition and prove estimates connecting the Littlewood-Paley square functions with our weighted geometric fractional Sobolev norm $||\cdot ||_{I}$. In Section \[main upper\], we first split the main inner product of the non-linear collision operator $\Gamma$ which is written as a trilinear form. Then, we use the upper bound estimate on each decomposed piece, upper bound estimates on the difference terms, and the estimates on the Littlewood-Paley decomposed piece that were proven in the previous sections to prove the main upper bound estimates. In Section \[main coercive estimates\], we use the Carleman dual representation on the trilinear form and find the coercive lower bound. 
We also show that the norm part $\langle Nf,f\rangle$ is comparable to the weighted geometric fractional Sobolev norm $|\cdot|_I$. In Section \[global exist\], we finally use the standard iteration method and the uniform energy estimate for the iterated sequence of approximate solutions to prove the local existence. After this, we derive our own systems of macroscopic equations and the local conservation laws and use these to prove that the local solutions are in fact global by the standard continuity argument and the energy estimates. In the Appendix, we mainly derive the relativistic Carleman-type dual representation of the gain and loss terms and obtain the dual formulation of the trilinear form which is used in many places in the previous sections. Main estimates -------------- Here we would like to record our main upper and lower bound estimates of the inner products that involve the operators $\Gamma$, $L$, and $N$. The proofs of the estimates are given in Sections \[main estimates\] through \[main coercive estimates\]. \[thm1\] We have the basic estimate $$|\langle \Gamma(f,h),\eta\rangle| \lesssim |f|_{L^2}|h|_{I^{a,\gamma}}|\eta|_{I^{a,\gamma}}.$$ \[Lemma1\] Suppose that $|\alpha|\leq N$ with $N\geq 2$. Then we have the estimate $$|\langle \partial^\alpha\Gamma(f,h),\partial^\alpha\eta\rangle| \lesssim ||f||_{H^{N}}||h||_{I^{a,\gamma}_{N}}||\partial^\alpha\eta||_{I^{a,\gamma}}.$$ \[Lemma2\] We have the uniform inequality for $K$ that $$|\langle Kf,f\rangle | \leq \epsilon|f|^2_{L^2_{\frac{a+\gamma}{2}}}+C_\epsilon|f|^2_{L^2(B_{C_\epsilon})}$$ where $\epsilon$ is any small positive number and $C_\epsilon>0$. \[Lemma3\] We have the uniform inequality for $N$ that $$|\langle Nf,f\rangle | \lesssim |f|^2_{I^{a,\gamma}}.$$ \[2.5\] We have the uniform coercive lower bound estimate: $$\langle Nf,f\rangle \gtrsim |f|^2_{I^{a,\gamma}}.$$ Lemma \[Lemma3\] and Lemma \[2.5\] together imply that the norm piece is comparable to the fractional Sobolev norm $I^{a,\gamma}$ as $$\langle Nf,f\rangle \approx |f|^2_{I^{a,\gamma}}.$$ Finally, we have the coercive inequality for the linearized Boltzmann operator $L$: \[2.10\] For some $C>0$, we have $$\langle Lf,f\rangle \gtrsim |f|^2_{I^{a,\gamma}}-C|f|^2_{L^2(B_C)}.$$ Note that this lemma is a direct consequence of Lemma \[Lemma2\] and Lemma \[2.5\] because $L=K+N$. Estimates on the Single Decomposed Piece {#main estimates} ======================================== In this section, we mainly discuss the estimates on the decomposed pieces of the trilinear product $\langle \Gamma(f,h), \eta\rangle$. Each decomposed piece can be written in two different representations: one with the usual 8-fold reduced integral in $\int dp\int dq\int d\omega$ and the other in the Carleman-type dual representation as introduced in the Appendix.
For the usual 8-fold representation, we recall (\[Gamma1\]) and obtain that $$\begin{split} \langle \Gamma(f,h), \eta\rangle &=\int_{{\mathbb{R}^3}}dp \int_{{\mathbb{R}^3}}dq \int_{\mathbb{S}^2}d\omega \hspace{2mm}v_\phi \sigma(g,\theta)\eta(p)\sqrt{J(q)}\left(f(q')h(p')-f(q)h(p)\right)\\ &= T_+-T_-\\ \end{split}$$ where the gain term $T_+$ and the loss term $T_-$ are defined as $$\begin{split} T_+(f,h,\eta)&{\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}dp \int_{{\mathbb{R}^3}}dq \int_{\mathbb{S}^2}d\omega \hspace{2mm}v_\phi \sigma(g,\theta)\eta(p)\sqrt{J(q)}f(q')h(p')\\ T_-(f,h,\eta)&{\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}dp \int_{{\mathbb{R}^3}}dq \int_{\mathbb{S}^2}d\omega \hspace{2mm}v_\phi \sigma(g,\theta)\eta(p)\sqrt{J(q)}f(q)h(p)\\ \end{split}$$ In this section, we would like to decompose $T_+$ and $T_-$ dyadically around the angular singularity as follows. We let $\{\chi_k\}^\infty_{k=-\infty}$ be a partition of unity on $(0,\infty)$ such that $|\chi_k|\leq 1$ and supp$(\chi_k) \subset [2^{-k-1},2^{-k}]$. Then, we define $\sigma_k(g,\theta){\overset{\mbox{\tiny{def}}}{=}}\sigma(g,\theta)\chi_k(\bar{g})$ where $\bar{g}{\overset{\mbox{\tiny{def}}}{=}}g(p^\mu,p'^\mu)$. The reason that we dyadically decompose around $\bar{g}$ is that we have $\theta \approx \frac{\bar{g}}{g}$ for small $\theta$. Then we write the decomposed pieces $T^k_+$ and $T^k_-$ as $$\begin{split} T^k_+(f,h,\eta)&{\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}dp \int_{{\mathbb{R}^3}}dq \int_{\mathbb{S}^2}d\omega \hspace{2mm}v_\phi \sigma_k(g,\theta)\eta(p)\sqrt{J(q)}f(q')h(p')\\ T^k_-(f,h,\eta)&{\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}dp \int_{{\mathbb{R}^3}}dq \int_{\mathbb{S}^2}d\omega \hspace{2mm}v_\phi \sigma_k(g,\theta)\eta(p)\sqrt{J(q)}f(q)h(p)\\ \end{split}$$ For some propositions, we utilize the Carleman-type dual representation and write the operator $T_+$ as $$T_+(f,h,\eta)\eqdef\frac{c}{2} \int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}}\eta(p')\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}}f(q)\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}\sqrt{J(q')}h(p).$$ We also take the dyadic decomposition of the integral above. Again, we let $\{\chi_k\}^\infty_{k=-\infty}$ be a partition of unity on $(0,\infty)$ such that $|\chi_k|\leq 1$ and supp$(\chi_k) \subset [2^{-k-1},2^{-k}]$. Then, we define the following integral $$\label{CT+} T^k_+(f,h,\eta)\eqdef\frac{c}{2} \int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}}\eta(p')\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}}f(q)\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \tilde{\sigma_k}\sqrt{J(q')}h(p),$$ where $$\tilde{\sigma_k}{\overset{\mbox{\tiny{def}}}{=}}\frac{s\sigma(g,\theta)}{\tilde{g}}\chi_k(\bar{g}),\hspace*{5mm} \bar{g}{\overset{\mbox{\tiny{def}}}{=}}g(p^\mu,p'^\mu), \hspace*{5mm}\tilde{g}{\overset{\mbox{\tiny{def}}}{=}}g(p'^\mu,q^\mu).$$ Thus, for $f,h,\eta \in S({{\mathbb{R}^3}}),$ $$\langle \Gamma(f,h),\eta\rangle=\sum^\infty_{k=-\infty}\{T^k_+(f,h,\eta)-T^k_-(f,h,\eta)\}.$$ Now, we start making some size estimates for the decomposed pieces $T^k_-$ and $T^k_+$. For any integer $k,l$, and $m\geq 0$, we have the uniform estimate: $$\begin{split} |T^k_-(f,h,\eta)|\lesssim 2^{k\gamma}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}| \eta|_{L^2_{\frac{a+\gamma}{2}}}.
\end{split}$$ The term $T^k_-$ is given as: $$\label{T--} T^k_-(f,h,\eta)=\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}\sigma_k(g,\omega)v_\phi f(q)h(p)\sqrt{J(q)}\eta(p),$$ where $\sigma_k(g,\omega)=\sigma(g,\omega)\chi_k(\bar{g})$. Since $\cos\theta=1-2\frac{\bar{g}^2}{g^2},$ we have that $\bar{g}=g\sin\frac{\theta}{2}$. Therefore, the condition $\bar{g}\approx 2^{-k}$ is equivalent to saying that the angle $\theta$ is comparable to $2^{-k}g^{-1}$. Given the size estimates for $\sigma(g,\omega)$ and the support of $\chi_k$, we obtain $$\label{Bk} \begin{split} \int_{\mathbb{S}^2}d\omega\hspace{1mm}\sigma_k(g,\omega)&\lesssim (g^a+g^{-b})\int_{\mathbb{S}^2}d\omega\hspace{1mm}\sigma_0(\cos \theta)\chi_k(\bar{g})\\ &\lesssim (g^a+g^{-b})\int_{2^{-k-1}g^{-1}}^{2^{-k}g^{-1}}d\theta\sigma_0\sin\theta \\ &\lesssim (g^a+g^{-b})\int_{2^{-k-1}g^{-1}}^{2^{-k}g^{-1}}d\theta\frac{1}{\theta^{1+\gamma}}\\ &\lesssim (g^a+g^{-b})2^{k\gamma}g^\gamma. \end{split}$$ Thus, $$\label{T-} \begin{split} |T^k_-(f,h,\eta)|&\lesssim 2^{k\gamma}\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq(g^{a+\gamma}+g^{-b+\gamma})v_\phi |f(q)||h(p)|\sqrt{J(q)}|\eta(p)|\\ &=I_1+I_2. \end{split}$$ Note that $a+\gamma\geq0$ and $-b+\gamma<0$. We first estimate $I_1$. Since $g\lesssim\sqrt{{p^0}{q^0}}$ and $v_\phi \lesssim 1$, we obtain $$I_1\lesssim 2^{k\gamma}\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}({p^0}{q^0})^{\frac{a+\gamma}{2}} |f(q)||h(p)|\sqrt{J(q)}|\eta(p)|.$$ By the Cauchy-Schwarz inequality, $$\label{T-esimate1} \begin{split} I_1\lesssim &2^{k\gamma}(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}|f(q)|^2|h(p)|^2\sqrt{J(q)}{p^0}^{\frac{a+\gamma}{2}})^\frac{1}{2}\\ & \times(\int_{{{\mathbb{R}^3}}}dp\hspace{1mm} |\eta(p)|^2{p^0}^{\frac{a+\gamma}{2}}\int_{{{\mathbb{R}^3}}}dq\sqrt{J(q)}{q^0}^{a+\gamma})^\frac{1}{2}. \end{split}$$ Since $\int_{{{\mathbb{R}^3}}}dq\sqrt{J(q)}{q^0}^{a+\gamma}\approx 1$, we have $$\label{T-estimate} \begin{split} I_1&\lesssim 2^{k\gamma}(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}|f(q)|^2|h(p)|^2\sqrt{J(q)} {p^0}^{\frac{a+\gamma}{2}})^\frac{1}{2}(\int_{{{\mathbb{R}^3}}}dp\hspace{1mm} |\eta(p)|^2{p^0}^{\frac{a+\gamma}{2}})^\frac{1}{2}\\ &\lesssim 2^{k\gamma}|f|_{L^2_{-m_1}}| h|_{L^2_{\frac{a+\gamma}{2}}}| \eta|_{L^2_{\frac{a+\gamma}{2}}} \hspace{5mm} \text{for} \hspace{1mm}m_1\geq 0. \end{split}$$ For $I_2$, we have $$I_2=2^{k\gamma}\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}g^{-b+\gamma} |f(q)||h(p)|\sqrt{J(q)}|\eta(p)|.$$ Since $g\geq\frac{|p-q|}{\sqrt{{p^0}{q^0}}}$ and $ -b+\gamma<0$, this is $$I_2\lesssim 2^{k\gamma}\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}|p-q|^{-b+\gamma}({p^0}{q^0})^\frac{b-\gamma}{2} |f(q)||h(p)|\sqrt{J(q)}|\eta(p)|.$$ With the Cauchy-Schwarz inequality, $$\begin{split} I_2\lesssim &2^{k\gamma}(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}|f(q)|^2|h(p)|^2\sqrt{J(q)}{p^0}^{\frac{1}{2}(-b+\gamma)}{q^0}^{b-\gamma})^\frac{1}{2}\\ & \times(\int_{{{\mathbb{R}^3}}}dp\hspace{1mm} |\eta(p)|^2{p^0}^{-\frac{1}{2}(-b+\gamma)}{p^0}^{b-\gamma}\int_{{{\mathbb{R}^3}}}dq\sqrt{J(q)}|p-q|^{2(-b+\gamma)})^\frac{1}{2}.
\end{split}$$ Since $\int_{{{\mathbb{R}^3}}}dq\sqrt{J(q)}|p-q|^m\approx {p^0}^m$ if $m>-3$ and since $2(-b+\gamma)>-3$, we have $$\begin{split} I_2&\lesssim 2^{k\gamma}(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}|f(q)|^2|h(p)|^2\sqrt{J(q)}{p^0}^{\frac{1}{2}(-b+\gamma)}{q^0}^{b-\gamma})^\frac{1}{2}\\ &\times(\int_{{{\mathbb{R}^3}}}dp\hspace{1mm} |\eta(p)|^2{p^0}^{\frac{1}{2}(-b+\gamma)})^\frac{1}{2}\\ &\lesssim 2^{k\gamma}|f|_{L^2_{-m_2}}| h|_{L^2_{\frac{1}{2}(-b+\gamma)}}| \eta|_{L^2_{\frac{1}{2}(-b+\gamma)}} \hspace{5mm} \text{for some} \hspace{1mm}m_2\geq 0. \end{split}$$ This completes the proof. Before we do the size estimates for the $T^k_+$ terms, we first prove a useful inequality in the following proposition. \[usefulineq\] On the set $E^p_{q-p'},$ we have that $$\label{Uineq} \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\tilde{g}(\bar{g})^{-2-\gamma}\chi_k(\bar{g}) \lesssim 2^{k\gamma}\sqrt{{q^0}},$$ where $d\pi_p$ is the Lebesgue measure on the set $E^p_{q-p'}$ and is defined as $$d\pi_p=dp\hspace{1mm}u({p^0}+{q^0}-{p'^0})\delta\Big(\frac{\tilde{g}^2+2p^\mu(q_\mu-p'_\mu)}{2\tilde{g}}\Big).$$ We first introduce our 4-vectors $\bar{p}^\mu$ and $\tilde{p}^\mu$ defined as $$\bar{p}^\mu=p^\mu-p'^\mu\hspace*{2mm}\text{and}\hspace*{2mm} \tilde{p}^\mu=p'^\mu-q^\mu.$$ Then, notice that the Lorentzian inner products of the two 4-vectors are given by $$\bar{p}^\mu\bar{p}_\mu=\bar{g}^2\hspace*{2mm}\text{and}\hspace*{2mm} \tilde{p}^\mu\tilde{p}_\mu=\tilde{g}^2.$$ Similarly, we define some other 4-vectors which will be useful: $$\underbar{p}^\mu=p^\mu+p'^\mu\hspace*{2mm}\text{and}\hspace*{2mm} \hat{p}^\mu=p'^\mu+q^\mu.$$ The products are then given by $$-\underbar{p}^\mu\underbar{p}_\mu=\bar{s}\hspace*{2mm}\text{and}\hspace*{2mm} -\hat{p}^\mu\hat{p}_\mu=\tilde{s}.$$ Note that the four-dimensional delta-function occurring in the measure is derived from the following orthogonality equation $$(p^\mu-q'^\mu)(p_\mu+q'_\mu)=0$$ which says that the total momentum is a time-like 4-vector orthogonal to the space-like relative momentum 4-vector. This orthogonality can be obtained from the following conservation laws $$p^\mu+q^\mu=p'^\mu+q'^\mu.$$ We start by expanding the measure as $$\begin{split} I{\overset{\mbox{\tiny{def}}}{=}}&\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\tilde{g}(\bar{g})^{-2-\gamma}\chi_k(\bar{g})\\ &=\int_{{{\mathbb{R}^3}}}\frac{dp}{{p^0}}\hspace*{1mm} u({p^0}+{q^0}-{p'^0})\delta\Big(\frac{\tilde{g}^2+2p^\mu(q_\mu-p'_\mu)}{2\tilde{g}^2}\Big)(\bar{g})^{-2-\gamma}\chi_k(\bar{g}) \end{split}$$ where $u(x)=1$ if $x\geq 1$ and $0$ otherwise. Here, the numerator in the delta function can be rewritten as $$\begin{split} &\tilde{g}^2+2p^\mu(q_\mu-p'_\mu)\\ &=(q^\mu-p'^\mu+2p^\mu)(q_\mu-p'_\mu)\\ &=q^\mu q_\mu+p'^\mu p'_\mu-2p'^\mu q_\mu +2p^\mu q_\mu -2p^\mu p'_\mu\\ &=2(p'^\mu p'_\mu-p'^\mu q_\mu+p^\mu q_\mu-p^\mu p'_\mu)\\ &=2(p'^\mu-p^\mu)(p'_\mu-q_\mu), \end{split}$$ where we have also used the mass-shell condition $q^\mu q_\mu=p'^\mu p'_\mu$. Now, define $\bar{p}=p-p'\in {{\mathbb{R}^3}}$ and $\bar{p}^0={p^0}-{p'^0}\in \mathbb{R}$. We denote the 4-vector $\bar{p}^\mu=(\bar{p}^0,\bar{p})=p^\mu-p'^\mu.$ We now apply the change of variables $p\in {{\mathbb{R}^3}}\rightarrow \bar{p}\in {{\mathbb{R}^3}}$. Note that our kernel $I$ will be estimated inside the integral of $\int\frac{dq}{{q^0}}\int \frac{dp'}{{p'^0}}$ in the next propositions and this change of variables is indeed $(p',p)\rightarrow (p', \bar{p})=(p', p-p')$.
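Note also that, since $p'$ is held fixed in this change of variables, the map $p\mapsto \bar{p}=p-p'$ is simply a translation with unit Jacobian, so that $dp=d\bar{p}$.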
With this change of variables the integral becomes $$I=\int_{{{\mathbb{R}^3}}}\frac{d\bar{p}}{\bar{p}^0+{p'^0}} \hspace{1mm}u(\bar{p}^0+{q^0})\delta\Big(\frac{\bar{p}^\mu(p'_\mu-q_\mu)}{\tilde{g}^2}\Big)(\bar{g})^{-2-\gamma}\chi_k(\bar{g}).$$ The remaining part of this estimate will be performed in the *center-of-momentum* system where $p+p'=0$; i.e., we take a Lorentz transformation such that $\underbar{p}^\mu=(\sqrt{\bar{s}},0,0,0)$ and $\bar{p}^\mu=(0,\bar{p})=(0,\bar{p}_x,\bar{p}_y,\bar{p}_z)$. Note that this gives us that $|\bar{p}|=\bar{g}$. Also, we choose the z-axis parallel to $\tilde{p}\in{{\mathbb{R}^3}}$. Then, we have $\tilde{p}_x=\tilde{p}_y=0$ and $\tilde{p}_z=\tilde{g}$. Additionally, we introduce polar coordinates for $\bar{p}$, taking the polar axis along the z-direction: $$\bar{p}=|\bar{p}|(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta).$$ Note that $\bar{g}$ and the measure $\frac{d\bar{p}}{\bar{p}^0+{p'^0}}$ are Lorentz invariant because $$\frac{d\bar{p}}{\bar{p}^0+{p'^0}}=2d\bar{p}^\mu u(\bar{p}^0+{p'^0}) \delta(\bar{p}^\mu\bar{p}_\mu+2\bar{p}^\mu p'_\mu)=2d\bar{p}^\mu u(\bar{p}^0+{p'^0}) \delta\big((\bar{p}^\mu+p'^\mu)(\bar{p}_\mu+p'_\mu)+1\big)$$ and these are Lorentz invariant. Then the measure of the integral is now $$\begin{split} d\bar{p}&=|\bar{p}|^2d|\bar{p}| d(\cos\theta) d\phi\\ &=\bar{g}^2d\bar{g} d(\cos\theta) d\phi. \end{split}$$ We now write the terms in the delta function in these variables and perform the integration with respect to $\cos\theta$. The delta function is now written as $$\begin{split} \delta\Big(\frac{2\bar{p}^\mu(p'_\mu-q_\mu)}{\tilde{g}^2}\Big)&=\delta\Big(\frac{2|\bar{p}||\tilde{p}|\cos\theta}{\tilde{g}^2}\Big)=\frac{\tilde{g}^2}{2|\bar{p}||\tilde{p}|}\delta(\cos\theta)=\frac{\tilde{g}^2}{2\bar{g}|p'-q|}\delta(\cos\theta). \end{split}$$ After we evaluate the integral by reducing this delta function, we obtain that our integral is now $$I=\int_0^\infty d\bar{g} (\bar{g})^{-\gamma}\chi_k(\bar{g})\frac{\tilde{g}^2}{2{p'^0}\bar{g}|p'-q|}=\frac{\tilde{g}^2}{2{p'^0}|p'-q|}\int_0^\infty d\bar{g} (\bar{g})^{-1-\gamma}\chi_k(\bar{g}).$$ We recall the inequality that $\tilde{g}\leq |p'-q|$ and that $\tilde{g}\lesssim \sqrt{{p'^0}{q^0}}$. Using this inequality and the support condition of $\chi$, we obtain that the integral is bounded above by $$I\lesssim \sqrt{{q^0}}\int_0^\infty d\bar{g} (\bar{g})^{-1-\gamma}\chi_k(\bar{g})\lesssim 2^{k\gamma}\sqrt{{q^0}}.$$ This completes the proof of the proposition. Using Proposition \[usefulineq\], we can now obtain the estimates on the decomposed pieces $T^k_*$ and $T^k_+$ as below. For any integer $k,l$, and $m\geq 0$, we have the uniform estimate: $$\begin{split} |T^k_*(f,h,\eta)|\lesssim 2^{k\gamma}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}| \eta|_{L^2_{\frac{a+\gamma}{2}}}. \end{split}$$ For this estimate, we use the Carleman representation as below.
$$T^k_*(f,h,\eta)=\frac{c}{2} \int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}}\eta(p')\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}}f(q)\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \tilde{\sigma_k}\sqrt{J(q)}h(p').$$ First, we are interested in estimating the following quantity: $$I\eqdef\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s\sigma(g,\theta)}{\tilde{g}}\chi_k(\bar{g}).$$ Note that $\sigma_0(\theta)\approx \theta^{-2-\gamma}\approx (\frac{\bar{g}}{g})^{-2-\gamma}$ because $\sigma_0\sin\theta\approx \theta^{-1-\gamma}$ and $\cos \theta = 1-2\frac{\bar{g}^2}{g^2}.$ Similarly, we have that $\sigma_0(\tilde{\theta})\approx (\frac{\bar{g}}{\tilde{g}})^{-2-\gamma}.$ Also, $g\approx \tilde{g}$ and $s\approx \tilde{s}$ on the set $E^p_{q-p'}$ because the identity $g^2=\bar{g}^2+\tilde{g}^2$ on this set gives $g^2 \gtrsim \tilde{g}^2$ and the assumption that $\sigma_0$ vanishes for $\theta \in (\frac{\pi}{2},\pi]$ gives $\cos \theta \geq 0$ which hence gives $g^2 \leq 2 \tilde{g}^2$. Then, $$\begin{split} I &\lesssim \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s}{\tilde{g}}(g^a+g^{-b})(\frac{\bar{g}}{g})^{-2-\gamma}\chi_k(\bar{g})\\ &\lesssim \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s}{\tilde{g}}(\tilde{g}^a+\tilde{g}^{-b})(\frac{\bar{g}}{\tilde{g}})^{-2-\gamma}\chi_k(\bar{g})\\ &\lesssim \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\tilde{g}(\bar{g})^{-2-\gamma}\chi_k(\bar{g}){p'^0}{q^0}(\tilde{g}^{a+\gamma}+\tilde{g}^{-b+\gamma})\\ &\lesssim 2^{k\gamma}{p'^0}{q^0}^{\frac{3}{2}}(\tilde{g}^{a+\gamma}+\tilde{g}^{-b+\gamma}). \end{split}$$ Here we have used the inequality (\[Uineq\]), $\tilde{g}\lesssim\sqrt{{p'^0}{q^0}} $, and $s\approx\tilde{s} \lesssim {p'^0}{q^0} $. With this estimate, we obtain $$\label{T*} \begin{split} &|T^k_*(f,h,\eta)|\\ &\lesssim 2^{k\gamma}\int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}}\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}} {p'^0}{q^0}^{\frac{3}{2}}(\tilde{g}^{a+\gamma}+\tilde{g}^{-b+\gamma}) |f(q)||h(p')|\sqrt{J(q)}|\eta(p')|\\ &\lesssim 2^{k\gamma}\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}\int_{{{\mathbb{R}^3}}}dq(g^{a+\gamma}+g^{-b+\gamma})|f(q)||h(p')|J(q)^{\frac{1}{2}-\epsilon}|\eta(p')|. \end{split}$$ In the last inequality we have absorbed any positive power of ${q^0}$ in $J(q)^{\frac{1}{2}-\epsilon}$ for any small $\epsilon>0$. We use the Cauchy-Schwarz inequality and notice that the last quantity in (\[T\*\]) is exactly the same as the quantity in (\[T-\]) and hence $T^k_*$ also has the same estimate as $T^k_-$. This completes the proof. Fix an integer $k$. Then, we have the uniform estimate: $$\label{T+} \begin{split} |T^k_+(f,h,\eta)|&\lesssim 2^{k\gamma}| f|_{L^2}| h|_{L^2_{\frac{a+\gamma}{2}}}| \eta|_{L^2_{\frac{a+\gamma}{2}}}. \end{split}$$ The term $T^k_+$ is defined as: $$\label{T++} \begin{split} T^k_+(f,h,\eta)=\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}\sigma_k(g,\omega)v_\phi f(q)h(p)\sqrt{J(q')}\eta(p'), \end{split}$$ where $\sigma_k(g,\omega)=\sigma(g,\omega)\chi_k(\bar{g})$. Thus, $$\begin{split} &|T^k_+(f,h,\eta)|\\ &\lesssim \int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}(g^{a}+g^{-b})v_\phi\sigma_0\chi_k(\bar{g}) |f(q)||h(p)|\sqrt{J(q')}|\eta(p')|\\ &{\overset{\mbox{\tiny{def}}}{=}}I_1+I_2. \end{split}$$ We estimate $I_2$ first.
By Cauchy-Schwarz, $$\begin{split} I_2 &\lesssim(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi\frac{g^{-b}\sigma_0\chi_k(\bar{g})}{g^{-b+\gamma}} |f(q)|^2|h(p)|^2\sqrt{J(q')}({p'^0})^{\frac{-b+\gamma}{2}})^{\frac{1}{2}}\\ &\times (\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi g^{-b}\sigma_0\chi_k(\bar{g})g^{-b+\gamma}|\eta(p')|^2\sqrt{J(q')}({p'^0})^{\frac{b-\gamma}{2}})^{\frac{1}{2}}\\ &=I_{21}\cdot I_{22}. \end{split}$$ For $I_{21}$, we split the region of $p'$ into two: ${p'^0} \leq \frac{1}{2}({p^0}+{q^0})$ and ${p'^0}\geq \frac{1}{2}({p^0}+{q^0})$. If ${p'^0} \leq \frac{1}{2}({p^0}+{q^0})$, ${p^0}+{q^0}-{q'^0}\leq \frac{1}{2}({p^0}+{q^0})$ by conservation laws. Thus, $-{q'^0}\leq -\frac{1}{2}({p^0}+{q^0})$ and $J(q')\leq \sqrt{J(p)}\sqrt{J(q)}$. Since $({p'^0})^{\frac{1}{2}(-b+\gamma)}\lesssim1 $ and the exponential decay is faster than any polynomial decay, we have $$({p'^0})^{\frac{1}{2}(-b+\gamma)}\sqrt{J(q')}\lesssim ({p^0})^{-m}({q^0})^{-m}$$ for any fixed $m>0.$ On the other region, we have ${p'^0} \geq \frac{1}{2}({p^0}+{q^0})$ and hence ${p'^0} \approx ({p^0}+{q^0})$ because ${p'^0} \leq ({p^0}+{q^0})$. Also, we have $({p'^0})^{\frac{1}{2}(-b+\gamma)}\lesssim({p^0})^{\frac{1}{2}(-b+\gamma)}$ because $-b+\gamma<0$. Thus, we obtain $$({p'^0})^{\frac{1}{2}(-b+\gamma)}\sqrt{J(q')}\lesssim ({p^0})^{\frac{1}{2}(-b+\gamma)}.$$ In both cases, we have $$\begin{split} I_{21}&\lesssim(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\frac{g^{-b}2^{k\gamma}g^\gamma}{g^{-b+\gamma}} |f(q)|^2|h(p)|^2\sqrt{J(q')}({p'^0})^{\frac{-b+\gamma}{2}})^{\frac{1}{2}}\\ &\lesssim(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq2^{k\gamma} |f(q)|^2|h(p)|^2 ({p^0})^{\frac{1}{2}(-b+\gamma)})^{\frac{1}{2}}\\ &\lesssim2^{\frac{k\gamma}{2}}| f|_{L^2}| h|_{L^2_{\frac{1}{2}(-b+\gamma)}} \end{split}$$ by (\[Bk\]) and Cauchy-Schwarz inequality. Now we estimate $I_{22}$. Note that $v_\phi=\frac{g\sqrt{s}}{{p^0}{q^0}}$. Then, by (\[Bk\]), $$\begin{split} I_{22}&=(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi g^{-b}\sigma_0\chi_k(\bar{g})g^{-b+\gamma}|\eta(p')|^2\sqrt{J(q')}({p'^0})^{\frac{b-\gamma}{2}})^{\frac{1}{2}}\\ &\lesssim (\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\frac{g\sqrt{s}}{{p^0}{q^0}} 2^{k\gamma}g^{2(-b+\gamma)}|\eta(p')|^2\sqrt{J(q')}({p'^0})^{\frac{b-\gamma}{2}})^{\frac{1}{2}} \end{split}$$ By pre-post collisional change of variables, we have $$I_{22}\lesssim (\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm}\frac{g\sqrt{s}}{{p'^0}{q'^0}} 2^{k\gamma}g^{2(-b+\gamma)}|\eta(p')|^2\sqrt{J(q')}({p'^0})^{\frac{b-\gamma}{2}})^{\frac{1}{2}}.$$ Note that, by conservation laws, $$v_\phi=\frac{g\sqrt{s}}{{p'^0}{q'^0}}=\frac{g(p'^\mu,q'^\mu)\sqrt{s(p'^\mu,q'^\mu)}}{{p'^0}{q'^0}}\lesssim 1.$$ Since $g\geq\frac{|p'-q'|}{\sqrt{{p'^0}{q'^0}}}$ and $ -b+\gamma<0$, $$I_{22}\lesssim (\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm}2^{k\gamma}\frac{|p'-q'|^{2(-b+\gamma)}}{({p'^0}{q'^0})^{-b+\gamma}}|\eta(p')|^2\sqrt{J(q')}({p'^0})^{\frac{b-\gamma}{2}})^{\frac{1}{2}}.$$ Note that $({q'^0})^{b-\gamma}\sqrt{J(q')}\lesssim \sqrt{J^\alpha(q')}$ for some $\alpha>0$. 
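This holds since $\sqrt{J(q')}$ decays exponentially in ${q'^0}$, so the polynomial factor $({q'^0})^{b-\gamma}$ can be absorbed at the cost of slightly lowering the exponent of $J$.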
Thus, $$\begin{split} I_{22}&\lesssim \Big(\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}2^{k\gamma}|\eta(p')|^2({p'^0})^{\frac{3}{2}(b-\gamma)}\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm}\frac{\sqrt{J^\alpha(q')}}{|p'-q'|^{2(b-\gamma)}}\Big)^{\frac{1}{2}}\\ &\lesssim \Big(\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}2^{k\gamma}|\eta(p')|^2({p'^0})^{\frac{3}{2}(b-\gamma)}({p'^0})^{2(-b+\gamma)}\Big)^{\frac{1}{2}}\\ &=2^\frac{k\gamma}{2}| \eta|_{L^2_{\frac{-b+\gamma}{2}}}. \end{split}$$ Together, we obtain that $$I_{2}\lesssim 2^{k\gamma}| f|_{L^2}| h|_{L^2_{\frac{1}{2}(-b+\gamma)}}| \eta|_{L^2_{\frac{-b+\gamma}{2}}}.$$ Now, we estimate $I_1$. By Cauchy-Schwarz, $$\begin{split} I_1 &\lesssim(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi\frac{g^{a}\sigma_0\chi_k(\bar{g})}{\tilde{g}^{a+\gamma}} |f(q)|^2|\eta(p')|^2\sqrt{J(q')}({p^0})^{a+\gamma})^{\frac{1}{2}}\\ &\times (\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi g^{a}\sigma_0\chi_k(\bar{g})\tilde{g}^{a+\gamma}|h(p)|^2\sqrt{J(q')}({p^0})^{-a-\gamma} )^{\frac{1}{2}}\\ &=I_{11}\cdot I_{12}. \end{split}$$ As before, we split the region of $p'$ into two: ${p'^0} \leq \frac{1}{2}({p^0}+{q^0})$ and ${p'^0}\geq \frac{1}{2}({p^0}+{q^0})$. If ${p'^0} \leq \frac{1}{2}({p^0}+{q^0})$, we have $$({p'^0})^{-a-\gamma}\sqrt{J(q')}\lesssim ({p^0})^{-m}({q^0})^{-m}$$ for any fixed $m>0.$ On the other region, we have ${p'^0} \geq \frac{1}{2}({p^0}+{q^0})$ and hence ${p'^0} \approx ({p^0}+{q^0})$ because ${p'^0} \leq ({p^0}+{q^0})$. Also, we have $({p'^0})^{-a-\gamma}\lesssim({p^0})^{-a-\gamma}$ because $-a-\gamma\leq0$. Thus, we obtain $$({p'^0})^{-a-\gamma}\sqrt{J(q')}\lesssim ({p^0})^{-a-\gamma}({q^0})^{-1}.$$ Thus, in both cases, we have $$I_{11}\lesssim (\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi\frac{g^{a}\sigma_0\chi_k(\bar{g})}{\tilde{g}^{a+\gamma}} |f(q)|^2|\eta(p')|^2({p'^0})^{a+\gamma}({q^0})^{-1})^{\frac{1}{2}}.$$ By the Carleman dual representation, this is $$\label{T11} \begin{split} I_{11}\approx &\Big(\int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}}|\eta(p')|^2\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}}|f(q)|^2 \\ &\cdot ({p'^0})^{a+\gamma}({q^0})^{-1}\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s}{\tilde{g}}\frac{g^{a}\sigma_0\chi_k(\bar{g})}{\tilde{g}^{a+\gamma}}\Big)^\frac{1}{2} \end{split}$$ where $d\pi_p=dp\cdot u({p^0}+{q^0}-{p'^0})\cdot \delta\Big(\frac{\tilde{g}^2+2p^\mu(q_\mu-p'_\mu)}{2\tilde{g}}\Big)$. Note that $\sigma_0(\theta)\approx \theta^{-2-\gamma}\approx (\frac{\bar{g}}{g})^{-2-\gamma}$ and $g\approx \tilde{g}$ on the set $E^p_{q-p'}$. By the inequality (\[Uineq\]) and $s \approx \tilde{s}\lesssim {p'^0}{q^0} $, we have $$\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s}{\tilde{g}}\frac{g^{a}\sigma_0\chi_k(\bar{g})}{\tilde{g}^{a+\gamma}}\approx\int_{E^p_{q-p'}} \frac{d\pi_p}{{p^0}}(\bar{g})^{-2-\gamma}\chi_k(\bar{g})s\tilde{g}\lesssim 2^{k\gamma}{p'^0}{q^0}^{\frac{3}{2}}.$$ Thus, $$\label{T12} \begin{split} I_{11}&\lesssim (\int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}}|\eta(p')|^2\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}}|f(q)|^2 ({p'^0})^{a+\gamma}({q^0})^{-1}2^{k\gamma}{p'^0}{q^0}^{\frac{3}{2}})^\frac{1}{2}\\ &\lesssim (\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}|\eta(p')|^2\int_{{{\mathbb{R}^3}}}dq|f(q)|^2 ({p'^0})^{a+\gamma} 2^{k\gamma})^\frac{1}{2}\\ &\lesssim 2^\frac{k\gamma}{2}|f|_{L^2}| \eta|_{L^2_{\frac{a+\gamma}{2}}}.
\end{split}$$ On the other hand, by taking the pre-post collisional change of variables and by the relativistic Carleman dual representation of $I_{12}$, we have $$\begin{split} I_{12}&=(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}v_\phi g^{a}\sigma_0\chi_k(\bar{g})\tilde{g}^{a+\gamma}|h(p)|^2\sqrt{J(q')}({p^0})^{-a-\gamma} )^{\frac{1}{2}}\\ &=(\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm}\int_{\mathbb{S}^2}d\omega\hspace{1mm}\frac{g\sqrt{s}}{{p'^0}{q'^0}} g^{a}\sigma_0\chi_k(\bar{g})g^{a+\gamma}|h(p')|^2\sqrt{J(q')}({p'^0})^{-a-\gamma} )^{\frac{1}{2}}\\ &\approx (\int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}} |h(p')|^2\int_{{{\mathbb{R}^3}}}\frac{dq'\hspace{1mm}}{{q'^0}}\sqrt{J(q')}\int_{E^p_{q'-p'}}\frac{d\pi'_p}{{p^0}}(\frac{g}{{p'^0}})^{a+\gamma}g^{a}\frac{s}{g}\sigma_0\chi_k(\bar{g}))^\frac{1}{2}\\ &\lesssim (\int_{{{\mathbb{R}^3}}}\frac{dp'\hspace{1mm}}{{p'^0}} |h(p')|^2\int_{{{\mathbb{R}^3}}}\frac{dq'\hspace{1mm}}{{q'^0}}\sqrt{J(q')}\int_{E^p_{q'-p'}}\frac{d\pi'_p}{{p^0}}(\frac{g}{{p'^0}})^{a+\gamma}g^{a}\frac{s}{g}(\frac{\bar{g}}{g})^{-2-\gamma}\chi_k(\bar{g}))^\frac{1}{2} \end{split}$$ where $d\pi'_p=dp\cdot u({p'^0}+{q'^0}-{p^0})\cdot \delta(\frac{(p'^\mu-p^\mu)\cdot(q'_\mu-p'_\mu)}{2g})$. Following the same proof as for Proposition \[usefulineq\] with the roles of $q$ and $q'$ reversed, we obtain the corresponding estimate with respect to the measure $d\pi_p'$, which gives $$\int_{E^p_{q'-p'}}\frac{d\pi_p'}{{p^0}}g(\bar{g})^{-2-\gamma}\chi_k(\bar{g}) \lesssim 2^{k\gamma}\sqrt{{q'^0}}.$$ Together with this estimate, $s\lesssim p'^0q'^0$, and $g\lesssim \sqrt{p'^0q'^0}$, we finally obtain the following inequality: $$\begin{split} I_{12}&\lesssim (\int_{{{\mathbb{R}^3}}}dp'\hspace{1mm} ({p'^0})^{a+\gamma}|h(p')|^2\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm}\sqrt{J(q')}2^{k\gamma}{(q'^0)}^{\frac{1}{2}})^\frac{1}{2}\\ &\lesssim 2^\frac{k\gamma}{2}| h|_{L^2_{\frac{a+\gamma}{2}}}. \end{split}$$ Thus, $$I_{1}\lesssim 2^{k\gamma}| f|_{L^2}| h|_{L^2_{\frac{a+\gamma}{2}}}| \eta|_{L^2_{\frac{a+\gamma}{2}}}.$$ This completes the proof. With the dual representation, we have $$\begin{split} &\langle \Gamma(f,h),\eta\rangle\\ &=\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}} \eta(p')\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}f(q)\Big(h(p)\sqrt{J(q')}-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}h(p')\sqrt{J(q)}\Big)\\ &=\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}} \eta(p')\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}f(q)\Big(h(p)\sqrt{J(q')}-h(p')\sqrt{J(q)}\Big)\\ &\hspace{10mm}+\Gamma_*(f,h,\eta)\\ \end{split}$$ where $$\begin{split} &\Gamma_*(f,h,\eta)\\ &{\overset{\mbox{\tiny{def}}}{=}}\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}} \eta(p')\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}f(q)h(p')\sqrt{J(q)}\Big(1-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}\Big). \end{split}$$ Here, we would like to estimate the size of $\Gamma_*$. Before that, we define the following integral: $$I(f,h,\eta){\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}dq\int_{{\mathbb{R}^3}}dp\hspace{1mm} |f(q)|\sqrt{J(q)}g^\rho |h(p)||\eta(p)|1_{g\leq 1}.$$ Then we have the following size estimate for $I$: Let $-3<\rho\leq -\frac{3}{2}$ and $0\leq b_1,b_2\leq \frac{3}{2}$ and $\rho+b_1+b_2>-\frac{3}{2}$.
Then there is some $\delta>0$ such that $$\label{HLS} I(f,h,\eta)\lesssim |fJ^\delta|_{L^2}|hJ^\delta|_{H^{b_1}}|\eta J^\delta|_{H^{b_2}}.$$ Note that, for any sufficiently small $\delta>0$, we have $$1_{g\leq 1}\sqrt{J(q)}\lesssim J(q)^\delta J(p)^{2\delta}.$$ Then we have $$I(f,h,\eta)\lesssim \int_{{\mathbb{R}^3}}dq\int_{{\mathbb{R}^3}}dp\hspace{1mm} |f(q)|g^\rho |h(p)||\eta(p)|J(q)^\delta J(p)^{2\delta}.$$ Since $\rho<0$ and $g\geq\frac{|p-q|}{\sqrt{{p^0}{q^0}}}$, we have $g^\rho \leq \frac{|p-q|^\rho}{({p^0}{q^0})^{\frac{\rho}{2}}}$, and hence $$I(f,h,\eta)\lesssim \int_{{\mathbb{R}^3}}dq\int_{{\mathbb{R}^3}}dp\hspace{1mm} |f(q)||p-q|^\rho |h(p)||\eta(p)|J(q)^\delta J(p)^{2\delta}({p^0}{q^0})^{-\frac{\rho}{2}}.$$ Since $-\frac{\rho}{2}>0,$ the factors $({p^0}{q^0})^{-\frac{\rho}{2}}$ can be absorbed into the powers of $J(q)$ and $J(p)$ for sufficiently small additional powers, and we get the following upper bound: $$\label{3.21} I(f,h,\eta)\lesssim \int_{{\mathbb{R}^3}}dq\int_{{\mathbb{R}^3}}dp\hspace{1mm} |f(q)||p-q|^\rho |h(p)||\eta(p)|J(q)^{\delta+\epsilon} J(p)^{2(\delta+\epsilon)}.$$ If $\rho>-\frac{3}{2}$, we have (\[HLS\]) with $b_1=b_2=0$ and $\delta'=\delta+\epsilon'$ for some sufficiently small $\epsilon'$. If $\rho\leq -\frac{3}{2},$ the right-hand side of (\[3.21\]) is equal to $$\int_{{\mathbb{R}^3}}dp \hspace{1mm} I_{3+\rho}(|f|J^{\delta''})(p)J^{2{\delta''}}|h(p)||\eta(p)|,$$ where $I_{3+\rho}$ is the Riesz potential which is the fractional integral operator of order $3+\rho$ and is defined by $$(I_\alpha f)(x)=\frac{1}{\gamma(\alpha)}\int_{{\mathbb{R}^3}}|x-y|^{-3+\alpha}f(y)dy.$$ By Theorem 1(b) on p. 119 of [@Stein], we have that $$|I_{3+\rho}(|f|J^{\delta''})|_{L^{a_3}}\lesssim |fJ^{\delta''}|_{L^2},$$ where $\frac{1}{a_3}>\frac{1}{2}-\frac{3+\rho}{3}$. Here, we have $a_3>0$ because $-3<\rho<-\frac{3}{2}$. Also, by the Sobolev embedding theorem, we have $$|hJ^{\delta''}|_{L^{a_2}}\lesssim |hJ^{\delta''}|_{H^{b_2}}$$ and $$|\eta J^{\delta''}|_{L^{a_1}}\lesssim |\eta J^{\delta''}|_{H^{b_1}},$$ where $\frac{1}{a_i}>\frac{1}{2}-\frac{b_i}{3}$ for $i=1,2$. Again, $a_1,a_2>0$ because $0\leq b_1,b_2\leq \frac{3}{2}$. Now, choosing the exponents $a_1,a_2,a_3$ above so that $\frac{1}{a_1}+\frac{1}{a_2}+\frac{1}{a_3}\leq 1$, we use the inequalities above and Hölder’s inequality to get the upper bound: $$I(f,h,\eta)\lesssim |fJ^\delta|_{L^2}|hJ^\delta|_{H^{b_1}}|\eta J^\delta|_{H^{b_2}}.$$ Now for the size estimate of $\Gamma_*$, we have the following proposition: For any integer $l$, and $m\geq 0$, we have the uniform estimate when $a>-\frac{3}{2}$: $$\label{Gamma} |\Gamma_*(f,h,\eta)|\lesssim |f|_{L^2_{-m}}| h|_{L^2_{\frac{a}{2}}}| \eta|_{L^2_{\frac{a}{2}}}.$$ If $a+\gamma\geq0$ and $-b+\gamma>-\frac{3}{2}$, we have the alternate uniform inequality for some $\delta>0$, $$\label{Gamma2} \begin{split} |\Gamma_*(f,h,\eta)|\lesssim& |f|_{L^2_{-m}}| h|_{L^2_{\frac{a}{2}}}| \eta|_{L^2_{\frac{a}{2}}}+|fJ^\delta|_{L^2}|hJ^\delta|_{H^{\frac{\gamma}{2}+\epsilon}}|\eta J^\delta|_{H^{\frac{\gamma}{2}-\epsilon}}, \end{split}$$ where $0\leq \epsilon \leq \min\{\frac{\gamma}{2},1-\frac{\gamma}{2}\}.$ We start with $$\begin{split} &\Gamma_*(f,h,\eta)\\ &{\overset{\mbox{\tiny{def}}}{=}}\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}} \eta(p')\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}f(q)h(p')\sqrt{J(q)}\Big(1-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}\Big). \end{split}$$ Let $K{\overset{\mbox{\tiny{def}}}{=}}1-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}$. The collision geometry tells us that $\bar{g}^2+\tilde{g}^2=g^2$.
From this, we have that $\tilde{g}\leq g$ and $\tilde{s}\leq s$. Also, as in the previous sections, the support condition for $\cos\theta$ tells us that we further have $\tilde{g}\approx g$. With this, we have that $$\begin{split} K&=1-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}=\frac{sg^4\Phi(g)-\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}\\ &\lesssim \frac{(sg^4-\tilde{s}\tilde{g}^4)\Phi(g)}{sg^4\Phi(g)}\lesssim \frac{g^2-\tilde{g}^2}{g^2}=\frac{\bar{g}^2}{g^2}. \end{split}$$ Then, $$\begin{split} \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s\sigma(g,\theta)}{\tilde{g}}|K|\chi_k(\bar{g}) &\lesssim \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s\sigma(g,\theta)}{\tilde{g}}\frac{\bar{g}^2}{g^2}\chi_k(\bar{g})\\ &\lesssim \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s}{\tilde{g}}(g^a+g^{-b})(\frac{\bar{g}}{g})^{-2-\gamma+2}\chi_k(\bar{g})\\ &\approx \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\tilde{g}\bar{g}^{-\gamma}\chi_k(\bar{g}) \tilde{s}(g^{a+\gamma-2}+g^{-b+\gamma-2})\\ &\lesssim 2^{k(\gamma-2)}{p'^0}{q^0}^{\frac{3}{2}}(g^{a+\gamma-2}+g^{-b+\gamma-2}). \end{split}$$ The third inequality is by (\[Uineq\]) and the bound $\tilde{s}\lesssim {p'^0}{q^0}$. Also, the support of $\chi_k$ tells us that $2^{-k-1}\leq \bar{g}\leq 2^{-k}$ and hence $2^k\bar{g}\geq \frac{1}{2}$. Then, if we sum over all $k$, the terms for which $2^kg\leq \frac{1}{4}$ will vanish for fixed $\bar{g}$. Thus, $$\begin{split} \int_{E^p_{q-p'}}\frac{d\pi_p}{{p^0}}\frac{s\sigma(g,\theta)}{\tilde{g}}|K|&\lesssim \sum_{k:2^kg>1}2^{k(\gamma-2)}{p'^0}{q^0}^{\frac{3}{2}}(g^{a+\gamma-2}+g^{-b+\gamma-2})\\ &=\sum_{k:2^kg>1}2^{k(\gamma-2)}{p'^0}{q^0}^{\frac{3}{2}}g^{\gamma-2}(g^{a}+g^{-b})\\ &\lesssim(g^{a}+g^{-b}){p'^0}{q^0}^{\frac{3}{2}}, \end{split}$$ since $\gamma-2<0$. Therefore, $$|\Gamma_*(f,h,\eta)| \lesssim \int_{{{\mathbb{R}^3}}}dp'\hspace{1mm}\int_{{{\mathbb{R}^3}}}dq\hspace{1mm}(g^{a}+g^{-b})|f(q)||h(p')||\eta(p')|\sqrt{{q^0}}\sqrt{J(q)}.$$ This is exactly the same as (\[T\*\]) except that it no longer has the $2^{k\gamma}$ term and that the powers of $g$ are slightly different. If $g\geq 1,$ then the singularity is avoided and we conclude that the upper bound is $$|\Gamma_*(f,h,\eta)|\lesssim |f|_{L^2_{-m}}| h|_{L^2_{\frac{a}{2}}}| \eta|_{L^2_{\frac{a}{2}}}$$ because we have $g^a\geq g^{-b}$. If $g\leq 1,$ then we have $g^a\leq g^{-b}$. So, in the above integral we may bound $g^a+g^{-b}\lesssim g^{-b}$. We further split this case into two subcases: if $-b\geq -\frac{3}{2},$ then we just follow the estimate (\[T\*\]) and obtain that the upper bound is $$|\Gamma_*(f,h,\eta)|\lesssim |f|_{L^2_{-m}}| h|_{L^2_{\frac{-b}{2}}}| \eta|_{L^2_{\frac{-b}{2}}}\lesssim |f|_{L^2_{-m}}| h|_{L^2_{\frac{a}{2}}}| \eta|_{L^2_{\frac{a}{2}}}.$$ If $-\frac{3}{2}-\gamma<-b<-\frac{3}{2},$ then we use (\[HLS\]) to obtain that the upper bound is $$|\Gamma_*(f,h,\eta)|\lesssim |fJ^\delta|_{L^2}|hJ^\delta|_{H^{\frac{\gamma}{2}+\epsilon}}|\eta J^\delta|_{H^{\frac{\gamma}{2}-\epsilon}}$$ where $0\leq \epsilon \leq \min\{\frac{\gamma}{2},1-\frac{\gamma}{2}\}.$ This completes the proof of the proposition. Cancellation with hard potential kernels {#Cancellation} ======================================== Our goal in this section is to establish an upper bound estimate for the difference $T^k_+ - T^k_-$ in the case $k\geq 0$. We would like it to carry a negative power of $2^k$ so that we obtain a good estimate after summing in $k$. Note that $k\geq 0$ also implies that $\bar{g}\leq 1$.
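Indeed, on the support of $\chi_k$ we have $\bar{g}\leq 2^{-k}\leq 1$ whenever $k\geq 0$.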
Firstly, we define paths from $p'$ to $p$ and from $q'$ to $q$. Fix any two $p,p'\in{{\mathbb{R}^3}}$ and consider $\kappa:[0,1] \rightarrow {{\mathbb{R}^3}}$ given by $$\kappa(\theta){\overset{\mbox{\tiny{def}}}{=}}\theta p+(1-\theta)p'.$$ Similarly, we define the following for the path from $q'$ to $q$: $$\kappa_q(\theta){\overset{\mbox{\tiny{def}}}{=}}\theta q+(1-\theta)q'.$$ Then we can easily notice that $\kappa(\theta)+\kappa_q(\theta)=p'+q'=p+q$. We define the length of the gradient as: $$\label{nabla} |\nabla|^iH(p){\overset{\mbox{\tiny{def}}}{=}}\max_{0\leq j\leq i}\sup_{|\chi|\leq 1}\Big|\big(\chi\cdot\nabla\big)^jH(p)\Big|, \hspace{5mm} i=0,1,2,$$ where $\chi\in{{\mathbb{R}^3}}$ and $|\chi|$ is the usual Euclidean length. Note that we have $|\nabla|^0H=|H|.$ Now we start estimating the term $|T^k_+ - T^k_-|$ under the condition $\bar{g}\leq 1.$ We recall from (\[T–\]) and (\[T++\]) that $|(T^k_+-T^k_-)(f,h,\eta)|$ is given by $$\begin{split} &|(T^k_+-T^k_-)(f,h,\eta)|\\ &=\left|\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}\sigma_k(g,\omega)v_\phi f(q)h(p)(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p))\right|. \end{split}$$ The key part is to estimate $|\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p)|$. We have the following proposition for the cancellation estimate: Suppose $\eta$ is a Schwartz function on ${{\mathbb{R}^3}}$. Then, for any $k\geq 0$ and for $0<\gamma<2$ and $m\geq 0$, we have the uniform estimate: $$\begin{split} & |(T^k_+-T^k_-)(f,h,\eta)|\\ \lesssim & \hspace{1mm}2^{(\gamma-2)k}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}| \eta |_{L^2_{\frac{a+\gamma}{2}}}+2^{\frac{(\gamma-3)}{2}k}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{I^{a,\gamma}}.\\ \end{split}$$ We observe that the weighted fractional Sobolev norm $|\eta|_{I^{a,\gamma}}$ is greater than or equal to $|\eta |_{L^2_{\frac{a+\gamma}{2}}}$. Therefore, the direct consequence of this proposition is that $$\label{can+-} |(T^k_+-T^k_-)(f,h,\eta)|\lesssim \max\{2^{(\gamma-2)k}, 2^{\frac{(\gamma-3)}{2}k}\}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{I^{a,\gamma}}.$$ Note that $0<\gamma<2$. We want our kernel to have a good dependency on $2^{-k}$, so that we end up with a negative power of 2, namely $2^{(\gamma-2)k}$. Note that under $\bar{g}\leq1$, we have ${p'^0} \approx {p^0}$ and ${q^0}\approx {q'^0}$. Thus, it suffices to estimate $|\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p)|$ only. We now split the term into three parts as $$\begin{split} |\sqrt{J(q')}&\eta(p')-\sqrt{J(q)}\eta(p)|\\ \leq&\sqrt{J(q')}|\eta(p')-\eta(p)|+|\eta(p)|\left|\sqrt{J(q')}-\sqrt{J(q)}-(\nabla\sqrt{J})(q)\cdot(q'-q)\right|\\ &+|\eta(p)|\left|(\nabla\sqrt{J})(q)\cdot(q'-q)\right|\\ =&\text{I}+\text{II}+\text{III}. \end{split}$$ We estimate part II first. By the multivariable mean value theorem applied to $\sqrt{J}$, we have $$\sqrt{J(q')}-\sqrt{J(q)}=(q'-q)\cdot(\nabla\sqrt{J})(\kappa_q(\theta_1))$$ for some $\theta_1\in(0,1)$. Now with the fundamental theorem of calculus, we obtain $$(\nabla\sqrt{J})(\kappa_q(\theta_1))-(\nabla\sqrt{J})(q)=\left(\int_{0}^{\theta_1}D(\nabla\sqrt{J})(\kappa_q(\theta'))d\theta'\right)\cdot (\kappa_q(\theta_1)-q),$$ where $D(\nabla\sqrt{J})$ is the 3$\times$3 Jacobian matrix of $\nabla\sqrt{J}$.
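Note that $\kappa_q(\theta_1)-q=(1-\theta_1)(q'-q)$, so that $|\kappa_q(\theta_1)-q|\leq |q'-q|$; this is used in the bound for part II below.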
With the definition of $|\nabla|$ from (\[nabla\]), we can bound the part II by $$\begin{split} \text{II}&\leq |\eta(p)||q'-q|\left|\Big(\int_0^{\theta_1}D(\nabla \sqrt{J})(\kappa_q(\theta'))d\theta'\Big)\cdot(\kappa_q(\theta_1)-q)\right|\\ &\leq |\eta(p)||q'-q||\kappa_q(\theta_1)-q|\int_0^{\theta_1}|\nabla|^2\sqrt{J}(\kappa_q(\theta')) d\theta'\\ &\leq|\eta(p)||q'-q|^2\int_0^{\theta_1}|\nabla|^2\sqrt{J}(\kappa_q(\theta')) d\theta'. \end{split}$$ Note that $$|\nabla|^2\sqrt{J}\lesssim \sqrt{J}.$$ Also, we have that $$\sqrt{J}(\kappa_q(\theta'))\lesssim (J(q)J(q'))^\epsilon$$ for sufficiently small $\epsilon$. Thus, the estimate for the integral with the kernel II proceeds exactly as in the proposition for $|T^k_-|$, as in (\[T-\]), and we get the first term on the right-hand side of the proposition. For part III, as in (\[E\]) in the Appendix, we reduce this integral to an integral on the set $E^{p'}_{p+q}$ as follows: $$|T^k_{+,\text{III}}-T^k_{-,\text{III}}|\leq\left|\int_{{{\mathbb{R}^3}}}\frac{dp}{{p^0}}\int_{{{\mathbb{R}^3}}}\frac{dq}{{q^0}}\hspace{1mm}\int_{E^{p'}_{p+q}}\frac{d\pi_{p'}}{\sqrt{s}{p'^0}}s\sigma(g,\omega)f(q)h(p)(p-p')\cdot (\nabla\sqrt{J})(q)\eta(p)\right|$$ where $d\pi_{p'}=dp'\hspace{1mm}u({p^0}+{q^0}-{p'^0})\delta\left(-\frac{s}{2\sqrt{s}}-\frac{p'^\mu(p_\mu+q_\mu)}{\sqrt{s}}\right).$ As it is expressed in the measure $d\pi_{p'}$, we can see that $(p'^\mu-p^\mu)(p_\mu+q_\mu)=0$ on the set $E^{p'}_{p+q}$. In this integral, as $p'$ varies on hyperboloids of constant $\bar{g}$, the integrand is constant except for the factor $(p-p')\cdot (\nabla\sqrt{J})(q)$. Write the term as: $$(p-p')\cdot (\nabla\sqrt{J})(q)=(p^\mu-p'^\mu)(\nabla\sqrt{J})_\mu(q),$$ where we define $(\nabla\sqrt{J})^\mu(q){\overset{\mbox{\tiny{def}}}{=}}(0,(\nabla\sqrt{J})(q))$. Then this term is linear in $(p^\mu-p'^\mu)$ and hence this whole integral vanishes by the symmetry of the set $E^{p'}_{p+q}$. For part I, we define $\tilde{\eta}(p,p')=\eta(p')-\eta(p)$. Since $\bar{g}\leq 1$, we have ${q^0}\approx {q'^0}$ and this gives that there is some uniform constant $c>0$ such that $\sqrt{J(q')}\leq (J(q))^c$. Thus, we have $$\begin{split} |T^k_{+,\text{I}}-T^k_{-,\text{I}}|\leq& \left|\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega \hspace{1mm} v_\phi \sigma_k(g,\omega)f(q)h(p)\tilde{\eta}(p,p')\sqrt{J(q')}\right|\\ \leq& \left|\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega \hspace{1mm} v_\phi \sigma_k(g,\omega)f(q)h(p)\tilde{\eta}(p,p')(J(q))^c\right|.\\ \end{split}$$ Now, we use the Cauchy-Schwarz inequality and obtain $$\label{152} \begin{split} |T^k_{+,\text{I}}-T^k_{-,\text{I}}| \leq& \left(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega \hspace{1mm} v_\phi \sigma_k(g,\omega)|f(q)|^2|h(p)|^2(J(q))^c\right)^{\frac{1}{2}}\\ &\times \left(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega \hspace{1mm} v_\phi \sigma_k(g,\omega)|\tilde{\eta}(p,p')|^2(J(q))^c\right)^{\frac{1}{2}}. \end{split}$$ The first part on the right-hand side is bounded by $2^{\frac{k\gamma}{2}}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}$ for some $m\geq 0$ as in (\[T-esimate1\]) and (\[T-estimate\]).
For the second part, we rewrite this 8-fold integral as the following 12-fold integral: $$\begin{split} &\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega \hspace{1mm} v_\phi \sigma_k(g,\omega)|\tilde{\eta}(p,p')|^2(J(q))^c\\ =& \int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{{{\mathbb{R}^3}}}dp'\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm} s \sigma(g,\omega)\chi_k(\bar{g})|\tilde{\eta}(p,p')|^2(J(q))^c \delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu). \end{split}$$ As in (\[E2\]), we reduce this integral to an integral on the set $E^q_{p'-p}$ as follows: $$\begin{split} & \int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{{{\mathbb{R}^3}}}dp'\int_{{{\mathbb{R}^3}}}dq'\hspace{1mm} s \sigma(g,\omega)\chi_k(\bar{g})|\tilde{\eta}(p,p')|^2(J(q))^c \delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)\\ =&\int_ {{\mathbb{R}^3}}\frac{dp}{{p^0}} \int_ {{\mathbb{R}^3}}\frac{dp'}{{p'^0}} \int_{E^{q}_{p'-p}} \frac{d\pi_{q}}{2\bar{g}{q^0}}s\sigma(g,\theta)\chi_k(\bar{g})|\tilde{\eta}(p,p')|^2(J(q))^c\\ =&\int_ {{\mathbb{R}^3}}\frac{dp}{{p^0}} \int_ {{\mathbb{R}^3}}\frac{dp'}{{p'^0}} \int_{E^{q}_{p'-p}} \frac{d\pi_{q}}{2\bar{g}{q^0}}s\sigma(g,\theta)\chi_k(\bar{g})\frac{|\tilde{\eta}(p,p')|^2}{\bar{g}^{3+\gamma}}\bar{g}^{3+\gamma}(J(q))^c\\ \lesssim &\hspace{1mm}2^{-k(3+\gamma)}\int_ {{\mathbb{R}^3}}\frac{dp}{{p^0}} \int_ {{\mathbb{R}^3}}\frac{dp'}{{p'^0}} \int_{E^{q}_{p'-p}} \frac{d\pi_{q}}{2\bar{g}{q^0}}s\sigma(g,\theta)\chi_k(\bar{g})\frac{|\tilde{\eta}(p,p')|^2}{\bar{g}^{3+\gamma}}(J(q))^c 1_{\bar{g}\leq 1}\\ =&\hspace{1mm}2^{-k(3+\gamma)}\int_ {{\mathbb{R}^3}}\frac{dp}{{p^0}} \int_ {{\mathbb{R}^3}}\frac{dp'}{{p'^0}} \int_{E^{q}_{p'-p}} \frac{d\pi_{q}}{2\bar{g}{q^0}}s\sigma(g,\theta)\chi_k(\bar{g})\frac{|\eta(p')-\eta(p)|^2}{\bar{g}^{3+\gamma}}(J(q))^c 1_{\bar{g}\leq 1}. \end{split}$$ By recalling Proposition \[usefulineq\] and that $\sigma(g,\omega)\lesssim (g^a+g^{-b})\sigma_0(\omega)\approx (g^a+g^{-b})\left(\frac{\bar{g}}{g}\right)^{-2-\gamma}$, we can show that $$\frac{1}{{p^0}{p'^0}}\int_{E^{q}_{p'-p}}\frac{d\pi_{q}}{2\bar{g}{q^0}}s\sigma(g,\theta)\chi_k(\bar{g})(J(q))^c\lesssim 2^{k\gamma}({p^0}{p'^0})^\frac{a+\gamma}{4}.$$ Therefore, the second part of the right-hand side of (\[152\]) is bounded above by $$\begin{split} &\left(\int_{{{\mathbb{R}^3}}}dp\int_{{{\mathbb{R}^3}}}dq\int_{\mathbb{S}^2}d\omega \hspace{1mm} v_\phi \sigma_k(g,\omega)|\tilde{\eta}(p,p')|^2(J(q))^c\right)^{\frac{1}{2}}\\ \lesssim & \hspace{1mm} 2^{-\frac{3k}{2}}\left(\int_{{\mathbb{R}^3}}dp \int_{{\mathbb{R}^3}}dp' ({p^0}{p'^0})^{\frac{a+\gamma}{4}}\frac{|\eta(p')-\eta(p)|^2}{\bar{g}^{3+\gamma}}1_{\bar{g}\leq 1}\right)^{\frac{1}{2}}\leq 2^{-\frac{3k}{2}}|\eta|_{I^{a,\gamma}}. \end{split}$$ Therefore, we finally obtain that $$\begin{split} |T^k_{+,\text{I}}-T^k_{-,\text{I}}| \leq& 2^{\frac{(\gamma-3)}{2}k}|f|_{L^2_{-m}}| h|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{I^{a,\gamma}}. \end{split}$$ Together with the previous estimates on parts II and III, we obtain the proposition. Littlewood-Paley decompositions {#LP decomp} =============================== In this section, we would like to decompose our function further so that each decomposed piece is essentially supported on an annulus in frequency space. We will see that each decomposed piece carries a negative power of 2, so that the sum of the pieces is well under control; this is used, together with Littlewood-Paley theory, to obtain the main upper bound estimate in this and the next section.
This standard Littlewood-Paley decomposition will allow us to make sharp estimates on the linearized relativistic Boltzmann operator. Main Estimates on the Littlewood-Paley Decomposition ---------------------------------------------------- Our purpose in this section is to decompose $f$ into infinitely many pieces $f_j$ for $j\geq0$ such that $$f=\sum_{j=0}^{\infty}f_j$$ and such that each $f_j$ corresponds to the usual projection onto frequencies comparable to $2^j$, which corresponds to the scale $2^{-j}$ in physical space. From here, $\hat{f}(\xi)$ will denote the Fourier transform of $f(p)$. We first choose and fix any $C^\infty$-function $\phi$ supported on the unit ball of ${{\mathbb{R}^3}}$. Then, we define the difference kernel $\psi$ as $\psi(w){\overset{\mbox{\tiny{def}}}{=}}\phi(w)-2^{-3}\phi(w/2)$ so that its Fourier transform satisfies $\hat{\psi}(\xi)= \hat{\phi}(\xi)-\hat{\phi}(2\xi)$. We also abbreviate $\phi_j(w){\overset{\mbox{\tiny{def}}}{=}}2^{3j}\phi(2^jw)$ and likewise for $\psi_j$ so that their Fourier transforms satisfy $\hat{\phi}_j(\xi)=\hat{\phi}(2^{-j}\xi)$ and likewise for $\hat{\psi}_j$. Then we have $$\begin{split} \hat{\phi}(\xi)+\sum_{j=1}^l\hat{\psi}_j(\xi)&=\hat{\phi}(\xi)+\sum_{j=1}^l(\hat{\phi}(2^{-j}\xi)-\hat{\phi}(2^{-j+1}\xi))\\ &=\hat{\phi}(2^{-l}\xi)\rightarrow 1 \hspace{3mm} \text{as}\hspace{1mm} l\rightarrow \infty. \end{split}$$ Now define the partial sum operator $$S_j(f){\overset{\mbox{\tiny{def}}}{=}}f*\phi_j=\int_{{\mathbb{R}^3}}2^{3j}\phi(2^j(p-q))f(q)dq$$ and the difference operator $$\Delta_j(f){\overset{\mbox{\tiny{def}}}{=}}f*\psi_j=\int_{{\mathbb{R}^3}}2^{3j}\psi(2^j(p-q))f(q)dq$$ where we define $\Delta_0=S_0$. Then we notice that $\Delta_j$ satisfies $$\label{lpcan} \begin{split} \Delta_j(1)(p)=(1*\psi_j)(p)=\int_{{{\mathbb{R}^3}}}\psi_j(q)dq=\hat{\psi_j}(0)=0. \end{split}$$ Throughout this section, the variables $p$ and $p'$ are considered to be independent vectors in ${{\mathbb{R}^3}}$ and we will not assume that the variables $p$ and $p'$ are related by the collision geometry. We will, however, see that the estimates on these Littlewood-Paley projections will be used in later sections for the estimates that involve the relativistic collisional geometry. Note that we have $$\widehat{S_jf}(\xi)=\hat{f}(\xi)\hat{\phi_j}(\xi)$$ and $$\widehat{\Delta_jf}(\xi)=\hat{f}(\xi)\hat{\psi_j}(\xi).$$ Remark that, if $\int_{{\mathbb{R}^3}}\phi \hspace{1mm} dx =1$, we have that $$S_jf(p)\rightarrow f(p)$$ as $j\rightarrow\infty$ for all sufficiently smooth $f$ and that $$\Big(\int_{{\mathbb{R}^3}}dp|S_jf(p)|^{\bar{p}}({p^0})^\rho\Big)^\frac{1}{\bar{p}}\lesssim \Big(\int_{{\mathbb{R}^3}}dp|f(p)|^{\bar{p}}({p^0})^\rho\Big)^\frac{1}{\bar{p}}$$ uniformly in $j\geq 0$ for any fixed $\rho\in \mathbb{R}$ and any $\bar{p}\in[1,\infty].$ This $L^p$-boundedness property also holds for the operators $\Delta_j$. We are interested in making an upper bound estimate for $$\sum_{j=0}^\infty 2^{\gamma j}\int_{{\mathbb{R}^3}}dp\hspace{1mm}|\Delta_jf|^2({p^0})^\rho$$ when $0<\gamma<2$ and $\rho\geq 0$. Here we state our first main proposition of this section: \[LPP\] For any $\gamma\in(0,2)$ and any $\rho\in\mathbb{R}$, the following inequality holds: $$\label{LP} \begin{split} \sum_{j=0}^\infty 2^{\gamma j}&\int_{{\mathbb{R}^3}}dp\hspace{1mm}|\Delta_jf|^2({p^0})^\rho\\ &\lesssim |f|^2_{L_\rho^2}+\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm} ({p^0}{p'^0})^{\frac{\rho}{2}}\frac{(f(p)-f(p'))^2}{\bar{g}^{3+\gamma}}1_{\bar{g}\leq 1}.
\end{split}$$ This holds for any smooth $f$ uniformly. We denote the right-hand side of the inequality as $|f|^2_{I^\rho}$. For any $j\geq1$, we have $$\begin{split} &\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\int_{{\mathbb{R}^3}}dz (f(p)-f(p'))^2\psi_j(z-p)\psi_j(z-p')(z^0)^\rho\\ &=-\int_{{\mathbb{R}^3}}dp\hspace{1mm}(\Delta_jf(p))^2({p^0})^\rho + \int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dz(f(p))^2\psi_j(z-p)\Delta_j(1)(z)(z^0)^\rho \end{split}$$ because $\Delta_j(1)(z)=\int_{{\mathbb{R}^3}}dp'\hspace{1mm} \psi_j(z-p')$ and the left-hand side of the equality is equal to $$\begin{split} \text{(LHS)}=&\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dz(f(p))^2\psi_j(z-p)(z^0)^\rho \int_{{\mathbb{R}^3}}dp'\hspace{1mm} \psi_j(z-p')\\ &-\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\int_{{\mathbb{R}^3}}dz f(p)f(p')\psi_j(z-p)\psi_j(z-p')(z^0)^\rho\\ =&\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dz(f(p))^2\psi_j(z-p)(z^0)^\rho \Delta_j(1)(z)\\ &-\int_{{\mathbb{R}^3}}dz(z^0)^\rho\Big(\int_{{\mathbb{R}^3}}dp\hspace{1mm}f(p)\psi_j(z-p)\Big)^2\\ =&\text{(RHS)}. \end{split}$$ Since $\Delta_j(1)(p)=0$ from (\[lpcan\]), we have $$\begin{split} \int_{{\mathbb{R}^3}}&dp\int_{{\mathbb{R}^3}}dz(f(p))^2\psi_j(z-p)(z^0)^\rho \Delta_j(1)(z)=0. \end{split}$$ On the other hand, from the support condition for $\psi_j(z-p)\psi_j(z-p')$ on $z$, we have ${p^0}\approx {p'^0} \approx z^0$. Also notice that $|\psi_j(z-p')|\lesssim 2^{3j}$ because $ |\psi_j(w)|= 2^{3j}|\psi(2^jw)|\lesssim 2^{3j}. $ Thus, $$\begin{split} \int_{{\mathbb{R}^3}}&dz|\psi_j(z-p')||\psi_j(z-p)|(z^0)^\rho\\ &\lesssim 2^{3j}\int_{{\mathbb{R}^3}}dz |\psi_j(z-p)|(z^0)^\rho\\ &\lesssim 2^{3j}({p^0})^\rho\approx 2^{3j}({p^0}{p'^0})^\frac{\rho}{2}. \end{split}$$ Note that the integral is supported only when $|p-p'|\leq 2^{-j+1}$ because $$|p-p'|\leq |z-p|+|z-p'|\leq 2^{-j+1}.$$ Therefore, we have $$2^{\gamma j}\int_{{\mathbb{R}^3}}dz|\psi_j(z-p')||\psi_j(z-p)|(z^0)^\rho\lesssim 2^{(3+\gamma)j}({p^0}{p'^0})^\frac{\rho}{2}1_{|p-p'|\leq 2^{-j+1}}.$$ Since there exists $j_0>0$ such that $2^{-j_0}<|p-p'|\leq 2^{-j_0+1},$ we have $$\begin{split} \sum_{j=1}^\infty 2^{(3+\gamma)j}1_{|p-p'|\leq 2^{-j+1}} &= \sum_{j=1}^{j_0} 2^{(3+\gamma)j}1_{|p-p'|\leq 2^{-j+1}}\\ &\lesssim 2^{(3+\gamma)j_0}1_{|p-p'|\leq 2^{-j_0+1}}\\ &\lesssim \frac{1_{\bar{g}\leq 1}}{|p-p'|^{\gamma+3}} \end{split}$$ because $\bar{g}\leq |p-p'|$. We recall that $\bar{g}=g(p'^\mu,p^\mu)$ satisfies $\bar{g}^2=(p'^\mu-p^\mu)(p'_\mu-p_\mu)$. If $j=0$, the term $\int_{{{\mathbb{R}^3}}}dp|\Delta_0 f|^2({p^0})^\rho$ is bounded above by $|f|^2_{L^2_\rho}$. Finally, we obtain that $$\begin{split} \sum_{j=0}^\infty &2^{\gamma j}\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\int_{{\mathbb{R}^3}}dz (f(p)-f(p'))^2\psi_j(z-p)\psi_j(z-p')(z^0)^\rho\\ &\lesssim \int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm} ({p^0}{p'^0})^\frac{\rho}{2}\frac{(f(p)-f(p'))^2}{|p-p'|^{\gamma+3}}1_{\bar{g}\leq 1}\\ &\lesssim \int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm} ({p^0}{p'^0})^\frac{\rho}{2}\frac{(f(p)-f(p'))^2}{\bar{g}^{\gamma+3}}1_{\bar{g}\leq 1} \end{split}$$ because $\bar{g}\leq |p-p'|$. Therefore, by the first equality in this proof, we obtain the proposition. Estimates on the Derivatives ---------------------------- We also need to establish a similar inequality when the $\Delta_j$’s are replaced by $2^{-|\alpha|j}\nabla^\alpha \Delta_j$, where $\nabla$ is the spatial gradient. We consider the estimates of the spatial derivatives of our operators.
Although the collision kernel has a strong angular singularity, we do not use any momentum derivatives in our proofs throughout this paper. Recall our notation that $\nabla^\alpha =\partial^{\alpha_1}_{x_1}\partial^{\alpha_2}_{x_2}\partial^{\alpha_3}_{x_3}$ for a multi-index $\alpha=(\alpha_1,\alpha_2,\alpha_3)$. Note that, for any partial derivative $\frac{\partial}{\partial x_i}\Delta_j f$, there holds $\frac{\partial}{\partial x_i}\Delta_j f= 2^j\tilde{\Delta}_j f$ where $\tilde{\Delta}_j$ is the $j^{\text{th}}$-Littlewood-Paley cut-off operator associated to a new cut-off function $\tilde{\psi}$ which also satisfies the cancellation property (\[lpcan\]) that $\tilde{\Delta}_j(1)(p)=0$. Thus, we can write $$2^{-|\alpha|j}\nabla^\alpha \Delta_j f(p)= \Delta^\alpha_j(f)(p)$$ where $\Delta^{\alpha}_j$ is the cut-off operator associated to some $\psi^\alpha$ and $\nabla$ is the usual 3-dimensional spatial gradient. Then, we can repeat a proof similar to that of Proposition \[LPP\] by considering the following integral instead to make an upper-bound estimate on the weighted $L^2$-norm of the derivatives of each Littlewood-Paley decomposed piece: $$\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\int_{{\mathbb{R}^3}}dz (f(p)-f(p'))^2\psi^\alpha_j(z-p)\psi^\alpha_j(z-p')(z^0)^\rho.$$ Then we show that the integral above is equal to $$- \int_{{{\mathbb{R}^3}}} dp |\Delta^\alpha_j(f)(p)|^2(p^0)^{\rho}+\int_{{{\mathbb{R}^3}}} dp\int_{{{\mathbb{R}^3}}}dz\hspace{1mm} (f(p))^2\psi^\alpha_j(z-p)\Delta^\alpha_j(1)(z)(z^0)^{\rho}.$$ Together with the same condition as in (\[lpcan\]) that $\Delta^\alpha_j(1)(p)=0$, these estimates can be multiplied by $2^{\gamma j}$ and summed over $j$ to get $$\begin{split} \sum_{j=0}^\infty 2^{\gamma j}&\int_{{\mathbb{R}^3}}dp\hspace{1mm}|\Delta^\alpha_j(f)(p)|^2({p^0})^\rho\\ &\lesssim |f|^2_{L_\rho^2}+\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm} ({p^0}{p'^0})^{\frac{\rho}{2}}\frac{(f(p)-f(p'))^2}{\bar{g}^{3+\gamma}}1_{\bar{g}\leq 1}. \end{split}$$ Therefore, it follows that $$\label{LPd1} \sum_{j=0}^\infty 2^{(\gamma-2|\alpha|)j}\int_{{{\mathbb{R}^3}}}dp |\nabla^\alpha \Delta_j f(p)|^2({p^0})^{\frac{a+\gamma}{2}}\lesssim |f|^2_{I^{a,\gamma} }.$$ These two inequalities hold for any multi-index $\alpha$. Main upper bound estimates {#main upper} ========================== In this section, we finally establish the main upper bound estimates with the hard potential collision kernel. We write $$h=\Delta_0h+\sum^\infty_{j=1}\Delta_jh=\sum_{j=0}^\infty h_j$$ where we denote $h_j=\Delta_j h$ for $j\geq 0$. Then, the trilinear product can be written as $$\label{trilinearsum} \begin{split} \langle \Gamma(f,h),\eta\rangle =\sum_{j=0}^\infty \langle \Gamma(f,h_{j}),\eta\rangle.\\ \end{split}$$ We consider the dyadic decomposition of the gain and loss terms as follows. $$\begin{split} \sum_{j=0}^\infty \langle\Gamma(f,h_{j}),\eta\rangle =&\sum_{j=0}^\infty\sum_{k=-\infty}^{\infty}\{T^k_+(f,h_{j},\eta)-T^k_-(f,h_{j},\eta)\}\\ =&\sum_{j=0}^\infty\sum_{k=-\infty}^{0}\{T^k_+(f,h_{j},\eta)-T^k_-(f,h_{j},\eta)\}\\ &+\sum_{j=0}^\infty\sum_{k=1}^{[\frac{j}{4}]}\{T^k_+(f,h_{j},\eta)-T^k_-(f,h_{j},\eta)\}\\ &+\sum_{j=0}^\infty\sum_{k=[\frac{j}{4}]+1}^\infty\{T^k_+(f,h_{j},\eta)-T^k_-(f,h_{j},\eta)\}\\ \eqdef&\hspace{1mm} S_1+S_2+S_3. \end{split}$$ We first compute the upper bound for the sum $S_3.$ In this sum, we note that $k\geq 0$ and $0<\gamma<2$.
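Since $\gamma-2<0$ and $\frac{\gamma-3}{2}<0$, the geometric series over $k\geq [\frac{j}{4}]+1$ below is dominated by its first term.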
Then, by (\[can+-\]), we obtain $$\begin{split} |S_3|&\lesssim \sum_{j=0}^\infty\sum_{k=[\frac{j}{4}]+1}^\infty \max\{2^{(\gamma-2)k}, 2^{\frac{(\gamma-3)}{2}k}\}|f|_{L^2_{-m}}| h_j|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{I^{a,\gamma}}\\ &\lesssim \sum_{j=0}^\infty \max\{2^{(\gamma-2)\frac{j}{4}}, 2^{\frac{(\gamma-3)}{2}\frac{j}{4}}\} |f|_{L^2_{-m}}| h_j|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{I^{a,\gamma}} \end{split}$$ Then, we impose (\[LP\]) to obtain that $$|S_3|\lesssim |f|_{L^2_{-m}}| h|_{I^{a,\gamma}}|\eta|_{I^{a,\gamma}}.$$ For the sum $S_2$, we use (\[T+\]) and (\[T-\]). Then, we have $$\begin{split} |S_2|\lesssim& \sum_{j=0}^\infty \sum_{k=1}^{[\frac{j}{4}]}2^{k\gamma}|f|_{L^2_{-m}}| h_{j}|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& \sum_{j=0}^\infty2^{\frac{\gamma j}{4}} |f|_{L^2_{-m}}| h_{j}|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& |f|_{L^2_{-m}}\left|\sum_{j=0}^\infty 2^{\gamma j} |h_{j}|^2_{L^2_{\frac{a+\gamma}{2}}}\right|^{\frac{1}{2}}|\eta|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& |f|_{L^2_{-m}}|h|_{I^{a,\gamma}}|\eta|_{L^2_{\frac{a+\gamma}{2}}} \end{split}$$ where the last inequality is by (\[LP\]) and the third inequality is by $$\sum_{j=0}^\infty2^{\frac{\gamma j}{4}} | h_{j}|_{L^2_{\frac{a+\gamma}{2}}}\leq \left|\sum_{j=0}^\infty 2^{\gamma j} |h_{j}|^2_{L^2_{\frac{a+\gamma}{2}}}\right|^{\frac{1}{2}}\left|\sum_{j=0}^\infty 2^{-\frac{\gamma j}{2}} \right|^{\frac{1}{2}}$$. For the sum $S_1$, we note that $\sum_{k=-\infty}^0 2^{k\gamma}\lesssim 1$. Then, by (\[T+\]) and (\[T-\]), we obtain that $$\begin{split} |S_1|\lesssim& \sum_{j=0}^\infty \sum_{k=-\infty}^{0} 2^{k\gamma}|f|_{L^2_{-m}}| h_{j}|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& \sum_{j=0}^\infty |f|_{L^2_{-m}}| h_{j}|_{L^2_{\frac{a+\gamma}{2}}}|\eta|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& |f|_{L^2_{-m}}\left|\sum_{j=0}^\infty 2^{\gamma j} |h_{j}|^2_{L^2_{\frac{a+\gamma}{2}}}\right|^{\frac{1}{2}}|\eta|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& |f|_{L^2_{-m}}|h|_{I^{a,\gamma}}|\eta|_{L^2_{\frac{a+\gamma}{2}}} \end{split}$$ Thus, we can collect the estimates on $S_1$, $S_2$, and $S_3$ and conclude that $$|\langle\Gamma(f,h),\eta\rangle|\lesssim |f|_{L^2_{-m}}| h|_{I^{a,\gamma}}|\eta|_{I^{a,\gamma}}.$$ This proves Theorem \[thm1\]. Note that this immediately implies Lemma \[Lemma1\] by taking the spatial derivatives on the functions. When $f, h, \eta$ are Schwartz functions, the order of summation may be rearranged because the sum is absolutely convergent. Then by $(\ref{T+})$ and $(\ref{T-})$, we obtain $$\begin{split} |S_1|\lesssim& \sum_{j=0}^\infty 2^{\gamma j}| f|_{L^2}| h_{j+\ell}|_{L^2_{\frac{a+\gamma}{2}}}| \eta_{j}|_{L^2_{\frac{a+\gamma}{2}}}\\ \lesssim& 2^{-\frac{\gamma \ell}{2}}| f|_{L^2}\left| \sum_{j=0}^\infty 2^{\gamma(j+\ell)} | h_{j+\ell}|^2_{L^2_{\frac{a+\gamma}{2}}}\right|^{\frac{1}{2}}\left|\sum_{j=0}^\infty 2^{\gamma j} | \eta_{j}|^2_{L^2_{\frac{a+\gamma}{2}}}\right|^{\frac{1}{2}}\\ \lesssim& 2^{-\frac{\gamma \ell}{2}}| f|_{L^2}|h|_{I^{a,\gamma} } |\eta|_{I^{a,\gamma} } \end{split}$$ where the second inequality is by the Cauchy-Schwarz inequality and the last inequality is by $(\ref{LP})$. Since $\gamma>0$, this can be summed over $\ell$. 
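Indeed, the factor $2^{-\frac{\gamma \ell}{2}}$ in the bound above yields $\sum_{\ell=1}^{\infty} 2^{-\frac{\gamma\ell}{2}}\lesssim 1$.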
Together with the sum $S_1$ and $I_1$, we obtain that $$\sum_{\ell=1}^{\infty}\sum_{j=0}^\infty \langle \Gamma(f,h_{j+\ell}),\eta_j\rangle \lesssim | f|_{L^2}|h|_{I^{a,\gamma} }|\eta|_{I^{a,\gamma} }.$$ Analogously, we expand the second sum in (\[trilinearsum\]) in terms of $T^k_+$, $T^k_*$, and $\Gamma_*$ as $$\begin{split} \sum_{j=0}^\infty \langle \Gamma(f,h_{j}),\eta_{j+\ell}\rangle =&\sum_{j=0}^\infty \Gamma_*(f,h_j,\eta_{j+\ell})+\sum_{k=-\infty}^\infty\sum_{j=0}^\infty(T^k_+-T^k_*)(f,h_{j},\eta_{j+\ell})\\ =&\sum_{j=0}^\infty \Gamma_*(f,h_j,\eta_{j+\ell})+\sum_{j=0}^\infty\sum_{k=-\infty}^j(T^k_+-T^k_*)(f,h_{j},\eta_{j+\ell})\\ &+\sum_{j=0}^\infty\sum_{k=j+1}^\infty(T^k_+-T^k_*)(f,h_{j},\eta_{j+\ell})\\ =&L_1+L_2+L_3. \end{split}$$ Regarding the sum $L_3$, we again use the cancellation estimates (\[can+\*\]) which tell us that $$\begin{split} &|(T^k_+-T^k_*)(f,h_j,\eta_{j+\ell})| \\ \lesssim & 2^{(\gamma-2)k}|f|_{L^2_{-m}}\Big| h_j \Big|_{L^2_{\frac{a+\gamma}{2}}}| \eta_{j+\ell}|_{L^2_{\frac{a+\gamma}{2}}}+2^{\gamma k}|f|_{L^2_{-m}}|| \tilde{h}_j ||_{L^2_{a,\gamma}({{\mathbb{R}^3}}\times{{\mathbb{R}^3}})}| \eta_{j+\ell}|_{L^2_{\frac{a+\gamma}{2}}}.\\ \end{split}$$ This sum is exactly the same as the $S_2$ term in (\[196\]) except that the roles of $h$ and $\eta$ have been exchanged. Thus, we obtain that $$\sum_{\ell=1}^{\infty}L_3\lesssim | f|_{L^2}|h|_{I^{a,\gamma} }|\eta|_{I^{a,\gamma} }.$$ Regarding $L_2$, the estimates (\[T\*\]) and (\[T+\]) for the decomposed pieces are used just as we did for the sum $S_1$. The only difference is that the roles of $h$ and $\eta$ are reversed. Regarding $L_1$, if $a>-\frac{3}{2}$, we use (\[Gamma\]) and obtain that $$|\Gamma_*(f,h_j,\eta_{j+\ell})|\lesssim 2^{-\frac{\gamma}{2}\ell}| f|_{L^2}2^{\frac{\gamma}{2}j}| h_j|_{L^2_{\frac{a}{2}}}2^{\frac{\gamma}{2}(j+\ell)}| \eta_{j+\ell}|_{L^2_{\frac{a}{2}}}$$ where we used that $1\leq 2^{\gamma j}=2^{-\frac{\gamma}{2}\ell}2^{\frac{\gamma}{2}j}2^{\frac{\gamma}{2}(j+\ell)}.$ Then the Cauchy-Schwarz inequality is applied to the sum over $j$, and (\[LPd1\]) is used to get the desired upper bound. If $a<-\frac{3}{2}$ and $a+\gamma\geq0$, then we have the extra term as in (\[Gamma2\]). For the second term in (\[Gamma2\]), we use the inequalities that $$|h_j J^\delta|_{H^{\frac{\gamma}{2}+\epsilon}}\lesssim |h_j|^{1-\frac{\gamma}{2}-\epsilon}_{L^2_{-m}}||\nabla|h_j|^{\frac{\gamma}{2}+\epsilon}_{L^2_{-m}}$$ and that $$|\eta_{j+\ell} J^\delta|_{H^{\frac{\gamma}{2}-\epsilon}}\lesssim |\eta_{j+\ell}|^{1-\frac{\gamma}{2}+\epsilon}_{L^2_{-m}}||\nabla|\eta_{j+\ell}|^{\frac{\gamma}{2}-\epsilon}_{L^2_{-m}}.$$ Note that these hold as long as $0\leq \epsilon \leq \min\{\frac{\gamma}{2},1-\frac{\gamma}{2}\}.$ Then the second term in (\[Gamma2\]) is bounded by $$|f|_{L^2_{-m}}|h_j|^{1-\frac{\gamma}{2}-\epsilon}_{L^2_{-m}}||\nabla|h_j|^{\frac{\gamma}{2}+\epsilon}_{L^2_{-m}} |\eta_{j+\ell}|^{1-\frac{\gamma}{2}+\epsilon}_{L^2_{-m}}||\nabla|\eta_{j+\ell}|^{\frac{\gamma}{2}-\epsilon}_{L^2_{-m}}.$$ Let $G(j,\ell,i_1,i_2){\overset{\mbox{\tiny{def}}}{=}}2^{-\epsilon \ell}2^{(\frac{\gamma}{2}-i_1)j}||\nabla|^{i_1}h_j|_{L^2_{-m}}2^{(\frac{\gamma}{2}-i_2)(j+\ell)}||\nabla|^{i_2}\eta_{j+\ell}|_{L^2_{-m}}$ and define $S_{i_1,i_2}{\overset{\mbox{\tiny{def}}}{=}}\sum_{j=0}^\infty\sum_{\ell=1}^\infty G(j,\ell,i_1,i_2)$ for $i_1,i_2\in\{0,1\}$. Then we have that the second term is bounded by $|f|_{L^2_{-m}}(G(j,\ell,1,1))^{\frac{\gamma}{2}-\epsilon}(G(j,\ell,0,0))^{1-\frac{\gamma}{2}-\epsilon}(G(j,\ell,1,0))^{2\epsilon}$.
Therefore, summing $|\Gamma_*|$ over $\ell$ and $j$ gives $$\sum_{j=0}^\infty\sum_{\ell=0}^\infty |\Gamma_*(f,h_j,\eta_{j+\ell})| \lesssim |f|_{L^2_{-m}}(S_{1,1})^{\frac{\gamma}{2}-\epsilon}(S_{0,0})^{1-\frac{\gamma}{2}-\epsilon}(S_{1,0})^{2\epsilon}+| f|_{L^2}|h|_{I^{a,\gamma} }|\eta|_{I^{a,\gamma} }$$ by Hölder’s inequality. We further take Cauchy-Schwarz on each sum over $j$ and get the same bound as before. Together with this estimate, we finally obtain that $$|\langle \Gamma(f,h),\eta\rangle| \lesssim | f|_{L^2}|h|_{I^{a,\gamma} }|\eta|_{I^{a,\gamma} }$$ and this proves the special case of Theorem \[thm1\] when $l=0$. Here we also mention a proposition that is used to prove further compact estimates. Let $\phi(p)$ be an arbitrary smooth function which satisfies, for some positive constants $C_\phi$ and $c$, $$|\phi(p)|\leq C_\phi e^{-c{p^0}}.$$ Then we have that $$\label{C1} |\langle \Gamma(\phi,f),h\rangle |\lesssim |f|_{I^{a,\gamma}}|h|_{I^{a,\gamma}}.$$ If $\phi$ further satisfies the stronger smoothness condition that, for some positive constants $C_\phi$ and $c$, $$|\nabla|^2\phi\leq C_\phi e^{-c{p^0}},$$ then we have $$\label{C2} |\langle \Gamma(f, \phi),h\rangle |\lesssim |f|_{L^2_{\frac{a+\gamma}{2}-2}}|h|_{L^2_{\frac{a+\gamma}{2}-2}}.$$ Additionally, for any $m\geq 0$, we have $$\label{C3} |\langle \Gamma(f, h),\phi\rangle |\lesssim |f|_{L^2_{-m}}|h|_{L^2_{-m}}.$$ For (\[C1\]), we expand the trilinear form as in (\[trilinearsum\]) and use Sobolev embeddings on the $L^2$-norm of $\phi$ to bound it by the $L^\infty$-norm of $\phi$ and finitely many of its derivatives, which are all uniformly bounded. For (\[C2\]), we use that $$\begin{split} |\langle \Gamma(f, \phi),h\rangle |&=\left|\sum_{j=0}^\infty \sum_{k=-\infty}^\infty (T^k_+(f,\phi_j,h)-T^k_-(f,\phi_j,h))\right|\\ &\lesssim |f|_{L^2_{\frac{a+\gamma}{2}-2}}|h|_{L^2_{\frac{a+\gamma}{2}-2}}\sum_{j=0}^\infty\sum_{k=-\infty}^\infty \min\{2^{(\gamma-2)k},2^{\gamma k}\}2^{-2j}. \end{split}$$ A similar proof works for (\[C3\]). Note that (\[C1\]) implies Lemma \[Lemma3\]. Also, this proposition further implies the following lemma: For any $l\in\mathbb{R}$, we have the uniform estimate $$|\langle Kf,h\rangle | \lesssim | f|_{L^2_{a+\gamma-\delta}}| h|_{L^2_{a+\gamma-\delta}}$$ where $\delta=\min\{\gamma,2\}.$ Letting $h=f$, an immediate consequence of this lemma is Lemma \[Lemma2\]. More precisely, we use that the upper bound of the inequality in the lemma is bounded above by $$| f|^2_{L^2_{a+\gamma-\delta}} \leq \frac{\epsilon}{2}| f|^2_{L^2_{a+\gamma-\delta}}+C_\epsilon| f|^2_{L^2_{a+\gamma-\delta}}.$$ For the term $C_\epsilon| f|^2_{L^2_{a+\gamma-\delta}}$, we split the region into $|p|\leq R$ and $ |p|\geq R$. We choose $R>0$ large enough so that $C_\epsilon R^{-\delta}\leq \frac{\epsilon}{2}$. Then we obtain Lemma \[Lemma2\].

Main coercive estimates {#main coercive estimates}
=======================

In this section, for any Schwartz function $f$, we consider the quadratic difference arising in the inner product of the norm part $Nf$ with $f$. The main part is to estimate the norm $|f|^2_B$, which arises in this inner product and is defined as follows.
$$\begin{split} |f|^2_B&\eqdef\frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))^2\sqrt{J(q)J(q')}\\ &\geq \frac{1}{2}\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))^2\sqrt{J(q)J(q')}1_{\bar{g}\leq 1}.\\ \end{split}$$ Note that if $\bar{g}\leq 1$, we have $q^0\approx q'^0$ as well as $p^0\approx p'^0$. Thus, we can bound $\sqrt{J(q)J(q')}$ from below as $\sqrt{J(q)J(q')}\gtrsim e^{-C{q'^0}}$ for some uniform constant $C>0$. By the alternative Carleman-type dual representation of the integral operator as in (\[alternative form\]), it is possible to write the lower bound of the norm as an integral of some kernel $K(p,p')$ as follows: $$\begin{split} |f|^2_B&\gtrsim\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi \sigma(g,\theta)(f(p')-f(p))^2e^{-Cq'^0}1_{\bar{g}\leq 1}\\ &\approx \int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{p'^0}(f(p')-f(p))^21_{\bar{g}\leq 1}\int_{\mathbb{R}^3}\frac{dq_s}{\sqrt{|q_s|^2+\bar{s}}}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) s\sigma(g,\theta)e^{-Cq'^0}\\ &{\overset{\mbox{\tiny{def}}}{=}}\int_{{\mathbb{R}^3}}\frac{dp}{{p^0}}\int_{{\mathbb{R}^3}}\frac{dp'\hspace{1mm}}{{p'^0}} (f(p')-f(p))^21_{\bar{g}\leq 1}K(p,p'),\\ \end{split}$$ where the kernel $K(p,p')$ is defined as $$\label{K(p,p')} K(p,p')\eqdef\int_{\mathbb{R}^3}\frac{dq_s}{\sqrt{|q_s|^2+\bar{s}}}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) s\sigma(g,\theta)e^{-Cq'^0}.$$ Our goal in this section is to obtain a coercive lower bound on this kernel and hence on the norm $|f|_B$. First of all, the delta function in (\[K(p,p')\]) implies that $(p'^\mu-p^\mu)(p'_\mu-p_\mu+2q'_\mu)=0.$ Then this implies that $$\begin{split} 2(p'^\mu-p^\mu)(q'_\mu-p_\mu)&=2p'^\mu q'_\mu-2p'^\mu p_\mu-2p^\mu q'_\mu+2p^\mu p_\mu\\ &=2p'^\mu q'_\mu-2p^\mu q'_\mu-p'^\mu p_\mu-p^\mu p'_\mu+p^\mu p_\mu+p'^\mu p'_\mu\\ &=(p'^\mu-p^\mu)(p'_\mu-p_\mu+2q'_\mu)=0.
\end{split}$$ Then, we obtain that $$\begin{split} \bar{g}^2+\tilde{g}^2&=(p'^\mu-p^\mu)(p'_\mu-p_\mu)-2(p'^\mu-p^\mu)(q'_\mu-p_\mu)+(q'^\mu-p^\mu)(q'_\mu-p_\mu)\\ &=(p'^\mu-q'^\mu)(p'_\mu-q'_\mu){\overset{\mbox{\tiny{def}}}{=}}g'^2,\\ \end{split}$$ and we have $\bar{g}^2+\tilde{g}^2=g'^2$ on this hyperplane, as expected, where $g'{\overset{\mbox{\tiny{def}}}{=}}g(p'^\mu,q'^\mu).$ Note that, from the assumptions on the collision kernel, we have $\sigma(g',\theta)=\Phi(g')\sigma_0(\theta)$ and $$\sigma_0(\theta)\approx\frac{1}{\sin\theta\cdot \theta^{1+\gamma}}\approx \frac{1}{\theta^{2+\gamma}}\approx \Big(\frac{g'}{\bar{g}}\Big)^{2+\gamma}.$$ Thus, $$\sigma(g',\theta)\approx \Phi(g')\Big(\frac{g'}{\bar{g}}\Big)^{2+\gamma}.$$ Together with this, we have $$\begin{split} K(p,p')&\approx \int_{\mathbb{R}^3}\frac{dq_s}{\sqrt{|q_s|^2+\bar{s}}}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) s\Phi(g')\Big(\frac{g'}{\bar{g}}\Big)^{2+\gamma}e^{-Cq'^0}\\ &\gtrsim \int_{\mathbb{R}^3}\frac{dq_s}{\sqrt{|q_s|^2+\bar{s}}}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) s\Big(\frac{g'}{\bar{g}}\Big)^{2+\gamma}e^{-Cq'^0}\frac{g'}{\sqrt{s}}g'^a\\ &\gtrsim \int_{\mathbb{R}^3}\frac{dq_s}{\sqrt{|q_s|^2+\bar{s}}}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) e^{-Cq'^0}\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}\\ &\gtrsim \int_{\mathbb{R}^3}\frac{dq_s}{q'^0}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) e^{-Cq'^0}\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}, \end{split}$$ where the first inequality uses the assumption (\[hard\]) on the collision kernel that $\Phi(g')\gtrsim \frac{g}{\sqrt{s}}g^a$ and that $s=g^2+4>g^2$, and the last inequality holds because $\sqrt{|q_s|^2+\bar{s}}\lesssim q'^0$ when $\bar{g}\leq 1$, by the collision geometry. Here, we have the following lower bound for the kernel $K(p,p')$. \[coer\] If $\bar{g}\leq 1$, the kernel $K(p,p')$ is bounded uniformly from below as $$K(p,p')\gtrsim\frac{(p'^0)^{2+\frac{a+\gamma}{2}}}{\bar{g}^{3+\gamma}}.$$ With this proposition, we obtain the following uniform lower bound for the norm $|f|_B$: $$\begin{split} |f|^2_B&\gtrsim \int_{{\mathbb{R}^3}}\frac{dp}{{p^0}}\int_{{\mathbb{R}^3}}\frac{dp'\hspace{1mm}}{{p'^0}}\frac{(f(p')-f(p))^2}{\bar{g}^{3+\gamma}}(p'^0)^{2+\frac{a+\gamma}{2}} 1_{\bar{g}\leq 1} \\ &\gtrsim \int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\frac{(f(p')-f(p))^2}{\bar{g}^{3+\gamma}}(p'^0)^{\frac{a+\gamma}{2}}1_{\bar{g}\leq 1} \\ &\gtrsim \int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dp'\hspace{1mm}\frac{(f(p')-f(p))^2}{\bar{g}^{3+\gamma}}({p'^0}{p^0})^{\frac{a+\gamma}{4}} 1_{\bar{g}\leq 1}.\\ \end{split}$$ Thus, the proof of our main coercive inequality is complete because we have $$|f|^2_{L^2_{\frac{a+\gamma}{2}}}+|f|^2_{B}\gtrsim |f|^2_{I^{a,\gamma}}.$$ Here we prove Proposition \[coer\]. We begin with $$\begin{split} K(p,p')&\gtrsim \int_{\mathbb{R}^3}\frac{dq_s}{q'^0}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) e^{-Cq'^0}\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}.\\ \end{split}$$ First, we take a change of variables from $q_s=p'-p+2q'$ to $q'$.
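Note that, since $p$ and $p'$ are fixed at this stage, this substitution is affine in $q'$, with constant Jacobian $$dq_s=\big|\det\big(2\,\mathrm{Id}_{3\times 3}\big)\big|\,dq'=8\,dq',$$ and this harmless constant factor is absorbed into the implicit constants in $\gtrsim$.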
Then we obtain that $$\begin{split} K(p,p')&\gtrsim \int_{\mathbb{R}^3}\frac{dq'}{q'^0}\hspace{1mm} \delta((p'^\mu-p^\mu+2q'^\mu)(p'_\mu-p_\mu)) e^{-Cq'^0}\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}\\ &= \int_{\mathbb{R}^3}\frac{dq'}{q'^0}\hspace{1mm} \delta(\bar{g}^2+2q'^\mu(p'_\mu-p_\mu)) e^{-Cq'^0}\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}.\\ \end{split}$$ Now we take a change of variables on $q'$ into polar coordinates as $q'\in {{\mathbb{R}^3}}\rightarrow (r,\theta,\phi)$ and choose the z-axis parallel to $p'-p$ such that the angle between $q'$ and $p'-p$ is equal to $\phi.$ Then we obtain that $$\label{KK} \begin{split} K(p,p')\gtrsim&\int_1^\infty d{q'^0} \int_0^\infty dr \int_0^{2\pi}d\theta \int_0^{\pi}d\phi\hspace{1mm} r^2\sin\phi\hspace{1mm} \\ &\times\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}} \delta(\bar{g}^2+2q'^\mu(p'_\mu-p_\mu))\delta(r^2+1-(q'^0)^2)e^{-C{q'^0}}. \end{split}$$ The terms in the first delta function in (\[KK\]) can be written as $$\bar{g}^2+2q'^\mu(p'_\mu-p_\mu)=\bar{g}^2-2{q'^0}({p'^0}-{p^0})+2q'\cdot(p'-p)=\bar{g}^2-2{q'^0}({p'^0}-{p^0})+2r|p'-p|\cos\phi.$$ Also, note that the second delta function is $$\delta(r^2+1-(q'^0)^2)=\delta((r-\sqrt{(q'^0)^2-1})(r+\sqrt{(q'^0)^2-1}))=\frac{\delta(r-\sqrt{(q'^0)^2-1})}{2\sqrt{(q'^0)^2-1}},$$ because $r>0$. Now we reduce the integration against $r$ using this delta function and get $$\begin{split} K(p,p')\gtrsim&\int_1^\infty d{q'^0} \int_0^{2\pi}d\theta \int_0^{\pi}d\phi\hspace{1mm} \frac{(q'^0)^2-1}{2\sqrt{(q'^0)^2-1}}\sin\phi \hspace{1mm}\\ &\times\delta(\bar{g}^2-2{q'^0}({p'^0}-{p^0})+2\sqrt{(q'^0)^2-1}|p'-p|\cos\phi)\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}e^{-C{q'^0}}. \end{split}$$ Now, let $v=\cos\phi$. Then, $dv=-\sin\phi \hspace{1mm}d\phi$ and the integration is now rewritten as $$\begin{split} K(p,p')\gtrsim&\int_1^\infty d{q'^0} \int_0^{2\pi}d\theta \int_{-1}^{1}dv\hspace{1mm} \frac{(q'^0)^2-1}{2\sqrt{(q'^0)^2-1}}\\ &\times\delta(\bar{g}^2-2{q'^0}({p'^0}-{p^0})+2\sqrt{(q'^0)^2-1}|p'-p|v)\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}e^{-C{q'^0}}. \end{split}$$ Note that $$\delta(\bar{g}^2-2{q'^0}({p'^0}-{p^0})+2\sqrt{(q'^0)^2-1}|p'-p|v)=\frac{\delta\Big(v+\frac{\bar{g}^2-2{q'^0}({p'^0}-{p^0})}{2\sqrt{(q'^0)^2-1}|p'-p|}\Big)}{2\sqrt{(q'^0)^2-1}|p'-p|}.$$ We remark that $|\frac{\bar{g}^2-2{q'^0}({p'^0}-{p^0})}{2\sqrt{(q'^0)^2-1}|p'-p|}|\leq 1$. 
Then we further reduce the integration on $v$ by removing this delta function and get $$\begin{split} K(p,p')\gtrsim&\int_1^\infty d{q'^0} \int_0^{2\pi}d\theta \hspace{1mm} \frac{1}{|p'-p|}\frac{g'^{4+a+\gamma}}{\bar{g}^{2+\gamma}}e^{-C{q'^0}}\\ \gtrsim& \int_1^\infty d{q'^0} e^{-C{q'^0}}\hspace{1mm} \frac{g'^{4+a+\gamma}}{\bar{g}^{3+\gamma}{q'^0}}\\ \gtrsim &\int_1^\infty d{q'^0} e^{-C{q'^0}}\hspace{1mm}\frac{|p'^0-q'^0|^{4+a+\gamma}}{\bar{g}^{3+\gamma}(\sqrt{{q'^0}{p'^0}})^{4+a+\gamma}{q'^0}}\\ \gtrsim &\frac{1}{\bar{g}^{3+\gamma}(p'^0)^{2+\frac{a+\gamma}{2}}}\int_1^\infty d{q'^0} e^{-C{q'^0}}\hspace{1mm} \frac{|{p'^0}-{q'^0}|^{4+a+\gamma}}{({q'^0})^{3+\frac{a+\gamma}{2}}}\\ \approx &\frac{(p'^0)^{4+a+\gamma}}{\bar{g}^{3+\gamma}}\frac{1}{(p'^0)^{2+\frac{a+\gamma}{2}}} = \frac{(p'^0)^{2+\frac{a+\gamma}{2}}}{\bar{g}^{3+\gamma}}, \end{split}$$ where $q=p'+q'-p$, the second inequality follows from $\frac{|p'-p|}{{q'^0}}\approx \frac{|q-q'|}{\sqrt{q'^0q^0}}\lesssim \bar{g}(q^\mu,q'^\mu)=\bar{g}$, the third inequality follows from $\frac{|p'^0-q'^0|}{\sqrt{{p'^0}{q'^0}}}\leq g'$, and the last equivalence follows from $\int_1^\infty d{(q'^0)}\, e^{-C{q'^0}}\frac{|p'^0-q'^0|^{4+a+\gamma}}{(q'^0)^k}\approx (p'^0)^{4+a+\gamma}$ for any $k\in\mathbb{R}$. This proves the proposition. Note that Lemma \[2.5\] has also been established in the course of this proof.

Global existence {#global exist}
================

Local existence
---------------

In this section, we use the estimates that we made in the previous sections to show the local existence results for small data. We use the standard iteration method and a uniform energy estimate for the iterated sequence of approximate solutions. The iteration starts at $f^0(t,x,p)=0$. We solve for $f^{m+1}(t,x,p)$ such that $$\label{8.1} (\partial_t+\hat{p}\cdot\nabla_x+N)f^{m+1}+Kf^{m}=\Gamma(f^m,f^{m+1}), \hspace{5mm} f^{m+1}(0,x,p)=f_0(x,p).$$ Using our estimates, it follows that the linear equation (\[8.1\]) admits smooth solutions with the same regularity in $H^N $ as the given smooth small initial data, and that the solution also gains $L^2((0,T);I^{a,\gamma}_{N})$ regularity. We will set up some estimates which are necessary to obtain a local classical solution as $m\rightarrow \infty$. We first define some notation. For convenience we write $||\cdot ||_H$ for $||\cdot||_{H^N }$ and $||\cdot||_I$ for $||\cdot||_{I^{a,\gamma}_{N}}$. Define the total norm as $$M(f(t))=||f(t)||^2_H+\int_0^td\tau ||f(\tau)||^2_I.$$ We will also use $|f|_{I^{a,\gamma}}$ for $\langle Nf,f\rangle$. Here we state a crucial energy estimate: \[8.2\] The sequence of iterated approximate solutions $\{f^m\} $ is well defined. There exists a short time $T^*=T^*(||f_0||^2_{H})>0$ such that for $||f_0||^2_H$ sufficiently small, there is a uniform constant $C_0>0$ such that $$\sup_{m\geq 0}\sup_{0\leq \tau \leq T^*} M(f^m(\tau))\leq 2 C_0||f_0||^2_H.$$ We prove this lemma by induction over $m$. If $m=0$, the lemma is trivially true. Suppose that the lemma holds for $m=k$, and let $f^{k+1}$ be the solution to the linear equation (\[8.1\]) with $f^k$ given. We take the spatial derivative $\partial^\alpha$ on the linear equation (\[8.1\]) and obtain $$(\partial_t+\hat{p}\cdot\nabla_x)\partial^\alpha f^{m+1}+N(\partial^\alpha f^{m+1})+K(\partial^\alpha f^{m})=\partial^\alpha\Gamma(f^m,f^{m+1}).$$ Then, we take an inner product with $\partial^\alpha f^{m+1}$.
The trilinear estimate of Lemma \[Lemma1\] implies that $$\begin{split} &\frac{1}{2}\frac{d}{dt}||\partial^\alpha f^{m+1}||^2_{L^2_pL^2_x}+||\partial^\alpha f^{m+1}||^2_{I^{a,\gamma}}+(K(\partial^\alpha f^{m}),\partial^\alpha f^{m+1})\\ &=(\partial^\alpha \Gamma(f^{m},f^{m+1}),\partial^\alpha f^{m+1})\lesssim ||f^m||_H ||f^{m+1}||^2_I. \end{split}$$ We integrate over $t$ and obtain $$\label{8.5} \begin{split} \frac{1}{2}&||\partial^\alpha f^{m+1}(t)||^2_{L^2_pL^2_x}+\int_0^t d\tau ||\partial^\alpha f^{m+1}(\tau)||^2_{I^{a,\gamma}}+\int_0^t d\tau (K(\partial^\alpha f^{m}),\partial^\alpha f^{m+1})\\ &\leq \frac{1}{2}||\partial^\alpha f_0||^2_{L^2_pL^2_x}+C\int_0^td\tau||f^m||_H||f^{m+1}||^2_I. \end{split}$$ From the compact estimate (\[C1\]), for any small $\epsilon>0$ we have $$\begin{split} \Big|\int_0^t d\tau (K(\partial^\alpha f^{m}),\partial^\alpha f^{m+1})\Big| &\leq \int_0^t d\tau \Big( \frac{1}{2}||\partial^\alpha f^{m+1}(\tau)||^2_{L^2_{\frac{a+\gamma}{2}}}+C||\partial^\alpha f^{m+1}(\tau)||^2_{L^2} \Big)\\ &+\epsilon \int_0^t d\tau ||\partial^\alpha f^{m}(\tau)||^2_{L^2_{\frac{a+\gamma}{2}}}+C_\epsilon \int_0^t d\tau ||\partial^\alpha f^{m}(\tau)||^2_{L^2}. \end{split}$$ We use this estimate in (\[8.5\]) and sum over all derivatives with $|\alpha|\leq N$ to obtain $$\label{continuity1} \begin{split} M(f^{m+1}(t))\leq& C_0||f_0||^2_H+\int_0^t d\tau (C||f^{m+1}||_H(\tau)+C\epsilon ||f^m(\tau)||^2_I)\\ &+C_\epsilon\int_0^t d\tau ||f^m||^2_H(\tau) +C\sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))\sup_{0\leq\tau\leq t} M^{1/2}(f^{m}(\tau))\\ \leq & C_0||f_0||^2_H+C_\epsilon t(\sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))+\sup_{0\leq\tau\leq t} M(f^{m}(\tau)))\\ &+C\epsilon \sup_{0\leq\tau\leq t} M(f^{m}(\tau)) +C \sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))\sup_{0\leq\tau\leq t} M^{1/2}(f^{m}(\tau)). \end{split}$$ Then by the induction hypothesis, we obtain that $$\begin{split} M(f^{m+1}(t))\leq & C_0||f_0||^2_H+C_\epsilon t(\sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))+2C_0||f_0||^2_H)\\ &+2C\epsilon C_0||f_0||^2_H +C' \sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))||f_0||_H\\ \leq & C_0||f_0||^2_H+C_\epsilon T^*(\sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))+2C_0||f_0||^2_H)\\ &+2C\epsilon C_0||f_0||^2_H +C' \sup_{0\leq\tau\leq t} M(f^{m+1}(\tau))||f_0||_H,\\ \end{split}$$ where $C'=\sqrt{2C_0}C$. Then we obtain that $$(1-C'||f_0||_H-C_\epsilon T^*)\sup_{0\leq\tau\leq t} M(f^{m+1}(t))\leq (C_0+2C_\epsilon C_0T^*+2C\epsilon C_0)||f_0||^2_H.$$ Then, for sufficiently small $\epsilon$, $T^*$, and $||f_0||_H$, we obtain $$\sup_{0\leq\tau\leq t} M(f^{m+1}(t))\leq 2C_0||f_0||^2_H.$$ This proves the lemma by induction. Now we prove the local existence theorem using this uniform control on each iteration. \[local existence\] For any sufficiently small $M_0>0$, there exists a time $T^*=T^*(M_0)>0$ and $M_1>0$ such that if $||f_0||^2_H\leq M_1,$ then there exists a unique solution $f(t,x,p)$ to the linearized relativistic Boltzmann equation (\[Linearized B\]) on $[0,T^*)\times \mathbb{T}^3\times {{\mathbb{R}^3}}$ such that $$\sup_{0\leq t \leq T^* } M(f(t))\leq M_0.$$ Also, $M(f(t))$ is continuous on $[0,T^*)$. Furthermore, we have the positivity of the solutions; i.e., if $F_0(x,p)=J+\sqrt{J}f_0\geq 0$, then $F(t,x,p)=J+\sqrt{J}f(t,x,p)\geq 0$. *Existence and Uniqueness*. By letting $m\rightarrow \infty$ in the previous lemma, we obtain sufficient compactness for the local existence of a strong solution $f(t,x,p)$ to (\[Linearized B\]).
For the uniqueness, suppose there exists another solution $h$ to (\[Linearized B\]) with the same initial data satisfying $\sup_{0\leq t \leq T^* } M(h(t))\leq \epsilon.$ Then, by the equation, we have $$\label{8.6} \{\partial_t +\hat{p}\cdot \nabla_x\}(f-h)+L(f-h)=\Gamma(f-h,f)+\Gamma(h,f-h).$$ Then, by Sobolev embedding $H^2(\mathbb{T}^3)\subset L^\infty (\mathbb{T}^3)$ and Theorem \[thm1\], we have $$\begin{split} |(\{\Gamma(f-h,f)+\Gamma(h,f-h)\},f-h)|\lesssim &||h||_{L^2_pH_x^2}||f-h||^2_{I^{a,\gamma}}\\& +||f-h||_{L^2_{p,x}}||f||_{H_x^2I^{a,\gamma}}||f-h||_{I^{a,\gamma}}\\ =& T_1+T_2. \end{split}$$ For $T_1$, we have $$\int_0^t d\tau \hspace*{1mm} T_1(\tau)\leq \sqrt{\epsilon} \int_0^t d\tau ||f(\tau)-h(\tau)||^2_{I^{a,\gamma}}$$ because we have $\sup_{0\leq t \leq T^* } M(h(t))\leq \epsilon.$ For $T_2$, we use the Cauchy-Schwarz inequality and obtain $$\begin{split} \int_0^t d\tau \hspace*{1mm}T_2(\tau)\leq & \sqrt{\epsilon} \left(\sup_{0\leq \tau\leq t}||f(\tau)-h(\tau)||^2_{L^2_{p,x}}\int_0^t d\tau ||f(\tau)-h(\tau)||^2_{I^{a,\gamma}}\right)^{1/2}\\ \lesssim & \sqrt{\epsilon} \left(\sup_{0\leq \tau\leq t}||f(\tau)-h(\tau)||^2_{L^2_{p,x}}+\int_0^t d\tau ||f(\tau)-h(\tau)||^2_{I^{a,\gamma}}\right)\\ \end{split}$$ because $f$ also satisfies $\sup_{0\leq t \leq T^* } M(f(t))\leq \epsilon.$ For the linearized Boltzmann operator $L$ on the left-hand side of (\[8.6\]), we use Lemma \[2.10\] to obtain $$(L(f-h),f-h)\geq c||f-h||^2_{I^{a,\gamma}}-C||f-h||^2_{L^2(\mathbb{T}^3\times B_C)}$$ for some small $c>0$. We finally take the inner product of (\[8.6\]) with $(f-h)$, integrate over $[0,t]\times \mathbb{T}^3\times {{\mathbb{R}^3}}$, and use the estimates above to obtain $$\begin{split} \frac{1}{2}||f(t)-h(t)||^2_{L^2_{p,x}}&+c\int_0^t d\tau \hspace{1mm} ||f(\tau)-h(\tau)||^2_{I^{a,\gamma}}\\ \lesssim &\sqrt{\epsilon}\left( \sup_{0\leq \tau \leq t}||f(\tau)-h(\tau)||^2_{L^2_{p,x}}+\int_0^t d\tau ||f(\tau)-h(\tau)||^2_{I^{a,\gamma}} \right)\\ &+\int_0^t d\tau ||f(\tau)-h(\tau)||^2_{L^2(\mathbb{T}^3\times B_C)}. \end{split}$$ By Gronwall’s inequality, we obtain that $f=h$ because $f$ and $h$ satisfy the same initial data. This proves the uniqueness of the solution. *Continuity.* Let $[a,b]$ be a time interval. We follow a similar argument to the one in (\[8.5\]) and (\[continuity1\]) with the time interval $[a,b]$ instead of $[0,t]$, let $f^m=f^{m+1}=f$, and obtain $$\begin{split} |M(f(b))-M(f(a))|&=\Big|||f(b)||^2_H-||f(a)||^2_H+\int_a^b d\tau\hspace{1mm} ||f(\tau)||^2_I\Big|\\ &\lesssim \left(\int_a^b d\tau\hspace{1mm} ||f(\tau)||^2_I\right)\left(1+\sup_{a\leq\tau\leq b}M^{1/2}(f(\tau)) \right). \end{split}$$ As $a\rightarrow b$, we obtain that $|M(f(b))-M(f(a))|\rightarrow 0$ because $||f||^2_I$ is integrable in time. This proves the continuity of $M$. *Positivity.* For the proof of positivity of the solution, we recall [@Alexandre], where the positivity of strong solutions to the non-relativistic Boltzmann equation without angular cut-off is shown for initial data $f_0\in H^M $ with $M\geq 5$ and moderate singularity $0\leq \gamma \leq 1$. Following that proof, we consider the cut-off approximation $F^\epsilon$ to the relativistic Boltzmann equation in which the kernel $\sigma$ is replaced by $\sigma_\epsilon$, where the angular singularity has been removed and $\sigma_\epsilon\rightarrow \sigma$ as $\epsilon\rightarrow 0$. We obtain that $F^\epsilon$ is positive.
If our initial data is regular enough to lie in $H^M $ for $M>5$, we conclude that $F=J+\sqrt{J}f\geq 0$ using a compactness argument together with the uniqueness of the solution. If the initial data is not this regular, then we use the fact that $H^M $ is dense in $H(\mathbb{T}^3\times {{\mathbb{R}^3}})$, an approximation argument, and the uniqueness to show the positivity. If the angular singularity is stronger, i.e., $1\leq \gamma <2,$ then the positivity can be obtained by using higher-derivative estimates and following the same compactness argument as in the case of weaker singularity. We notice that if the number of spatial derivatives is large enough, then we have the existence of a classical solution. For the lowest number of spatial derivatives, $N\geq 2$, we obtain the existence of a strong solution to the equation.

Global existence {#global-existence}
----------------

In this section, we derive the system of macroscopic equations and balance laws for the coefficients appearing in the expression for the hydrodynamic part $Pf$ and prove a coercive inequality for the microscopic part $\{I-P\}f$. With these coercivity estimates for the non-linear local solutions to the relativistic Boltzmann system, we will show that these solutions are global in time by the standard continuity argument and by proving energy inequalities. We will also show rapid time decay of the solutions. For the relativistic Maxwellian solution $J$, we normalize so that $\int_{{{\mathbb{R}^3}}}J(p)dp=1$. Here we introduce the following notation for the integrals: $$\begin{split} \lambda_0=\int_{{\mathbb{R}^3}}{p^0}Jdp, \hspace{5mm}&\lambda_{00}=\int_{{\mathbb{R}^3}}({p^0})^2Jdp, \hspace{5mm}\lambda_{1}=\int_{{\mathbb{R}^3}}(p_1)^2Jdp,\\ \lambda_{10}=\int_{{\mathbb{R}^3}}\frac{p_1^2}{{p^0}}Jdp, \hspace{5mm}&\lambda_{12}=\int_{{\mathbb{R}^3}}\frac{p_1^2p_2^2}{{p^0}^2}Jdp, \hspace{5mm}\lambda_{11}=\int_{{\mathbb{R}^3}}\frac{p_1^4}{{p^0}^2}Jdp,\\ &\lambda_{100}=\int_{{{\mathbb{R}^3}}} \frac{p_1^2}{{p^0}^2}Jdp. \end{split}$$ We also mention that the null space of the linearized Boltzmann operator $L$ is given by the 5-dimensional space $$N(L)=\text{span}\{\sqrt{J},p_1\sqrt{J},p_2\sqrt{J},p_3\sqrt{J},{p^0}\sqrt{J}\}.$$ We denote by $P$ the orthogonal projection from $L^2({{\mathbb{R}^3}})$ onto $N(L)$. Then we can write $Pf$ as a linear combination of the basis as $$\label{Pfbasis} Pf=\left( \mathcal{A}^f(t,x)+\sum_{i=1}^{3}\mathcal{B}^f_i(t,x)p_i+\mathcal{C}^f(t,x){p^0}\right) \sqrt{J}$$ where the coefficients are given by $$\mathcal{A}^f=\int_{{\mathbb{R}^3}}f\sqrt{J}dp - \lambda_0\mathcal{C}^f, \hspace{3mm} \mathcal{B}_i^f= \frac{\int_{{\mathbb{R}^3}}fp_i\sqrt{J}dp}{\lambda_1}, \hspace{3mm} \mathcal{C}^f=\frac{\int_{{{\mathbb{R}^3}}}f({p^0}\sqrt{J}-\lambda_0\sqrt{J})dp}{\lambda_{00}-\lambda_0^2}.$$ Then we can decompose $f(t,x,p)$ as $$\label{decomp} f=Pf+\{I-P\}f.$$ We start by plugging the expression (\[decomp\]) into (\[Linearized B\]). Then we obtain $$\label{hydro} \{\partial_t+\hat{p}\cdot \nabla_x\}Pf=-\partial_t\{I-P\}f-(\hat{p}\cdot \nabla_x+L)\{I-P\}f+\Gamma(f,f).$$ Note that we have expressed the hydrodynamic part $Pf$ in terms of the microscopic part $\{I-P\}f$ and the higher-order term $\Gamma$. Here we define the operator $l=-(\hat{p}\cdot \nabla_x+L)$.
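As a quick sanity check of the coefficient formulas above (an elementary verification, not needed in the sequel), pair the decomposition $f=Pf+\{I-P\}f$ with $\sqrt{J}$ and with ${p^0}\sqrt{J}$; since $\{I-P\}f\perp N(L)$ and the odd moments $\int_{{\mathbb{R}^3}}p_iJdp$ and $\int_{{\mathbb{R}^3}}p_i{p^0}Jdp$ vanish, $$\int_{{\mathbb{R}^3}}f\sqrt{J}dp=\mathcal{A}^f+\lambda_0\mathcal{C}^f,\hspace{5mm}\int_{{\mathbb{R}^3}}f{p^0}\sqrt{J}dp=\lambda_0\mathcal{A}^f+\lambda_{00}\mathcal{C}^f.$$ Solving this $2\times 2$ system yields the stated expressions for $\mathcal{A}^f$ and $\mathcal{C}^f$, and pairing with $p_i\sqrt{J}$ together with $\int_{{\mathbb{R}^3}}p_i^2Jdp=\lambda_1$ (by symmetry) yields the expression for $\mathcal{B}^f_i$.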
Using the expression (\[Pfbasis\]) of $Pf$ with respect to the basis elements, we obtain that the left-hand side of the (\[hydro\]) can be written as $$\begin{split} \partial_t \mathcal{A}\sqrt{J}+\sum_{i=1}^3\partial_i(\mathcal{A}+\mathcal{C}{p^0})\frac{p_i}{{p^0}}\sqrt{J}+\partial_t \mathcal{C} {p^0}\sqrt{J}+&\sum_{i=1}^{3}\partial_t\mathcal{B}_ip_i\sqrt{J}\\ &+\sum_{i=1}^3\partial_i\mathcal{B}_i\frac{p_i^2}{{p^0}}\sqrt{J}+\sum_{i=1}^{3} \sum_{i\neq j}\partial_j \mathcal{B}_i\frac{p_ip_j}{{p^0}}\sqrt{J} \end{split}$$ where $\partial_i=\partial_{x_i}$. For fixed $(t,x)$ we can write the left-hand side with respect to the following basis, $\{e_k\}_{k=1}^{13}$, which consists of $$\sqrt{J}, \hspace{3mm}\left(\frac{p_i}{{p^0}}\sqrt{J}\right)_{1\leq i \leq 3}, \hspace{3mm}{p^0}\sqrt{J}, \hspace{3mm}\left(p_i\sqrt{J}\right)_{1\leq i \leq 3},\hspace{3mm} \left(\frac{p_ip_j}{{p^0}}\sqrt{J}\right)_{1\leq i\leq j\leq 3}.$$ Then, we can rewrite the left-hand side as $$\begin{split} \partial_t \mathcal{A}\sqrt{J}+\sum_{i=1}^3\partial_i \mathcal{A}\frac{p_i}{{p^0}}&\sqrt{J}+\partial_t \mathcal{C} {p^0}\sqrt{J}+\sum_{i=1}^3 (\partial_i\mathcal{C}+\partial_t\mathcal{B}_i)p_i\sqrt{J}\\ &+\sum_{i=1}^{3} \sum_{j=1}^3((1-\delta_{ij})\partial_i\mathcal{B}_j+\partial_j\mathcal{B}_i)\frac{p_ip_j}{{p^0}}\sqrt{J}. \end{split}$$ Then we obtain a system of macroscopic equations $$\label{macroscopic} \begin{split} \partial_t \mathcal{A}&=-\partial_tm_a+l_a+G_a,\\ \partial_i \mathcal{A}&=-\partial_tm_{ia}+l_{ia}+G_{ia},\\ \partial_t \mathcal{C}&= -\partial_tm_{c}+l_{c}+G_{c},\\ \partial_i \mathcal{C}+\partial_t\mathcal{B}_i&= -\partial_tm_{ic}+l_{ic}+G_{ic},\\ (1-\delta_{ij})\partial_i\mathcal{B}_j+\partial_j\mathcal{B}_i&=-\partial_tm_{ij}+l_{ij}+G_{ij}, \end{split}$$ where the indices are from the index set defined as $D=\{a,ia,c,ic,ij| 1\leq i\leq j\leq 3\}$ and $m_\mu$, $l_\mu$, and $G_\mu$ for $\mu \in D$ are the coefficients of $\{I-P\}f$, $l\{I-P\}f$, and $\Gamma(f,f)$ with respect to the basis $\{e_k\}_{k=1}^{13}$ respectively. We also derive a set of equations from the conservation laws. For the perturbation solution $f$, we multiply the linearized Boltzmann equation by $\sqrt{J}, p_i\sqrt{J},{p^0}\sqrt{J}$ and integrate over ${{\mathbb{R}^3}}$ to obtain that $$\label{c1} \begin{split} \partial_t\int_{{{\mathbb{R}^3}}}f\sqrt{J} dp+\int_{{{\mathbb{R}^3}}}\hat{p}\cdot\nabla_xf\sqrt{J}dp&=0\\ \partial_t\int_{{{\mathbb{R}^3}}}f\sqrt{J}p_i dp+\int_{{{\mathbb{R}^3}}}\hat{p}\cdot\nabla_xf\sqrt{J}p_idp&=0\\ \partial_t\int_{{{\mathbb{R}^3}}}f\sqrt{J}{p^0} dp+\int_{{{\mathbb{R}^3}}}\hat{p}\cdot\nabla_xf\sqrt{J}{p^0}dp&=0.\\ \end{split}$$ These hold because $1, p_i, {p^0}$ are collisional invariants and hence $$\int_{{\mathbb{R}^3}}q(f,f)dp=\int_{{\mathbb{R}^3}}q(f,f)p_idp=\int_{{\mathbb{R}^3}}q(f,f){p^0}dp=0.$$ We will plug the decomposition $f=Pf+\{I-P\}f$ into (\[c1\]). We first consider the microscopic part. 
Note that $$\label{c2} \begin{split} &\int_ {{\mathbb{R}^3}}\hat{p}\cdot \nabla_x \{I-P\}f\sqrt{J}\left(\begin{array}{c} 1\\ p_i\\ {p^0}\end{array}\right)dp=\sum_{j=1}^3 \int_ {{\mathbb{R}^3}}\frac{p_j}{{p^0}}\partial_j \{I-P\}f\sqrt{J}\left(\begin{array}{c} 1\\ p_i\\ {p^0}\end{array}\right)dp\\ &=\sum_{j=1}^3 \partial_j \int_{{{\mathbb{R}^3}}}\{I-P\}f\sqrt{J}\left(\begin{array}{c} \frac{p_j}{{p^0}}\\ \frac{p_ip_j}{{p^0}}\\ p_j\end{array}\right)dp=\sum_{j=1}^3 \partial_j \langle \{I-P\}f, \sqrt{J}\left(\begin{array}{c} \frac{p_j}{{p^0}}\\ \frac{p_ip_j}{{p^0}}\\ 0\end{array}\right)\rangle.\\ \end{split}$$ Also, we have that $$\label{c3} \begin{split} \partial_t\int_ {{\mathbb{R}^3}}\{I-P\}f\sqrt{J}\left(\begin{array}{c} 1\\p_i\\{p^0}\end{array}\right)=\partial_t \langle \{I-P\}f,\sqrt{J}\left(\begin{array}{c} 1\\p_i\\{p^0}\end{array}\right)\rangle=0. \end{split}$$ On the other hand, the hydrodynamic part $Pf=(\mathcal{A}+\mathcal{B}\cdot p+\mathcal{C}{p^0})\sqrt{J}$ satisfies $$\label{c4} \begin{split} &\partial_t \int_{{{\mathbb{R}^3}}} \left(\begin{array}{c} 1\\p_i\\{p^0}\end{array}\right) Pf\sqrt{J}dp+ \int_ {{\mathbb{R}^3}}\hat{p}\cdot \nabla_x Pf\sqrt{J}\left(\begin{array}{c} 1\\p_i\\{p^0}\end{array}\right) dp\\ &=\partial_t \int_{{{\mathbb{R}^3}}} \left(\begin{array}{c} \mathcal{A}+\mathcal{B}\cdot p+\mathcal{C}{p^0}\\\mathcal{A}p_i+\mathcal{B}\cdot pp_i+\mathcal{C}{p^0}p_i\\\mathcal{A}{p^0}+\mathcal{B}\cdot p{p^0}+\mathcal{C}{p^0}^2\end{array}\right) \sqrt{J}dp+ \sum_{j=1}^3\int_ {{\mathbb{R}^3}}\partial_j \left(\begin{array}{c} \frac{p_j}{{p^0}}(\mathcal{A}+\mathcal{B}\cdot p+\mathcal{C}{p^0})\\\frac{p_ip_j}{{p^0}}(\mathcal{A}+\mathcal{B}\cdot p+\mathcal{C}{p^0})\\p_j\mathcal{A}+\mathcal{B}\cdot pp_j+\mathcal{C}{p^0}p_j\end{array}\right)\sqrt{J} dp\\ &=\left(\begin{array}{c} \partial_t \mathcal{A}+\lambda_0\partial_t \mathcal{C}\\ \lambda_1\partial_t\mathcal{B}_i\\ \lambda_0 \partial_t \mathcal{A}+\lambda_{00}\partial_t \mathcal{C} \end{array}\right)+\left(\begin{array}{c} \lambda_{10}\nabla_x\cdot \mathcal{B}\\ \lambda_{10}\partial_i \mathcal{A}+\lambda_1\partial_i \mathcal{C}\\ \lambda_1\nabla_x \cdot \mathcal{B}\end{array}\right).\\ \end{split}$$ Also, we have that $L(f)=L\{I-P\}f$. 
Together with (\[c1\]), (\[c2\]), (\[c3\]), and (\[c4\]), we finally obtain the local conservation laws satisfied by $(\mathcal{A},\mathcal{B},\mathcal{C})$: $$\begin{split} \partial_t \mathcal{A}+\lambda_0\partial_t \mathcal{C}+\lambda_{10}\nabla_x\cdot \mathcal{B}&=-\nabla_x \cdot \langle \{I-P\}f, \sqrt{J}\frac{p}{{p^0}}\rangle,\\ \lambda_1\partial_t \mathcal{B} +\lambda_{10}\nabla_x \mathcal{A}+\lambda_1\nabla_x \mathcal{C}&=-\nabla_x \cdot \langle \{I-P\}f, \sqrt{J}\frac{p\otimes p}{{p^0}}\rangle,\\\lambda_0\partial_t \mathcal{A}+\lambda_{00}\partial_t \mathcal{C}+\lambda_1 \nabla_x \cdot \mathcal{B}&= 0.\\ \end{split}$$ Comparing the first and the third conservation laws, we obtain $$\label{Conservation Laws} \begin{split} \partial_t \mathcal{A}\left(1-\frac{\lambda_0^2}{\lambda_{00}}\right)+\nabla_x\cdot \mathcal{B}\left(\lambda_{10}-\frac{\lambda_{0}\lambda_1}{\lambda_{00}}\right)&=-\nabla_x \cdot \langle \{I-P\}f, \sqrt{J}\frac{p}{{p^0}}\rangle,\\ \lambda_1\partial_t \mathcal{B} +\lambda_{10}\nabla_x \mathcal{A}+\lambda_1\nabla_x \mathcal{C}&=-\nabla_x \cdot \langle \{I-P\}f, \sqrt{J}\frac{p\otimes p}{{p^0}}\rangle,\\\left(\lambda_0-\frac{\lambda_{00}}{\lambda_0}\right)\partial_t \mathcal{C}+\left(\lambda_{10}-\frac{\lambda_1}{\lambda_0}\right) \nabla_x \cdot \mathcal{B}&= -\nabla_x \cdot \langle \{I-P\}f, \sqrt{J}\frac{p}{{p^0}}\rangle.\\ \end{split}$$ We also mention that we have the following lemma on the coefficients $\mathcal{A},\mathcal{B},\mathcal{C}$ by the conservation of mass, momentum, and energy: \[L8.5\] Let $f(t,x,p)$ be the local solution to the linearized relativistic Boltzmann equation (\[Linearized B\]), shown to exist in Theorem \[local existence\], which satisfies the mass, momentum, and energy conservation laws (\[zero\]). Then we have $$\int_{\mathbb{T}^3} \mathcal{A}(t,x)dx=\int_{\mathbb{T}^3} \mathcal{B}_i(t,x)dx=\int_{\mathbb{T}^3} \mathcal{C}(t,x)dx=0,$$ where $i\in\{1,2,3\}$. We also list two lemmas that help us control the coefficients in the linear microscopic term $l$ and the non-linear higher-order term $\Gamma$. \[L8.6\] For any coefficient $l_\mu$ for the microscopic term $l$, we have $$\sum_{\mu \in D} ||l_\mu||_{H_x^{N-1}}\lesssim \sum_{|\alpha|\leq N}||\{I-P\}\partial^\alpha f||_{L^2_{\frac{a+\gamma}{2}}(\mathbb{T}^3\times{{\mathbb{R}^3}})}.$$ In order to estimate the $H^{N-1}_x$ norm, we write $$\langle \partial^\alpha l(\{I-P\}f),e_k\rangle=-\langle\hat{p}\cdot\nabla_x(\{I-P\}\partial^\alpha f),e_k\rangle-\langle L(\{I-P\}\partial^\alpha f),e_k\rangle.$$ For any $|\alpha|\leq N-1$, the $L^2_x$-norm of the first term on the right-hand side satisfies $$\begin{split} ||\langle\hat{p}\cdot\nabla_x(\{I-P\}\partial^\alpha f),e_k\rangle||^2_{L^2_x}&\lesssim \int_ {\mathbb{T}^3\times{{\mathbb{R}^3}}} dxdp |e_k||\{I-P\}\nabla_x\partial^\alpha f|^2\\ &\lesssim ||\{I-P\}\nabla_x\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}(\mathbb{T}^3\times{{\mathbb{R}^3}})}. \end{split}$$ Similarly, we have $$\begin{split} ||\langle L(\{I-P\}\partial^\alpha f),e_k\rangle||^2_{L^2_x}&\lesssim \Big|\Big||\{I-P\}\partial^\alpha f|_{L^2_{\frac{a+\gamma}{2}}}|\sqrt{J}|_{L^2_{\frac{a+\gamma}{2}}} \Big|\Big|^2_{L^2_x}\\ &\lesssim ||\{I-P\}\nabla_x\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}(\mathbb{T}^3\times{{\mathbb{R}^3}})}. \end{split}$$ This completes the proof. \[L8.7\] Let $||f||^2_H\leq M$ for some $M>0$.
Then, we have $$\sum_{\mu \in D} ||G_\mu||_{H_x^{N-1}}\lesssim \sqrt{M}\sum_{|\alpha|\leq N}||\partial^\alpha f||_{L^2_{\frac{a+\gamma}{2}}(\mathbb{T}^3\times{{\mathbb{R}^3}})}.$$ In order to estimate the $H^{N-1}_x$ norm, we consider $\langle \Gamma(f,f), e_k\rangle$. By (\[C3\]), for any $m\geq 0$, $$\begin{split} ||\langle \Gamma(f,f),e_k\rangle||_{H^{N-1}_x}&\lesssim \sum_{|\alpha|\leq N-1}\sum_{\alpha_1\leq\alpha}\Big|\Big||\partial^{\alpha-\alpha_1}f|_{L^2_{-m}}|\partial^{\alpha_1}f|_{L^2_{-m}}\Big|\Big|_{L^2_x}\\ &\lesssim ||f||_{L^2_{-m}H^N_x}\sum_{|\alpha|\leq N}||\partial^\alpha f||_{L^2_\frac{a+\gamma}{2}}\\ &\lesssim \sqrt{M}\sum_{|\alpha|\leq N}||\partial^\alpha f||_{L^2_{\frac{a+\gamma}{2}}(\mathbb{T}^3\times{{\mathbb{R}^3}})}. \end{split}$$ This completes the proof. The two lemmas above, the macroscopic equations, and the local conservation laws together prove the following theorem on the coercivity estimate for the microscopic term $\{I-P\}f$, which is crucial for the energy inequality that implies the global existence of the solution via the continuity argument. \[8.4\] Given the initial condition $f_0\in H$ which satisfies the mass, momentum, and energy conservation laws (\[zero\]) and the assumptions in Theorem \[local existence\], we can consider the local solution $f(t,x,p)$ to the linearized relativistic Boltzmann equation (\[Linearized B\]). Then, there is a constant $M_0>0$ such that if $$||f(t)||^2_H\leq M_0,$$ then there are universal constants $\delta>0$ and $C>0$ such that $$\sum_{|\alpha|\leq N} ||\{I-P\}\partial^\alpha f||^2_{I^{a,\gamma}}(t)\geq \delta \sum_ {|\alpha|\leq N}||P\partial^\alpha f||^2_{I^{a,\gamma}}(t)-C\frac{dI(t)}{dt},$$ where $I(t)$ is the interaction potential defined as $$I(t)=\sum_ {|\alpha|\leq N-1} \{I^\alpha_a(t)+I^\alpha_b(t)+I^\alpha_c(t)\}$$ and each of the sub-potentials $I^\alpha_a(t)$, $I^\alpha_b(t)$, and $I^\alpha_c(t)$ is defined as $$\begin{split} &I^\alpha_a(t)=\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ia}\partial^\alpha \mathcal{A}(t,x) dx,\\ &I^\alpha_b(t)=-\sum_{i=1}^{3}\sum_ {j\neq i}\int_{\mathbb{T}^3}\partial_j\partial^\alpha m_{ij}\partial^\alpha \mathcal{B}_idx,\\ &I^\alpha_c(t)=\int_{\mathbb{T}^3}(\nabla\cdot \partial^\alpha \mathcal{B})\partial^\alpha \mathcal{C}(t,x)dx+\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ic}\partial^\alpha \mathcal{C}(t,x) dx.\\ \end{split}$$ Since $Pf=(\mathcal{A}+\mathcal{B}\cdot p+\mathcal{C}{p^0})\sqrt{J}$, we have that $$||P\partial^\alpha f(t)||^2_{I^{a,\gamma}}\lesssim ||\partial^\alpha \mathcal{A}(t)||^2_{L^2_x}+||\partial^\alpha \mathcal{B}(t)||^2_{L^2_x}+||\partial^\alpha \mathcal{C}(t)||^2_{L^2_x}.$$ Thus, it suffices to prove the following estimate: $$\label{8.19} \begin{split} ||\partial^\alpha \mathcal{A}(t)||^2_{H^N_x}&+||\partial^\alpha \mathcal{B}(t)||^2_{H^N_x}+||\partial^\alpha \mathcal{C}(t)||^2_{H^N_x}\\ &\lesssim \sum_{|\alpha|\leq N}||\{I-P\}\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}+ M\sum_{|\alpha|\leq N}||\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}+\frac{dI(t)}{dt}.
\end{split}$$ Note that the term $ M\sum_{|\alpha|\leq N}||\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}$ can be ignored because we have $$\begin{split} &\sum_{|\alpha|\leq N}||\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}\lesssim \sum_{|\alpha|\leq N}||P\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}+ \sum_{|\alpha|\leq N}||\{I-P\}\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}\\ &\lesssim ||\partial^\alpha \mathcal{A}(t)||^2_{H^N_x}+||\partial^\alpha \mathcal{B}(t)||^2_{H^N_x}+||\partial^\alpha \mathcal{C}(t)||^2_{H^N_x}+\sum_{|\alpha|\leq N}||\{I-P\}\partial^\alpha f(t)||^2_{L^2_{\frac{a+\gamma}{2}}}. \end{split}$$ Therefore, with sufficiently small $M>0$, (\[8.19\]) will imply Theorem \[8.4\]. In order to prove (\[8.19\]), we will estimate each of the $\partial^\alpha$ derivatives of $\mathcal{A}, \mathcal{B}, \mathcal{C}$ for $0<|\alpha|\leq N$ separately. Later, we will use Poincaré inequality to estimate the $L^2$-norm of $\mathcal{A},\mathcal{B},\mathcal{C}$ to finish the proof. For the estimate for $\mathcal{A}$, we use the second equation in the system of macroscopic equations \[macroscopic\] which tells $\partial_i \mathcal{A}=-\partial_tm_{ia}+l_{ia}+G_{ia}$. We take $\partial_i\partial^\alpha$ onto this equation for $|\alpha|\leq N-1$ and sum over $i$ and obtain that $$-\Delta\partial^\alpha \mathcal{A}= \sum_{i=1}^3 (\partial_t\partial_i \partial^\alpha m_{ia} -\partial_i\partial^\alpha(l_{ia}+G_{ia})).$$ We now multiply $\partial^\alpha \mathcal{A}$ and integrate over $\mathbb{T}^3$ to obtain $$\begin{split} ||\nabla\partial^\alpha \mathcal{A}||^2_{L^2_x}&\leq ||\partial^\alpha (l_{ia}+G_{ia})||_{L^2_x}||\nabla\partial^\alpha \mathcal{A}||_{L^2_x}+\frac{d}{dt}\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ia}\partial^\alpha \mathcal{A}(t,x) dx\\ & -\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ia}\partial_t\partial^\alpha \mathcal{A}(t,x) dx. \end{split}$$ We define the interaction functional $$I^\alpha_a(t)=\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ia}\partial^\alpha \mathcal{A}(t,x) dx.$$ For the last term, we use the first equation of the local conservation laws (\[Conservation Laws\]) to obtain that $$\int_ {\mathbb{T}^3} \sum_{i=1}^3|\partial_i\partial^\alpha m_{ia}\partial_t\partial^\alpha \mathcal{A}(t,x)| dx\leq \zeta ||\nabla\cdot \partial^\alpha \mathcal{B}||^2_{L^2_x}+C_\zeta ||\{I-P\}\nabla\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}},$$ for any $\zeta>0$. Together with Lemma \[L8.6\] and Lemma \[L8.7\], we obtain that $$\label{maina} ||\nabla \partial^\alpha \mathcal{A}||^2_{L^2_x}-\zeta ||\nabla\cdot \partial^\alpha \mathcal{B}||^2_{L^2_x}\lesssim C_\zeta \sum_{|\alpha|\leq N} ||\{I-P\}\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}+\frac{dI^\alpha_a}{dt}+M\sum_{|\alpha|\leq N}||\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}.$$ For the estimate for $\mathcal{C}$, we use the fourth equation in the system of macroscopic equations \[macroscopic\] which tells $\partial_i \mathcal{C}+\partial_t\mathcal{B}_i=-\partial_tm_{ic}+l_{ic}+G_{ic}$. 
We take $\partial_i\partial^\alpha$ onto this equation for $|\alpha|\leq N-1$ and sum over $i$ and obtain that $$-\Delta\partial^\alpha \mathcal{C}= \frac{d}{dt}(\nabla\cdot \partial^\alpha \mathcal{B})+\sum_{i=1}^3 (\partial_t\partial_i \partial^\alpha m_{ic} -\partial_i\partial^\alpha(l_{ic}+G_{ic})).$$ We now multiply $\partial^\alpha \mathcal{C}$ and integrate over $\mathbb{T}^3$ to obtain $$\begin{split} ||\nabla\partial^\alpha \mathcal{C}||^2_{L^2_x}&\leq \frac{d}{dt}\int_{\mathbb{T}^3}(\nabla\cdot \partial^\alpha \mathcal{B})\partial^\alpha \mathcal{C}(t,x)dx-\int_{\mathbb{T}^3}(\nabla\cdot \partial^\alpha \mathcal{B})\partial_t\partial^\alpha \mathcal{C}(t,x)dx\\&+||\partial^\alpha (l_{ic}+G_{ic})||_{L^2_x}||\nabla\partial^\alpha \mathcal{C}||_{L^2_x}+\frac{d}{dt}\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ic}\partial^\alpha \mathcal{C}(t,x) dx\\ & -\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ic}\partial_t\partial^\alpha \mathcal{C}(t,x) dx. \end{split}$$ We define the interaction functional $$I^\alpha_c(t)=\int_{\mathbb{T}^3}(\nabla\cdot \partial^\alpha \mathcal{B})\partial^\alpha \mathcal{C}(t,x)dx+\sum_{i=1}^3\int_ {\mathbb{T}^3} \partial_i\partial^\alpha m_{ic}\partial^\alpha \mathcal{C}(t,x) dx.$$ We also use the third equation of the local conservation laws (\[Conservation Laws\]) to obtain that $$\int_ {\mathbb{T}^3} \sum_{i=1}^3|\partial_i\partial^\alpha m_{ic}\partial_t\partial^\alpha \mathcal{C}(t,x)| dx\leq \zeta ||\nabla\cdot \partial^\alpha \mathcal{B}||^2_{L^2_x}+C_\zeta ||\{I-P\}\nabla\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}},$$ for any $\zeta>0$. Together with Lemma \[L8.6\] and Lemma \[L8.7\], we obtain that $$\label{mainc} ||\nabla \partial^\alpha \mathcal{C}||^2_{L^2_x}-\zeta ||\nabla\cdot \partial^\alpha \mathcal{B}||^2_{L^2_x}\lesssim C_\zeta \sum_{|\alpha|\leq N} ||\{I-P\}\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}+\frac{dI^\alpha_c}{dt}+M\sum_{|\alpha|\leq N}||\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}.$$ For the estimate for $\mathcal{B}$, we use the last equation in the system of macroscopic equations (\[macroscopic\]) which tells $(1-\delta_{ij})\partial_i\mathcal{B}_j+\partial_j\mathcal{B}_i=-\partial_tm_{ij}+l_{ij}+G_{ij}$. Note that when $i=j$, we have $$\partial_i\mathcal{B}_i=-\partial_tm_{ii}+l_{ii}+G_{ii}.$$ Also, if $i\neq j$, we have $$\partial_i\mathcal{B}_j+\partial_j\mathcal{B}_i=-\partial_tm_{ij}+l_{ij}+G_{ij}.$$ We take $\partial_j\partial^\alpha$ on both equations above for $|\alpha|\leq N-1$ and sum on $j$ to obtain $$\begin{split} \Delta\partial^\alpha \mathcal{B}_i&=-\partial_i\partial_i\partial^\alpha \mathcal{B}_i+2\partial_i\partial^\alpha l_{ii}+2\partial_i\partial^\alpha G_{ii}\\ &+\sum_{j\neq i}(-\partial_i\partial^\alpha l_{jj}-\partial_i\partial^\alpha G_{jj}+\partial_j\partial^\alpha l_{ij}+\partial_j\partial^\alpha G_{ij}-\partial_t\partial_j\partial^\alpha m_{ij}). \end{split}$$ We now multiply $\partial^\alpha \mathcal{B}_i$ and integrate over $\mathbb{T}^3$ to obtain $$\begin{split} ||\nabla\partial^\alpha \mathcal{B}_i||^2_{L^2_x}&\leq -\frac{d}{dt}\sum_ {j\neq i}\int_{\mathbb{T}^3}\partial_j\partial^\alpha m_{ij}\partial^\alpha \mathcal{B}_idx+\sum_ {j\neq i}\int_{\mathbb{T}^3}\partial_j\partial^\alpha m_{ij}\partial_t\partial^\alpha \mathcal{B}_i dx\\&+\sum_{\mu\in D}||\partial^\alpha (l_{\mu}+G_{\mu})||_{L^2_x}. 
\end{split}$$ We define the interaction functional $$I^\alpha_b(t)=-\sum_{i=1}^{3}\sum_ {j\neq i}\int_{\mathbb{T}^3}\partial_j\partial^\alpha m_{ij}\partial^\alpha \mathcal{B}_idx.$$ We also use the second equation of the local conservation laws (\[Conservation Laws\]) to obtain that $$\begin{split} \sum_{i=1}^3\sum_{j\neq i}\int_ {\mathbb{T}^3}&|\partial_j\partial^\alpha m_{ij}\partial_t\partial^\alpha \mathcal{B}_i(t,x)| dx\\ &\leq \zeta( ||\nabla\cdot \partial^\alpha \mathcal{A}||^2_{L^2_x}+||\nabla\cdot \partial^\alpha \mathcal{C}||^2_{L^2_x})+C_\zeta ||\{I-P\}\nabla\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}, \end{split}$$ for any $\zeta>0$. Together with Lemma \[L8.6\] and Lemma \[L8.7\], we obtain that $$\label{mainb} \begin{split} ||\nabla \partial^\alpha \mathcal{B}||^2_{L^2_x}-\zeta (||\nabla\cdot \partial^\alpha \mathcal{A}||^2_{L^2_x}+||\nabla\cdot \partial^\alpha \mathcal{C}||^2_{L^2_x})\lesssim& C_\zeta \sum_{|\alpha|\leq N} ||\{I-P\}\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}+\frac{dI^\alpha_b}{dt}\\&+M\sum_{|\alpha|\leq N}||\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}. \end{split}$$ Choose sufficiently small $\zeta>0$. Then, (\[maina\]), (\[mainc\]), and (\[mainb\]) imply that $$\label{mainabc} \begin{split} ||\nabla \mathcal{A}||^2_{H^{N-1}_x}+||\nabla \mathcal{B}||^2_{H^{N-1}_x}+||\nabla \mathcal{C}||^2_{H^{N-1}_x}\lesssim &\sum_{|\alpha|\leq N} ||\{I-P\}\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}+\frac{dI}{dt}\\ &+M\sum_{|\alpha|\leq N}||\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}. \end{split}$$ On the other hand, with the Poincaré inequality and Lemma \[L8.5\], we obtain that $$||\mathcal{A}||^2\lesssim \left(||\nabla \mathcal{A}||+\left|\int_{\mathbb{T}^3}\mathcal{A}(t,x) dx\right|\right)^2=||\nabla \mathcal{A}||^2\lesssim \sum_{|\alpha|\leq N}||\partial^\alpha f||^2_{L^2_{\frac{a+\gamma}{2}}}.$$ The same estimate holds for $\mathcal{B}$ and $\mathcal{C}$. Therefore, the inequality (\[8.19\]) holds, and this finishes the proof of the theorem. We now use this coercive estimate to prove that the local solutions from Theorem \[local existence\] are global-in-time solutions by a standard continuity argument. We will also prove that the solutions have rapid exponential time decay. Before we go into the proof of the global existence, we mention a coercive lower bound for the linearized Boltzmann collision operator $L$, which also gives the positivity of the operator: \[coercive L\] There is a constant $\delta>0$ such that $$\langle Lf,f\rangle \geq \delta |\{I-P\}f|^2_{I^{a,\gamma}}.$$ By following the proof of Theorem 1.1 of [@Mouhot] with the relativistic collision kernel and using that $g\geq \frac{|p-q|}{\sqrt{{p^0}{q^0}}}$, we can obtain that $$\langle Lf,f\rangle \geq \delta_1|f|^2_{L^2_{\frac{a}{2}}}$$ where $\delta_1>0$ is a constant. Also, by Lemma \[2.10\], we have $$\langle Lf,f\rangle \geq |f|^2_{I^{a,\gamma}}-C|f|^2_{L^2(B_C)}$$ for some $C>0$. Then, for any $\delta_2\in(0,1)$, $$\langle Lf,f\rangle=\delta_2\langle Lf,f\rangle+(1-\delta_2)\langle Lf,f\rangle\geq \delta_2|f|^2_{I^{a,\gamma}}-C\delta_2|f|^2_{L^2(B_C)}+(1-\delta_2)\delta_1|f|^2_{L^2_{\frac{a}{2}}}.$$ Since $C>0$ is finite, we have $|f|^2_{L^2_{\frac{a}{2}}}\gtrsim |f|^2_{L^2(B_C)}$. By choosing $\delta_2>0$ sufficiently small and applying the resulting bound to $\{I-P\}f$ (recall that $\langle Lf,f\rangle=\langle L\{I-P\}f,\{I-P\}f\rangle$), we obtain the theorem.
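For definiteness, one admissible choice of $\delta_2$ in the last step is the following (the constant $c_0$ below is introduced only for this illustration). Since $B_C$ is a bounded set, there is a constant $c_0>0$ with $|f|^2_{L^2(B_C)}\leq c_0|f|^2_{L^2_{\frac{a}{2}}}$, and taking $$\delta_2=\frac{\delta_1}{\delta_1+Cc_0}\hspace{5mm}\text{gives}\hspace{5mm}(1-\delta_2)\delta_1|f|^2_{L^2_{\frac{a}{2}}}-C\delta_2|f|^2_{L^2(B_C)}\geq 0,$$ so that $\langle Lf,f\rangle\geq \delta_2|f|^2_{I^{a,\gamma}}$, which yields the claimed bound once it is applied to $\{I-P\}f$.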
Now, we define the dissipation rate $\mathcal{D} $ as $$\mathcal{D} =\sum_{|\alpha|\leq N} ||\partial^\alpha f(t)||^2_{I^{a,\gamma} }.$$ We will take the energy functional $\mathcal{E} (t)$ to be a high-order norm which satisfies $$\label{energy} \mathcal{E} (t)\approx \sum_{|\alpha|\leq N} || \partial^\alpha f(t)||^2_{L^2(\mathbb{T}^3\times {{\mathbb{R}^3}})}.$$ This functional will be precisely defined during the proof. Then we would like to establish the following energy inequality: $$\frac{d}{dt}\mathcal{E} (t)+\mathcal{D} (t)\leq C\sqrt{\mathcal{E} (t)}\mathcal{D} (t).$$ We will prove this energy inequality and use it to show the global existence. (*Proof of Theorem \[MAIN\]*) We denote $\mathcal{D}{\overset{\mbox{\tiny{def}}}{=}}\mathcal{D}_0$ and $\mathcal{E}{\overset{\mbox{\tiny{def}}}{=}}\mathcal{E}_0$. By the definitions of the interaction functionals, for any $C'>0$ there is a sufficiently large constant $C''>0$ such that $$||f(t)||^2_{L^2_pH^N_x}\leq (C''+1)||f(t)||^2_{L^2_pH^N_x}-C'I(t)\lesssim ||f(t)||^2_{L^2_pH^N_x}.$$ Note that $C''$ does not depend on $f(t,x,p)$ but only on $C'$ and $I$. Here we define the energy functional $\mathcal{E}(t)$ as $$\mathcal{E}(t)=(C''+1)||f(t)||^2_{L^2_pH^N_x}-C'I(t).$$ Then, the above inequalities show that the definition of $\mathcal{E}$ satisfies (\[energy\]). Recall Theorem \[local existence\] and Theorem \[8.4\], and choose $M_0\leq 1$ so that both theorems hold. We choose $M_1\leq \frac{M_0}{2}$ and consider initial data such that $$\mathcal{E}(0)\leq M_1<M_0.$$ From the local existence theorem, we define $T>0$ so that $$T=\sup\{t\geq 0| \mathcal{E}(t)\leq 2M_1\}.$$ By applying the spatial derivative $\partial^\alpha$ to the linearized relativistic Boltzmann equation (\[Linearized B\]), integrating over $(x,p)$, and summing over $\alpha$, we obtain $$\label{last} \frac{1}{2}\frac{d}{dt}||f(t)||^2_{L^2_pH^N_x}+\sum_{|\alpha|\leq N}(L\partial^\alpha f,\partial^\alpha f)=\sum_{|\alpha|\leq N}(\partial^\alpha\Gamma(f,f),\partial^\alpha f).$$ By the estimates from Lemma \[Lemma1\], we have $$\sum_{|\alpha|\leq N}(\partial^\alpha\Gamma(f,f),\partial^\alpha f)\lesssim \sqrt{\mathcal{E}}\mathcal{D}.$$ Since our choice of $M_1$ satisfies $\mathcal{E}(t)\leq 2M_1\leq M_0,$ we see that the assumption for Theorem \[8.4\] is satisfied. Then, Theorem \[8.4\] and Theorem \[coercive L\] tell us that $$\begin{split} \sum_{|\alpha|\leq N}(L\partial^\alpha f,\partial^\alpha f)&\geq \delta ||\{I-P\}f||^2_{I^{a,\gamma}}\\ &\geq \frac{\delta}{2}||\{I-P\}f||^2_{I^{a,\gamma}}+\frac{\delta\delta'}{2}\sum_ {|\alpha|\leq N}||P\partial^\alpha f||^2_{I^{a,\gamma}}(t)-\frac{\delta C}{2}\frac{dI(t)}{dt}. \end{split}$$ Let $\delta''=\min\{\frac{\delta}{2},\frac{\delta\delta'}{2}\}$ and let $C'=\delta C$. Then, we have $$\frac{1}{2}\frac{d}{dt}\left( ||f(t)||^2_{L^2_pH^N_x}-C'I(t)\right)+\delta''\mathcal{D}\lesssim \sqrt{\mathcal{E}}\mathcal{D}.$$ We multiply (\[last\]) by $\frac{C''}{2}$, add it to the inequality above, and use the positivity of $L$ from Theorem \[coercive L\] to conclude that $$\frac{d\mathcal{E}(t)}{dt}+\delta''\mathcal{D}(t)\leq C\sqrt{\mathcal{E}(t)}\mathcal{D}(t),$$ for some $C>0$.
Suppose $M_1=\min\{\frac{\delta''^2}{8C^2},\frac{M_0}{2}\}.$ Then, we have $$\label{lastlast} \frac{d\mathcal{E}(t)}{dt}+\delta''\mathcal{D}(t)\leq C\sqrt{\mathcal{E}(t)}\mathcal{D}(t)\leq C\sqrt{2M_1}\mathcal{D}(t)\leq \frac{\delta''}{2}\mathcal{D}(t).$$ Now, we integrate over $t$ for $0\leq t\leq \tau<T$ and obtain $$\mathcal{E}(\tau)+\frac{\delta''}{2}\int_{0}^{\tau}\mathcal{D}(t)dt\leq \mathcal{E}(0)\leq M_1< 2M_1.$$ Since $\mathcal{E}(\tau)$ is continuous in $\tau$, if $T<\infty$ then $\mathcal{E}(T)\leq M_1<2M_1$. This contradicts the definition of $T$ and hence $T=\infty$. This proves the global existence. Also, notice that $\mathcal{E}(t)\lesssim \mathcal{D}(t)$. Together with (\[lastlast\]), this gives $\frac{d\mathcal{E}(t)}{dt}+\lambda\mathcal{E}(t)\leq 0$ for some $\lambda>0$, and hence $\mathcal{E}(t)\leq e^{-\lambda t}\mathcal{E}(0)$ by Gronwall's inequality; this shows the exponential time decay.

Appendix
========

Relativistic collision geometry
-------------------------------

Consider the *center-of-momentum* expression for the collision operator. In this expression, note that $$\begin{split} p'-q'&=g\omega+g(\gamma-1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^2}\\ &=g\omega+{\sqrt{s}}({p'^0}-{q'^0})(\gamma-1)(p+q)\frac{1}{|p+q|^2}.\\ \end{split}$$ Thus, $\omega$ can be represented as $$\begin{split} \omega&=\frac{1}{g}(p'-q'-{\sqrt{s}}({p'^0}-{q'^0})(\gamma-1)(p+q)\frac{1}{|p+q|^2})\\ &=\frac{1}{g}(p'-q'-({p'^0}-{q'^0})\frac{p'+q'}{{{p^0}+{q^0}+\sqrt{s}}})\\ &=\frac{(p'-q')({p'^0}+{q'^0}+{\sqrt{s}})-({p'^0}-{q'^0})(p'+q')}{g({p'^0}+{q'^0}+{\sqrt{s}})}\\ &=\frac{({\sqrt{s}}+2{q'^0})p'-({\sqrt{s}}+2{p'^0})q'}{g({{p^0}+{q^0}+\sqrt{s}})}. \end{split}$$ On the other hand, $$\label{kw} \begin{split} \cos\theta:&=\frac{(p^\mu-q^\mu)(p'_\mu-q'_\mu)}{g^2}\\ &=\frac{-({p^0}-{q^0})({p'^0}-{q'^0})+(p-q)\cdot(p'-q')}{g^2}\\ &=\frac{1}{g^2}\Big(-({p^0}-{q^0})(\frac{g}{{\sqrt{s}}}\omega\cdot(p+q))+(p-q)\cdot(g\omega+g(\gamma-1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^2})\Big)\\ &=\frac{1}{g^2}\Big(-({p^0}-{q^0})(\frac{g}{{\sqrt{s}}}\omega\cdot(p+q))+g(p-q)\cdot\omega+g({p^0}^2-{q^0}^2)\frac{(p+q)\cdot\omega}{{\sqrt{s}}({{p^0}+{q^0}+\sqrt{s}})}\Big)\\ &=\frac{1}{g{\sqrt{s}}({{p^0}+{q^0}+\sqrt{s}})}\Big(-({p^0}-{q^0})({{p^0}+{q^0}+\sqrt{s}})\omega\cdot(p+q)\\ &\hspace{5mm}+{\sqrt{s}}(p-q)\cdot\omega({{p^0}+{q^0}+\sqrt{s}})+\omega\cdot(p+q)({p^0}^2-{q^0}^2)\Big)\\ &=\frac{-({p^0}-{q^0})\omega\cdot(p+q)+(p-q)\cdot\omega({{p^0}+{q^0}+\sqrt{s}})}{g({{p^0}+{q^0}+\sqrt{s}})}\\ &=\frac{({\sqrt{s}}+2{q^0})p-({\sqrt{s}}+2{p^0})q}{g({{p^0}+{q^0}+\sqrt{s}})}\cdot\omega\\ &=k\cdot\omega. \end{split}$$ Note that $|k|=1$. This expression for $\cos\theta$ illustrates the relationship between $\cos\theta$ expressed as a Lorentzian inner product of 4-vectors and $\cos\theta$ expressed as the usual Euclidean inner product of 3-vectors. Thus, even in relativistic collisional kinetics, the collision geometry can be expressed using ordinary 3-vectors and the Euclidean inner product via the translation above.

Dual representation
-------------------

In this section, we develop the Carleman representation of the relativistic gain and loss terms, which arise many times throughout this paper, as an integral over the set $E_{q-p'}^p$ defined by $$E_{q-p'}^p{\overset{\mbox{\tiny{def}}}{=}}\{P\in{{\mathbb{R}^3}}|(p'^\mu-p^\mu)(q_\mu-p'_\mu)=0\}.$$ We first derive the Carleman dual representation of the relativistic gain term.
The relativistic gain term part of the inner product $\langle \Gamma(f,h),\eta\rangle$ is written as $$\begin{split} \langle \Gamma^+(f,h),\eta\rangle=&c\int_{{{\mathbb{R}^3}}}\frac{dp}{{p^0}}\int_{{\mathbb{R}^3}}\frac{dp'}{{p'^0}}\int_{{\mathbb{R}^3}}\frac{dq}{{q^0}}\int_{{\mathbb{R}^3}}\frac{dq'}{{q'^0}}s\sigma(g,\omega)\delta^{(4)}(p^\mu+q^\mu-p'^\mu-q'^\mu)\\&\times f(q)h(p)\sqrt{J(q')}\eta(p'). \end{split}$$ We will reduce the integral by evaluating the delta function. Note that we have $$\int_{{\mathbb{R}^3}}\frac{dp}{{p^0}}\int_{{\mathbb{R}^3}}\frac{dq'}{{q'^0}}=\int _{{\mathbb{R}^4}}dp^\mu \int_{{\mathbb{R}^4}}dq'^\mu\delta(p^\mu p_\mu+1)\delta(q'^\mu q'_\mu+1)u({p^0})u({q'^0})$$ where $u(x)=1$ if $x\geq1$ and $u(x)=0$ otherwise. Then, we obtain that $$\begin{split} \langle \Gamma^+(f,h),\eta \rangle=&\frac{c}{4\pi}\int_{{\mathbb{R}^3}}\frac{dp'}{{p'^0}}\eta(p')\int_{{\mathbb{R}^3}}\frac{dq}{{q^0}} f(q)\int _{{\mathbb{R}^4}}dp^\mu h(p)\int_{{\mathbb{R}^4}}dq'^\mu e^{-\frac{q'^0}{2}}u({p^0})u({q'^0})\\ &\times \delta(p^\mu p_\mu+1)\delta(q'^\mu q'_\mu+1)s\sigma(g,\omega)\delta^{(4)}(p^\mu+q^\mu-p'^\mu-q'^\mu). \end{split}$$ We reduce the integral $\int_{{{\mathbb{R}^4}}}dq'^\mu$ by evaluating the last delta function and obtain $$\begin{split} \langle \Gamma^+(f,h),\eta \rangle=&\frac{c}{4\pi}\int_{{\mathbb{R}^3}}\frac{dp'}{{p'^0}}\eta(p')\int_{{\mathbb{R}^3}}\frac{dq}{{q^0}}f(q) \int _{{\mathbb{R}^4}}dp^\mu h(p) e^{-\frac{q^0+p'^0-p^0}{2}}u({p^0})\\ &\times u({q^0}-{p'^0}+{p^0})\delta(p^\mu p_\mu+1)\delta((q^\mu-p'^\mu+p^\mu)(q_\mu-p'_\mu+p_\mu)+1)s\sigma(g,\omega). \end{split}$$ The argument of the second delta function can be rewritten as $$(q^\mu-p'^\mu+p^\mu)(q_\mu-p'_\mu+p_\mu)+1=(q^\mu-p'^\mu)(q_\mu-p'_\mu)+2(q^\mu-p'^\mu)p_\mu=\tilde{g}^2+2p^\mu(q_\mu-p'_\mu).$$ Therefore, by evaluating the first delta function, we finally obtain the dual representation of the gain term as $$\begin{split} \langle \Gamma^+(f,h),\eta \rangle=\frac{c}{4\pi}\int_{{\mathbb{R}^3}}\frac{dp'}{{p'^0}}\eta(p')\int_{{\mathbb{R}^3}}\frac{dq}{{q^0}}f(q) \int _{E^p_{q-p'}} \frac{d\pi_p}{{p^0}}\frac{s}{2\tilde{g}}\sigma(g,\omega) h(p)e^{-\frac{q^0+p'^0-p^0}{2}} \end{split}$$ where the measure $d\pi_p$ is defined as $$d\pi_p= u({q^0}-{p'^0}+{p^0}) \delta(\frac{\tilde{g}^2+2p^\mu(q_\mu-p'_\mu)}{2\tilde{g}}).$$ We also want to compute the dual representation of the loss term. We start from the following: $$\begin{split} \langle \Gamma(f,h),\eta\rangle =&\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi f(q)h(p)\sigma(g,\omega)(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p))\\ =&\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi f(q)h(p)\Phi(g)\sigma_0(\theta)(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p)). \end{split}$$ Initially, suppose that $\int_{\mathbb{S}^2} d\omega\hspace{1mm} |\sigma_0(\theta)| <\infty$ and that $\int_{\mathbb{S}^2} d\omega\hspace{1mm} \sigma_0(\theta)=0$.
Then, $$\langle \Gamma(f,h),\eta\rangle =\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi f(q)h(p)\sigma(g,\omega)\sqrt{J(q')}\eta(p').$$ This is the relativistic Boltzmann gain term, and its dual representation was shown above to be $$\label{relagain} \frac{c}{2}\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E^{p}_{q-p'}} \frac{d\pi_{p}}{{p^0}}\frac{s\sigma(g,\theta)}{\tilde{g}}f(q)h(p)\sqrt{J(q')}\eta(p').$$ On the geometry $E_{q-p'}^p$ we have $(p'^\mu-p^\mu)(q_\mu-p'_\mu)=0$, and thus $\bar{g}^2+\tilde{g}^2=g^2$. Note that $$\begin{split} (p'^\mu-q'^\mu)(p_\mu-q_\mu)&=(2p'^\mu-p^\mu-q^\mu)(p_\mu-q_\mu)\\ &=(p'^\mu-p^\mu+p'^\mu-q^\mu)(p_\mu-p'_\mu+p'_\mu-q_\mu)\\ &=(p'^\mu-p^\mu)(p_\mu-p'_\mu)+(p'^\mu-q^\mu)(p'_\mu-q_\mu)\\ &=-\bar{g}^2+\tilde{g}^2. \end{split}$$ Since $ \cos\theta{\overset{\mbox{\tiny{def}}}{=}}\frac{(p'^\mu-q'^\mu)(p_\mu-q_\mu)}{g^2}, $ we have that $$\cos\theta=\frac{-\bar{g}^2+\tilde{g}^2}{\bar{g}^2+\tilde{g}^2}.$$ Define $ t=\frac{-\bar{g}^2+\tilde{g}^2}{\bar{g}^2+\tilde{g}^2}. $ Then, we obtain $ dt=d\bar{g}\frac{-4\bar{g}\tilde{g}^2}{(\bar{g}^2+\tilde{g}^2)^2}. $ Since $ \int_{-1}^1dt\sigma_0(t)=0, $ we have $$\int_0^{\infty}\frac{4\bar{g}\tilde{g}^2}{(\bar{g}^2+\tilde{g}^2)^2}\sigma_0\Big(\frac{-\bar{g}^2+\tilde{g}^2}{\bar{g}^2+\tilde{g}^2}\Big)d\bar{g}=0.$$ As in the estimates on the set $E_{q-p'}^p$, we may choose a variable $\omega'\in H^2$ such that $\mathbb{R}^+_0\times H^2=E_{q-p'}^p$. Then, the integral is now $$\int_{H^2}d\omega\hspace{1mm}'\int_0^{\infty}d\bar{g}\frac{4\bar{g}\tilde{g}^2}{(\bar{g}^2+\tilde{g}^2)^2}\sigma_0\Big(\frac{-\bar{g}^2+\tilde{g}^2}{\bar{g}^2+\tilde{g}^2}\Big)=0.$$ Then, we obtain $$\int_{E_{q-p'}^p}d\pi_p\frac{\tilde{g}^2}{(\bar{g}^2+\tilde{g}^2)^2}\sigma_0\Big(\frac{-\bar{g}^2+\tilde{g}^2}{\bar{g}^2+\tilde{g}^2}\Big)=0.$$ Therefore, multiplying by factors that are constant with respect to $p$, we have $$\int_{E_{q-p'}^p}\frac{d\pi_p}{{p^0}}\frac{s\sigma(g,\theta)}{\tilde{g}}\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}f(q)h(p')\eta(p')\sqrt{J(q)}=0.$$ We now subtract this vanishing expression from the Carleman representation of the gain term just written; the result must still equal the usual representation of $\langle \Gamma(f,h),\eta\rangle$. This will be called the relativistic dual representation. Thus, $$\label{Dual} \begin{split} &\langle \Gamma(f,h),\eta\rangle \\ =&\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi f(q)h(p)\sigma(g,\omega)(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p))\\ =&\frac{c}{2}\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E_{q-p'}^p}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}f(q)\eta(p')\{h(p)\sqrt{J(q')}-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}h(p')\sqrt{J(q)}\}. \end{split}$$ We claim that this representation holds even when the mean value of $\sigma_0$ is not zero. Suppose that $\int_{\mathbb{S}^2} d\omega\hspace{1mm} |\sigma_0(\theta)| <\infty$ and that $\int_{\mathbb{S}^2} d\omega\hspace{1mm} \sigma_0(\theta)\neq0$. Define $$\sigma_0^\epsilon(t) = \sigma_0(t)-1_{[1-\epsilon,1]}(t)\int_{-1}^1dt'\frac{\sigma_0(t')}{\epsilon}.$$ Then, $\int_{-1}^1\sigma_0^\epsilon(t)dt=0$; that is, $\sigma_0^\epsilon$ has vanishing mean on $\mathbb{S}^2$.
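Before turning to the regularized kernel, we note that the vanishing of the $\bar g$-integral above is nothing but the change of variables $t=\frac{\tilde g^{2}-\bar g^{2}}{\bar g^{2}+\tilde g^{2}}$ applied to a mean-zero $\sigma_0$; a minimal numerical sketch (with the purely illustrative mean-zero choice $\sigma_0(t)=t$ and an arbitrary fixed value of $\tilde g$) confirms this.

```python
import numpy as np
from scipy.integrate import quad

sigma0 = lambda t: t          # any sigma_0 with zero mean on [-1, 1] (illustrative choice)
gtil = 0.7                    # fixed value of \tilde{g} (illustrative)

def integrand(gbar):
    # Jacobian 4*gbar*gtil^2/(gbar^2+gtil^2)^2 times sigma_0 evaluated at t(gbar)
    t = (gtil**2 - gbar**2) / (gbar**2 + gtil**2)
    return 4.0 * gbar * gtil**2 / (gbar**2 + gtil**2)**2 * sigma0(t)

val, err = quad(integrand, 0.0, np.inf)
print(val)                    # ~ 0 up to quadrature error
```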
Now, define $$\begin{split} &\langle \Gamma_\epsilon(f,h),\eta\rangle \\ &=\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm}\hspace{1mm}v_\phi f(q)h(p)\sigma_0^\epsilon(\cos\theta)(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p)). \end{split}$$ Note that $t=\cos\theta$. Then, $$\label{gamma} \begin{split} |&\langle \Gamma(f,h),\eta\rangle -\langle \Gamma_\epsilon(f,h),\eta\rangle |\\ =&\Big|\int_{{\mathbb{R}^3}}dp\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm} \hspace{1mm}v_\phi f(q)h(p)\Phi(g)\\ & \cdot(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p))1_{[1-\epsilon,1]}(\cos\theta)\frac{1}{\epsilon}\int_{-1}^1\sigma_0(t')dt'\Big|. \end{split}$$ Here, we briefly discuss some properties under the condition $\cos\theta=1$. By the definition, we have $$\cos\theta=\frac{(p^\mu-q^\mu)(p'_\mu-q'_\mu)}{g^2}.$$ Thus, if $\cos\theta=1$, $$\begin{split} (p^\mu-q^\mu)(p'_\mu-q'_\mu)&=g^2\\ &=(p^\mu-q^\mu)(p_\mu-q_\mu). \end{split}$$ Then we have $$(p^\mu-q^\mu)(p'_\mu-p_\mu)=0.$$ By the collision geometry $(p'^\mu-p^\mu)(p'_\mu-q_\mu)=0$, we have $$(p^\mu-p'^\mu)(p_\mu-p'_\mu)=\bar{g}^2=0.$$ Thus, we get $\bar{g}=0$. Equivalently, this means that $$({p'^0}-{p^0})^2=|p'-p|^2.$$ And this implies that ${p^0}={p'^0}$ and $p=p'$ because $$\begin{split} |{p'^0}-{p^0}|&= \Big|\frac{|p'|^2-|p|^2}{{p'^0}+{p^0}}\Big|\\ &< |p'-p|. \end{split}$$ Therefore, if $\cos\theta=1$, we have $p'^\mu=p^\mu$ and $q'^\mu=q^\mu$. Thus, as $\epsilon \rightarrow 0$, the norm in (\[gamma\])$\rightarrow 0$ because the integrand vanishes on the set $\cos\theta=1$. Therefore, we can call (\[Dual\]) as the dual representation because if we define $$T_{f}\eta(p)=\int_{{\mathbb{R}^3}}dq\hspace{1mm}\int_{\mathbb{S}^2} d\omega\hspace{1mm} \sigma(g,\theta)f(q)(\sqrt{J(q')}\eta(p')-\sqrt{J(q)}\eta(p)),$$ $$T^*_{f}h(p')=\frac{1}{{p'^0}}\frac{c}{2}\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{E_{q-p'}^p}\frac{d\pi_p}{{p^0}} \frac{s\sigma(g,\theta)}{\tilde{g}}f(q)\{h(p)\sqrt{J(q')}-\frac{\tilde{s}\tilde{g}^4\Phi(\tilde{g})}{sg^4\Phi(g)}h(p')\sqrt{J(q)}\},$$ then $$\langle \Gamma(f,h),\eta\rangle=\langle T_{f}\eta,h\rangle =\langle \eta,T^*_{f}h\rangle.$$ Representations in other variables ---------------------------------- The collision integral below can be represented in some other ways: $$\label{original eq} \int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{\mathbb{R}^3}\frac{dq'\hspace{1mm}}{{q'^0}}\int_{\mathbb{R}^3} \frac{dp'\hspace{1mm}}{{p'^0}}s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p,q,p')$$ where $A$ is some Schwartz function. The 12-fold integral can be written as an 8-fold integral in the *center-of-momentum* system by getting rid of the delta function and we obtain: $$\int_{\mathbb{R}^3}dp\int_{\mathbb{R}^3}dq\int_{\mathbb{S}^2}d\omega\hspace{1mm}\hspace{1mm}v_\phi \sigma(g,\omega)A(p,q,p')$$ where $v_\phi =\frac{g\sqrt{s}}{{p^0}{q^0}}$. 
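As a small numerical consistency check of the reduction just written (not needed for the argument), one may verify $v_\phi=\frac{g\sqrt s}{p^0 q^0}$ against the Lorentz-invariant form $g\sqrt s=2\sqrt{(p^\mu q_\mu)^2-1}$, which follows from $s=g^2+4$ and the mass-shell conditions $p^\mu p_\mu=q^\mu q_\mu=-1$; the momenta below are random and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = rng.normal(size=3), rng.normal(size=3)
p0, q0 = np.sqrt(1 + p @ p), np.sqrt(1 + q @ q)

s  = (p0 + q0)**2 - (p + q) @ (p + q)     # s   = -(p^mu+q^mu)(p_mu+q_mu)
g2 = (p - q) @ (p - q) - (p0 - q0)**2     # g^2 =  (p^mu-q^mu)(p_mu-q_mu)
pq = -p0 * q0 + p @ q                     # p^mu q_mu  (signature -,+,+,+)

v_phi = np.sqrt(g2) * np.sqrt(s) / (p0 * q0)
v_inv = 2.0 * np.sqrt(pq**2 - 1.0) / (p0 * q0)
print(v_phi, v_inv, s - g2 - 4.0)         # v_phi == v_inv,  s = g^2 + 4
```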
### Here we look for another expression as an integration on the set $\mathbb{R}^3\times\mathbb{R}^3\times E^{p'}_{p+q}$ where $E^{p'}_{p+q}$ is the hyperplane $$E^{p'}_{p+q}=\{p'\in\mathbb{R}^3:(p'^\mu-p^\mu)(p_\mu+q_\mu)=0\}.$$ We rewrite (\[original eq\]) as $$\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3}\frac{dq}{{q^0}}B(p,q,p')$$ where $B=B(p,q,p')$ is defined as $$\begin{split} B&=\int_{\mathbb{R}^3}\frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^3}\frac{dq'\hspace{1mm}}{{q'^0}}s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p,q,p')\\ &=\int_{\mathbb{R}^4\times\mathbb{R}^4}d\Theta(p'^\mu,q'^\mu)s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p^\mu,q^\mu,p'^\mu) \end{split}$$ where $d\Theta(p'^\mu,q'^\mu){\overset{\mbox{\tiny{def}}}{=}}dp'\hspace{1mm}^\mu dq'\hspace{1mm}^\mu u({q'^0})u({p'^0})\delta(s-g^2-4)\delta((p'^\mu-q'^\mu)(p'^\mu+q'^\mu))$ and $u(r)=0$ if $r<0$ and $u(r)=1$ if $r\geq0$. Now we apply the change of variable $$\bar{q'}^\mu=q'^\mu-p'^\mu.$$ Then with this change of variable the integral becomes $$B=\int_{\mathbb{R}^4\times\mathbb{R}^4}d\Theta(\bar{q'}^\mu,p'^\mu)s\sigma(g,\theta)\delta^{(4)}(2p'^\mu+\bar{q'}^\mu-p^\mu-q^\mu)A(p^\mu,q^\mu,p'^\mu)$$ where $d\Theta(\bar{q'}^\mu,p'^\mu){\overset{\mbox{\tiny{def}}}{=}}dp'\hspace{1mm}^\mu d\bar{q'}^\mu u(\bar{q'}^0+{p'^0})u({p'^0})\delta(s-g^2-4)\delta(\bar{q'}^\mu(p'^\mu+q'^\mu))$. This change of variables gives us the Jacobian$=1$. Finally we evaluate the delta function to obtain $$B=\int_{\mathbb{R}^4}d\Theta(p'^\mu)s\sigma(g,\theta)A(p^\mu,q^\mu,p'^\mu)$$ where we are now integrating over the four vector $p'^\mu$ and $d\Theta(p'^\mu)=dp'\hspace{1mm}^\mu u({q^0}+{p^0}-{p'^0})u({p'^0})\delta(s-g^2-4)\delta((p^\mu+q^\mu)(p_\mu+q_\mu-2p'_\mu)).$ We conclude that the integral is given by $$\label{E} B=\int_{E^{p'}_{p+q}} \frac{d\pi_{p'}}{2\sqrt{s}{p'^0}}s\sigma(g,\theta)A(p,q,p')$$ where $d\pi_{p'}=dp'\hspace{1mm}u({p^0}+{q^0}-{p'^0})\delta\left(-\frac{s}{2\sqrt{s}}-\frac{p'^\mu(p_\mu+q_\mu)}{\sqrt{s}}\right).$ This is an $2$ dimensional surface measure on the hypersurface $E^{p'}_{p+q}$ in $\mathbb{R}^3$. ### Similarly, we can also look for another expression as an integration on the set $\mathbb{R}^3\times\mathbb{R}^3\times E^{q}_{p'-p}$ where $E^{q}_{p'-p}$ is the hyperplane $$E^{q}_{p'-p}=\{q\in\mathbb{R}^3:(p'^\mu-p^\mu)(p_\mu+q_\mu)=0\}.$$ We rewrite (\[original eq\]) as $$\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3}\frac{dp'}{{p'^0}}B(p,q,p')$$ where $B=B(p,q,p')$ is defined as $$\begin{split} B&=\int_{\mathbb{R}^3}\frac{dq\hspace{1mm}}{{q^0}}\int_{\mathbb{R}^3}\frac{dq'\hspace{1mm}}{{q'^0}}s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p,q,p')\\ &=\int_{\mathbb{R}^4\times\mathbb{R}^4}d\Theta(q^\mu,q'^\mu)s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p^\mu,q^\mu,p'^\mu) \end{split}$$ where $d\Theta(q^\mu,q'^\mu){\overset{\mbox{\tiny{def}}}{=}}dq\hspace{1mm}^\mu dq'\hspace{1mm}^\mu u({q'^0})u({q^0})\delta(s-g^2-4)\delta((q^\mu-q'^\mu)(q^\mu+q'^\mu))$ and $u(r)=0$ if $r<0$ and $u(r)=1$ if $r\geq0$. Now we apply the change of variable $$\bar{q}^\mu=q'^\mu-q^\mu.$$ Then with this change of variable the integral becomes $$B=\int_{\mathbb{R}^4\times\mathbb{R}^4}d\Theta(\bar{q}^\mu,q^\mu)s\sigma(g,\theta)\delta^{(4)}(p'^\mu+\bar{q}^\mu-p^\mu)A(p^\mu,q^\mu,p'^\mu)$$ where $d\Theta(\bar{q}^\mu,q^\mu){\overset{\mbox{\tiny{def}}}{=}}dq\hspace{1mm}^\mu d\bar{q}^\mu u(\bar{q}^0+{q^0})u({q^0})\delta(s-g^2-4)\delta(\bar{q}^\mu(2q^\mu+\bar{q}^\mu))$. 
This change of variables gives us the Jacobian$=1$. Finally we evaluate the delta function to obtain $$B=\int_{\mathbb{R}^4}d\Theta(q^\mu)s\sigma(g,\theta)A(p^\mu,q^\mu,p'^\mu)$$ where we are now integrating over the four vector $q^\mu$ and $d\Theta(q^\mu)=dq\hspace{1mm}^\mu u({p^0}-{p'^0}+{q^0})u({q^0})\delta(s-g^2-4)\delta((p^\mu-p'^\mu)(2q_\mu+p_\mu-p'_\mu)).$ We conclude that the integral is given by $$\label{E2} B=\int_{E^{q}_{p'-p}} \frac{d\pi_{q}}{2\bar{g}{q^0}}s\sigma(g,\theta)A(p,q,p')$$ where $d\pi_{q}=dq\hspace{1mm}u({p^0}+{q^0}-{p'^0})\delta\left(\frac{\bar{g}}{2}+\frac{q^\mu(p_\mu-p'_\mu)}{\bar{g}}\right).$ This is an $2$-dimensional surface measure on the hypersurface $E^{q}_{p'-p}$ in $\mathbb{R}^3$. Alternative form of the collision operator ------------------------------------------ Here, we also want to introduce an alternative way of writing the collision operator. The 12-fold integral (\[original eq\]) will be written in 9-fold integral in this subsection in $(p,p',\bar{q})$ where we define $\bar{q}$ as below. We write (\[original eq\]) using Fubini as follows $$I{\overset{\mbox{\tiny{def}}}{=}}\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3} \frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^3}\frac{dq}{{q^0}}\int_{\mathbb{R}^3}\frac{dq'\hspace{1mm}}{{q'^0}} s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p,q,p').$$ By adding two delta functions and two step functions, we can express the integral above as follows $$\begin{split} I=\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3} \frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^4}&dq^\mu\int_{\mathbb{R}^4}dq'^\mu\hspace{1mm} u(q^0+q'^0)u(\underline{s}-4)\delta(\underline{s}-\underline{g}^2-4)\delta((q^\mu+q'^\mu)(q_\mu-q'_\mu))\\ &\times s\sigma(g,\theta)\delta^{(4)}(p'^\mu+q'^\mu-p^\mu-q^\mu)A(p,q,p') \end{split}$$ where we are now integrating over the 14-vector $(p, p', q^\mu, q'^\mu)$, $u$ is defined by $u(r)=0$ if $r<0$ and $u(r)=1$ if $r\geq 0$, and we let $\underline{g}{\overset{\mbox{\tiny{def}}}{=}}g(q^\mu,q'^\mu)$ and $\underline{s}{\overset{\mbox{\tiny{def}}}{=}}s(q^\mu,q'^\mu)$. We will convert the integral over $(q^\mu,q'^\mu)$ into the integral over $q^\mu-q'^\mu$ and $q^\mu+q'^\mu$. Now we apply the change of variables $$q_s^\mu{\overset{\mbox{\tiny{def}}}{=}}q^\mu+q'^\mu,\hspace{5mm}q_g^\mu{\overset{\mbox{\tiny{def}}}{=}}q^\mu-q'^\mu.$$ This will do the change $(q^\mu,q'^\mu)\rightarrow(q_s^\mu,q_g^\mu)$ with Jacobian = 16. With this change, the integral $I$ becomes $$\begin{split} I=\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3} \frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^4}&dq_s^\mu\int_{\mathbb{R}^4}dq_g^\mu\hspace{1mm} u(q_s^0)u(-q_s^\mu {q_s}_\mu-4)\delta(-q_s^\mu {q_s}_\mu-q_g^\mu {q_g}_\mu-4)\delta(q_s^\mu {q_g}_\mu)\\ &\times s\sigma(g,\theta)\delta^{(4)}(p'^\mu-p^\mu-q_g^\mu)A(p,\frac{q_s+q_g}{2},p'). \end{split}$$ Then we evaluate the third delta function to obtain $$\begin{split} I=\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3} \frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^4}&dq_s^\mu\hspace{1mm} u(q_s^0)u(-q_s^\mu {q_s}_\mu-4)\delta(-q_s^\mu {q_s}_\mu-\bar{g}^2-4)\\ &\times\delta(q_s^\mu(p'_\mu-p_\mu)) s\sigma(g,\theta)A(p,\frac{q_s+q_g}{2},p'). \end{split}$$ Note that $-q_s^\mu {q_s}_\mu-4=\bar{g}^2\geq 0$ by the first delta function, and thus we always have $u(-q_s^\mu {q_s}_\mu-4)=1$. 
Also, since $\bar{s}=\bar{g}^2+4$, we have $$\begin{split} u(q_s^0)\delta(-q_s^\mu {q_s}_\mu-\bar{g}^2-4)&=u(q_s^0)\delta(-q_s^\mu {q_s}_\mu-\bar{s})\\ &=u(q_s^0)\delta(({q_s}^0)^2-|q_s|^2-\bar{s})\\ &=\frac{\delta(q_s^0-\sqrt{|q_s|^2+\bar{s}})}{2\sqrt{|q_s|^2+\bar{s}}}. \end{split}$$ Then we finally carry out an integration using the first delta function and obtain $$\label{alternative form} \begin{split} I=\int_{\mathbb{R}^3}\frac{dp}{{p^0}}\int_{\mathbb{R}^3} \frac{dp'\hspace{1mm}}{{p'^0}}\int_{\mathbb{R}^3}&\frac{dq_s}{2\sqrt{|q_s|^2+\bar{s}}}\hspace{1mm} \delta(q_s^\mu(p'_\mu-p_\mu)) s\sigma(g,\theta)A(p,\frac{q_s+q_g}{2},p'). \end{split}$$ **Acknowledgements.** The author would like to express his deep appreciation for the guidance and valuable support of his doctoral advisor Robert M. Strain. He also would like to thank Philip T. Gressman for several helpful discussions regarding this work.
Department of Mathematics, University of Pennsylvania, Philadelphia, PA 19104, USA,\ *E-mail address*: [jangjinw at math.upenn.edu](mailto:[email protected])
--- abstract: 'Recently the PVLAS collaboration reported the observation of a rotation of linearly polarized laser light induced by a transverse magnetic field - a signal being unexpected within standard QED. Two mechanisms have been proposed to explain this result: production of a single (pseudo-)scalar particle coupled to two photons or pair production of light millicharged particles. In this work, we study how the different scenarios can be distinguished. We summarize the expected signals for vacuum magnetic dichroism (rotation) and birefringence (ellipticity) for the different types of particles - including new results for the case of millicharged scalars. The sign of the rotation and ellipticity signals as well as their dependencies on experimental parameters, such as the strength of the magnetic field and the wavelength of the laser, can be used to obtain information about the quantum numbers of the particle candidates and to discriminate between the different scenarios. We perform a statistical analysis of all available data resulting in strongly restricted regions in the parameter space of all scenarios. These regions suggest clear target regions for upcoming experimental tests. As an illustration, we use preliminary PVLAS data to demonstrate that near future data may already rule out some of these scenarios.' author: - Markus Ahlers - Holger Gies - Joerg Jaeckel - Andreas Ringwald title: | On the Particle Interpretation of the PVLAS Data:\ Neutral versus Charged Particles --- \[intro\] Introduction ====================== The absorption probability and the propagation speed of polarized light propagating in a magnetic field depends on the relative orientation between the polarization and the magnetic field. These effects are known as vacuum magnetic dichroism and birefringence, respectively, resulting from fluctuation-induced vacuum polarization. In a pioneering experiment, the BFRT collaboration searched for these effects by shining linearly polarized laser photons through a superconducting dipole magnet. No significant signal was found, and a corresponding upper limit was placed on the rotation (dichroism) and ellipticity (birefringence) of the photon beam developed after passage through the magnetic field [@Semertzidis:1990qc; @Cameron:1993mr]. Recently, however, a follow-up experiment done by the PVLAS collaboration reported the observation of a rotation of the polarization plane of light after its passage through a transverse magnetic field in vacuum [@Zavattini:2005tm]. Moreover, preliminary results presented by the PVLAS collaboration at various seminars and conferences hint also at the observation of an ellipticity (birefringence) [@PVLASICHEP; @Cantatore:IDM2006]. These findings have initiated a number of theoretical and experimental activities, since the magnitude of the reported signals exceeds the standard-model expectations by far.[^1] If the observed effects are indeed true signals of vacuum magnetic dichroism and birefringence and not due to a subtle, yet unidentified systematic effect, they signal new physics beyond the standard model of particle physics. One obvious possible explanation, and indeed the one which was also a motivation for the BFRT and PVLAS experiments, may be offered by the existence of a new light neutral spin-$0$ boson $\phi$ [@Maiani:1986md]. In fact, this possibility has been studied in Ref. 
[@Zavattini:2005tm], with the conclusion that the rotation observed by PVLAS can be reconciled with the non-observation of a rotation and ellipticity by BFRT, if the hypothetical neutral boson has a mass in the range $m_\phi\sim (1-1.5)$ meV and a coupling to two photons in the range $g\sim (1.7-5.0)\times 10^{-6}$ GeV$^{-1}$. Clearly, these values almost certainly exclude the possibility that $\phi$ is a genuine QCD axion $A$ [@Weinberg:1977ma; @Wilczek:1977pj]. For the latter, a mass $m_A\sim 1$meV implies a Peccei-Quinn symmetry [@Peccei:1977hh; @Peccei:1977ur] breaking scale $f_A\sim 6\times 10^{9}$GeV. Since, for an axion, $g\sim \alpha |E/N|/(2\pi f_A)$ [@Bardeen:1977bd; @Kaplan:1985dv; @Srednicki:1985xd], one would need an extremely large ratio $|E/N|\sim 3\times 10^7$ of electromagnetic and color anomalies in order to arrive at an axion-photon coupling in the range suggested by PVLAS. This is far away from the predictions of any model conceived so far. Moreover, such a new, axion-like particle (ALP) must have very peculiar properties [@Masso:2005ym; @Jain:2005nh; @Jaeckel:2006id; @Masso:2006gc; @Mohapatra:2006pv; @Jain:2006ki] in order to evade the strong constraints on its two photon coupling from stellar energy loss considerations [@Raffelt:1996] and from its non-observation in helioscopes such as the CERN Axion Solar Telescope (CAST) [@Zioutas:2004hi]. A light scalar boson is furthermore constrained by upper limits on non-Newtonian forces [@Dupays:2006dp]. Recently, an alternative to the ALP interpretation of the PVLAS results was proposed [@Gies:2006ca]. It is based on the observation that the photon-initiated real and virtual pair production of millicharged particles (MCPs) $\epsilon^\pm$ in an external magnetic field would also manifest itself as a vacuum magnetic dichroism and ellipticity. In particular, it was pointed out that the dichroism observed by PVLAS may be compatible with the non-observation of a dichroism and ellipticity by BFRT, if the millicharged particles have a small mass $m_\epsilon\sim 0.1$ [eV]{} and a tiny fractional electric charge $\epsilon\equiv Q_\epsilon/e \sim 10^{-6}$. As has been shown recently [@Masso:2006gc], such particles may be consistent with astrophysical and cosmological bounds (for a review, see Ref. [@Davidson:2000hf]), if their tiny charge arises from gauge kinetic mixing of the standard model hypercharge U(1) with additional U(1) gauge factors from physics beyond the standard model [@Holdom:1985ag]. This appears to occur quite naturally in string theory [@Abel:2006qt]. It is very comforting that a number of laboratory-based [low-energy]{} tests of the ALP and MCP interpretation of the PVLAS anomaly are currently set up and expected to yield decisive results within the upcoming year. For instance, the Q&A experiment has very recently released first rotation data [@Chen:2006cd]. Whereas the Q&A experimental setup is qualitatively similar to PVLAS, the experiment operates in a slightly different parameter region; here, no anomalous signal has been detected so far. The interpretation of the PVLAS signal involving an ALP that interacts weakly with matter will crucially be tested by photon regeneration (sometimes called “light shining through walls”) experiments [@Sikivie:1983ip; @Anselm:1986gz; @Gasperini:1987da; @VanBibber:1987rq; @Ruoso:1992nx; @Ringwald:2003ns; @Gastaldi:2006fh] presently under construction or serious consideration [@Pugnat:2005nk; @Rabadan:2005dm; @Cantatore:Patras; @Kotz:2006bw; @Baker:Patras; @Rizzo:Patras; @ALPS]. 
In these experiments (cf. Fig. \[fig:ph\_reg\]), a photon beam is shone across a magnetic field, where a fraction of them turns into ALPs. The ALP beam can then propagate freely through a wall or another obstruction without being absorbed, and finally another magnetic field located on the other side of the wall can transform some of these ALPs into photons — [seemingly]{} regenerating these photons out of nothing. Another probe could be provided by direct astrophysical observations of light rays traversing a pulsar magnetosphere in binary pulsar systems [@Dupays:2005xs]. [![Schematic view of a “light shining through a wall” experiment. (Pseudo-)scalar production through photon conversion in a magnetic field (left), subsequent travel through a wall, and final detection through photon regeneration (right). []{data-label="fig:ph_reg"}](eps/lightshinning.eps "fig:"){width="8.5cm"}]{} Clearly, photon regeneration will be negligible for MCPs. Their existence, however, can be tested by improving the sensitivity of instruments for the detection of vacuum magnetic birefringence and dichroism [@Cameron:1993mr; @Zavattini:2005tm; @Chen:2006cd; @Rizzo:Patras; @Pugnat:2005nk; @Heinzl:2006xc]. Another sensitive tool is Schwinger pair production in strong electric fields, as they are available, for example, in accelerator cavities [@Gies:2006hv]. A classical probe for MCPs is the search for invisible orthopositronium decays [@Dobroliubov:1989mr; @Mitsui:1993ha], for which new experiments are currently running [@Badertscher:2006fm] or being developed [@Rubbia:2004ix; @Vetter:2004fs]. From a theoretical perspective, the two scenarios are substantially different: the ALP scenario is parameterized by an effective non-renormalizable dimension-5 operator, the stabilization of which almost inevitably requires an underlying theory at a comparatively low scale, say in between the electroweak and the GUT scale. By contrast, the MCP scenario in its simplest version is reminiscent to QED; it is perturbatively renormalizable and can remain a stable microscopic theory over a wide range of scales. [The present paper is devoted to an investigation of the characteristic properties of the different scenarios in the light of all available data collected so far. A careful study of the optical properties of the magnetized vacuum can indeed reveal important information about masses, couplings and other quantum numbers of the potentially involved hypothetical particles. This is quantitatively demonstrated by global fits to all published data. For further illustrative purposes, we also present global fits which include the preliminary data made available by the PVLAS collaboration at workshops and conferences. We stress that this data is only used here to qualitatively demonstrate how the optical measurements can be associated with particle-physics properties. Definite quantitative predictions have to await the outcome of a currently performed detailed data analysis of the PVLAS collaboration. Still, the]{} resulting fit regions can be viewed as a [preliminary estimate of]{} “target regions” for the various laboratory tests mentioned above. Moreover, the statistical analysis is also meant to help the theorists in deciding whether they should care at all about the PVLAS anomaly, and, if yes, whether [there is a pre-selection of phenomenological models or model building blocks that deserve to be studied in more detail. ]{} The paper is organized as follows. 
In the next section \[sec2\] we summarize the signals for vacuum magnetic dichroism and birefringence in presence of axion-like and millicharged particles. We use these results in Sec. \[sec3\] to show how the different scenarios can be distinguished from each other and how information about the quantum numbers of the potential particle candidates can be collected. In Sec. \[sec4\] we then perform a statistical analysis including all current data. We also use preliminary PVLAS data to show the prospects for the near future. We summarize our conclusions in Sec. \[conclusions\]. Vacuum Magnetic Dichroism, Birefringence, and Photon Regeneration {#sec2} ================================================================== We start here with some general kinematic considerations relevant to dichroism and birefringence, which are equally valid for the case of ALP and the case of MCP production. Let $\vec k$ be the momentum of the incoming photon, with $|\vec k|=\omega$, and let $\vec B$ be a static homogeneous magnetic field, which is perpendicular to $\vec k$, as it is the case in all of the [afore]{}-mentioned polarization experiments. The photon-initiated production of an ALP with mass $m_\phi$ or an MCP with mass $m_\epsilon$, leads, for $\omega > m_\phi$ or $\omega > 2 m_\epsilon$, respectively, to a non-trivial ratio of the survival probabilities $\exp(-\pi_{\parallel,\perp}(\ell))$ of a photon after it has traveled a distance $\ell$, for photons polarized parallel $\parallel$ or perpendicular $\perp$ to $\vec B$. This non-trivial ratio manifests itself directly in a dichroism: for a linearly polarized photon beam, the angle $\theta$ between the initial polarization vector and the magnetic field will change to $\theta + \Delta \theta$ after passing a distance $\ell$ through the magnetic field, with $$\cot (\theta+\Delta\theta)=\frac{E_{\parallel}}{E_{\perp}} =\frac{E^{0}_{\parallel}}{E^{0}_{\perp}} \exp\left(-\frac{1}{2}(\pi_{\parallel}(\ell) -\pi_{\perp}(\ell))\right). \label{eq1}$$ Here, $E_{\parallel,\perp}$ are the electric field components of the laser parallel and perpendicular to the external magnetic field, and the superscript “0” denotes initial values. For small rotation angle $\Delta\theta$, we have $$\label{delthet} \Delta\theta \simeq \frac{1}{4}(\pi_{\parallel} -\pi_{\perp})\, \sin(2\theta).$$ We will present the results for the probability exponents $\pi_{\parallel}-\pi_{\perp}$ for ALPs and MCPs in the following subsections. Let us now turn to birefringence. The propagation speed of the laser photons is slightly changed in the magnetic field owing to the coupling to virtual ALPs or MCPs. Accordingly, the time $\tau_{\parallel,\perp}(\ell)$ it takes for a photon to traverse a distance $\ell$ differs for the two polarization modes, causing a phase difference between the two modes, $$\Delta\phi=\omega (\tau_{\parallel}(\ell) -\tau_{\perp}(\ell)).$$ This induces an ellipticity $\psi$ of the outgoing beam, $$\label{psi} \psi =\frac{\omega}{2}(\tau_{\parallel}(\ell) -\tau_{\perp}(\ell))\sin(2\theta), \quad\quad\rm{for}\,\,\psi\ll1.$$ Again, we will present the results for $\tau_{\parallel}-\tau_{\perp}$ for ALPs and MCPs in the following subsections. 
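Equation (\[delthet\]) is just the linearization of Eq. (\[eq1\]) in the small quantity $\pi_{\parallel}-\pi_{\perp}$; a two-line numerical check with illustrative values:

```python
import numpy as np

theta = np.pi / 4          # angle between polarization and B (illustrative)
dpi   = 1e-6               # pi_parallel - pi_perp (illustrative, small)

# Eq. (eq1): cot(theta + dtheta) = cot(theta) exp(-dpi/2), i.e.
# theta + dtheta = arctan(tan(theta) * exp(+dpi/2))
exact  = np.arctan(np.tan(theta) * np.exp(dpi / 2)) - theta
approx = 0.25 * dpi * np.sin(2 * theta)       # Eq. (delthet)
print(exact, approx)                          # agree up to corrections of higher order in dpi
```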
Production of Neutral Spin-0 Bosons ----------------------------------- A neutral spin-0 particle can interact with two photons via $${\mathcal L}^{(+)}_{\rm{int}} =-\frac{1}{4}g\phi^{(+)}F_{\mu\nu}F^{\mu\nu} =\frac{1}{2}g\phi^{(+)}(\vec{E}^{2}-\vec{B}^{2}),$$ if it is a scalar, or $${\mathcal L}^{(-)}_{\rm{int}} =-\frac{1}{4}g\phi^{(-)}F_{\mu\nu}\widetilde{F}^{\mu\nu} =g\phi^{(-)}(\vec{E}\cdot\vec{B}),$$ if it is a pseudoscalar. In a homogeneous magnetic background $\vec{B}$, the leading order contribution to the conversion (left half of Fig. \[fig:ph\_reg\]) of (pseudo-)scalars into photons comes from the terms $\sim \vec{B}^2$ and $\sim \vec{E}\cdot\vec{B}$, respectively. The polarization of a photon is now given by the direction of the electric field of the photon, $\vec{E}_\gamma$, whereas its magnetic field, $\vec{B}_{\gamma}$ is perpedicular to the polarization. Therefore, only those fields polarized perpendicular (parallel) to the background magnetic field will have nonvanishing $\vec{B}_{\gamma}\cdot\vec{B}\neq0$ ($\vec{E}_{\gamma}\cdot\vec{B}\neq0$) and interact with the particles. Accordingly, for scalars we have, $$\pi^{(+)}_{\perp}\neq 0,\quad \pi^{(+)}_{\parallel}=0,\quad \tau^{(+)}_{\perp}\neq 0, \quad \tau^{(+)}_{\parallel}=0$$ whereas for pseudoscalars we find $$\pi^{(-)}_{\perp}= 0,\quad \pi^{(-)}_{\parallel}\neq 0,\quad \tau^{(-)}_{\perp}= 0, \quad \tau^{(-)}_{\parallel}\neq 0.$$ Apart from this, the interaction is identical in lowest order, $$\pi^{(+)}_{\perp}=\pi^{(-)}_{\parallel}\,\,{\rm{and}}\,\, \tau^{(+)}_{\perp}=\tau^{(-)}_{\parallel}.$$ Using Eqs. - we deduce $${{\Delta\theta}}^{(+)} =-{{\Delta\theta}}^{(-)},\,\, {\rm{and}}\,\,\psi^{(+)}=-\psi^{(-)}.$$ We can now summarize the predictions on the rotation ${{\Delta\theta}}$ and the ellipticity $\psi$ in (pseudo-)scalar ALP models with coupling $g$ and mass $m_\phi$ [@Maiani:1986md; @Raffelt:1987im]. We assume a setup as in the BFRT experiment with a dipole magnet of length $L$ and homogeneous magnetic field $B$. The polarization of the laser beam with photon energy $\omega$ has an angle $\theta$ relative to the magnetic field. The effective number of passes of photons in the dipole is $N_\text{pass}$. Due to coherence, the rotation ${{\Delta\theta}}$ and ellipticity $\psi$ depend non-linearly on the length of the apparatus $L$ and linearly on the number of passes $N_{\rm{pass}}$, instead of simply being proportional to $\ell=N_{\rm pass}L$; whereas the photon component is reflected at the cavity mirrors, the ALP component is not and leaves the cavity after each pass: $$\label{dicALP} -{{\Delta\theta}}^{(+)} ={{\Delta\theta}}^{(-)} = N_\text{pass}\left(\frac{gB\omega}{m_\phi^2}\right)^2 \sin^2\left(\frac{L m_\phi^2}{4\omega}\right)\sin2\theta,$$ $$\begin{aligned} \label{birALP} -\psi^{(+)} \!\!&=&\!\!\psi^{(-)} \\\nonumber \!\!&=&\!\! \frac{N_\text{pass}}{2}\left(\frac{gB\omega}{m_\phi^2}\right)^2\left(\frac{L m_\phi^2}{2\omega}- \sin\left(\frac{L m_\phi^2}{2\omega}\right)\right)\sin2\theta.\end{aligned}$$ For completeness, we present here also the flux of regenerated photons in a “light-shining through a wall” experiment (cf. Fig. \[fig:ph\_reg\]). In the case of a pseudoscalar, it reads $$\label{regALPps} \dot{N}_{\gamma\ {\rm reg}}^{(-)} = \dot{N_0}\left\lfloor \frac{N_\text{pass}+1}{2}\right\rfloor \frac{1}{16}\left(gBL\cos\theta\right)^4\left( \frac{\sin(\frac{L m_\phi^2}{4\omega})} {\frac{L m_\phi^2}{4\omega}} \right)^4,$$ where $\dot{N}_0$ is the original photon flux. 
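To get a feeling for the magnitudes predicted by Eqs. (\[dicALP\]) and (\[birALP\]), the following sketch evaluates the per-pass rotation and ellipticity in natural units; the parameter point $(g,m_\phi)$ and the setup values $(B,L,\lambda)$ are illustrative choices of the order mentioned in the Introduction, not official experimental numbers, and the conversion factors $1\,\mathrm{T}\simeq195.35\,\mathrm{eV}^2$, $1\,\mathrm{m}\simeq5.068\times10^{6}\,\mathrm{eV}^{-1}$ are standard.

```python
import numpy as np

# conversions to natural units (hbar = c = 1)
T_eV2, m_eVinv = 195.35, 5.068e6          # 1 Tesla in eV^2, 1 meter in eV^-1

# illustrative ALP parameter point and setup
g_agg  = 2.0e-6 * 1.0e-9                  # ALP-photon coupling: 2e-6 GeV^-1 in eV^-1
m_phi  = 1.0e-3                           # ALP mass in eV
B      = 5.0 * T_eV2                      # 5 T
L      = 1.0 * m_eVinv                    # 1 m
omega  = 1239.84 / 1064.0                 # photon energy in eV for lambda = 1064 nm
theta  = np.pi / 4
N_pass = 1                                # quote per-pass values

x    = L * m_phi**2 / (4.0 * omega)
amp2 = (g_agg * B * omega / m_phi**2) ** 2

dtheta = N_pass * amp2 * np.sin(x)**2 * np.sin(2*theta)                     # Eq. (dicALP)
psi    = 0.5 * N_pass * amp2 * (2*x - np.sin(2*x)) * np.sin(2*theta)        # Eq. (birALP)
print(dtheta, psi)    # both of order 1e-12 rad per pass for this choice
```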
For a scalar, the $\cos\theta$ is replaced by a $\sin\theta$. Equation (\[regALPps\]) is for the special situation in which a dipole of length $L$ and field $\vec{B}$ is used for generation as well as for regeneration of the ALPs as it is the case for the BFRT experiment. Note that only passes towards the wall count. Optical Vacuum Properties from Charged-Particle Fluctuations ------------------------------------------------------------ Let us now consider the interactions between the laser beam and the magnetic field mediated by fluctuations of particles with charge $\epsilon e$ and mass $m_\epsilon$. For laser frequencies above threshold, $\omega>2m_\epsilon$, pair production becomes possible in the magnetic field, resulting in a depletion of the incoming photon amplitude. The corresponding photon attenuation coefficients $\kappa_{\|,\bot}$ for the two polarization modes are related to the probability exponents $\pi_{\|,\bot}$ by $$\pi_{\|,\bot}=\kappa_{\|,\bot}\, \ell, \label{pikappa}$$ depending linearly on the optical path length $\ell$. Also the time $\tau_{\|,\bot}$ it takes for the photon to traverse the interaction region with the magnetic field exhibits the same dependence, $$\tau_{\|,\bot}=n_{\|,\bot}\, \ell, \label{taun}$$ where $n_{\|,\bot}$ denotes the refractive indices of the magnetized vacuum. ### Dirac Fermions {#sec:MCF} We begin with vacuum polarization and pair production of charged Dirac fermions [@Gies:2006ca], arising from an interaction Lagrangian $$\label{lintDsp} {\mathcal L}_{\rm int}^{\rm Dsp}= \epsilon\, e\, \overline{\psi}_{\epsilon} \gamma_\mu \psi_{\epsilon} A^\mu ,$$ with $\psi_\epsilon$ being a Dirac spinor [(“Dsp”)]{}. Explicit expressions for the photon absorption coefficients $\kappa_{\parallel,\perp}$ can be inferred from the polarization tensor which is obtained by integrating over the fluctuations of the $\psi_\epsilon$ field. This process $\gamma\to \epsilon^+\epsilon^-$ has been studied frequently in the literature for the case of a homogeneous magnetic field [@Toll:1952; @Klepikov:1954; @Erber:1966vv; @Baier:1967; @Klein:1968; @Adler:1971wn; @Tsai:1974fa; @Daugherty:1984tr; @Dittrich:2000zu]: $$\begin{aligned} \label{absorption} \pi_{\|,\bot}^{\rm Dsp} &\equiv\kappa_{\parallel,\perp}^{\rm Dsp}\ell = \frac{1}{2}\epsilon^3 e \alpha \frac{B \ell }{m_\epsilon}\, T_{\parallel,\perp}^{\rm Dsp}(\chi ) \\[1.5ex] \nonumber &= 1.09\times 10^6\ \epsilon^3 \left( \frac{\rm eV}{m_\epsilon} \right) \left( \frac{B}{\rm T}\right) \left( \frac{\ell}{\rm m}\right)\,T_{\parallel,\perp}^{\rm Dsp}(\chi ) ,\end{aligned}$$ where $\alpha = e^2/4\pi$ is the fine-structure constant. 
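The numerical prefactor $1.09\times10^{6}$ in the second line of Eq. (\[absorption\]) is simply the conversion of $\tfrac{1}{2}\,e\,\alpha\,B\ell/m_\epsilon$ into the indicated units; a short check (using the standard conversion factors quoted above):

```python
import numpy as np

alpha = 1.0 / 137.036
e     = np.sqrt(4.0 * np.pi * alpha)     # Heaviside-Lorentz electric charge
T_eV2, m_eVinv = 195.35, 5.068e6         # 1 Tesla in eV^2, 1 meter in eV^-1

# epsilon = 1, m_eps = 1 eV, B = 1 T, l = 1 m
prefactor = 0.5 * e * alpha * T_eV2 * m_eVinv
print(prefactor)                         # ~ 1.09e6, as quoted in Eq. (absorption)
```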
Here, $T_{\parallel,\perp }^{\rm Dsp}(\chi )$ has the form of a parametric integral [@Tsai:1974fa], $$\begin{gathered} \label{absorb} T_{\parallel,\perp}^{\rm Dsp} = \frac{4\sqrt{3}}{\pi\chi} \int\limits_0^1 {\rm d}v\ K_{2/3}\left( \frac{4}{\chi}\frac{1}{1-v^2}\right) \\ \times \frac{\left[ \left( 1-\frac{1}{3}v^2\right)_\parallel, \left(\frac{1}{2} +\frac{1}{6}v^2\right)_\perp \right]}{(1-v^2)}\end{gathered}$$ $\displaystyle = \begin{cases} \sqrt{\frac{3}{2}}\ {\rm e}^{-4/\chi}\ \left[(\frac{1}{2})_\parallel,(\frac{1}{4})_\perp\right] & \text{for} \,\,\chi\ll 1\,\,\text{,} \\ \frac{2\pi}{\Gamma\left(\frac{1}{6}\right)\Gamma\left(\frac{13}{6}\right)} \chi^{-1/3}\left[ (1)_\parallel,(\frac{2}{3})_\perp\right]& \text{for} \,\,\chi\gg 1\,\,\text{,} \end{cases} $ the dimensionless parameter $\chi$ being defined as $$\label{chi} \chi \equiv \frac{3}{2} \frac{\omega}{m_\epsilon} \frac{\epsilon e B}{m_\epsilon^2} = 88.6\ \epsilon\ \frac{\omega}{m_\epsilon}\ \left( \frac{\rm eV}{m_\epsilon}\right)^2 \left( \frac{B}{\rm T}\right) \,.$$ The above expression has been derived in leading order in an expansion for high frequency [[@Toll:1952; @Klepikov:1954; @Erber:1966vv; @Baier:1967; @Klein:1968; @Heinzl:2006pn]]{}, $$\label{semiclhf} \frac{\omega}{2m_\epsilon}\gg 1,$$ and of high number of allowed Landau levels of the millicharged particles [@Daugherty:1984tr], $$\begin{aligned} \nonumber \Delta N_{\rm{p}}\!\!&=&\!\!\frac{\Delta N_{\rm{Landau}}}{2}= \frac{1}{12}\left(\frac{\omega^{2}}{\epsilon\,eB}\right)^{2} \left( \frac{\Delta\omega}{\omega}+ \frac{\Delta B}{2 B}\right) \gg 1 \\[1.5ex]\label{peaks} \Leftrightarrow \epsilon\!\! &\ll &\!\! 4.9\times 10^{-3} \left(\frac{\omega}{\rm{eV}}\right)^{2} \left(\frac{\rm{T}}{B}\right) \left(\frac{\Delta\omega}{\omega} + \frac{\Delta B}{2 B}\right)^{\frac{1}{2}}.\end{aligned}$$ [In the above-mentioned laser polarization experiments, the variation]{} $\Delta\omega/\omega$ is typically small compared to $\Delta B/B\gtrsim 10^{-4}$. Virtual production can occur even below threshold, $\omega<2m_{\epsilon}$. Therefore, we consider both high and low frequencies. As long as Eq.  is satisfied, one has [@Tsai:1975iz] $$\label{refraction} n_{\parallel,\perp}^{\rm Dsp}= 1-\frac{\epsilon^{2}\alpha}{4\pi}\left(\frac{\epsilon\,eB}{m^{2}_{\epsilon}}\right)^{2} I_{\parallel,\perp}^{\rm Dsp}(\chi),$$ with $$\begin{gathered} I_{\parallel,\perp}^{\rm Dsp}(\chi)\!\!=\!\!2^{\frac{1}{3}}\left(\frac{3}{\chi}\right)^{\frac{4}{3}} \int^{1}_{0} {\rm d}v\, \frac{\left[\left(1-\frac{v^2}{3}\right)_{\parallel}, \left(\frac{1}{2}+\frac{v^2}{6}\right)_{\perp}\right]}{(1-v^{2})^{\frac{1}{3}}} \\ \times\tilde{e}^{\prime}_{0}\left[\begin{scriptstyle}- \left(\frac{6}{\chi}\frac{1}{1-v^2}\right)^{\frac{2}{3}}\end{scriptstyle}\right] \label{refrac}\end{gathered}$$ $\displaystyle = \begin{cases} - \frac{1}{45} \left[(14)_\parallel,(8)_\perp\right] & \text{for}\,\,\chi\ll 1\text{,} \\ \frac{9}{7}\frac{\pi^{\frac{1}{2}}2^{\frac{1}{3}} \left(\Gamma(\left(\frac{2}{3}\right)\right)^{2}}{\Gamma\left(\frac{1}{6}\right)} \chi^{-4/3}\left[ (3)_\parallel,(2)_\perp\right]& \text{for}\,\,\chi\gg 1\text{.} \end{cases} $ Here, $\tilde{e}_{0}$ is the generalized Airy function, $$\tilde{e}_{0}(t)=\int^{\infty}_{0}{\rm d}x\,\sin\left(tx-\frac{x^3}{3}\right),$$ and $\tilde{e}^{\prime}_{0}(t)={\rm{d}}\tilde{e}_{0}(t)/{\rm{d}}t$. ### Spin-0 Bosons {#sec:MCB} The optical properties of a magnetized vacuum can also be influenced by fluctuations of charged spin-0 bosons. 
The corresponding interaction Lagrangian is that of scalar QED (index “sc”), $$\label{lintsc} {\mathcal L}^{\rm sc}= -|D_\mu(\epsilon e A) \varphi_\epsilon|^2 - m_\epsilon^2 |\varphi_\epsilon|^2, \quad D_\mu =\partial_\mu -{\text{i}}\epsilon e A_\mu,$$ with $\varphi_\epsilon$ being a complex scalar field. The induced optical properties have not been explicitly computed before in the literature, but can be inferred straightforwardly from the polarization tensor found in [@Schubert:2000yt]. As derived in more detail in appendices \[appA\] and \[appB\], the corresponding results for dichroism and birefringence are similar to the familiar Dirac fermion case, $$\pi_{\|,\bot}^{\rm sc}\equiv \kappa_{\parallel,\perp}^{\rm sc}\ell = \frac{1}{2}\epsilon^3 e \alpha \frac{B \ell }{m_\epsilon}\, T_{\parallel,\perp}^{\rm sc}(\chi ) ,$$ where $$\begin{gathered} \label{eqDB4} T_{\parallel,\perp}^{\text{sc}} = \frac{2\sqrt{3}}{\pi\chi} \int\limits_0^1 {\rm d}v\ K_{2/3}\left( \frac{4}{\chi}\frac{1}{1-v^2}\right) \\ \times \frac{\left[ \left(\frac{1}{3}v^2\right)_\parallel, \left(\frac{1}{2} -\frac{1}{6}v^2\right)_\perp \right]}{(1-v^2)}\end{gathered}$$ $\displaystyle= \begin{cases} \frac{1}{2} \sqrt{\frac{3}{2}}\ {\rm e}^{-4/\chi}\ \left[(0 )_\parallel,(\frac{1}{4})_\perp\right] & \text{for} \,\,\chi\ll 1\,\,\text{,} \\ \frac{\pi}{\Gamma\left(\frac{1}{6}\right)\Gamma\left(\frac{13}{6}\right)} \chi^{-1/3}\left[ (\frac{1}{6})_\parallel,(\frac{1}{2})_\perp\right]& \text{for} \,\,\chi\gg 1\,\,\text{.} \end{cases}$ The zero coefficient in Eq. (\[eqDB4\]) holds, of course, only to leading order in this calculation. We observe that the $\bot$ mode dominates absorption in the scalar case in contrast to the spinor case. Hence, the induced rotation of the laser probe goes into opposite directions in the two cases, bosons and fermions. The refractive indices induced by scalar fluctuations read $$n_{\parallel,\perp}^{\rm sc}=1-\frac{\epsilon^{2}\alpha}{4\pi} \left(\frac{\epsilon\,eB}{m^{2}_{\epsilon}}\right)^{2} I_{\parallel,\perp}^{\rm sc}(\chi),$$ with $$\begin{gathered} I_{\parallel,\perp}^{\text{sc}}(\chi) \!\!=\frac{2^{\frac{1}{3}}}{2}\left(\frac{3}{\chi}\right)^{\frac{4}{3}} \int^{1}_{0} {\rm d}v\, \frac{\left[\left(\frac{v^2}{3}\right)_{\parallel}, \left(\frac{1}{2}-\frac{v^2}{6}\right)_{\perp}\right]} {(1-v^{2})^{\frac{1}{3}}} \\[1.5ex] {\times\tilde{e}^{\prime}_{0} \left[ \begin{scriptstyle}- \left(\frac{6}{\chi}\frac{1}{1-v^2}\right)^{\frac{2}{3}} \end{scriptstyle} \right]} \label{refracsc}\end{gathered}$$ $\displaystyle= \begin{cases} - \frac{1}{90} \left[(1)_\parallel,(7)_\perp\right] & \text{for} \,\,\chi\ll 1\,\,\text{,}\\ \frac{9}{14}\frac{\pi^{\frac{1}{2}}2^{\frac{1}{3}} \left(\Gamma\left(\frac{2}{3}\right)\right)^{2}} {\Gamma\left(\frac{1}{6}\right)} \chi^{-4/3}\left[ (\frac{1}{2})_\parallel,(\frac{3}{2})_\perp\right]& \text{for} \,\,\chi\gg 1\,\,\text{.} \end{cases}$ Again, the polarization dependence of the refractive indices renders the magnetized vacuum birefringent. We observe that the induced ellipticities for the scalar and the spinor case go into opposite directions. In particular, for small $\chi$, the $\bot$ mode is slower for the scalar case, supporting an ellipticity signal which has the same sign as that of Nitrogen[^2]. For the spinor case, it is the other way round. As a nontrivial cross-check of our results for the scalar case, note that the refractive indices for $\chi\ll 1$ precisely agree with the (inverse) velocities computed in Eqs.  
and from the Heisenberg-Euler effective action of scalar QED. We conclude that a careful determination of the signs of ellipticity and rotation in the case of a positive signal can distinguish between spinor and scalar fluctuating particles.[^3] Finally, let us briefly comment on the case of [having both fermions and bosons. If there is]{} an identical number of bosonic and fermionic degrees of freedom with exactly the same masses and millicharges, i.e. if the millicharged particles appear in a supersymmetric fashion in complete supersymmetric chiral multiplets, one can check that the signals cancel. An exactly supersymmetric set of millicharged particles would cause neither an ellipticity signal nor a rotation of the polarization and one would have to rely on other detection principles as, for example, Schwinger pair production in accelerator cavities [@Gies:2006hv]. However, in nature supersymmetry is broken resulting in different masses for bosons and fermions. Now, the signal typically decreases rather rapidly for large masses (more precisely when $\chi\sim 1/m^{3}_{\epsilon}$ becomes smaller than one) and the lighter particle species will give a much bigger contribution. Accordingly, for a sufficiently large mass splitting the signal would look more or less as if we had only the lighter particle species, be it a fermion or a boson. [![image](eps/dependencies.eps){width="0.9\linewidth"}]{} Distinguishing between different Scenarios {#sec3} ========================================== In principle, one can set up a series of different experiments distinguishing between the different scenarios, ALPs or MCPs. For example, a positive signal in a light-shining-through-wall experiment [@Sikivie:1983ip; @Anselm:1986gz; @Gasperini:1987da; @VanBibber:1987rq; @Ruoso:1992nx; @Ringwald:2003ns; @Gastaldi:2006fh; @Pugnat:2005nk; @Rabadan:2005dm; @Cantatore:Patras; @Kotz:2006bw; @Baker:Patras; @Rizzo:Patras; @ALPS] would be a clear signal for the ALP interpretation, whereas detection of a dark current that is able to pass through walls would be a clear signal for the MCP hypothesis [@Gies:2006hv]. But even with a PVLAS-type experiment that measures only the rotation and ellipticity signals, one can collect strong evidence favoring one and disfavoring other scenarios. +:---------------------:+:---------------------:+:---------------------:+ | | $n_\|> n_\bot$ | $ n_\|< n_\bot$ | +-----------------------+-----------------------+-----------------------+ | $\pi_\|> \pi_\bot$ | ALP 0${}^{-}$ or\ | MCP $\frac{1}{2}$ | | | MCP $\frac{1}{2}$ | (large $\chi$) | | | (small $\chi$) | | +-----------------------+-----------------------+-----------------------+ | $\pi_\|< \pi_\bot$ | MCP $0$ (large | ALP 0${}^{+}$ or\ | | | $\chi$) | MCP $0$ (small | | | | $\chi$) | +-----------------------+-----------------------+-----------------------+ : Summary of the allowed particle-physics interpretation arising from a sign analysis of birefringence induced by different refractive indices $n_{\|,\bot}$ and dichroism induced by different probability exponents $\pi_{\|,\bot}$. \[tab1\] Performing one measurement of the absolute values of rotation and ellipticity, one can typically find values for the masses and couplings in all scenarios, such that the predicted rotation and ellipticity is in agreement with the experiment. One clear distinction can already be made by measuring the sign of the ellipticity and rotation signals. 
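The dichroism entries of Table \[tab1\] can be checked directly from the parametric integrals $T^{\rm Dsp}_{\parallel,\perp}$ and $T^{\rm sc}_{\parallel,\perp}$ given in Secs. \[sec:MCF\] and \[sec:MCB\]. The sketch below evaluates them by numerical quadrature and compares with the quoted small- and large-$\chi$ limiting forms; the specific $\chi$ values are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv, gamma

def T_pair(chi, spinor=True):
    """Return (T_parallel, T_perp) for Dirac fermions (spinor=True) or scalars (spinor=False)."""
    pref = (4.0 if spinor else 2.0) * np.sqrt(3.0) / (np.pi * chi)

    def integrand(v, par):
        if v >= 1.0:
            return 0.0
        if spinor:
            c = (1.0 - v**2 / 3.0) if par else (0.5 + v**2 / 6.0)
        else:
            c = (v**2 / 3.0) if par else (0.5 - v**2 / 6.0)
        return kv(2.0 / 3.0, 4.0 / (chi * (1.0 - v**2))) * c / (1.0 - v**2)

    return tuple(pref * quad(integrand, 0.0, 1.0, args=(par,), limit=200)[0]
                 for par in (True, False))

G = 2.0 * np.pi / (gamma(1.0 / 6.0) * gamma(13.0 / 6.0))   # ~ 1.04

for chi in (0.2, 1.0e3):
    for spinor, label in ((True, "Dsp"), (False, "sc")):
        Tpar, Tper = T_pair(chi, spinor)
        print(label, "chi =", chi, " T_par =", Tpar, " T_perp =", Tper)

# limiting forms quoted in the text:
#   chi << 1 : T ~ sqrt(3/2) e^{-4/chi} x [(1/2, 1/4) for Dsp, (0, 1/4) for sc]
#   chi >> 1 : T ~ chi^{-1/3}           x [(G, 2G/3) for Dsp, (G/12, G/4) for sc]
print("small-chi reference (Dsp):", np.sqrt(1.5) * np.exp(-4.0 / 0.2) * np.array([0.5, 0.25]))
print("large-chi reference (Dsp):", (1.0e3) ** (-1.0 / 3.0) * np.array([G, 2.0 * G / 3.0]))
# In all regimes T_par > T_perp for fermions while T_par < T_perp for scalars,
# i.e. pi_par > pi_perp (MCP 1/2) and pi_par < pi_perp (MCP 0), as in Table [tab1].
```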
In the ALP scenario, a measurement of the sign of either the rotation or the ellipticity is sufficient to decide between a scalar or pseudoscalar. Measuring the sign of both signals already is a consistency check; [if the signal signs turn out to be inconsistent, the ALP scenarios for both the scalar and the pseudoscalar would be ruled out]{}. In the MCP scenario, a measurement of the sign of rotation decides between scalars and fermions. If only the sign of the ellipticity signal is measured, both options still remain, since the sign of the ellipticity changes when one moves from large to small masses: the hierarchy of the refractive indices is inverted in the region of anomalous dispersion. But at least the sign tells us if we are in the region of large or small masses, [corresponding to a small or large $\chi$ parameter, cf. Eq. . This sign analysis is summarized in Table \[tab1\].]{} More information can be obtained by varying the parameters of the experiment. In principle, we can vary all experimental parameters appearing in Eqs. , , and : the strength of the magnetic field $B$, the frequency of the laser $\omega$, and the length of the magnetic field inside the cavity ${L}$. Let us start with the magnetic field dependence. For the ALP scenario both rotation and ellipticity signals are proportional to $B^2$, $$\Delta\theta^\text{ALP} \sim B^{2}, \quad \psi^\text{ALP}\sim B^{2}$$ whereas for MCP’s we have $$\begin{aligned} \Delta\theta^\text{MCP}\!\!&\sim&\!\!\bigg\{ \begin{array}{ll} \exp\left(-\frac{\text{const}}{B}\right) & \quad B\,\, \text{small}\\ B^{\frac{2}{3}} & \quad B\,\, \text{large} \end{array} \\\nonumber \psi^\text{MCP}\!\!&\sim&\!\! \bigg\{ \begin{array}{ll} B^2 & \quad B\,\, \text{small}\\ B^{\frac{2}{3}}& \quad B\,\, \text{large}.\\ \end{array}\end{aligned}$$ In the left panels of Fig. \[dependencies\] we demonstrate the different behavior (for the ellipticity signal the $B^{\frac{2}{3}}$-dependence is not yet visible as it appears only at much stronger fields). The model parameters for ALPs and MCPs are chosen such that the absolute value of $\Delta\theta$ and $\psi$ matches the PVLAS results ($\lambda=1064$ nm, $B=5$ T, and $L=1$ m) shown as the crossing of the dotted lines together with their statistical errors. In a similar manner, the signals also depend on the wavelength of the laser light, which is shown in the center panels of Fig. \[dependencies\]. Finally, there is one more crucial difference between the ALP and the MCP scenario. Production of a single particle can occur coherently. This leads to a faster growth of the signal $$\Delta\theta^\text{ALP}\sim {L}^{2},\quad \psi^\text{ALP}\sim{L}^{2} \quad\,\, {L}\,\,\text{small}.$$ In the MCP scenario, however, the produced particles are essentially lost and we have only a linear dependence on the length of the interaction region, $$\Delta\theta^\text{MCP}\sim{L},\quad\psi^\text{MCP}\sim{L}.$$ This is shown in the right panels of Fig. \[dependencies\]. We conclude that studying the dependence of the signal on the parameters of the experiment can give crucial information to decide between the ALP and MCP scenarios, as we will also see in the following section. 
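The different magnetic-field dependence can also be phrased in terms of the local logarithmic slope $d\ln\Delta\theta/d\ln B$: it equals $2$ for the ALP signal, while for MCPs the quoted small-$\chi$ form $\Delta\theta\propto B\,e^{-4/\chi}$ with $\chi\propto B$ gives a much steeper slope $1+4/\chi$, flattening to $2/3$ for $\chi\gg1$. A toy sketch of this behaviour (the normalization of $\chi=cB$ is illustrative only):

```python
import numpy as np

def log_slope(f, B, h=1.0e-4):
    """Local logarithmic derivative d ln f / d ln B by central differences."""
    return (np.log(f(B * (1 + h))) - np.log(f(B * (1 - h)))) / (2 * h)

alp = lambda B: B**2                      # ALP: Delta theta ~ B^2  ->  slope 2
c   = 0.5                                 # illustrative normalization, chi = c*B
mcp = lambda B: B * np.exp(-4.0 / (c * B))  # MCP, chi << 1  ->  slope 1 + 4/chi

for B in (0.5, 1.0, 2.0):
    print(B, log_slope(alp, B), log_slope(mcp, B), 1.0 + 4.0 / (c * B))
# for chi >> 1 the MCP rotation instead scales as B^(2/3) (slope 2/3)
```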
[ccc]{}\ \ &&\ $254$&$0.35$&$0.30$\ $34$&$0.26$&$0.11$\ \ &&\ $578$&$40.0$&$11.0$\ $34$&$1.60$&$0.44$\ \ &\ $0$&\ $\nicefrac{\pi}{2}$&\ Confrontation with Data {#sec4} ======================= In this Section, we want to confront the prediction of the ALP and MCP scenarios for vacuum magnetic dichroism, birefringence, and photon regeneration with the corresponding data from the BFRT [@Cameron:1993mr] and PVLAS [@Zavattini:2005tm; @PVLASICHEP; @Cantatore:IDM2006] collaborations, as well as from the Q&A experiment [@Chen:2006cd]. The corresponding experimental findings are summarized in Tables \[BFRTresults\], \[PVLASresults\], and \[QandAresults\], respectively. [cc]{}\ \ &\ $1064$&$3.9\pm0.2$\ $532$&$6.3\pm1.0$ [**(preliminary)**]{}\ \ &\ $1064$&$-3.4\pm0.3$ [**(preliminary)**]{}\ $532$&$-6.0\pm0.6$ [**(preliminary)**]{}\ [ccc]{}\ \ &\ $18700$&\ In the following we combine these results in a simple statistical analysis. For simplicity, we assume that the likelihood function $L_i$ of the rotation, the ellipticity and the photon regeneration rate follows a Gaussian distribution in each measurement $i$ with mean value and standard deviation as indicated in Tables \[BFRTresults\]-\[QandAresults\]. In the case of the BFRT upper limits, we approximate the likelihood functions by[^4] $L\propto\exp((\psi-\psi_\text{hypo})^2/(2\psi_\text{noise}^2))$. Taking these inputs as statistically independent values we can estimate the combined log-likelihood function as $\ln L \approx \sum_i \ln L_i$ [@Yao:2006px]. With these assumptions the method of maximum likelihood is equivalent to the method of least squares with $\chi^2=\text{const} - 2\sum_i\ln L_i$. A more sophisticated statistical analysis is beyond the scope of this work and requires detailed knowledge of the data analysis. ![image](eps/alp_pseudo_QA.eps){width="0.48\linewidth"}![image](eps/alp_scalar_QA.eps){width="0.48\linewidth"} ![image](eps/mcp_fermion_QA.eps){width="0.48\linewidth"}![image](eps/mcp_scalar_QA.eps){width="0.48\linewidth"} ALP hypothesis -------------- [c||c|c|c|c]{} $\chi^2/$d.o.f. & ALP 0${}^{-}$ & ALP 0${}^{+}$ & MCP $\frac{1}{2}$ & MCP 0\ BFRT, PVLAS, Q&A published data\ (d.o.f.$=6$) & 1.3 & 0.8 & 7.4 & 7.3\ + PVLAS\ preliminary data\ (d.o.f.$=9$) & 62.0 & 6.3 & 15.7 & 12.0\ only PVLAS\ pub. + prelim. data\ (d.o.f.$=2$) & 118.4 & 18.9 & 40.0 & 15.7 Figure \[ALPfit\] shows the results of a fit based on the pseudoscalar (left panels) or scalar (right panels) ALP hypothesis. The BFRT upper limits[^5] are shown by blue-shaded regions. The Q&A upper rotation limit is depicted as a gray-shaded region, but this limit exerts little influence on the global fit in the ALP scenario. The PVLAS results are displayed as green bands according to the $5\sigma$ confidence level (C.L.) with dark green corresponding to published data and light green corresponding to preliminary results. The resulting allowed parameter regions at 5$\sigma$ CL are depicted as red-filled islands or bands. Both upper panels show the result from all published data of all three experiments. Here, the results for scalar or pseudoscalar ALPs are very similar: in addition to the allowed 5$\sigma$ region at $m_\phi\simeq 1\dots 2\times 10^{-3}$ eV also reported by PVLAS [@Zavattini:2005tm], we observe further allowed islands for larger mass values. The $\chi^2$/d.o.f. (degrees of freedom) values for the fits are both acceptable with a slight preference for the scalar ALP ($\chi^2$/d.o.f.=0.8) in comparison with the pseudoscalar ALP ($\chi^2$/d.o.f.=1.3), cf. 
Table \[tab2\]. This degeneracy between the scalar and the pseudo-scalar ALP scenario is lifted upon the inclusion of the preliminary PVLAS data (center panels), since the negative sign of the birefringence signal with $n_\|<n_\bot$ strongly prefers the scalar ALP scenario. In addition, the size of the preliminary ellipticity result is such that the higher mass islands are ruled out, and the low mass island settles around $m_{\phi}\simeq 10^{-3}$ eV and $g\simeq 2\times 10^{-6}$ GeV${}^{-1}$. The results from a fit to PVLAS data only (published and preliminary) as displayed in the lower panels of Fig. \[ALPfit\] remain similar. MCP hypothesis -------------- Figure \[MCPfit\] shows the results of a fit based on the fermionic (left panels) or scalar (right panels) MCP hypothesis. The MCP hypothesis gives similar results for scalars and fermions if only the published data is included in the fit (upper panels). MCP masses $m_{\epsilon}$ larger than 0.1 eV are ruled out by the upper limits of BFRT. But the 5$\sigma$ CL region shows a degeneracy towards smaller masses. It is interesting to observe that the available Q&A data already approaches the ballpark of the PVLAS rotation signal in the light of the MCP hypothesis, whereas it is much less relevant for the ALP hypothesis. Including the PVLAS preliminary data, the fit for fermionic MCPs becomes different from the scalar MCP case: because of the negative sign of the birefringence signal, only the large-$\chi$/small-$m_{\epsilon}$ branch remains acceptable for the fermionic MCP, whereas the small-$\chi$/large-$m_\epsilon$ branch is preferred by the scalar MCP, cf. Table \[tab1\]. A $\chi^2/$d.o.f. comparison between the fermionic MCP ($\chi^2/$d.o.f.$=15.7$) and the scalar MCP ($\chi^2/$d.o.f.$=12.0$) points to a slight preference for the scalar MCP scenario. This preference is much more pronounced in the fit to the PVLAS data (published + preliminary) only, cf. Table \[tab2\]. The best MCP candidate would therefore be a scalar particle with mass $m_\epsilon\simeq0.07$ eV and charge parameter $\epsilon\simeq 2\times 10^{-6}$. ALP vs. MCP ----------- Let us first stress that the partly preliminary status of the data used for our analysis does not yet allow for a clear preference of either of the two scenarios, ALP or MCP. Based on the published data only, the ALP scenarios give a better fit, since the upper limits by BFRT and Q&A leave an unconstrained parameter space open to the PVLAS rotation data. By contrast, the BFRT and Q&A upper limits already begin to restrict the MCP parameter space of the PVLAS rotation signal in a sizable manner, which explains the better $\chi^2$/d.o.f. for the ALP scenario. Based on the (in part preliminary) PVLAS data alone, the MCP scenario would be slightly preferred in comparison with the ALP scenario, see Table \[tab2\], bottom row. The reason is that the PVLAS measurements of birefringence and rotation for the different laser wavelengths show a better internal compatibility in the scalar MCP case than in the scalar ALP scenario. Conclusions =========== The signal observed by PVLAS – a rotation of linearly polarized laser light induced by a transverse magnetic field – has generated a great deal of interest over the recent months. Since the signal has found no explanation within standard QED or from other standard-model sectors, it could be the first direct evidence of physics beyond the standard model. The proposed attempts to explain this result fall into two categories:\ 1. 
conversion of laser photons into a single neutral spin-0 particle (scalar or pseudoscalar) coupled to two photons (called axion-like particle or ALP) and\ 2. pair production of fermions or bosons with a small electric charge (millicharged particles or MCPs).\ The corresponding actions associated with these two proposals should be viewed as pure low-energy effective field theories which are valid at laboratory scales at which the experiments operate. A naive extrapolation of these theories to higher scales generically becomes incompatible with astrophysical bounds. In this paper, we have compared the different low-energy effective theories in light of the presently available data from optical experiments. We have summarized the formulas for rotation and ellipticity in the different scenarios and contributed new results for millicharged scalars. We have then studied how optical experiments can provide for decisive information to discriminate between the different scenarios: this information can be obtained in the form of size and sign of rotation and ellipticity and their dependence on experimental parameters like the strength of the magnetic field, the wavelength of the laser and the length of the magnetic region. Our main results are depicted in Figs. \[ALPfit\] and \[MCPfit\] which show the allowed parameter regions for the different scenarios. On the basis of the published data, none of the scenarios can currently be excluded. The remaining open parameter regions should be regarded as good candidates for the target regions of future experiments. As the preliminary PVLAS data illustrates, near future optical measurements can further constrain the parameter space and even decide between the different scenarios. For instance, a negative ellipticity $n_\|<n_\bot$ together with a rotation corresponding to probability exponents $\pi_\|>\pi_\bot$ would rule out the scalar or pseudo-scalar ALP interpretation altogether. Be it from optical experiments like PVLAS or from the proposed “light/dark current shining through a wall” experiments, we will soon know more about the particle interpretation of PVLAS. Acknowledgments =============== The authors would like to thank Stephen L. Adler, Giovanni Cantatore, Walter Dittrich, Angela Lepidi, Axel Lindner, Eduard Masso, and Giuseppe Ruoso for insightful discussions. H.G. acknowledges support by the DFG under contract Gi 328/1-3 (Emmy-Noether program). Birefringence in the small-$\omega$ limit: effective action approach {#appA} ==================================================================== [ Since the sign of the ellipticity signaling birefringence can be a decisive piece of information, distinguishing between the spin properties of the new hypothetical particles, let us check our results with the effective-action approach [@Dittrich:2000zu].]{} Since the formulas in this appendix are equally valid for the MCP scenario as well as standard QED, we denote the coupling and mass of the fluctuating particle with ${\tilde{\alpha}}$, or ${\tilde{e}}$, and ${\tilde{m}}$ with the dictionary: $$\begin{aligned} \text{MCP:}&&\quad {\tilde{e}}=\epsilon e,\quad {\tilde{\alpha}}=\epsilon^2 \alpha,\quad {\tilde{m}}=m_\epsilon, \nonumber\\ \text{QED:}&&\quad {\tilde{e}}= e,\quad {\tilde{\alpha}}= \alpha,\quad {\tilde{m}}=m_e. 
\label{preEq1} \end{aligned}$$ The effective action in one-loop approximation can be written as $$\Gamma[A]=S_{\text{cl}}[A]+\Gamma^1[A] = -\int_x \mathcal F + \Gamma^1[A], \label{Eq1}$$ where we have introduced the field-strength invariant $\mathcal F$ corresponding to the Maxwell action. The two possible invariants are $$\mathcal F = \frac{1}{4} F_{\mu\nu} F^{\mu\nu}=\frac{1}{2} (\vec{B}^2 - \vec{E}^2), \quad \mathcal G = \frac{1}{4} F_{\mu\nu} {\widetilde{F}}^{\mu\nu} = -\vec{E}\cdot \vec{B}. \label{eq2}$$ with ${\widetilde{F}}_{\mu\nu}=\frac{1}{2} \epsilon_{\mu\nu\kappa\lambda} F^{\kappa\lambda}$. Also useful are the two secular invariants $a,b$, corresponding to the eigenvalues of the field strength tensor, $$a=\sqrt{\sqrt{\mathcal{F}^2+\mathcal{G}^2}+\mathcal{F}},\quad b=\sqrt{\sqrt{\mathcal{F}^2+\mathcal{G}^2}-\mathcal{F}}, \label{eq3}$$ with the inverse relations $$|\mathcal{G}|=ab, \quad \mathcal{F}=\frac{1}{2} (a^2-b^2).\label{eq4}$$ Let us start with the fermion-induced effective action, i.e., the classic Heisenberg-Euler effective action. The one-loop contribution reads $$\begin{gathered} \Gamma^1_{\text{Dsp}} = \frac{1}{8\pi^2} \int_x \int_0^\infty \frac{ds}{s^3}\, {\text{e}}^{-{\text{i}}{\tilde{m}}^2 s}\\ \times\!\left( {\tilde{e}}as \cot ({\tilde{e}}as)\, {\tilde{e}}bs \coth({\tilde{e}}bs) + \frac{2}{3} ({\tilde{e}}s)^2\mathcal{F} -1 \right).\end{gathered}$$ Expanding this action to quartic order in the field strength results in $$\Gamma^1_{\text{Dsp}} = \int_x \left( c^{\text{Dsp}}_{\bot}\, \mathcal{F}^2 +c^{\text{Dsp}}_\| \mathcal{G}^2 \right), \label{eq5}$$ where the constant prefactors read $$c^{\text{Dsp}}_\bot=\frac{8}{45} \frac{{\tilde{\alpha}}^2}{{\tilde{m}}^4}, \quad c^{\text{Dsp}}_\|=\frac{14}{45} \frac{{\tilde{\alpha}}^2}{{\tilde{m}}^4}. \label{eq6}$$ It is straightforward to derive the modified Maxwell equations from [Eq. ]{}. From these, the dispersion relations for the two polarization eigenmodes of a plane-wave field in an external magnetic field can be determined [@Dittrich:2000zu], yielding the phase velocities in the low-frequency limit, $$v_\bot =1 -c^{\text{Dsp}}_\bot B^2 \sin^2 \theta_B,\quad v_\| =1-c^{\text{Dsp}}_\| B^2 \sin^2 \theta_B.\label{eq7}$$ Obviously, the $\bot$ mode is slightly faster than the $\|$ mode, since the coefficient $c^{\text{Dsp}}_\bot <c^{\text{Dsp}}_\|$. Next we turn to the effective action which is induced by charged scalar fluctuations, i.e., the Heisenberg-Euler effective action for scalar QED. The one-loop contribution now reads $$\begin{gathered} \Gamma^1_{\text{sc}} =- \frac{1}{16\pi^2} \int_x \int_0^\infty \frac{ds}{s^3}\, {\text{e}}^{-{\text{i}}{\tilde{m}}^2 s}\\ \times\!\left(\frac{ {\tilde{e}}as}{ \sin ({\tilde{e}}as)}\,\frac{ {\tilde{e}}bs}{ \sinh({\tilde{e}}bs)} - \frac{1}{3} ({\tilde{e}}s)^2\mathcal{F} -1 \right).\label{eqB1}\end{gathered}$$ There are three differences to the fermion-induced action: the minus sign arises from Grassmann integration in the fermionic case. The factor of 1/2 comes from the difference between a trace over a complex scalar and that over a Dirac spinor. The replacement of $\cot$ and $\coth$ by inverse $\sin$ and $\sinh$ is due to the Pauli spin-field coupling in the fermionic case. 
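The quartic coefficients quoted above for the spinor case, $c^{\text{Dsp}}_\bot=8/45$ and $c^{\text{Dsp}}_\|=14/45$ in units of ${\tilde{\alpha}}^2/{\tilde{m}}^4$, can be cross-checked symbolically. The sketch below (our own check with SymPy, using the Euclidean proper-time integral $\int_0^\infty \mathrm{d}\tau\,\tau\, e^{-{\tilde{m}}^2\tau}=1/{\tilde{m}}^4$ after the rotation $s\to -\mathrm{i}\tau$) expands the proper-time integrand to quartic order in the fields; the scalar coefficients $7/90$ and $1/90$ can be verified in the same way.

```python
import sympy as sp

# Sketch: cross-check the spinor weak-field coefficients c_perp = 8/45 and
# c_par = 14/45 (in units alpha^2/m^4) by expanding the Heisenberg-Euler
# proper-time integrand to O(s^4).  After s -> -i*tau the proper-time integral
# over the quartic piece gives int_0^oo dtau tau e^{-m^2 tau} * (-1) = -1/m^4.

a, b, s, e, m, F, G = sp.symbols('a b s e m F G', positive=True)

x, y = e * a * s, e * b * s
calF = (a**2 - b**2) / 2                                   # invariant F in terms of a, b
integrand = x * sp.cot(x) * y * sp.coth(y) + sp.Rational(2, 3) * (e * s)**2 * calF - 1

quartic = sp.expand(sp.series(integrand, s, 0, 5).removeO()).coeff(s, 4)

# rewrite through the invariants: a^2 = sqrt(F^2+G^2) + F,  b^2 = sqrt(F^2+G^2) - F
R = sp.sqrt(F**2 + G**2)
quartic_FG = sp.expand(quartic.subs({a: sp.sqrt(R + F), b: sp.sqrt(R - F)}))

prefactor = -sp.Rational(1, 8) / (sp.pi**2 * m**4)         # 1/(8 pi^2) times (-1/m^4)
alpha = e**2 / (4 * sp.pi)

c_perp = sp.simplify(prefactor * quartic_FG.coeff(F, 2) / (alpha**2 / m**4))
c_par = sp.simplify(prefactor * quartic_FG.coeff(G, 2) / (alpha**2 / m**4))
print(c_perp, c_par)                                       # -> 8/45  14/45
```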
Expanding the scalar-induced action to quartic order in the field strength results in $$\Gamma^1_{\text{sc}} = \int_x \left( c^{\text{sc}}_{\bot}\, \mathcal{F}^2 +c^{\text{sc}}_\| \mathcal{G}^2 \right), \label{eqB2}$$ where the constant prefactors this time read $$c^{\text{sc}}_\bot=\frac{7}{90} \frac{{\tilde{\alpha}}^2}{{\tilde{m}}^4}, \quad c^{\text{sc}}_\|=\frac{1}{90} \frac{{\tilde{\alpha}}^2}{{\tilde{m}}^4}. \label{eqB3}$$ The velocities of the two polarization modes then read $$v_\bot =1 -c^{\text{sc}}_\bot B^2 \sin^2 \theta_B,\quad v_\| =1-c^{\text{sc}}_\| B^2 \sin^2 \theta_B.\label{eqB4}$$ This time, the $\bot$ mode is significantly slower than the $\|$ mode, since the order of the coefficients is now reversed, $c^{\text{sc}}_\bot >c^{\text{sc}}_\|$. In a birefringence experiment, the induced ellipticity in the two cases is different in magnitude as well as in sign. Already at this stage, we can expect that the same difference will also be visible in the dichroism. At higher frequencies, the slower mode necessarily has to exhibit a stronger anomalous dispersion. By virtue of dispersion relations, we can expect that this goes along with a larger attenuation coefficient. As a result, the direction of the induced rotation will be opposite for the two cases, as is confirmed by the explicit result in Sect. \[sec:MCB\].

Polarization tensors {#appB}
====================

The polarization tensor in an external constant magnetic field can be decomposed into $$\Pi^{\mu\nu}(k|B) = \Pi_0\, P_{0}^{\mu\nu}+\Pi_\|\, P_\|^{\mu\nu} + \Pi_\bot\, P_\bot^{\mu\nu}, \label{eqP1}$$ where the $P_i$ denote orthogonal projectors, and only the $\|,\bot$ components are relevant for the dichroism and birefringence experiments; the corresponding projectors $P_{\|,\bot}$ refer to the polarization eigenmodes discussed in the main text [@Dittrich:2000zu; @Gies:1999vb]. [Dropping terms of higher order in the light cone deformation $k^2\simeq 0$ as a self-consistent approximation,]{} the coefficient functions can be written as $$\Pi_{\|,\bot}=-\omega^2\sin^2 \theta_B \frac{\alpha}{4\pi} \left( \begin{array}{c} -2 \\ 1 \end{array} \right) \int\limits_0^\infty \frac{d s}{s} \int\limits_{-1}^1 \frac{d\nu}{2} {\text{e}}^{-{\text{i}}s \phi_0} \, N_{\|,\bot},\label{111236}$$ where the upper component holds for the spinor case and the lower for the scalar case. The phase reads in both cases $$\begin{aligned} \nonumber \phi_0&={\tilde{m}}^2 -\omega^2 \sin^2 \theta_B \left( \frac{1-\nu^2}{4} -\frac{1}{2} \frac{\cos \nu {\tilde{e}}Bs -\cos {\tilde{e}}Bs}{{\tilde{e}}Bs \sin {\tilde{e}}Bs} \right)\\ &\simeq {\tilde{m}}^2 +\omega^2\sin^2\theta_B \frac{(1-\nu^2)^2}{48} \, ({\tilde{e}}Bs )^2. \label{111235}\end{aligned}$$ For completeness, let us list the integrand functions of the spinor case first, $$\begin{aligned} N_\|^{\text{Dsp}}& =\frac{{\tilde{e}}Bs \cos \nu {\tilde{e}}Bs}{\sin {\tilde{e}}Bs}\nonumber \\ &-{\tilde{e}}Bs \cot {\tilde{e}}Bs \left( 1- \nu^2 +\nu \frac{\sin \nu {\tilde{e}}Bs}{\sin {\tilde{e}}Bs} \right), \nonumber \\ N_\bot^{\text{Dsp}}& =-\frac{{\tilde{e}}Bs \cos \nu {\tilde{e}}Bs}{\sin {\tilde{e}}Bs} +\frac{\nu {\tilde{e}}Bs \,\sin\nu {\tilde{e}}Bs \,\cot {\tilde{e}}Bs}{\sin {\tilde{e}}Bs} \nonumber \\ &+ \frac{2{\tilde{e}}Bs (\cos\nu {\tilde{e}}Bs -\cos {\tilde{e}}Bs)}{\sin^3 {\tilde{e}}Bs}.
\label{111237}\end{aligned}$$ The corresponding lowest-order expansions in $\tilde{e}Bs$ which are relevant for the desired approximation are $$\begin{aligned} N_\|^{\text{Dsp}}&=\frac{1}{2}(1-\nu^2)\left( 1-\frac{1}{3}\nu^2\right)\, ({\tilde{e}}Bs)^2, \nonumber\\ N_\bot^{\text{Dsp}}&= \frac{1}{2}(1-\nu^2) \left( \frac{1}{2} +\frac{1}{6} \nu^2 \right)\, ({\tilde{e}}Bs)^2. \label{InsAbs8}\end{aligned}$$ [Inserting these expansions into [Eq. ]{}, the parameter integrations can be performed, resulting in the expressions listed in Sect. \[sec:MCF\].]{} Note that the expansion coefficients in [Eq. ]{} also pop up in the final result for the absorption coefficients and the refractive indices, see below. The corresponding integrand functions for the scalar case read[^6] [@Schubert:2000yt] $$\begin{aligned} N_\|^{\text{sc}}=& -\frac{{\tilde{e}}Bs}{\sin {\tilde{e}}Bs} \left( - \nu^2 +\nu \frac{\sin \nu {\tilde{e}}Bs}{\sin {\tilde{e}}Bs} \right), \label{eqP2}\\ N_\bot^{\text{sc}}=& +\frac{\nu {\tilde{e}}Bs \,\sin\nu {\tilde{e}}Bs }{\sin^2 {\tilde{e}}Bs} \nonumber\\&- \frac{{\tilde{e}}Bs}{\sin^3 {\tilde{e}}Bs} \left( 1+ \cos^2 eBs -2 \cos {\tilde{e}}Bs \cos \nu {\tilde{e}}Bs \right). \nonumber\end{aligned}$$ The corresponding expansions are $$\begin{aligned} N_\|^{\text{sc}}&=&- \frac{1}{2} (1-\nu^2)\, \left(\frac{1}{3} \nu^2 \right) \,({\tilde{e}}Bs)^2, \label{eqP3}\\ N_\bot^{\text{sc}}&=& -\frac{1}{2} (1-\nu^2) \left(\frac{1}{2} - \frac{1}{6} \nu^2\right)\, ({\tilde{e}}Bs)^2. \nonumber\end{aligned}$$ The overall minus sign difference between Eqs.  and will be used to cancel the minus sign difference between the scalar and the spinor case in [Eq. ]{}. Apart from the overall factor of 2, the desired formulas for the scalar case can be directly constructed from the spinor case by simple replacements as suggested by a comparison between Eqs.  and . With the findings of this section, we can directly obtain the results for the photon absorption coefficients and refractive indices as given in the main text. Rotation and Ellipticity at BFRT {#appC} ================================ The BFRT experiment uses a magnetic field with time-varying amplitude $B=B_0+\Delta B\cos(\omega_m t+\phi_m)$. The measured rotation and ellipticity correspond to the [Fourier coefficient of the]{} light intensity at frequency $\omega_m$. [To a good accuracy, the Fourier coefficient can be read off from the first-order Taylor expansion of the optical functions with respect to $\Delta B$.]{} The rotation effect for fermionic MCPs linear to $\cos(\omega_mt+\phi_m)$ is given by Eqs. (\[delthet\]) and (\[absorption\]) for $B=B_0$ and $\chi_0 = \chi(B_0)$ with $$\label{absorb_linear} T_{\parallel,\perp}^\text{Dsp} = \frac{4\sqrt{3}}{\pi\chi_0} \int\limits_0^1 {\rm d}v\ \frac{\Delta B}{B_0}\left[\left(\frac{4}{\chi_0}\frac{1}{1-v^2}\right)K_{5/3}\left( \frac{4}{\chi_0}\frac{1}{1-v^2}\right)-\frac{2}{3}K_{2/3}\left( \frac{4}{\chi_0}\frac{1}{1-v^2}\right)\right]\times \frac{\left[ \left( 1-\frac{1}{3}v^2\right)_\parallel, \left(\frac{1}{2} +\frac{1}{6}v^2\right)_\perp \right]}{(1-v^2)}.$$ The linear term for the ellipticity is given by Eq. 
(\[psi\]) and (\[refraction\]) for $B=B_0$ with $$\label{refrac_linear} I_{\parallel,\perp}^\text{Dsp}\!\!=\!\!2^{\frac{1}{3}}\left(\frac{3}{\chi_0}\right)^{\frac{4}{3}} \int^{1}_{0} {\rm d}v\, \frac{2}{3}\frac{\Delta B}{B_0}\left[\tilde{e}^{\prime}_{0}\left[\begin{scriptstyle}- \left(\frac{6}{\chi_0}\frac{1}{1-v^2}\right)^{\frac{2}{3}}\end{scriptstyle}\right]+\left(\frac{6}{\chi_0}\frac{1}{1-v^2}\right)^{\frac{2}{3}}\tilde{e}^{\prime\prime}_{0}\left[\begin{scriptstyle}- \left(\frac{6}{\chi_0}\frac{1}{1-v^2}\right)^{\frac{2}{3}}\end{scriptstyle}\right]\right]\times\frac{\left[\left(1-\frac{v^2}{3}\right)_{\parallel}, \left(\frac{1}{2}+\frac{v^2}{6}\right)_{\perp}\right]}{(1-v^{2})^{\frac{1}{3}}}.$$ The corresponding equations in the case of scalar MCPs are analogous. [99]{} Y. Semertzidis [*et al.*]{} \[BFRT Collaboration\], Phys. Rev. Lett.  [**64**]{}, 2988 (1990). R. Cameron [*et al.*]{} \[BFRT Collaboration\], Phys. Rev. D [**47**]{}, 3707 (1993). E. Zavattini [*et al.*]{} \[PVLAS Collaboration\], Phys. Rev. Lett.  [**96**]{}, 110406 (2006) \[arXiv:hep-ex/0507107\]. U. Gastaldi, on behalf of the PVLAS Collaboration, talk at ICHEP‘06, Moscow,\ http://ichep06.jinr.ru/reports/42\_1s2\_13p10\_gastaldi.ppt G. Cantatore for the PVLAS Collaboration, “Laser production of axion-like bosons: progress in the experimental studies at PVLAS,” talk presented at the 6th International Workshop on the Identification of Dark Matter (IDM 2006), Island of Rhodes, Greece, 11–16th September, 2006, http://elea.inp.demokritos.gr/idm2006\_files/talks/Cantatore-PVLAS.pdf S. L. Adler, hep-ph/0611267. S. Biswas and K. Melnikov, hep-ph/0611345. J. T. Mendonca, J. Dias de Deus and P. Castelo Ferreira, Phys. Rev. Lett.  [**97**]{}, 100403 (2006) \[arXiv:hep-ph/0606099\]. L. Maiani, R. Petronzio and E. Zavattini, Phys. Lett. B [**175**]{}, 359 (1986). S. Weinberg, Phys. Rev. Lett.  [**40**]{}, 223 (1978). F. Wilczek, Phys. Rev. Lett.  [**40**]{}, 279 (1978). R. D. Peccei and H. R. Quinn, Phys. Rev. Lett.  [**38**]{}, 1440 (1977). R. D. Peccei and H. R. Quinn, Phys. Rev. D [**16**]{}, 1791 (1977). W. A. Bardeen and S. H. Tye, Phys. Lett. B [**74**]{}, 229 (1978). D. B. Kaplan, Nucl. Phys. B [**260**]{}, 215 (1985). M. Srednicki, Nucl. Phys. B [**260**]{}, 689 (1985). E. Masso and J. Redondo, JCAP [**0509**]{}, 015 (2005) \[arXiv:hep-ph/0504202\]. P. Jain and S. Mandal, astro-ph/0512155. J. Jaeckel, E. Masso, J. Redondo, A. Ringwald and F. Takahashi, arXiv:hep-ph/0605313; hep-ph/0610203. E. Masso and J. Redondo, Phys. Rev. Lett.  [**97**]{}, 151802 (2006) \[arXiv:hep-ph/0606163\]. R. N. Mohapatra and S. Nasri, arXiv:hep-ph/0610068. P. Jain and S. Stokes, hep-ph/0611006. G. G. Raffelt, Stars As Laboratories For Fundamental Physics: The Astrophysics of Neutrinos, Axions, and other Weakly Interacting Particles, University of Chicago Press, Chicago, 1996. K. Zioutas [*et al.*]{} \[CAST Collaboration\], Phys. Rev. Lett.  [**94**]{}, 121301 (2005) \[arXiv:hep-ex/0411033\]. A. Dupays, E. Masso, J. Redondo and C. Rizzo, hep-ph/0610286. H. Gies, J. Jaeckel and A. Ringwald, Phys. Rev. Lett.  [**97**]{}, 140402 (2006) \[arXiv:hep-ph/0607118\]. S. Davidson, S. Hannestad and G. Raffelt, JHEP [**0005**]{}, 003 (2000) \[arXiv:hep-ph/0001179\]. B. Holdom, Phys. Lett. B [**166**]{}, 196 (1986). S. A. Abel, J. Jaeckel, V. V. Khoze and A. Ringwald, hep-ph/0608248. S. J. Chen, H. H. Mei and W. T. Ni \[Q& A Collaboration\], hep-ex/0611050. P. Sikivie, Phys. Rev. Lett.  [**51**]{}, 1415 (1983) \[Erratum-ibid.  [**52**]{}, 695 (1984)\]. A. A. 
Anselm, Yad. Fiz.  [**42**]{}, 1480 (1985). M. Gasperini, Phys. Rev. Lett.  [**59**]{}, 396 (1987). K. Van Bibber, N. R. Dagdeviren, S. E. Koonin, A. Kerman and H. N. Nelson, Phys. Rev. Lett.  [**59**]{}, 759 (1987). G. Ruoso [*et al.*]{} \[BFRT Collaboration\], Z. Phys. C [**56**]{}, 505 (1992). A. Ringwald, Phys. Lett. B [**569**]{}, 51 (2003). U. Gastaldi, hep-ex/0605072. P. Pugnat [*et al.*]{}, Czech. J. Phys.  [**55**]{}, A389 (2005); Czech. J. Phys.  [**56**]{}, C193 (2006). R. Rabadan, A. Ringwald and K. Sigurdson, Phys. Rev. Lett.  [**96**]{}, 110407 (2006). G. Cantatore \[PVLAS Collaboration\], 2nd ILIAS-CERN-CAST Axion Academic Training 2006, http://cast.mppmu.mpg.de/ U. Kötz, A. Ringwald and T. Tschentscher, hep-ex/0606058. K. Baker \[LIPSS Collaboration\], 2nd ILIAS-CERN-CAST Axion Academic Training 2006, http://cast.mppmu.mpg.de/ C. Rizzo \[BMV Collaboration\], 2nd ILIAS-CERN-CAST Axion Academic Training 2006, http://cast.mppmu.mpg.de/ K. Ehret [*et al.*]{} \[ALPS Collaboration\], LoI subm. to DESY directorate. A. Dupays, C. Rizzo, M. Roncadelli and G. F. Bignami, Phys. Rev. Lett.  [**95**]{}, 211302 (2005) \[arXiv:astro-ph/0510324\]. T. Heinzl [*et al.*]{}, hep-ph/0601076. H. Gies, J. Jaeckel and A. Ringwald, Europhys. Lett. [**76**]{}, 794 (2006) \[arXiv:hep-ph/0608238\]. M. I. Dobroliubov and A. Y. Ignatiev, Phys. Rev. Lett.  [**65**]{}, 679 (1990). T. Mitsui, R. Fujimoto, Y. Ishisaki, Y. Ueda, Y. Yamazaki, S. Asai and S. Orito, Phys. Rev. Lett.  [**70**]{}, 2265 (1993). A. Badertscher [*et al.*]{}, hep-ex/0609059. A. Rubbia, Int. J. Mod. Phys. A [**19**]{}, 3961 (2004). P. A. Vetter, Int. J. Mod. Phys. A [**19**]{}, 3865 (2004). G. Raffelt and L. Stodolsky, Phys. Rev. D [**37**]{}, 1237 (1988). J. S. Toll, Ph.D. thesis, Princeton Univ., 1952 (unpublished). N. P. Klepikov, Zh. Eksp. Teor. Fiz. [**26**]{}, 19 (1954). T. Erber, Rev. Mod. Phys.  [**38**]{}, 626 (1966). V. N. Baier and V. M. Katkov, Zh. Eksp. Teor. Fiz. [**53**]{}, 1478 (1967) \[Sov. Phys. JETP [**26**]{}, 854 (1968)\]. J. J. Klein, Rev. Mod. Phys.  [**40**]{}, 523 (1968). S. L. Adler, Annals Phys.  [**67**]{}, 599 (1971). W. y. Tsai and T. Erber, Phys. Rev. D [**10**]{}, 492 (1974). J. K. Daugherty and A. K. Harding, Astrophys. J.  [**273**]{}, 761 (1983). W. Dittrich and H. Gies, Springer Tracts Mod. Phys.  [**166**]{}, 1 (2000). T. Heinzl and O. Schroeder, J. Phys. A [**39**]{}, 11623 (2006) \[arXiv:hep-th/0605130\]. W. y. Tsai and T. Erber, Phys. Rev. D [**12**]{}, 1132, (1975). C. Schubert, Nucl. Phys. B [**585**]{}, 407 (2000) \[arXiv:hep-ph/0001288\]. W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G [**33**]{}, 1 (2006). H. Gies, Phys. Rev. D [**61**]{}, 085021 (2000) \[hep-ph/9909500\]; [^1]: The incompatibility with standard QED has recently been confirmed again in a more careful wave-propagation study which also takes the rotation of the magnetic field in the PVLAS setup properly into account [@Adler:2006zs; @Biswas:2006cr]. The proposal of a potential QED effect in the rotating magnetic field [@Mendonca:2006pg] is therefore ruled out. [^2]: The sign of an ellipticity signal can actively be checked with a residual-gas analysis. Filling the cavity with a gas with a known classical Cotton-Mouton effect of definite sign, this effect can interfere constructively or destructively with the quantum effect, leading to characteristic residual-gas pressure dependencies of the total signal [@PVLASICHEP; @Cantatore:IDM2006]. 
[^3]: [In the sense of classical optics, the ellipticities of the various scenarios discussed here are indeed associated with a definite and unambiguous sign. This is not the case for the sign of the rotation which also depends on the experimental set up: in all our scenarios, the polarization axis is rotated towards the mode with the smallest probability exponent $\pi$ in Eq. . In the sense of classical optics, this can be either sign depending on the initial photon polarization relative to the magnetic field. In this work, the notion of the sign of rotation therefore refers to the two experimentally distinguishable cases of either $\pi_\|>\pi_\bot$ or $\pi_\|<\pi_\bot$. ]{} [^4]: We set the negative photon regeneration rate (Tab. \[BFRTresults\]) at BFRT for $\theta=0$ equal to zero. [^5]: As far as photon regeneration at BFRT is concerned, their photon detection efficiency $\eta$ was approximately 5.5%. Their laser spectrum with average power $\langle P\rangle \approx3$ W and average photon flux $\dot{N}_0=\langle P\rangle/\omega$ was dominated by the spectral lines $488$ nm and $514.5$ nm. We took an average value of $500$ nm in our fitting procedure. [^6]: Compared to [@Schubert:2000yt], we have accounted for a global minus sign arising from different global conventions for the polarization tensor and the effective action.
---
abstract: 'The most common procedure to solve a linear bilevel problem in the PES community is, by far, to transform it into an equivalent single-level problem by replacing the lower level with its KKT optimality conditions. Then, the complementarity conditions are reformulated using additional binary variables and large enough constants (big-Ms) to cast the single-level problem as a mixed-integer linear program that can be solved using optimization software. In most cases, such large constants are tuned by trial and error. We show, through a counterexample, that this widely used trial-and-error approach may lead to highly suboptimal solutions. Therefore, further research is required to properly select big-M values to solve linear bilevel problems.'
author:
- 'Salvador Pineda and Juan Miguel Morales [^1][^2][^3]'
bibliography:
- 'mendeley.bib'
title: 'Solving Linear Bilevel Problems Using Big-Ms: Not All That Glitters Is Gold'
---

Bilevel programming, optimality conditions, mathematical program with equilibrium constraints (MPEC).

Introduction {#sec:intro}
============

Decision-making environments are characterized by multiple decision makers with divergent objectives that interact with each other. One of the simplest instances only considers two decision makers that make their decisions in a sequential manner. The player deciding first is called the *leader*, while the one deciding afterwards is called the *follower*. This non-cooperative sequential game is known as a *Stackelberg game* and was first investigated in [@VonStackelberg1952]. A Stackelberg game can be mathematically formulated as a bilevel problem (BP) [@Bard1998; @Dempe2002]. If the objective functions of both players and all constraints are linear, the resulting linear bilevel problem (LBP) can be generally formulated as follows: $$\begin{aligned} \min_{x\in \mathbb{R}^n} \quad & a^T x+ b^T y \label{bp1_1} \\ \text{s.t.} \quad & c^T_i x+ d^T_i y\leq e_i \quad \forall i \label{bp1_2} \\ & \min_{y\in \mathbb{R}^m} \quad p^T x+ q^T y \label{bp1_3} \\ & \,\, \text{s.t.} \quad \, r^T_j x+ s^T_j y\leq t_j \; (\lambda_j) \quad \forall j \label{bp1_4}\end{aligned}$$ \[bp1\] where $a,b,c_i,d_i,p,q,r_j,s_j$ and $e_i,t_j$ are vectors of appropriate dimension and scalars, respectively. The dual variable of the lower-level constraint \[bp1\_4\] is denoted by $\lambda_j$ in parentheses. Mathematically, upper-level constraints that include upper- and lower-level variables can lead to disconnected feasible regions [@Colson2005a], which complicates the solution of the LBP as illustrated in [@Shi2005a]. Dealing with the solution to this variant goes beyond the purposes of this letter and thus we assume $d_i=0$.
Since the lower-level optimization problem is linear, it can be replaced with its KKT optimality conditions as follows: $$\begin{aligned} \min_{x\in \mathbb{R}^n,y\in\mathbb{R}^m} \quad & a^T x+ b^T y \label{bp2_1} \\ \text{s.t.} \quad & c^T_i x+ d^T_i y\leq e_i \quad \forall i \label{bp2_2} \\ & r^T_j x+ s^T_j y\leq t_j \quad \forall j \label{bp2_3} \\ & q + \sum_j \lambda_j s_j = 0 \label{bp2_4} \\ & \lambda_j \geq 0, \quad \forall j \label{bp2_5} \\ & \lambda_j \left( r^T_j x+ s^T_j y - t_j \right) = 0, \quad \forall j \label{bp2_6} \end{aligned}$$ \[bp2\] Non-linear complementarity constraints are further handled using the Fortuny-Amat mixed-integer reformulation [@Fortuny-Amat1981] as presented below: $$\begin{aligned} \min_{x\in \mathbb{R}^n,y\in\mathbb{R}^m} \quad & a^T x+ b^T y \label{bp3_1} \\ \text{s.t.} \quad & \eqref{bp2_2}-\eqref{bp2_5} \\ & \lambda_j \leq u_j M^D_j, \quad \forall j \label{bp3_5} \\ & -r^T_j x- s^T_j y + t_j \leq (1-u_j) M^P_j, \forall j \label{bp3_6} \\ & u_j \in \{0,1\}, \quad \forall j \label{bp3_7} \end{aligned}$$ \[bp3\] where $u_j$ are additional binary variables and $M^P_j,M^D_j$ are large enough constants. Model is a mixed-integer linear optimization problem that can be solved using commercial software. Notice that appropriate values for $M^P_j$ are often available, because they relate to primal variables, which are typically bounded by nature. However, $M^D_j$ are upper bounds on dual variables and therefore, tuning these large enough constants is a more challenging task. The most commonly used trial-and-error tuning procedure reported in the technical literature runs as follows: 1. Select initial values for $M^P_j$ and $M^D_j$. 2. Solve model . 3. Find a $j'$ such that $u_{j'}=0$ and $-r^T_{j'} x- s^T_{j'} y + t_{j'} = M^P_{j'}$. If such a $j'$ exists, increase the value of $M^P_{j'}$ and go to step 2). Otherwise, go to step 4). 4. Find a $j'$ such that $u_{j'}=1$ and $\lambda_{j'}=M^D_{j'}$. If such a $j'$ exists, increase the value of $M^D_{j'}$ and go to step 2). Else, the solution to *is assumed* to correspond to the optimal solution of the original bilevel problem . The trial-and-error procedure described above has been used in a great number of research works in the PES technical literature related to electricity grid security analysis [@Motto2005], transmission expansion planning [@Garces2009; @Jenabi2013b], strategic bidding of power producers [@Ruiz2009; @Zugno2013], generation capacity expansion [@Wogrin2011a; @Kazempour2011c], investment in wind power generation [@Baringo2014a; @Maurovich-Horvat2014] and market equilibria models [@Pozo2011; @Ruiz2012], among many others. Furthermore, its use is likely to continue in the future. However, as shown in the next section with a simple counterexample, this trial-and-error procedure does not guarantee global optimality of the original bilevel problem. Counterexample ============== Let us consider the following linear bilevel problem: $$\begin{aligned} \max_{x\in \mathbb{R}} \quad & z = x+y \label{bp4_1} \\ \text{s.t.} \quad & 0 \leq x \leq 2 \label{bp4_2} \\ & \min_{y\in \mathbb{R}} \quad y \label{bp4_3} \\ & \,\, \text{s.t.} \quad \, y \geq 0 \quad (\lambda_1) \label{bp4_4} \\ & \quad \quad \;\; x - 0.01y \leq 1 \quad (\lambda_2) \label{bp4_5}\end{aligned}$$ \[bp4\] It is easy to verify that the optimal solution to this problem is $z^*=102, x^*=2, y^*=100, \lambda_1^*=0, \lambda_2^*=100$. 
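As a quick numerical illustration of what goes wrong, the sketch below (our own example, using `scipy.optimize.linprog`) applies the Fortuny-Amat reformulation of Section \[sec:intro\] to this counterexample — written out explicitly below — and enumerates the binary variables. If $M^D_2$ is chosen too small, the reformulated problem returns the highly suboptimal value $z=1$, and, depending on the vertex returned by the solver, none of the big-M constraints need be active, so the trial-and-error check raises no flag; with $M^D_2$ large enough, the true optimum $z^*=102$ is recovered.

```python
import itertools
from scipy.optimize import linprog

# Sketch: solve the Fortuny-Amat MILP reformulation of the counterexample by
# enumerating the binary variables u1, u2.  Decision vector: v = [x, y, lam1, lam2].

def solve_milp(MP1, MP2, MD1, MD2):
    best = None
    for u1, u2 in itertools.product((0, 1), repeat=2):
        c = [-1.0, -1.0, 0.0, 0.0]                      # maximize x + y
        A_ub = [[ 1.0, -0.01, 0.0, 0.0],                # x - 0.01 y <= 1
                [ 0.0,  1.0,  0.0, 0.0],                # y <= (1 - u1) MP1
                [ 0.0,  0.0,  1.0, 0.0],                # lam1 <= u1 MD1
                [-1.0,  0.01, 0.0, 0.0],                # -x + 0.01 y + 1 <= (1 - u2) MP2
                [ 0.0,  0.0,  0.0, 1.0]]                # lam2 <= u2 MD2
        b_ub = [1.0, (1 - u1) * MP1, u1 * MD1, (1 - u2) * MP2 - 1.0, u2 * MD2]
        A_eq = [[0.0, 0.0, 1.0, 0.01]]                  # 1 - lam1 - 0.01 lam2 = 0
        b_eq = [1.0]
        bounds = [(0, 2), (0, None), (0, None), (0, None)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        if res.success and (best is None or -res.fun > best):
            best = -res.fun
    return best

print(solve_milp(MP1=1000, MP2=1000, MD1=50, MD2=50))    # -> 1.0   (suboptimal!)
print(solve_milp(MP1=1000, MP2=1000, MD1=50, MD2=200))   # -> 102.0 (true optimum)
```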
Following the procedure described in Section \[sec:intro\], we can reformulate as the following mixed-integer linear programming problem: $$\begin{aligned} \max_{x\in \mathbb{R},y\in\mathbb{R}} \quad & z = x+y \label{bp5_1} \\ \text{s.t.} \quad & 0 \leq x \leq 2 \label{bp5_2} \\ & y \geq 0 \label{bp5_4} \\ & x - 0.01y \leq 1 \label{bp5_5} \\ & 1 - \lambda_1 - 0.01\lambda_2 = 0 \label{bp5_6} \\ & \lambda_1, \lambda_2 \geq 0 \\ & \lambda_1 \leq u_1 M^D_1 \\ & y \leq (1-u_1) M^P_1 \\ & \lambda_2 \leq u_2 M^D_2 \\ & -x + 0.01y +1 \leq (1-u_2) M^P_2 \\ & u_1, u_2 \in \{0,1\}\end{aligned}$$ \[bp5\] Conclusion ========== This letter aims to raise concern about the widespread and continued use of the big-M approach to solve LBP within the PES community. We show, using a counterexample, that the trial-and-error procedure that is presently employed to tune the big Ms in many works published in PES journals may actually fail and provide highly suboptimal solutions. We advocate, instead, for the use of more sophisticated methods like the one proposed in [@Pineda2017] to properly tune the values of the big-Ms when solving LBP. [^1]: S. Pineda is with the Department of Electrical Engineering, University of Malaga, Spain. E-mail: [email protected]. [^2]: J. M. Morales is with the Department of Applied Mathematics, University of Malaga, Spain. E-mails: [email protected]; [email protected]. [^3]: This work was supported in part by the Spanish Ministry of Economy, Industry and Competitiveness through projects ENE2016-80638-R and ENE2017-83775-P. This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 755705).
---
author:
- 'Rong-Gen Cai,'
- 'Shan-Ming Ruan,'
- 'Shao-Jiang Wang,'
- 'Run-Qiu Yang,'
- 'Rong-Hui Peng.'
bibliography:
- 'ref.bib'
title: Action Growth for AdS Black Holes
---

Introduction
============

As a branch of theoretical computer science and mathematics, computational complexity theory [@0034-4885-75-2-022001] has motivated many studies in field theory [@TCS-066] and gravitational physics [@Susskind:2013aaa; @Susskind:2014rva; @Susskind:2016tae; @Stanford:2014jda; @Harlow:2013tf]. In particular, the works of Susskind and his collaborators [@Susskind:2013aaa; @Susskind:2014rva; @Susskind:2016tae; @Stanford:2014jda] have shed some light on the connection between quantum computational complexity and black hole physics. It is expected that computational complexity will be helpful for our understanding of black hole physics, the holographic properties of gravity, and especially Hawking radiation. On the other hand, the holographic principle of gravity will provide us with some useful tools to study problems of complexity [@Brown:2015bva]. Maldacena and Susskind [@Maldacena:2013xja; @Susskind:2014yaa] have related the Einstein-Podolsky-Rosen (EPR) correlation in quantum mechanics to wormholes, or more precisely to the Einstein-Rosen (ER) bridge in gravity, and proposed the so-called $\mathrm{ER}=\mathrm{EPR}$ relation, stating that the ER bridge between two black holes can be considered as an EPR correlation. This relation allows Alice at one side of the ER bridge to communicate with Bob located at the other side through the ER bridge. However, a natural question is how difficult it is for Alice to send a signal through the ER bridge. It is worth noting that quantum computational complexity can be understood as a measure of how difficult it is to implement some unitary operations during computation. In quantum circuits [@Hayden:2007cs], complexity can also be defined as the minimal number of gates used for processing the unitary operation [@Susskind:2014rva]. As a result, Susskind proposed a new duality relating the distance from the layered stretched horizon to computational complexity in [@Susskind:2013aaa], which showed for the first time a connection between horizons and complexity. This connection was then promoted to a conjecture that the complexity of the quantum state of the dual CFT is proportional to the geometric length of the ER bridge. Inspired by Hartman and Maldacena’s study of the time evolution of entanglement entropy and the tensor network description of quantum states [@Hartman:2013qma], Susskind and Stanford revised the previous conjecture and proposed a new one called Complexity-Volume (CV) duality [@Stanford:2014jda], $$\mathcal{C}(t_L,t_R) \sim \frac{V}{G_N L}$$ where $V$ is the spatial volume of the ER bridge that connects the two boundaries at the times $t_L$ and $t_R$, and $L$ is chosen to be the AdS radius for large black holes and the Schwarzschild radius for small black holes. The CV duality means that the complexity of the dual boundary state $|\psi(t_L,t_R)\rangle$ is proportional to $V/L$ rather than to the length of the ER bridge. Although the conjecture has been tested in spherical shock wave geometries [@Stanford:2014jda], the appearance of $L$ seems unnatural. It is worth noting that there is an alternative definition of holographic complexity [@Alishahiha:2015rta; @Barbon:2015ria; @Barbon:2015soa], given by the extremal bulk volume of a co-dimension-one time slice enclosed by the extremal co-dimension-two surface appearing in the holographic entanglement entropy [@Ryu:2006bv].
Refer to [@Momeni:2016ekm; @Momeni:2016qfv] for possible applications of this definition. In a recent letter [@Brown:2015bva] and a detailed paper [@Brown:2015lvg], the authors further proposed a Complexity-Action (CA) conjecture, stating that the quantum complexity of a holographic state is dual to the action of a certain Wheeler-DeWitt (WDW) patch in the AdS bulk, $$\begin{aligned} \hbox{CA conjecture :}\qquad\mathcal{C}=\frac{\mathcal{A}}{\pi\hbar}.\end{aligned}$$ It has been pointed out in [@Lloyd] that the growth rate of quantum complexity should be bounded by $$\begin{aligned} \label{eq:complexitybound} \frac{\mathrm{d}\mathcal{C}}{\mathrm{d}t}\leq\frac{2E}{\pi\hbar}.\end{aligned}$$ The authors of [@Brown:2015bva; @Brown:2015lvg] tested the CA conjecture by computing the growth rate of the action within the WDW patch in the late-time approximation, which should also obey the quantum complexity bound *if* the CA conjecture is correct[^1]. Along with other examples, such as black holes with static shells and shock waves, the concrete forms of the action growth bound for anti-de Sitter (AdS) black holes (BH) are claimed to be $$\begin{aligned} \hbox{neutral BH :\;\;}&\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=2M;\label{eq:neutral}\\ \hbox{rotating BH :\;\;}&\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\leq 2\left[(M-\Omega J)-(M-\Omega J)_{\mathrm{gs}}\right];\label{eq:rotating}\\ \hbox{charged BH :\;\;}&\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\leq 2\left[(M-\mu Q)-(M-\mu Q)_{\mathrm{gs}}\right];\label{eq:charged}\end{aligned}$$ which should be precisely saturated for the neutral static black hole, the rotating Bañados-Teitelboim-Zanelli (BTZ) black hole [@Banados:1992wn], and the small charged black hole, respectively; for the last two examples the ground states subscripted by “gs” are argued to render the combinations $(M-\Omega J)_{\mathrm{gs}}$ and $(M-\mu Q)_{\mathrm{gs}}$ zero. As already noted in [@Brown:2015bva; @Brown:2015lvg], the intermediate and large charged black holes apparently violate the action growth bound proposed there; the authors argued that only the small charged black holes still obey the bound, owing to the BPS bound in supersymmetric theory, while in the case of intermediate and large charged black holes, the RN-AdS black holes are not a proper description of a UV-complete holographic field theory. As we explicitly show in this paper, even the small charged black holes violate the action growth bound. Based on the calculations made in this paper, we will present a universal formula for the action growth of stationary AdS black holes. In this paper, we first repeat the calculations of the action growth rate for the general $D$-dimensional Reissner-Nordström (RN)-AdS black hole (section \[sec:2\]) and the rotating/charged BTZ black hole (section \[sec:3\]). It is found that the original action growth bound is inappropriate, which causes the apparent violation for charged black holes of any size. We then investigate some other AdS black holes, such as the Kerr-AdS black hole (section \[sec:4\]) and the charged Gauss-Bonnet-AdS black hole (section \[sec:5\]).
The exact results of growth rate of action are summarized as $$\begin{aligned} \hbox{neutral BH :\quad}&\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=2M;\label{eq:neutralexact}\\ \hbox{rotating BH :\quad}&\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\left[(M-\Omega J)_+-(M-\Omega J)_-\right];\label{eq:rotatingexact}\\ \hbox{charged BH :\quad}&\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\left[(M-\mu Q)_+-(M-\mu Q)_-\right],\label{eq:chargedexact}\end{aligned}$$ where the subscripts $\pm$ present evaluations on the outer and inner horizons of the AdS black holes. We conjecture that the action growth bound for general AdS black holes should be $$\begin{aligned} \label{eq:ourbound} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\leq(M-\Omega J-\mu Q)_+-(M-\Omega J-\mu Q)_-,\end{aligned}$$ the equality is saturated for stationary AdS black holes in Einstein gravity and charged AdS black hole in Gauss-Bonnet gravity as we show in this paper, and for general non-stationary black hole, the inequality is expected. We also mention in appendix \[app:A\] a subtlety when dealing with singularities within the WDW patch at late time approximation. We find that for the Gauss-Bonnet-AdS black hole case, rather than naive computation with the boundary of WDW touching the singularity, the neutral case should be approached from the charged case. In conclusion, we leave unchanged the original statement that the stationary AdS black hole in Einstein gravity is the fastest computer in nature. D-dimensional RN-AdS black hole {#sec:2} =============================== Setup ----- Let us first consider the case for a general $D$-dimensional RN-AdS black hole with its action given by $$\begin{aligned} \mathcal{A}=\frac{1}{16\pi G}\int\mathrm{d}^Dx \sqrt{-g}(R-2\Lambda-GF^2)+\frac{1}{8\pi G}\int_{\partial\mathcal{M}}\mathrm{d}^{D-1}x \sqrt{-h}K,\end{aligned}$$ where the cosmological constant $\Lambda$ is related to the AdS radius $L$ by $\Lambda=-(D-1)(D-2)/2L^2$, $h$ represents the determinant of induced metric on the boundary $\partial\mathcal{M}$, $K$ is the trace of the second fundamental form. The trace of the energy-momentum tensor of electromagnetic field $T=(4-D)F^2/16\pi$ is non-vanishing except for the case in four dimensions. After applying the trace of the equations of motion, $$\begin{aligned} R-2\Lambda=-\frac{2(D-1)}{L^2}+G\frac{D-4}{D-2}F^2,\end{aligned}$$ the total Einstein-Hilbert-Maxwell bulk action becomes $$\begin{aligned} \mathcal{A}_{\mathrm{EHM}}=\frac{1}{16\pi G}\int\mathrm{d}^Dx\sqrt{-g}\left(-\frac{2(D-1)}{L^2}-\frac{2GF^2}{D-2}\right),\end{aligned}$$ where the field strength of the Maxwell field is $$\begin{aligned} F^2=-2\frac{(D-3)Q^2}{r^{2(D-2)}}\frac{4\pi}{\Omega_{D-2}}.\end{aligned}$$ Here we choose the convention for the RN-AdS metric as $$\begin{aligned} \label{eq:RNAdSmetric} \mathrm{d}s^2=-f(r)\mathrm{d}t^2+\frac{\mathrm{d}r^2}{f(r)}+r^2\mathrm{d}\Omega_{D-2}^2,\end{aligned}$$ where the inner and outer horizons are determined by $f(r_{\pm})=0$ with $$\begin{aligned} f(r)=1-\frac{8\pi}{(D-2)\Omega_{D-2}}\frac{2GM}{r^{D-3}}+\frac{8\pi}{(D-2)\Omega_{D-2}}\frac{GQ^2}{r^{2(D-3)}}+\frac{r^2}{L^2},\end{aligned}$$ where $M$ and $Q$ are the mass and charge of the black hole, respectively. 
Action growth rate ------------------ Following [@Brown:2015lvg], we can calculate the growth rate of Einstein-Hilbert-Maxwell bulk action within the WDW patch at late-time approximation as $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}_{\mathrm{EHM}}}{\mathrm{d}t}&=\frac{\Omega_{D-2}}{16\pi G}\int_{r_-}^{r_+}\mathrm{d}r r^{D-2}\left(-\frac{2(D-1)}{L^2}-\frac{2GF^2}{D-2}\right)\nonumber\\ &=-\frac{\Omega_{D-2}}{8\pi GL^2}(r_+^{D-1}-r_-^{D-1})-\frac{Q^2}{D-2}(r_+^{3-D}-r_-^{3-D}).\end{aligned}$$ With extrinsic curvature associated with metric , $$\begin{aligned} K=\frac{1}{r^{D-2}}\frac{\partial}{\partial r}\left(r^{D-2}\sqrt{f}\right)=\frac{D-2}{r}\sqrt{f}+\frac{f'(r)}{2\sqrt{f}},\end{aligned}$$ the growth rate of York-Gibbons-Hawking (YGH) surface term within WDW patch at late-time approximation is $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}_{\mathrm{YGH}}}{\mathrm{d}t}&=\frac{\Omega_{D-2}}{8\pi G}\left[r^{D-2}\sqrt{f}\left(\frac{D-2}{r}\sqrt{f}+\frac{f'(r)}{2\sqrt{f}}\right)\right]_{r_-}^{r_+}\nonumber\\ &=\frac{(D-1)\Omega_{D-2}}{8\pi GL^2}(r_+^{D-1}-r_-^{D-1})+\frac{Q^2}{D-2}(r_+^{3-D}-r_-^{3-D})\nonumber\\ &+\frac{(D-2)\Omega_{D-2}}{8\pi G}(r_+^{D-3}-r_-^{D-3}).\end{aligned}$$ Therefore the total growth rate of action for RN-AdS black hole within WDW patch at late time approximation is $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\frac{(D-2)\Omega_{D-2}}{8\pi G}\left(r_+^{D-3}-r_-^{D-3}+\frac{r_+^{D-1}-r_-^{D-1}}{L^2}\right).\end{aligned}$$ The above result can be made more compact, if we first solve $M$ from $f(r_+)=0$ as $$\begin{aligned} \label{eq:m} M=\frac{1}{2}Q^2r_+^{3-D}+\frac{(D-2)\Omega_{D-2}}{16\pi GL^2}r_+^{D-3}(r_+^2+L^2)\end{aligned}$$ and then plug the above expression into $f(r_-)=0$ to get the expression for $Q$ in terms of $r_{\pm}$ as $$\begin{aligned} \label{eq:Q} Q^2=\frac{(D-2)\Omega_{D-2}}{8\pi G}r_+^{D-3}r_-^{D-3}\left(1+\frac{1}{L^2}\frac{r_+^{D-1}-r_-^{D-1}}{r_+^{D-3}-r_-^{D-3}}\right).\end{aligned}$$ It is easy to see that the growth rate of action can be rewritten as $$\begin{aligned} \label{eq:RNAdSD} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=Q^2\left(\frac{1}{r_-^{D-3}}-\frac{1}{r_+^{D-3}}\right).\end{aligned}$$ When $D=4$, the above expression reduces to the one in [@Brown:2015lvg]. The mass can also be expressed in terms of $r_{\pm}$, if we plug back to to obtain $$\begin{aligned} \label{eq:M} M=\frac{(D-2)\Omega_{D-2}}{16\pi G}\left(r_+^{D-3}+r_-^{D-3}+\frac{1}{L^2}\frac{r_+^{2(D-2)}-r_-^{2(D-2)}}{r_+^{D-3}-r_-^{D-3}}\right),\end{aligned}$$ which will be used below. Bound violation --------------- Although the authors of [@Brown:2015lvg] have realized that in 4-dimensions the situation for intermediate-sized $(r_{+}\sim L)$ and large charged black holes $(r_{+}\gg L)$ leads to an apparent violation of the action growth bound , they *mis-claimed* that the small charged black holes $(r_{+}\ll L)$ precisely saturate the action growth bound . We will explicitly show below that the action growth bound is always *broken* for any *nonzero* size of the RN-AdS black holes in any dimensions $D\geq4$. 
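Before turning to the analytic argument, the violation is easy to exhibit numerically. The sketch below (our own check, in units $G=1$, using the expressions for $Q^2$, $M$ and $\mathrm{d}\mathcal{A}/\mathrm{d}t$ in terms of $r_\pm$ derived above, together with the chemical potential $\mu_+=Q/r_+^{D-3}$ introduced in the next paragraph) evaluates the late-time growth rate and the quantity $2(M-\mu_+ Q)$ for small, intermediate and large black holes:

```python
import numpy as np
from scipy.special import gamma

# Numerical sketch (G = 1): even for a "small" RN-AdS black hole (r_+ << L) the
# late-time action growth rate exceeds 2(M - mu_+ Q).

def omega(D):                      # volume of the unit (D-2)-sphere
    n = D - 2
    return 2 * np.pi**((n + 1) / 2) / gamma((n + 1) / 2)

def rn_ads_check(D, L, rp, rm, G=1.0):
    Om = omega(D)
    Q2 = (D - 2) * Om / (8 * np.pi * G) * rp**(D - 3) * rm**(D - 3) * (
            1 + (rp**(D - 1) - rm**(D - 1)) / (L**2 * (rp**(D - 3) - rm**(D - 3))))
    M = (D - 2) * Om / (16 * np.pi * G) * (rp**(D - 3) + rm**(D - 3)
            + (rp**(2 * (D - 2)) - rm**(2 * (D - 2))) / (L**2 * (rp**(D - 3) - rm**(D - 3))))
    dAdt = Q2 * (rm**(3 - D) - rp**(3 - D))              # growth rate in terms of r_+, r_-
    mu_p = np.sqrt(Q2) / rp**(D - 3)                     # chemical potential at the outer horizon
    return dAdt, 2 * (M - mu_p * np.sqrt(Q2))

for rp, rm in [(0.01, 0.005), (1.0, 0.5), (100.0, 50.0)]:   # small / intermediate / large
    dAdt, bound = rn_ads_check(D=4, L=1.0, rp=rp, rm=rm)
    print(f"r+ = {rp:7.3f}:  dA/dt = {dAdt:.6g}  >  2(M - mu_+ Q) = {bound:.6g}")
```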
Fixing the chemical potential $\mu=Q/r_+^{D-3}$ so that the ground state for $(M-\mu Q)_{\mathrm{gs}}$ is zero for $\mu^2<1$, one can explicitly show that the difference between the growth rate of action and the action growth bound , $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}-2(M-\mu Q)=\frac{(D-2)\Omega_{D-2}}{8\pi GL^2}\frac{r_+^{D-3}r_-^{D-3}(r_+^2-r_-^2)}{r_+^{D-3}-r_-^{D-3}}\geq0,\end{aligned}$$ which is always positive for any nonzero size of the RN-AdS black holes $(r_+\geq r_->0)$ in any dimensions $D\geq4$, and becomes zero only for the asymptotic flat limit $L\rightarrow\infty$ or chargeless limit $Q\rightarrow0$, namely $r_-\rightarrow0$. In this sense it looks then very strange for the case of RN-AdS black holes to be an exception for the action growth bound made in [@Brown:2015bva; @Brown:2015lvg]. Bound restoration ----------------- We can eliminate the unappealing exception mentioned above by simply rewriting the growth rate of action of RN-AdS black hole as $$\begin{aligned} \label{eq:RNAdSDpm} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=(M-\mu_+Q)-(M-\mu_-Q),\end{aligned}$$ where the chemical potentials on inner and outer horizons are defined as $\mu_-=Q/r_-^{D-3}$ and $\mu_+=Q/r_+^{D-3}$, respectively. Although can be easily inferred from as expression $(\mu_--\mu_+)Q$, we prefer the former formulation in order to keep the similar manner as . In addition, we would like to stress here that at first glance, the chemical potential $\mu_-$ at the inner horizon has no corresponding quantity at the boundary, but it indeed has some relation to the quantities defined in the boundary field theory, because $\mu_- $ is given by $Q/r_-^{D-3}$, and the latter can be expressed by the mass and charge of the black hole. But we prefer to keep the form (\[eq:RNAdSDpm\]) since it looks more simple. In the limit of zero charge, $Q\rightarrow0$, namely $r_-\rightarrow0$, we have $\mu_+Q\rightarrow0$ and $\mu_-Q\rightarrow2M$, which leads to a very special case that $(M-\mu_-Q)\rightarrow-(M-\mu_+Q)$ in the neutral limit $Q\rightarrow0$. It recovers the case of Schwarzschild-AdS black hole, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\rightarrow2M,\qquad Q\rightarrow0.\end{aligned}$$ This explains why the authors of [@Brown:2015bva; @Brown:2015lvg] could find the saturated bound along with an overall factor of $2$. In the asymptotic flat limit, the action growth rate for the RN black hole is $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\rightarrow\frac{(D-2)\Omega_{D-2}}{8\pi G}(r_+^{D-3}-r_-^{D-3}),\qquad L\rightarrow\infty.\end{aligned}$$ If we further take the neutral limit, it gives us the growth rate of action for Schwarzschild black hole, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\rightarrow\frac{(D-2)\Omega_{D-2}}{8\pi G}r_+^{D-3}=2M,\quad L\rightarrow\infty, Q\rightarrow0.\end{aligned}$$ Let us pause and have a few comments on the asymptotic flat limit. The conformal boundary of an asymptotic AdS space-time is timelike and dual to a conformal field theory, but the conformal boundary of an asymptotic flat space-time is null and dual to Galilean conformal field theory [@Bagchi:2010eg]. 
Although the causal structure and Penrose’s diagram are totally different for the asymptotic AdS space-time and its asymptotic flat limit, the contributions to the growth rate from the regions outside the horizon vanish in the late-time approximation; therefore the growth rate for the asymptotic flat spacetime can be obtained by a naive extrapolation limit from the case of AdS spacetime. We will show in the subsequent sections that a more general result for the action growth, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=(M-\Omega_+J-\mu_+Q)-(M-\Omega_-J-\mu_-Q),\end{aligned}$$ holds for the stationary AdS black holes discussed in this paper. We conjecture that the above bound can only be saturated for stationary black holes in gravity theories without higher-derivative curvature terms. As a means of illustrating this conjecture, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\leq(M-\Omega_+J-\mu_+Q)-(M-\Omega_-J-\mu_-Q),\end{aligned}$$ we suggest testing the above bound for the AdS-Vaidya spacetimes, which is under investigation.

Rotating/charged BTZ black hole {#sec:3}
===============================

Rotating BTZ black hole
-----------------------

We next consider the case of the rotating/charged BTZ black hole. The action growth rate of the WDW patch for the rotating BTZ black hole in $D=2+1$ dimensions has been worked out in [@Brown:2015bva; @Brown:2015lvg] as $$\begin{aligned} \label{eq:rotatingBTZ} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\frac{2}{L^2}(r_+^2-r_-^2),\end{aligned}$$ where the inner and outer horizons are determined by $f(r_{\pm})=0$ with $$\begin{aligned} f(r)=\frac{r^2}{L^2}-M+\frac{J^2}{4r^2}\end{aligned}$$ under the usual convention $8G\equiv1$. Similarly one can express both the mass and angular momentum in terms of $r_{\pm}$ as $$\begin{aligned} M&=\frac{r_+^2+r_-^2}{L^2};\\ J&=\frac{2r_+r_-}{L},\end{aligned}$$ and define the angular velocities on the inner and outer horizons as $\Omega_-=J/2r_-^2$ and $\Omega_+=J/2r_+^2$; then we arrive at $$\begin{aligned} \label{eq:rotatingBTZpm} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=(M-\Omega_+J)-(M-\Omega_-J).\end{aligned}$$ The situation for the rotating BTZ black hole is very special because in this case one can explicitly find that $(M-\Omega_-J)=-(M-\Omega_+J)$, and this explains why the authors of [@Brown:2015bva; @Brown:2015lvg] could find the saturated bound along with an overall factor of $2$. In the non-rotating limit, it recovers the neutral result, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=2(M-\Omega_+J)\rightarrow2M,\quad J\rightarrow0.\end{aligned}$$ The action growth rate involves simple cancellations of various thermodynamical quantities on the inner and outer horizons, for which the first law of thermodynamics [@Detournay:2012ug; @Chen:2012mh] can be written as $\mathrm{d}M=\pm T_\pm\mathrm{d}S_\pm+\Omega_\pm\mathrm{d}J$.
Here the temperatures and entropies defined on both horizons are of the forms of $$\begin{aligned} T_\pm=\frac{r_+^2-r_-^2}{2\pi L^2r_\pm},&\quad S_\pm=\frac{\pi r_\pm}{2G},\end{aligned}$$ which can be expressed in terms of left- and right-moving sectors of dual 2D CFT, $$\begin{aligned} \frac{1}{T_\pm}=\frac{1}{2}\left(\frac{1}{T_L}\pm\frac{1}{T_R}\right),&\quad S_\pm=S_R\pm S_L.\end{aligned}$$ Here the left- and right-moving sectors of dual 2D CFT are of the forms of $$\begin{aligned} T_{R,L}=\frac{r_+\pm r_-}{2\pi L2},&\quad S_{R,L}=\frac{\pi^2L}{3}c_{R,L}T_{R,L}=\frac{\pi}{4G}(r_+\pm r_-),\end{aligned}$$ where the Brown-Henneaux central charges $c_L=c_R=\frac{3L}{2G}$. Although the action growth rate contains quantity defined on inner horizon without dual field theory descriptions, one can re-express it in terms of the left- and right-moving sectors of dual 2D CFT, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}&=\frac{1}{2}\left(T_+S_++T_-S_-\right);\\ &=\frac{\pi^2L^2}{G}T_LT_R;\\ &=2\sqrt{T_LS_LT_RS_R},\end{aligned}$$ which now makes sense from the view point of field theory side. The same tricks are expected to be applied to other kinds of black holes [@Chen:2012ps] with CFT descriptions. Charged BTZ black hole ---------------------- Now we turn to the case of charged BTZ black hole [@Martinez:1999qi; @Clement:1995zt]. We follow the conventions from [@Cadoni:2007ck]. The total action reads $$\begin{aligned} \mathcal{A}=\frac{1}{16\pi G}\int\mathrm{d}^3x \sqrt{-g}(R-2\Lambda-4\pi GF^2)+\frac{1}{8\pi G}\int_{\partial\mathcal{M}}\mathrm{d}^2x \sqrt{-h}K,\end{aligned}$$ and the metric is given by $$\begin{aligned} \mathrm{d}s^2=-f(r)\mathrm{d}t^2+\frac{\mathrm{d}r^2}{f(r)}+r^2\mathrm{d}\theta^2,\end{aligned}$$ where the inner and outer horizons are defined by $f(r_{\pm})=0$ with $$\begin{aligned} f(r)=-M+\frac{r^2}{L^2}-\pi Q^2\ln\frac{r}{L}\end{aligned}$$ under usual convention $8G\equiv1$. After applying the on-shell condition, $$\begin{aligned} R-2\Lambda=-\frac{4}{L^2}-\frac{\pi}{2}F^2,\end{aligned}$$ the total Einstein-Hilbert-Maxwell bulk action becomes $$\begin{aligned} \mathcal{A}_{\mathrm{EHM}}=\frac{1}{2\pi}\int\mathrm{d}^3x\sqrt{-g}\left(-\frac{4}{L^2}-\pi F^2\right),\end{aligned}$$ where the field strength should be $$\begin{aligned} F^2=-2\frac{Q^2}{r^2}.\end{aligned}$$ Then one can easily compute that $$\begin{aligned} &\frac{\mathrm{d}\mathcal{A}_{\mathrm{EHM}}}{\mathrm{d}t}=-\frac{2}{L^2}(r_+^2-r_-^2)+2\pi Q^2\ln\frac{r_+}{r_-};\\ &\frac{\mathrm{d}\mathcal{A}_{\mathrm{YGH}}}{\mathrm{d}t}=+\frac{4}{L^2}(r_+^2-r_-^2)-2\pi Q^2\ln\frac{r_+}{r_-},\end{aligned}$$ thus the total growth rate of action reads $$\begin{aligned} \label{eq:chargedBTZ} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\frac{2}{L^2}(r_+^2-r_-^2).\end{aligned}$$ Analogy with the case of RN-AdS black hole, the mass and charge can be expressed in terms of $r_{\pm}$ as $$\begin{aligned} M&=\frac{r_+^2\ln\frac{r_-}{L}-r_-^2\ln\frac{r_+}{L}}{L^2\ln\frac{r_-}{r_+}};\\ Q^2&=\frac{r_+^2-r_-^2}{\pi L^2\ln\frac{r_+}{r_-}}.\end{aligned}$$ If we further define the chemical potential on the inner and outer horizon as $\mu_-=-2\pi Q\ln(r_-/L)$ and $\mu_+=-2\pi Q\ln(r_+/L)$, respectively, we can rewrite as $$\begin{aligned} \label{eq:chargedBTZpm} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=(M-\mu_+Q)-(M-\mu_-Q),\end{aligned}$$ which shares exactly the same form with . 
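As a quick numerical check (our own sketch, in the convention $8G\equiv1$, using the expressions for $M$, $Q$ and $\mu_\pm$ quoted above), one can verify that the growth rate $2(r_+^2-r_-^2)/L^2$ indeed coincides with $(M-\mu_+Q)-(M-\mu_-Q)$ for the charged BTZ black hole:

```python
import numpy as np

# Numerical sketch (8G = 1): for the charged BTZ black hole the late-time growth
# rate 2(r_+^2 - r_-^2)/L^2 equals (M - mu_+ Q) - (M - mu_- Q).

def charged_btz_check(rp, rm, L):
    Q = np.sqrt((rp**2 - rm**2) / (np.pi * L**2 * np.log(rp / rm)))
    M = (rp**2 * np.log(rm / L) - rm**2 * np.log(rp / L)) / (L**2 * np.log(rm / rp))
    mu = lambda r: -2 * np.pi * Q * np.log(r / L)        # chemical potential at radius r
    dAdt = 2 * (rp**2 - rm**2) / L**2
    return dAdt, (M - mu(rp) * Q) - (M - mu(rm) * Q)

for rp, rm in [(2.0, 0.5), (10.0, 1.0)]:
    print(charged_btz_check(rp, rm, L=1.0))              # the two numbers agree in each pair
```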
As usual, the neutral charge limit $Q\rightarrow0$, namely $r_-\rightarrow0$, the mass $M\rightarrow r_+^2/L^2$, and $\mu_+Q\rightarrow0$, while $\mu_-Q\rightarrow2r_+^2/L^2$. As a result, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}\rightarrow2M,\quad Q\rightarrow0.\end{aligned}$$ Kerr-AdS black hole {#sec:4} =================== The Kerr-AdS black hole shares similar Penrose diagrams as the RN-AdS black hole, therefore the same region from inner horizon to outer horizon contributes to the growth rate of action within the WDW patch at late time approximation [^2]. We use the conventions and results in [@Gibbons:2004ai] for the thermodynamics of Kerr-AdS black holes. Here we only focus on the case in four dimensions for simplicity and clarity and the results can be easily generalized to the higher dimensional case. We start with the total action, $$\begin{aligned} \mathcal{A}=\frac{1}{16\pi G}\int_{\mathcal{M}}\mathrm{d}^Dx \sqrt{-g}(R-2\Lambda)+\frac{1}{8\pi G}\int_{\partial\mathcal{M}}\mathrm{d}^{D-1}x \sqrt{-h}K.\end{aligned}$$ The four dimensional Kerr-(anti)-de Sitter metric is obtained by Carter in [@Carter:1968ks] and can be written as $$\begin{aligned} \mathrm{d}s^2=&-\left(\frac{\Delta}{\rho^2}-\frac{\Delta_{\theta}\sin^2\theta}{\rho^2}a^2\right)\mathrm{d}t^2+\frac{\rho^2}{\Delta}\mathrm{d}r^2 +\frac{\rho^2}{\Delta_\theta}\mathrm{d}\theta^2\\ &+2\frac{a\Delta\sin^2\theta-a(r^2+a^2)\Delta_\theta\sin^2\theta}{\rho^2\Xi}\mathrm{d}t\mathrm{d}\phi+\frac{(r^2+a^2)^2\Delta_\theta\sin^2\theta-a^2\Delta\sin^4\theta}{\rho^2\Xi^2}\mathrm{d}\phi^2,\nonumber\end{aligned}$$ where $$\begin{aligned} &\Delta\equiv(r^2+a^2)(1+\frac{r^2}{L^2})-2mr,\quad\Xi\equiv1-\frac{a^2}{L^2}\\ &\Delta_\theta\equiv1-\frac{a^2\cos^2\theta}{L^2},\quad\rho^2\equiv r^2+a^2\cos^2\theta.\end{aligned}$$ It is easy to obtain the determinant of Kerr-AdS metric as $$\begin{aligned} \sqrt{-g}=\frac{\sin\theta}{\Xi}\rho^2.\end{aligned}$$ The outer and inner horizons are determined by the equation $\Delta(r_{\pm})=0 $, respectively. The first law of thermodynamics holds at both horizons, $$\begin{aligned} dM=TdS + \Omega dJ,\end{aligned}$$ where the physical mass $ M $, angular momentum $ J $, the angular velocity $\Omega_{\pm}$ and the area $A_{\pm}$ of outer and inner horizons can be expressed as $$\begin{aligned} M=\frac{m}{G\Xi^2},&\qquad J=\frac{ma}{G\Xi^2},\\ \Omega_{\pm}=\frac{a(1+r_{\pm}^2L^{-2})}{r^2_\pm +a^2},&\qquad A_{\pm}=\frac{4\pi(r_\pm^2 +a^2)}{\Xi}.\end{aligned}$$ By directly integrating the on-shell Einstein-Hilbert bulk action, $$\begin{aligned} \mathcal{A}_{\mathrm{EH}}=\frac{1}{16\pi G}\int\mathrm{d}^4x\sqrt{-g}\left(-\frac{6}{L^2}\right),\end{aligned}$$ we have $$\frac{\mathrm{d}\mathcal{A}_{\mathrm{EH}}}{\mathrm{d}t}=\left.\frac{-(r^3+a^2r)}{2GL^2\Xi}\right|_{r_{-}}^{r_{+}}.$$ It is worth noting that the induced metric on the null hypersurface $r_{\pm}$ should be defined by the induced metric on a timelike hypersurface with constant $r$ approaching $r_\pm$, $$\sqrt{-h}=\sqrt{\frac{-g}{g_{rr}}}=\frac{\sin{\theta}}{\Xi}\sqrt{\rho^2\Delta}.$$ From the definition of extrinsic curvature, its trace $K$ can be written as $$K=\nabla^{\mu}n_{\mu}=\frac{1}{\sqrt{-g}}\partial_\mu(\sqrt{-g}n^\mu),$$ where the normal vector $n^\mu=(0,\sqrt{\frac{\Delta}{\rho^2}},0,0)$. 
Then we can obtain the YGH boundary term, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}_{\mathrm{YGH}}}{\mathrm{d}t}&=\frac{1}{4G\Xi}\int_0^\pi\mathrm{d}\theta \sin\theta\left.\left(\frac{r\Delta}{\rho^2}+\frac{\Delta'(r)}{2}\right)\right|_{r_-}^{r_+}\label{eq:YGH}\\ &=\left.\frac{\Delta'(r)}{4G \Xi}\right|_{r_-}^{r_+}=\left.\frac{rL^2+2r^3+a^2r}{2G\Xi L^2}\right|_{r_-}^{r_+}.\end{aligned}$$ Here we have used $\Delta(r_{\pm})=0$ to get the second line. Combining the bulk action and boundary term, we have the growth rate of total action, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}&=\left.\frac{r^3 +rL^2}{2G\Xi L^2}\right|^{r_+}_{r_-}\label{eq:Kerr}\\ &=\frac{mr_+^2}{(r^2_+ +a^2)G\Xi}-\frac{m r_-^2 }{(r^2_- +a^2)G\Xi}\\ &=(M-\Omega_+J)-(M-\Omega_-J).\end{aligned}$$ Here we have used $\Delta(r_{\pm})=0 $ to get the second line and the thermodynamical quantities to rewrite the final result, which shares exactly the same form as the rotating BTZ black hole case. Simple extension to the case of Kerr-Newman-AdS black holes [@Caldarelli:1999xj] should be straightforward, and the action growth rate in the form of $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=(M-\Omega_+J-\mu_+Q)-(M-\Omega_-J-\mu_-Q)\end{aligned}$$ is expected. However the non-rotating limit of Kerr-AdS black hole might be a little tricky. It seems that the naive limit $a\rightarrow0$, namely $r_-\rightarrow0$ of growth rate of total action, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\left.\frac{r^3 +rL^2}{2G\Xi L^2}\right|^{r_+}_{r_-\rightarrow0}=\frac{2mL^2}{2GL^2}\equiv M,\end{aligned}$$ would not recover the result $2M$ of Schwarzschild-AdS black hole. The first term in parenthesis of is zero for $a\neq0$ due to $\Delta(r_{\pm})=0$, however, in the case of $a=0$, this term would give an extra $M$ to the total growth rate, $$\begin{aligned} \frac{1}{4G\Xi}\int_0^\pi\mathrm{d}\theta \sin\theta\left.\frac{r\Delta}{\rho^2}\right|_{r_-}^{r_+}&=-\frac{1}{4G}\int_0^\pi\mathrm{d}\theta \sin\theta\frac{r_-(-2mr_-)}{r_-^2}=\frac{m}{G}\equiv M,\end{aligned}$$ hence we recover the result $2M$ for the non-rotating limit. Charged Gauss-Bonnet-AdS black hole {#sec:5} =================================== In this section, we investigate the growth rate of the action in Gauss-Bonnet gravity in five dimensions. Gauss-Bonnet term naturally appears in the low energy effective action of heterotic string theory [@Gross:1986iv; @Zumino:1985dp] and can be derived from eleven-dimensional supergravity limit of M theory [@Antoniadis:1997eg; @Garraffo:2008hu]. We will confirm a reduced contribution of complexification rate in the presence of stringy corrections and propose a method to deal with singularities behind the horizon, both of which are mentioned as open questions in section 8.2.4 and section 8.2.6 of ref.[@Brown:2015lvg]. Gauss-Bonnet black hole and singularities inside horizon -------------------------------------------------------- The whole action of the Gauss-Bonnet gravity is $$\begin{aligned} \label{eq:GB} \mathcal{A}=\frac{1}{16\pi G}\int d^Dx \sqrt{-g}(R-2\Lambda+\alpha R_{GB}^2)+\mathcal{A}_{\partial\mathcal{M}} ,\end{aligned}$$ where $R_{GB}^2=R^2-4R_{\mu\nu}R^{\mu\nu}+R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}$ is the Gauss-Bonnet term. 
The appropriate boundary term was derived in [@Myers:1987yn; @Davis:2002gn] as $$\begin{aligned} \mathcal{A}_{\partial\mathcal{M}}=\frac{1}{8\pi G}\int_{\partial \mathcal{M}}\mathrm{d}^{D-1}x\sqrt{-h} \left( K+2\alpha\left(J-\widehat{G}^{ab}K_{ab}\right)\right),\end{aligned}$$ where $\widehat{G}^{ab}$ is the Einstein tensor related to the induced metric $h_{ab}$ and $J$ is the trace of tensor $ J_{ab}$ defined as $$\begin{aligned} \tensor{J}{_{ab}}= \frac13(2KK_{ac}\tensor{K}{^c_b}+K_{cd}K^{cd}K_{ab}-2K_{ac}K^{cd}K_{db} -K^2K_{ab}).\end{aligned}$$ Using the Gauss-Codazzi equations [@Davis:2002gn], we can get $$\begin{aligned} J-\widehat{G}^{ab}K_{ab}=&-KK_{ab}K^{ab}-\frac13K^3 +\frac34 K_{ac}K^{cd}\tensor{K}{_d^a}+K^2h_{ab}K^{ab}-K_{cp}K^{pc}K^{bd}h_{bd}\nonumber\\ &-2\tensor{R}{^a_{qcp}}\tensor{h}{^c_a}\tensor{h}{^q_b}\tensor{h}{^p_d}K^{bd}+\tensor{R}{^a_{qcp}}\tensor{h}{^c_a}h^{qp}h_{bd}K^{bd}.\end{aligned}$$ The exact vacuum solution follows from [@Cai:2001dz] as $$\begin{aligned} ds^2=-f(r)dt^2 +\frac1{f(r)}dr^2 +r^2h_{ij}dx^idx^j,\end{aligned}$$ where $ h_{ij}dx^idx^j $ represents the line element of $(D-2)$-dimensional hypersurface with constant curvature $(D-2)(D-3)k$ and volume $ \Omega_{D-2}$, and the metric function $f(r)$ can be expressed as $$\begin{aligned} f(r)=k+\frac{r^2}{2\widetilde{\alpha}}\left( 1\pm\sqrt{1+\frac{64\pi G\widetilde{\alpha}M}{(D-2)\Omega_{D-2}r^{D-1}}-\frac{4\widetilde{\alpha}}{L^2}}\right),\end{aligned}$$ where $\widetilde{\alpha}=\alpha(D-3)(D-4)$ and $M$ represents the gravitational mass. Under the limit of $\alpha\rightarrow 0$, one can find that the minus branch solution will become the standard Schwarzschild-AdS solution with $$\begin{aligned} f(r)\rightarrow k-\frac{16\pi GM}{(D-2)\Omega_{D-2}r^{D-3}}+\frac{r^2}{L^2}.\end{aligned}$$ Hence we only consider the case with $k=1$ in five dimensions and the minus branch with expected asymptotical behavior. In order to simplify the calculation, we choose $$\begin{aligned} f(r)=1+\frac{r^2}{4\alpha}\left(1-\sqrt{1+8\alpha\left(\frac{m}{r^4}-\frac{1}{L^2}\right)}\right).\end{aligned}$$ By solving the equation $ f(r_h)=0 $, one can find the event horizon of the black hole is located at $$\begin{aligned} r_h=\sqrt{\frac{-L^2+\sqrt{L^4+4L^2(m-2\alpha)}}{2}}.\end{aligned}$$ Unlike the case of the Schwarzschild-AdS solution, there are not only the singularity located at $ r=0 $ but also a singularity located at $ \widetilde{r}_- $ if the Gauss-Bonnet coupling $\alpha>L^2/8$,[^3] which is the solution of equation $$\begin{aligned} \label{eq:rtilde} \sqrt{1+8\alpha\left(\frac{m}{r^4}-\frac{1}{L^2}\right)}=0.\end{aligned}$$ Due to the presence of singularities $r=0$ or $r=\tilde{r}_-$ behind the event horizon $r=r_h$, the Penrose diagram is generally different from the case of Schwarzschild-AdS black hole. As we show in Appendix \[app:A\], a reasonable result can not be obtained by directly integrating the action in the region with its boundary approaching any of these two singularities. Therefore we will handle the case of singularities by hiding them behind the inner horizon introduced by adding charge into the Gauss-Bonnet-AdS black hole, namely the charged Gauss-Bonnet-AdS black hole, which will be calculated below. The case of the Gauss-Bonnet-AdS black hole should be deduced from the zero charge limit, where the above singularities will always be behind the inner horizon. 
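As a small check of the horizon formula quoted above (the rearrangement is ours), setting $f(r_h)=0$ gives $\sqrt{1+8\alpha(m/r_h^4-1/L^2)}=1+4\alpha/r_h^2$; squaring and simplifying yields $$\begin{aligned} \frac{r_h^4}{L^2}+r_h^2=m-2\alpha,\end{aligned}$$ a quadratic equation in $r_h^2$ whose positive root reproduces the expression for $r_h$ given above. The same relation, with the charge term included, reappears below as the horizon condition of the charged solution.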
Charged Gauss-Bonnet-AdS black hole {#charged-gauss-bonnet-ads-black-hole} ----------------------------------- The charged Gauss-Bonnet-AdS black hole solution reads [@Wiltshire:1985us; @Torii:2005nh], $$\begin{aligned} f(r)=k\pm\frac{r^2}{2\widetilde{\alpha}}\left(1-\sqrt{1+4\widetilde{\alpha}\left(\frac{m}{r^{D-1}}-\frac{1}{L^2}-\frac{q^2}{r^{2D-4}}\right)}\right),\end{aligned}$$ and the potential form is defined by $$\begin{aligned} A_t=-\frac{1}{c}\frac{q}{r^{D-3}},\qquad c=\sqrt{\frac{2(D-3)}{D-2}}.\end{aligned}$$ The parameter $ m $ and $ q $ can be respectively related to the physical mass $M$ and charge $Q$ by $$\begin{aligned} \label{eq:charge} m=\frac{16\pi GM}{(D-2)\Omega_{D-2}},\qquad q^2=\frac{8\pi GQ^2}{(D-2)\Omega_{D-2}}.\end{aligned}$$ In the following calculations we only consider the case with $k=1$ and $ D=5 $. Then the horizons of the solution are determined by the equation $f(r_{\pm})=0$, [^4] namely, $$\begin{aligned} \label{eq:constraint} \frac{r^4_\pm}{L^2}+r^2_\pm+\frac{q^2}{r^2_\pm}=m-2\alpha.\end{aligned}$$ Here we only consider the case of grand canonical ensemble and fix the potential $\mu_{\pm}={Q}/{r^{D-3}_\pm}$. Therefore no boundary term is needed for the Maxwell field. The Penrose diagram of the charged Gauss-Bonnet-AdS black hole is presented in figure \[fig:chargedGB\]. ![The Penrose diagram of the charged Gauss-Bonnet-AdS black hole. The singularities $r=0$ or $r=\widetilde{r}_-$ are presented with wiggly lines. The growth rate of WDW patch at late time approximation comes from the spacetime region that lies outside the inner horizon and inside the outer horizon.[]{data-label="fig:chargedGB"}](chargedGB.png "fig:"){width="80.00000%"}\ As in the case of RN-AdS black hole in [@Brown:2015lvg], the contribution to the growth rate of total action at late time approximation comes from the WDW patch that lies outside the inner horizon and inside the outer horizon. The contribution from the extra matter field reads $$\begin{aligned} \mathcal{A}_{\mathrm{Maxwell}}= -\frac{1}{16\pi G}\int_{\mathcal{M}}\mathrm{d}^D x \sqrt{-g}F_{\mu\nu}F^{\mu\nu}=\frac{1}{8\pi G}\int_{\partial\mathcal{M}}\mathrm{d}^{D-1}S_\nu \sqrt{-h}A_\mu F^{\mu\nu},\end{aligned}$$ where we have used Maxwell equation and Stokes’s theorem. 
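Before evaluating these contributions it is convenient to record the field strength following from the above potential (this intermediate step is ours): $$\begin{aligned} F_{rt}=\partial_rA_t=\frac{D-3}{c}\frac{q}{r^{D-2}},\qquad F^2=F_{\mu\nu}F^{\mu\nu}=-2F_{rt}^2,\end{aligned}$$ which for $D=5$ gives $F_{rt}=2q/(c\,r^3)$; these are the factors that enter the Maxwell growth rate computed next.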
The growth rate of matter action is $$\begin{aligned} \label{eq:Maxwell} \frac{\mathrm{d}\mathcal{A}_{\mathrm{Maxwell}}}{\mathrm{d}t}=\left.\frac{\Omega_3}{8\pi G}r^3(-\frac1c\frac{q}{r^2})(\frac{D-3}{c})\frac{q}{r^3} \right|^{r_+}_{r_-}=-3\left.\frac{\Omega_3}{16\pi G}\frac{q^2}{r^2}\right|^{r_+}_{r_-} =-\left.\frac{Q^2}{2r^2}\right|^{r_+}_{r_-}.\end{aligned}$$ The contribution from the Einstein-Hilbert-Gauss-Bonnet (EHGB) action is $$\begin{aligned} \label{eq:EHGB} \frac{\mathrm{d}\mathcal{A}_{\mathrm{EHGB}}}{\mathrm{d}t}=\frac{\Omega_3}{16\pi G}\left[3(\frac{r^4}{L^2}+r^2-r^2f-\frac13f'r^3)+12\alpha(\frac12f^2-f-rf'+rf'f)\right]^{r_+}_{r_-},\end{aligned}$$ and the contribution from the boundary term is $$\begin{aligned} \label{eq:boundary} \frac{\mathrm{d}\mathcal{A}_{\partial\mathcal{M}}}{\mathrm{d}t}=&\frac{\Omega_3}{8\pi G}\left[r^3(\frac{3}{r}f+\frac{1}{2}f')+2\alpha(-2f^2+3rf'+6f-3rf'f)\frac{}{}\right]^{r_+}_{r_-}.\end{aligned}$$ Combining the above results , one can find that the total action growth rate of the charged Gauss-Bonnet-AdS black hole reads, $$\begin{aligned} \label{eq:GBRNBH} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}=\frac{\Omega_3}{16\pi G}&\left[3(\frac{r^4}{L^2}+r^2+r^2f-\frac{q^2}{r^2})+\alpha(-2f^2+12f)\frac{}{}\right]^{r_+}_{r_-}.\end{aligned}$$ Recall that the boundary is located at $r_\pm$ satisfying $f(r_\pm)=0$, we have a remarkably simple result, $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}&=\frac{\Omega_3}{16\pi G}\left[3(\frac{r^4}{L^2}+r^2-\frac{q^2}{r^2})\right]^{r_+}_{r_-}\nonumber\\ &=\frac{3\Omega_3}{16\pi G}\left[-2\alpha +m -\frac{2q^2}{r^2}\right]^{r_+}_{r_-}\nonumber\\ &=\frac{6\Omega_3}{16\pi G}\left(\frac{q^2}{r^2_-}-\frac{q^2}{r^2_+}\right)\label{eq:GBRNAdSq}\\ &=Q^2\left(\frac{1}{r^2_-}-\frac{1}{r^2_+}\right)\nonumber\\ &=(M-\mu_+Q)-(M-\mu_-Q).\label{eq:GBRNAdS}\end{aligned}$$ where we have used to get the second line and to get the fourth line. The final result shares exactly the same form as the general $D$-dimensional RN-AdS black hole as well as charged BTZ black hole . When $\alpha \to 0$, it naturally goes to the result of the RN-AdS black holes. In the following subsection, we will discuss the limit when $q \to 0$. Neutral limit of charged GB-AdS black hole ------------------------------------------ Now we come back to the case of the Gauss-Bonnet-AdS black hole, which we argued should be deduced from zero charge limit of the charged Gauss-Bonnet-AdS black hole to avoid the encounter with singularities. One can consider the inner horizon in the limit of $q\rightarrow 0$ as the cut-off screen for the spacetime near the singularities. Maybe one can also use other cut-off screen but need a reasonable method to take limit in order to avoid of approaching the singularity located at $\widetilde{r}_-$ if exists. We choose the inner horizon just because it will be easy to deal with from the charged case. 
From the horizon condition one can express $q^2$ in terms of $r_{\pm}$ as $$\begin{aligned} \label{eq:q2} q^2=r_+^2r_-^2\left(1+\frac{r_+^2 +r_-^2}{L^2}\right).\end{aligned}$$ Substituting into , we arrive at $$\begin{aligned} \begin{split} \lim_{q\rightarrow0}\frac{\mathrm{d}\mathcal{A}}{\mathrm{d}t}&=\lim_{r_-\rightarrow0}\frac{6\Omega_3}{16\pi G}\left(1+\frac{r_+^2 +r_-^2 }{L^2}\right)\left( r_+^2 -r_-^2\right)\\ &=\frac{6\Omega_3}{16\pi G}r_+^2\left(1+\frac{r_+^2}{L^2}\right) \\ &=\frac{6\Omega_3}{16\pi G} (m-2\alpha)\\ &=2M-\frac{3\alpha\Omega_3}{4\pi G}, \end{split}\end{aligned}$$ where we have used in the zero charge limit $q \rightarrow 0$ to get the third line. When $\alpha \to 0$, the above result reduces to that of the Schwarzschild-AdS black hole, as expected. Therefore we claim that the growth rate of the action for an uncharged, non-rotating AdS black hole in Gauss-Bonnet gravity is smaller than its Einstein gravity counterpart, namely, $$\begin{aligned} \label{eq:GBAdS} \frac{d\mathcal{A}}{dt} = 2M-\frac{3\alpha \Omega_3}{4\pi G} < 2M,\end{aligned}$$ where we only consider $\alpha>0$, as suggested by string theory. This confirms the speculation, raised as an open question in section 8.2.4 of ref. [@Brown:2015lvg], that stringy corrections should reduce the computation rate of black hole solutions. It seems that the neutral bound can only be saturated in Einstein gravity, and it would be interesting to investigate whether higher order stringy corrections, or correction terms from other gravity theories such as Lovelock gravity [@Lovelock:1971yv], lead to the same conclusion. Conclusions and discussions {#sec:6} =========================== In this paper we have investigated the original action growth rate proposed in the recent papers [@Brown:2015bva; @Brown:2015lvg], which had been checked for various examples of stationary AdS black holes. In the example of the general $D$-dimensional RN-AdS black hole, it is found that the original action growth rate bound is apparently violated even for small charged black holes, in addition to the cases of intermediate and large charged black holes. It is also found that the precise saturation of the original action growth rate bound for the Schwarzschild-AdS and rotating BTZ black holes, along with the overall factor of $2$, is purely a coincidence from the viewpoint of the results presented in this paper . The action growth rate is further tested in the context of the charged BTZ and Kerr-AdS black holes, which are shown explicitly to share exactly the same form as in the cases of the RN-AdS and rotating BTZ black holes. Both action growth rates reduce to for the neutral static case, which is also true for the original action growth rate . Finally, we test the action growth rate in the case of the charged Gauss-Bonnet-AdS black hole and find exactly the same equality as well. Furthermore, we also confirm that in the neutral limit the action growth rate of the black hole is slowed down in the presence of stringy corrections, unless it is charged. We thus conclude that the stationary AdS black hole in Einstein gravity is the fastest computer in nature. Here some remarks are in order on what we did in this paper. Firstly, according to the holographic principle, a complexity bound for a boundary state should be expressed in terms of physical quantities that are well defined in the boundary field theory. 
However, some quantities in (\[eq:ourbound\]) like $\mu_-$ and $\Omega_-$ are defined on the inner horizon of a black hole, and those quantities have no corresponding definitions in the dual field theory. As we stressed in the text, at first glance this is true. On a second look, however, those quantities can all be expressed in terms of black hole parameters such as the mass, charge and angular momentum, according to the no-hair theorem. Therefore, although the presence of quantities defined on the inner horizon looks unphysical at first sight, they can always be expressed as certain combinations of quantities defined on the outer horizon, which is acceptable from the viewpoint of the field theory side. Furthermore, for those black holes with dual CFT descriptions, for example the rotating BTZ black hole, the growth rate of the action can simply be re-expressed as $2\sqrt{T_LS_LT_RS_R}$, where $T_{L,R}$ and $S_{L,R}$ are the temperatures and entropies of the left- and right-moving sectors of the dual 2D CFT. Secondly, the action growth rates , when compared with the original action growth rate , do not require the notion of a ground state, which saves us the argument made in Appendix A of [@Brown:2015lvg]. Nevertheless, if the notion of ground state means a frozen complexity growth, then the “ground state” of our revised version of the action growth rate is nothing but the zero temperature state, namely the extremal black hole with the inner and outer horizons coinciding. Therefore, one can rewrite the action growth rate , for example, as $(M-\Omega J)|_-^+-[(M-\Omega J)|_-^+]_{\mathrm{gs=extremal}}$, which will reduce to the original result for the rotating BTZ black hole by noting that $(M-\Omega_-J)=-(M-\Omega_+J)$. For an extremal black hole, following our calculations, the action growth rate goes to zero. This is an expected result, since the complexification rate must vanish for a ground state. Thirdly, both the original action growth rate and our results are conjectures that presuppose the Complexity-Action duality (the complexity of a boundary state is dual to the action of the corresponding Wheeler-DeWitt patch in the bulk). Without further progress on how to define the complexity of a boundary state from the field theory side alone, one can only test this conjecture by computing the growth rate of its bulk dual. In this work, we just follow the logic of refs. [@Brown:2015bva; @Brown:2015lvg] and calculate some exact results for the action growth rate of the WDW patch in the late-time approximation for some AdS black holes. It is worth noting that we by no means claim to have found any new non-trivial bound for the complexity growth beyond the work done in [@Brown:2015bva; @Brown:2015lvg]. In such calculations some subtleties arise, as in [@Brown:2015lvg]. One of them is the contribution from the singularity, which has been stressed in [@Brown:2015lvg] and in this work. The other two concern the inner horizon of the black hole and the contribution from the part of the WDW patch behind the past horizon. Fourthly, the presence of the inner horizon is intriguing, since the whole point of the growth of complexity is its duality to the growth of the black hole interior, which takes the concrete form of the WDW patch serving as the spacetime region dual to the computational complexity of the boundary CFT state. 
As argued in section 3.2 of [@Brown:2015lvg], before taking the late-time limit the entire WDW patch lies outside the inner horizon, which means the action is not sensitive to quantum instabilities of the inner horizon so long as the horizon remains null. The classical instability of the inner horizon is not considered here, just as in [@Brown:2015lvg], since both works rely on the assumption that the black hole interior of the static solution can be trusted insofar as the complexity is concerned, which certainly calls for further investigation. Usually the inner horizon will turn into a curvature singularity when the black hole is perturbed or some matter is added to the theory under consideration. In that case, one has to re-calculate the action growth rate, since the Penrose diagrams of those black holes are totally different from the one of, e.g., the RN-AdS black hole. Fifthly, the contribution to the action growth rate from the corner term of the WDW patch inside the past horizon is negligible as far as the late-time approximation is concerned, as argued in [@Brown:2015lvg]. We expect that GB gravity makes no difference on this point. However, a systematic investigation of how to regulate the action growth rate in that patch within different gravity theories is called for in the future. Finally, in the calculation of the action for Gauss-Bonnet gravity, we found that the results differ depending on whether one takes the contribution from the singularity at $r=0$ or from the one at $r=\tilde{r}_-$. To avoid such an ambiguity, we add a Maxwell field to the theory; in that case an inner horizon appears and both singularities are hidden behind it. The result for the Gauss-Bonnet black hole is then obtained by taking the limit of vanishing charge. A natural question thus arises: what is the guiding principle for dealing with the singularity when the action growth rate within the WDW patch in the late-time approximation is concerned? It is worth noting that the action growth rate of the Schwarzschild-AdS black hole saturates the neutral static case , which is the consensus of both the original proposal and our revised version. We argue that whether or not the neutral static action growth rate reduces to $2M$ at leading order in the gravitational correction is our guiding principle when dealing with the singularity. In Einstein gravity, the neutral limit of the action growth rate for the RN-AdS and charged BTZ black holes naturally reduces to the neutral static case , and the non-rotating limit of the Kerr-AdS and rotating BTZ black holes also reduces to the neutral static case . Therefore, when dealing with the singularity within Einstein gravity, one can either calculate the neutral static case directly, or first shield the singularity with some cutoff screen, conveniently chosen as the inner horizon generated by adding charge or angular momentum to the neutral static black hole, and then take the neutral, non-rotating limit. However, this is not the case for gravity theories other than Einstein gravity, for example Gauss-Bonnet gravity. The direct calculation of the action growth rate for the neutral GB-AdS black hole in Appendix A does not reduce to the neutral static case at leading order in the GB coupling. 
Therefore, the reasonable approach to deal with the singularity in GB gravity is to first screen the singularity with the inner horizon of the charged GB-AdS black hole, and then take the neutral limit, because this gives the neutral static result at leading order in the GB coupling. An alternative approach of computing the action growth rate perturbatively might not work out, since the action growth rate is calculated on-shell, which requires the full solution of the Gauss-Bonnet equations of motion. The Gauss-Bonnet-AdS black hole {#app:A} =============================== In this appendix, we present the direct calculation of the growth rate of the action for the Gauss-Bonnet-AdS black hole instead of taking the zero charge limit of the charged Gauss-Bonnet-AdS black hole. Unlike the case of the charged Gauss-Bonnet-AdS black hole , the growth rate of the action for the Gauss-Bonnet-AdS black hole obtained in this way does not reduce to the Schwarzschild-AdS result in the limit of zero Gauss-Bonnet coupling $\alpha \rightarrow 0$. The Penrose diagram of the neutral Gauss-Bonnet-AdS black hole is presented in figure \[fig:neutralGB\]. ![The Penrose diagram of the neutral Gauss-Bonnet-AdS black hole. The WDW patch can end on either of the two singularities, $r=0$ or $r=\widetilde{r}_-$.[]{data-label="fig:neutralGB"}](neutralGB.png "fig:"){width="40.00000%"}\ Singularity located at $r=0$ ---------------------------- As mentioned, in the GB-AdS black hole there are two singularities. Let us first consider the singularity $r=0$ as the inner boundary. In this case, using with the range of evaluation replaced by $(0,r_h)$, we can easily get $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}_0}{\mathrm{d}t}&=\frac{\Omega_3}{16\pi G}\left[3\left(\frac{r^4_h}{L^2}+r_h^2\right)-\alpha(12f(0)-2f^2(0))\right]\nonumber\\ &=\frac{\Omega_3}{16\pi G}\left[3(m-2\alpha)+(8\sqrt{\frac{m\alpha}{2}}-10\alpha+m)\right]\nonumber\\ &=\frac{\Omega_3}{16\pi G}\left(4m-16\alpha +4\sqrt{2m\alpha}\right),\end{aligned}$$ where we have used $$\begin{aligned} f(0)=\lim_{r\rightarrow 0}f(r) = 1-\sqrt{\frac{m}{2\alpha}}.\end{aligned}$$ Finally, we write down the growth rate of the action for the Gauss-Bonnet-AdS black hole as $$\frac{\mathrm{d}\mathcal{A}_0}{\mathrm{d}t}=\frac43 M +\sqrt{\frac{2M\alpha \Omega_3}{3\pi G}}-\frac{\alpha\Omega_3}{\pi G}.$$ We see that the above result does not reduce to the Schwarzschild-AdS case in the limit $\alpha \rightarrow 0$. This indicates that the above calculation is not trustworthy. 
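For transparency, the limiting value $f(0)$ used above follows from a simple expansion (ours): as $r\rightarrow0$ the term $8\alpha m/r^4$ dominates inside the square root, so $$\begin{aligned} f(r)\simeq1+\frac{r^2}{4\alpha}\left(1-\frac{\sqrt{8\alpha m}}{r^2}\right)=1+\frac{r^2}{4\alpha}-\sqrt{\frac{m}{2\alpha}}\longrightarrow1-\sqrt{\frac{m}{2\alpha}},\end{aligned}$$ since $\sqrt{8\alpha m}/(4\alpha)=\sqrt{m/(2\alpha)}$.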
Singularity located at $\widetilde{r}_-$ ---------------------------------------- Taking the singularity $\widetilde{r}_-$ as the inner boundary, one can solve and find that $$\begin{aligned} \widetilde{r}_-^2=\sqrt{\frac{8\alpha L^2 m}{(8\alpha-L^2)}} ,\quad f(\widetilde{r}_-)=1+\sqrt{\frac{mL^2}{2\alpha(8\alpha-L^2)}}.\end{aligned}$$ Similarly we only need to replace the range of evaluation in with $(\widetilde{r}_-,r_+)$ and get $$\begin{aligned} \frac{\mathrm{d}\mathcal{A}_{\widetilde{r}_-}}{\mathrm{d}t}&=\frac{\Omega_3}{16\pi G}\left[\frac{}{}-\alpha\left(-2f^2(\widetilde{r}_-)+12f(\widetilde{r}_-)\right)+3\left(\frac{r_h^4}{L^2}+r_h^2\right)-3\left(\frac{\widetilde{r}_-^4}{L^2}+\widetilde{r}_-^2+\widetilde{r}_-^2\left(1+\frac{\widetilde{r}_-^2}{4\alpha}\right) \right)\right]\nonumber\\ &=\frac{\Omega_3}{16\pi G}\left[3m-16\alpha-8\sqrt{\frac{8\alpha L^2 m}{(8\alpha-L^2)}}-\frac{m}{8\alpha-L^2}(5\alpha^2+24\alpha)\right].\end{aligned}$$ In this case, the condition $8\alpha>L^2$ for the presence of singularity $\widetilde{r}_-$ simply prevents us from taking the limit $\alpha\rightarrow0$ [^5]. But if naively takes the limit of $\alpha \to 0$, one can see that the above result also cannot return back to the case of Schwarzschild-AdS black hole, which indicates that the above approach is problematic. This work was supported in part by the National Natural Science Foundation of China under Grants No.11375247 and No.11435006. [^1]: Therefore we will use “complexity bound” to infer both “growth rate of quantum complexity for dual holographic state” and “growth rate of action within WDW patch at late time approximation” interchangeably. [^2]: Although we don’t analyze the spacetime structure for the WDW patch because it will be very similar to the case in the paper[@Brown:2015lvg], it is actually very important to get the reasonable contribution to the growth region of WDW patch by careful and complicated cancellation of corners and surface regions. [^3]: As noted in [@Cai:2001dz], in order to have a well-defined vacuum solution with $m=0$, the Gauss-Bonnet coupling $\alpha$ should satisfy $\alpha \le L^2/8$. In that case the singularity at $\tilde{r}_-$ does no longer appear. However, for our aim here, we relax this condition and consider the case with an additional singularity at $\tilde {r}_-$. [^4]: We only consider the case which allows the equation to have two positive roots and share the similar Penrose diagram to the one in RN-AdS spacetime. In [@Torii:2005nh] there are general discussions about the parameters, solutions and corresponding spacetime structures for the Gauss-Bonnet gravity with electric charge. [^5]: We thank Ran Li for pointing this to us.
--- abstract: 'Photon-photon scattering of gamma-rays on the cosmic microwave background has been studied using the low energy approximation of the total cross section by @1989ApJ...344..551Z [@1990ApJ...349..415S]. Here, the cosmic horizon due to photon-photon scattering is accurately determined using the exact cross section and we find that photon-photon scattering dominates over the pair production at energies smaller than 1.68 GeV and at redshifts larger than 180.' author: - 'G. V. Vereshchagin' title: 'Cosmic horizon for GeV sources and photon-photon scattering' --- Introduction ============ Photon-photon scattering $\gamma_{1}\gamma_{2}\longrightarrow\gamma _{1}^{\prime}\gamma_{2}^{\prime}$ is a nonlinear electrodynamical process, allowed by quantum electrodynamics, but much less well known compared to pair production from two photons, $\gamma_{1}\gamma_{2}\longrightarrow e^{+}e^{-}$. The latter has not yet been directly observed, but has well known astrophysical implications . The total cross section of photon-photon scattering in the low energy approximation can be found in most textbooks on the topic, see e.g. [@1982els..book.....B]. The exact cross section for arbitrary energies has been determined numerically, see e.g. [@1951PhRv...83..776K; @1965NCim...35.1182D]. Photon-photon scattering involving cosmic microwave background (CMB) photons has been considered in the cosmological context in @1989ApJ...344..551Z [@1990ApJ...349..415S]. Using the low energy approximation these authors obtained analytical expressions for the cosmic horizon, i.e., the redshift as a function of particle energy found by equating the optical depth to unity. In the limit of large redshifts in the Einstein-de-Sitter universe they found $$z=5.002\times10^{3}T_{2.7}^{-4/5}h_{50}^{2/15}\varepsilon_{obs}^{-2/5}% ,\label{EHslope}%$$ where the dimensionless observed energy $\varepsilon_{obs}=E_{obs}/(m_{e}c^{2})$ of the gamma-ray photon is expressed in terms of the electron rest mass energy $m_{e}c^{2}$, the temperature $T$ of the cosmic microwave background is normalized to $2.7$ K, and the Hubble parameter is $H_{0}=50 h_{50}$ $km/s/Mpc$. It was recognized that the slope of the relation (\[EHslope\]) differs slightly from the slope of the Fazio-Stecker relation [@1970Natur.226..135F], corrected by [@1989ApJ...344..551Z] to read $$z=8.84\times10^{3}\varepsilon_{obs}^{-0.478}.\label{BWslope}%$$ The horizon relations for pair production from two photons $\gamma_{1}\gamma _{2}\longrightarrow e^{+}e^{-}$ and for the photon-photon scattering $\gamma_{1}\gamma_{2}\longrightarrow\gamma_{1}^{\prime}\gamma_{2}^{\prime}$ were determined, and found to have a crossing point at the approximate redshift $z_{cr}\simeq3\times10^{2}$. The authors concluded that photon-photon scattering dominates over pair production at larger redshifts. In this paper we revisit the derivation of the cosmic horizon relation for photon-photon scattering on the CMB photon background by considering the exact cross section found by [@1998PhRvD..57.2443D], instead of the approximate one valid only in the low energy limit. One might argue that the difference between the exact cross section and its low energy approximation would be small even near the pair production threshold, but in fact the ratio between the exact and approximate cross sections at the threshold is $7.26$. 
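As a rough consistency estimate (ours, using Eqs. (\[EHslope\]) and (\[BWslope\]) with $T_{2.7}=h_{50}=1$), equating the two relations gives $\varepsilon_{obs}^{0.078}\simeq8.84/5.002\simeq1.77$, i.e. $\varepsilon_{obs}\simeq1.5\times10^{3}$, corresponding to an observed energy of roughly $0.8$ GeV, and hence $z_{cr}\simeq2.7\times10^{2}$, consistent with the crossing redshift $z_{cr}\simeq3\times10^{2}$ quoted above.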
We emphasize that due to the very similar slopes of the two functions (\[EHslope\]) and (\[BWslope\]), even a small change in the cross section results in a significant shift of the crossing point $z_{cr}$. In this paper it is shown that the above mentioned crossing point is located at a lower redshift than previously determined, namely $z_{cr}\simeq180$, and the corresponding photon energy is $1.68$ GeV. These new results are essential for photon propagation from sources located at very high redshifts, above 100. Specifically, such photons are present in models involving exotic particles, which decay into photons in a high redshift universe, see e.g. [@2006MNRAS.369.1719M; @2017JCAP...03..043P] and references therein. Exact cross section for photon-photon scattering ================================================ The approximate cross section for photon-photon scattering in the low energy approximation is given by$$\sigma=\frac{7\times139}{3^{4}5^{3}\pi}\alpha^{4}r_{0}^{2}\varepsilon_{CM}% ^{6}, \label{sigmaEHan}%$$ where $\alpha$ is the fine structure constant, $r_0$ is classical electron radius, $\varepsilon_{CM}=\sqrt{\varepsilon_{1}\varepsilon_{2}\left( 1-\cos\vartheta\right) /2}=\sqrt{x\left( 1-\cos\vartheta\right) /2}$ is the center-of-momentum energy, $\varepsilon_{1}=h\nu_1/m_e c^2$ and $\varepsilon_{2}=h\nu_2/m_e c^2$ are, respectively, the dimensionless energies of the high energy photon and the CMB photon, $h$ is Planck’s constant, $\nu$ is the photon frequency, $m_e$ the electron mass and $c$ the speed of light. The cosmic horizon is obtained taking into account both the cosmic evolution of the CMB and the cosmological redshift of the high energy photon as follows. First, the cross section is averaged over all angles and integrated over the photon energy using the isotropic distribution function for the CMB photons. Then the result is integrated over distance (redshift) to obtain the optical depth as a function of the redshift of the source and the energy of the observed photon. Equating this optical depth to unity results in the relation between the redshift of the source and the observed energy. Equation (\[EHslope\]) was obtained precisely in this way. Instead of using the approximate cross section (\[sigmaEHan\]), we take the exact cross section represented by the dotted curve in Fig. \[sigmaEHfig\]. [EHcrosssection]{} It is important to emphasize that the exact form of the cross section near the threshold for pair production at $x\equiv\varepsilon_{1}\varepsilon_{2}=1$ is crucial. The solid curve in Fig. \[sigmaEHfig\] represents the angle averaged cross section. This function is integrated further with the photon distribution function and it is the value near its peak which determines the dependence of the optical depth on the particle energy and distance. It is clear that averaging over angles makes the cross section smoother and shifts the peak to higher values of the variable $x$. The optical depth and the cosmic horizon for photon-photon scattering ===================================================================== The computation of the optical depth is straightforward, for details see e.g. . 
The optical depth is given by$$\begin{aligned} \tau & =4\pi\frac{c}{H_{0}}\left( \frac{h}{m_{e}c}\right) ^{-3}\left( \frac{kT_{0}}{m_{e}c^{2}}\right) ^{3}\left(\frac{1}{y_0}\right)^3\int_{0}^{z}\frac{dz^{\prime}}{\left( 1+z^{\prime}\right) ^{4}H\left( z^{\prime}\right) }\times\label{tau}\\ & \int_{0}^{\infty}\frac{x^{2}dx}{\exp\left( x/y\right) -1}\int_{0}^{\pi }\sigma\left( x,y,z^{\prime},\vartheta\right) \left( 1-\cos\vartheta \right) \sin\vartheta d\vartheta,\nonumber\end{aligned}$$ where the variables $x=\varepsilon_{1}\varepsilon_{2}$ and $y=\varepsilon_{2}kT/(m_{e}c^{2})$ depend on the redshift, the index “0" refers to the observed photon at redshift $z=0$, $$H(z)=[\Omega_{r}(1+z)^{4}+\Omega_{M}(1+z)^{3}+\Omega_{\Lambda}]^{1/2}% ,\label{free}%$$ and $\Omega_{r}=8.4\times10^{-5}$, $\Omega_{M}=0.3089$ and $\Omega_{\Lambda}=0.6911$ are the present normalized densities of radiation, matter and dark energy, respectively, while $H_0=67.7$ km/s/Mpc. We compute the integral (\[tau\]) numerically, using the latest cosmological parameters given by the . The result for optical depth $\tau=1$ is shown in Fig. \[zefig\] by dashed curve as a function of the energy $E=h\nu_1$ of the high energy photon observed today on Earth. Also shown is the cosmic horizon for the pair production from two photons, computed in . [zerelation]{} The high redshift (low energy) asymptotes for cosmic horizons shown in Fig. \[zefig\] are power laws given approximately by $$z=0.786\left(\frac{E}{E_{BW}}\right)^{-0.405}, \label{zeEH}%$$ for photon-photon scattering and by$$z=0.257\left(\frac{E}{E_{BW}}\right)^{-0.488}, \label{zeBW}%$$ for pair production from two photons, respectively, where $E_{BW}=(m_{e}c^{2})^{2}/kT_{0}\simeq1.11\times10^{15}$ eV. The photon-photon scattering starts to dominate over pair production at energies smaller than $E_{cr}=1.68$ GeV and redshifts larger than $z_{cr}\simeq180$. It is important to underline that, unlike pair production by two photons, photon-photon scattering of a high energy photon on low energy background is a process that “splits" the high energy photon into two photons, each of which carries away on average half of the initial energy. This makes the mean free path shown in Fig. \[zefig\] also equivalent to the energy loss distance. For the sake of comparison also the result corresponding to $\tau=5$ is shown by dash-dotted curve. In Fig. \[zefig\] the cosmic horizon due to extragalactic background light (EBL) is represented as the dotted curve. It is clear that the dominance of the photon-photon scattering occurs at energies lower than those relevant for the absorption by the EBL, and at much larger redshifts. Conclusions =========== The photon-photon scattering at cosmological distances is revisited using the recently obtained exact cross section rather than the low energy approximation adopted in previous work. Since the exact cross section near the pair production threshold is larger than the approximate one obtained in the low energy limit, the dominance of the photon-photon scattering over pair production by two photons occurs at smaller redshifts than previously thought, namely redshifts larger than $z_{cr}\simeq180$. This corresponds to energies smaller than $E_{cr}=1.68$ GeV. These results are relevant for high energy photons produced during the Dark Ages which follows the decoupling of matter and radiation, e.g. by photons resulting from the decay of unstable particles. 
[**Acknowledgements.**]{} I would like to thank the anonymous referee for his/her remarks and suggestions which improved the presentation of this paper. [99]{} Berestetskii V. B., Lifshitz E. M., Pitaevskii L. P., 1982, Quantum Electrodynamics. Elsevier De Tollis B., 1965, Il Nuovo Cimento, 35, 1182 Dicus D. A., Kao C., Repko W. W., 1998, Phys. Rev. D, 57, 2443 Fazio G. G., Stecker F. W., 1970, Nature, 226, 135 Franceschini A., Rodighiero G., Vaccari M., 2008, A&A, 487, 837 Inoue Y., Inoue S., Kobayashi M. A. R., Makiya R., Niino Y., Totani T., 2013, ApJ, 768, 197 Karplus R., Neuman M., 1951, Physical Review, 83, 776 Mapelli M., Ferrara A., Pierpaoli E., 2006, MNRAS, 369, 1719 Planck Collaboration: Ade P. A. R., Aghanim N., Arnaud M., Ashdown M., Aumont J., Baccigalupi C., Banday A. J., Barreiro R. B., Bartlett J. G., et al., 2016, A&A, 594, A13 Poulin V., Lesgourgues J., Serpico P. D., 2017, JCAP, 3, 043 Ruffini R., Vereshchagin G., Xue S.-S., 2010, Phys. Rep., 487, 1 Ruffini R., Vereshchagin G. V., Xue S.-S., 2016, Ap&SS, 361, 82 Stecker F. W., de Jager O. C., Salamon M. H., 1992, ApJ, 390, L49 Svensson R., Zdziarski A., 1990, ApJ, 349, 415 Zdziarski A. A., Svensson R., 1989, ApJ, 344, 551
--- author: - 'C.-C. Ngeow, R. Szab[ó]{}, L. Szabados, A. Henden, M.A.T. Groenewegen & the [*Kepler*]{} Cepheid Working Group' title: 'Ground-Based [$BVRI$]{} Follow-Up Observations of the Cepheid V1154 Cyg in [*Kepler’s*]{} Field' --- Introduction {#sec:1} ============ V1154 Cyg ($P=4.925454$ days) is a known Cepheid located within [*Kepler’s*]{} field-of-view. Analysis of this Cepheid using [*Kepler’s*]{} light curves has been published in [@sza11] (hereafter S11). Some of the ground based follow-up observations (including optical and spectroscopic observations) can be found in [@mol07; @mol09] and S11. The aim of this work is to provide further details of the $BVRI$ follow-up for V1154 Cyg, to supplement S11. Details of observations and data reduction are given in S11. Results: Basic Properties of V1154 Cyg {#sec:1} ====================================== $BVRI$ light curve properties and radial velocity measurements from spectroscopic observation suggested V1154 Cyg is a [*bona fide*]{} fundamental mode Cepheid. Figure \[fig\_compare\] compares our light curves to the light curves presented in [@ber08]. Table \[tab\] summarizes the $BVRI$ intensity mean magnitudes and amplitudes (from Fourier fit to the light curves) based on S11 light curves. Baade-Wesselink (BW) surface brightness method was used to derive the distance and mean radius of V1154 Cyg, by combining the published radial velocity (RV) data and available light curves (details of the method can be found in [@gro08]). Figure \[fig\_bw\] presents the results of BW analysis. The distance and radius of V1154 Cyg are: $D = 1202\pm72\pm68$pc, $R/R_{\mathrm{sun}} = 23.5\pm1.4\pm1.3$, the first error is the formal fitting error, the second error is based on a Monte Carlo simulation taking into account the error in the photometry, RV data, $E(B-V)$ and the p-factor ($p = 1.255$ is adopted with a 5% error). ![Comparison of the $BVRI$ light curves from S11 (open circles) and Berdnikov (2008, crosses). Note that a correction of $+0.054$ mag needs to be added to S11 $B$ band data to bring into agreement between the two light curves.[]{data-label="fig_compare"}](ngeowP_Fig1.eps) ![Results of the BW analysis for V1154 Cyg, including the fitting of $V$ band light curve, $(V-R)$ color curve, RV curve and angular diameters (from left to right).[]{data-label="fig_bw"}](ngeowP_Fig2a.eps "fig:") ![Results of the BW analysis for V1154 Cyg, including the fitting of $V$ band light curve, $(V-R)$ color curve, RV curve and angular diameters (from left to right).[]{data-label="fig_bw"}](ngeowP_Fig2b.eps "fig:") ![Results of the BW analysis for V1154 Cyg, including the fitting of $V$ band light curve, $(V-R)$ color curve, RV curve and angular diameters (from left to right).[]{data-label="fig_bw"}](ngeowP_Fig2c.eps "fig:") ![Results of the BW analysis for V1154 Cyg, including the fitting of $V$ band light curve, $(V-R)$ color curve, RV curve and angular diameters (from left to right).[]{data-label="fig_bw"}](ngeowP_Fig2d.eps "fig:") [p[4.5cm]{}p[1.5cm]{}p[1.5cm]{}p[1.5cm]{}p[1.5cm]{}]{} Band: & $B$ & $V$ & $R$ & $I$\ Intensity mean magnitude & 10.107 & 9.185 & 8.659 & 8.168\ Amplitude & 0.547 & 0.390 & 0.314 & 0.250\ CCN thanks the funding from National Science Council (of Taiwan) under the contract NSC 98-2112-M-008-013-MY3. The research leading to these results has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 269194 (IRSES/ASK). 
This project has been supported by the ‘Lendület’ program of the Hungarian Academy of Sciences and the Hungarian OTKA grants K83790 and MB08C 81013. R.Sz. was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. [5]{} Szab[ó]{}, R., Szabados, L., Ngeow, C.-C., et al. 2011, MNRAS, 413, 2709 Molenda-Żakowicz, J., Frasca, A., Latham, D. W. & Jerzykiewicz, M. 2007, Acta Astronomica, 57, 301 Molenda-Żakowicz, J., Jerzykiewicz, M. & Frasca, A. 2009, Acta Astronomica, 59, 213 Berdnikov, L. N. 2008, VizieR On-line Data Catalog: II/285 Groenewegen, M. A. T. 2008, A & A, 488, 25
--- abstract: | Determining the primary site of origin for metastatic tumors is one of the open problems in cancer care because the efficacy of treatment often depends on the cancer tissue of origin. Classification methods that can leverage tumor genomic data and predict the site of origin are therefore of great value. Because tumor DNA point mutation data is very sparse, only limited accuracy (64.5% for 12 tumor classes) was previously demonstrated by methods that rely on point mutations as features [@DeepGene]. Tumor classification accuracy can be greatly improved (to over 90% for 33 classes) by relying on gene expression data [@RNAClass]. However, this additional data is often not readily available in clinical setting, because point mutations are better profiled and targeted by clinical mutational profiling. Here we sought to develop an accurate deep transfer learning and fine-tuning method for tumor sub-type classification, where predicted class is indicative of the primary site of origin. Our method significantly outperforms the state-of-the-art for tumor classification using DNA point mutations, reducing the error by more than 30% at the same time discriminating over many more classes on The Cancer Genome Atlas (TCGA) dataset. Using our method, we achieve state-of-the-art tumor type classification accuracy of 78.3% for 29 tumor classes relying on DNA point mutations in the tumor only. author: - | Alena  Harley\ Human Longevity Inc.\ Mountain View, CA 94305\ `[email protected]`\ title: 'Deep Discriminative Fine-Tuning for Cancer Type Classification' --- Introduction ============ Approximately 15% of cancers metastasize, [*i.e.*]{} cancer cells break away from where they are first formed (the primary site or tissue of origin) and travel through the blood or lymph system to form new metastatic tumor. Metastatic tumors require further testing to determine the primary site, since the efficacy of cancer treatment is often dependant on the primary site of origin. Some metastatic tumors (4%) are never fully diagnosed and remain cancer of unknown primary origin. Patients with cancer of unknown primary origin typically have poor survival. Hence, accurate methods that infer the tissue of origin are of great interest. These methods are also important in the context of blood or urine screening (*i.e.* liquid biopsy) for early detection of cancer. The detection and sequencing of cell-free circulating tumor DNA, as well as circulating tumor cells, has recently been successfully implemented in clinical setting for several cancer types. Once tumor mutations are found in these fluids, methods that can immediately determine the location of the tumor site enable quicker diagnosis and treatment. Cancer classification using point mutations in tumors is challenging, mainly because the data is very sparse. Many tumours have only a handful of mutations in coding regions, and many mutations are unique, resulting in a long tail of ’private mutations’. It has been previously demonstrated that classifiers that rely on the point mutations in a tumor achieve limited accuracy, particularly 64.5% on 12 tumor classes [@DeepGene]. More accurate methods for cancer sub-type classification have been developed but they rely on gene expression data that is often not readily available. The accuracy achieved in this setting is over 90% on 33 tumor classes [@RNAClass]. 
Accurate computational methods that can predict tumor class from DNA point mutations alone without relying on additional gene expression data which is not readily available are of great interest. Here, we present state-of-the-art deep transfer learning and fine-tuning classification method for tumor sub-type indicative of the primary site of origin. Our method does not require gene expression data and relies on availability of DNA point mutations only. Methodology =========== We used The Cancer Genome Atlas (TCGA) cancer genomic dataset [@TCGA] both for training and testing. Details for the following steps are provided below: 1) the data set and its pre-processing, 2) creation of the gene embedding matrix and encoding tumor samples as images, 3) transfer learning and fine-tuning protocol used for training. Dataset and its pre-processing ------------------------------ The Cancer Genome Atlas (TCGA) cancer genomic dataset includes 9,642 tumor-normal exome pairs across 33 different cancer sub-types [@TCGA]. We downloaded Mutation Annotation Format (MAF) files from the Genomic Data Commons website (accessed May, 2018) [@GDC]. The colon and rectal cancer cohort (COADREAD), glioblastoma multiforme and lower grade glioma (GBMLGG) cohort, as well as stomach and esophageal carcinoma cohort (STES) were each treated as single cohort instead of splitting them into sub-cohorts since these are often analyzed together in TCGA studies, thus resulting in 29 cancer sub-types. We removed silent mutations, resulting in a total of 1.3 million non-silent mutations spread across 18,222 genes. The dataset was split – 80% of samples within each of the 29 tumor types were used for training and 20% were used for testing. Using training set only we ran MutSigCV [@MutSigCV] to identify significantly mutated genes among the non-silent mutations that were detected in each training set for each tumor type. This let us extract important features of the very sparse dataset. MutSigCV detects genes with higher mutation occurrences than what is expected by chance, taking into account the covariates that include a given gene’s base composition, its length, and the background mutation rate. We were left with 1,348 unique significantly mutated genes by setting cut-off to false discovery rate $q<$0.02. To learn biologically relevant embedding of the data, we trained Gene2Vec embedding. We utilized database of all known pathways – MSigDB [@MSigDB] version 6.2, containing 17,810 pathways. In the spirit of Word2Vec [@Mikolov2013], we mapped pathway-similar genes to nearby points. Here we assumed that genes that appear in the same pathway contexts share biological function. In our implementation we used standard Skip-Gram model when defining Gene2Vec. Gene pairs (33 million) were constructed from the pathway data, and we tried to predict each context gene from its target gene. We used Glorot weight initialization [@Glorot], and optimized the Noise Contrastive Estimation (NCE) loss function [@Guntmann] (see equation \[eq1\]) using Adam [@Kingma2014]. $$J_N(\Theta)=\frac{1}{2*N}\sum_n{ln[h(x_n;\Theta)] + ln[1-h(y_n;\Theta)]} \label{eq1}$$ Here, $\Theta$ is the set of parameters we optimized, $X= (x_1,...,x_N)$ is the observed data, $Y=(y_1,...,y_N)$ is an artificially generated set of noise, and $h$ denotes the logistic function. We set batch size to 256, number of negative samples to 128, embedding size to 1,348 to match the number of significantly mutated genes, since the goal later is to produce square embeddings of tumor samples. 
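To make the embedding-training setup concrete, the following is a minimal sketch (our illustration, not the authors' code) of a skip-gram gene embedding trained with a noise-contrastive, negative-sampling objective; the class name `Gene2Vec`, the `pair_loader`, and the negative-sampling form of the loss are our assumptions, whereas the reported setup used the NCE loss of Eq. (\[eq1\]) with Glorot initialization, Adam, batch size 256, 128 negative samples and embedding size 1,348.

```python
# Illustrative sketch (not the authors' code): skip-gram gene embedding trained with a
# noise-contrastive, negative-sampling objective on (target, context) gene pairs
# derived from pathway co-membership.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Gene2Vec(nn.Module):
    def __init__(self, n_genes, dim=1348, n_neg=128):
        super().__init__()
        self.in_emb = nn.Embedding(n_genes, dim)   # target-gene vectors
        self.out_emb = nn.Embedding(n_genes, dim)  # context-gene vectors
        nn.init.xavier_uniform_(self.in_emb.weight)   # Glorot initialization
        nn.init.xavier_uniform_(self.out_emb.weight)
        self.n_neg = n_neg
        self.n_genes = n_genes

    def forward(self, target, context):
        # target, context: LongTensors of shape (batch,)
        v = self.in_emb(target)                              # (B, dim)
        pos = (v * self.out_emb(context)).sum(-1)            # positive logits
        neg_idx = torch.randint(self.n_genes, (target.size(0), self.n_neg),
                                device=target.device)        # noise genes
        neg = torch.bmm(self.out_emb(neg_idx), v.unsqueeze(-1)).squeeze(-1)
        # Binary logistic loss: observed pairs -> 1, noise pairs -> 0
        return F.binary_cross_entropy_with_logits(pos, torch.ones_like(pos)) + \
               F.binary_cross_entropy_with_logits(neg, torch.zeros_like(neg))

model = Gene2Vec(n_genes=18222)
opt = torch.optim.Adam(model.parameters())
# for target, context in pair_loader:   # batches of 256 gene pairs
#     opt.zero_grad(); model(target, context).backward(); opt.step()
```

After training, the rows of `in_emb.weight` would serve as the gene vectors that are later queried for nearest neighbours and spectrally clustered.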
Transformation of mutation data into images ------------------------------------------- We then extracted learned Gene2Vec embedding for 1,348 significantly mutated genes in our training set, producing a square matrix. TCGA data set is relatively small, so in order to use deep transfer learning methods (trained on images) we used a spectral clustering algorithm [@Stella2003] to create visual structure in the embedding matrix. Spectral clustering is a technique for putting $N$ data points in an $I$-dimensional space into several clusters. Training and test samples were then encoded using spectrally clustered gene embedding using non-silent mutations in the set of significantly mutated genes. Other mutation types were ignored. If a sample did not contain mutations in any of the $1,348$ significantly mutated genes, we queried the embedding to return $10$ closest genes and if any of them were mutated in the sample, the embedding for the closest gene was copied in that row of the matrix. This was done to make our encoding more versatile and to address samples that contain no or very few non-silent mutations in the set of $1,348$ selected genes. The encoded image for significantly mutated genes was replicated in red and blue channels of the final image. The green channel was used to encode both significantly mutated genes and the closest gene embedding if a significantly mutated gene was not altered. An example of an embedding for stomach cancer sample is given in Figure \[fig1\]. ![Example of an embedding for stomach cancer sample.[]{data-label="fig1"}](stes) Transfer learning and fine-tuning training protocol --------------------------------------------------- Unfortunately, in cancer genomic application domains, training data is scarce, and approaches such as data augmentation are not applicable. Thus, we used transfer representation learning, which can remedy the insufficient training data issue. We pre-train ResNet 34 [@ResNet] on ImageNet data set [@ImageNet]. And then used the pre-trained weights as an initialization for the target task of tumor classification utilizing our tumor image embeddings. Images were re-scaled to 512x512 and normalized to match mean and standard deviations of ImageNet images, batch size was set to 32. During the first stage of fine tuning all but last custom fully connected layer of the ResNet 34 were frozen [@ULMfit]. The learning rate was chosen to be 0.01 using learning rate finder, see [@CyclicLR] and its implementation in [@FastAI]. The slanted triangular learning rates training schedule [@ULMfit] was used for 10 cycles, and both training and validation loss were still decreasing [@ULMfit; @CyclicLR; @SuperConv]. In the second stage, discriminative fine-tuning [@ULMfit] was used with a sequence of $10^{-6}$ to $10^{-3}$ learning rates. Discriminative fine-tuning splits layers of the deep neural network into groups and applies a different learning rate for each group since different layers should be fine-tuned to different extents; the earliest residual blocks have the smallest, and the fully connected layer has the largest learning rate [@ULMfit]. The learning rate for the last layer $\eta^{L}$ used in stage two was also determined using learning rate finder [@CyclicLR; @FastAI]. We empirically found that learning rate $\eta^1=\eta^L/1000$ for the first layer worked best. In stage two of the training we used slanted triangular learning rates training schedule [@ULMfit] for 12 cycles. 
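The two-stage protocol described above can be sketched as follows (our own illustrative PyTorch code, not the authors' implementation, which was based on the fastai library [@FastAI]); the layer-group boundaries, step counts and the `slanted_triangular` helper are our simplifications of the slanted triangular schedule and discriminative fine-tuning of [@ULMfit].

```python
# Illustrative sketch (ours): two-stage fine-tuning of an ImageNet-pretrained ResNet-34
# with discriminative (per-layer-group) learning rates and a slanted triangular schedule.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet34(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 29)   # custom head for 29 tumor classes

# Stage 1: freeze everything except the new fully connected head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
opt1 = torch.optim.Adam(model.fc.parameters(), lr=1e-2)

def slanted_triangular(step, total, cut_frac=0.1, ratio=32):
    """Linear warm-up for cut_frac of training, then linear decay (ULMFiT-style)."""
    cut = int(total * cut_frac)
    p = step / max(1, cut) if step < cut else 1 - (step - cut) / max(1, total - cut)
    return (1 + p * (ratio - 1)) / ratio

sched1 = torch.optim.lr_scheduler.LambdaLR(opt1, lambda s: slanted_triangular(s, total=1000))

# Stage 2: unfreeze and use discriminative learning rates across layer groups.
for p in model.parameters():
    p.requires_grad = True
groups = [
    {"params": list(model.conv1.parameters()) + list(model.bn1.parameters())
               + list(model.layer1.parameters()) + list(model.layer2.parameters()), "lr": 1e-6},
    {"params": list(model.layer3.parameters()) + list(model.layer4.parameters()), "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
]
opt2 = torch.optim.Adam(groups)
sched2 = torch.optim.lr_scheduler.LambdaLR(opt2, lambda s: slanted_triangular(s, total=1200))
# Per batch: opt.zero_grad(); loss.backward(); opt.step(); sched.step()
```

The point of the per-group learning rates is that the earliest residual blocks, which encode generic low-level features, are changed least, while the custom head is trained fastest.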
Results ======= Quality of gene embedding ------------------------- To explore our Gene2Vec embedding, we tested whether the embedding captured the functional relatedness of genes in terms of their pathway membership. We queried the nearest neighbours of a few key cancer genes to spot-check the embedding. Since Word2Vec embeddings are designed to capture linear relationships, we examined the nearest gene neighbours of the NRAS kinase (cosine similarities of the embeddings are listed in parentheses): HRAS (0.761), KRAS (0.732), PIK3R1 (0.723), MAPK1 (0.722), GRB2 (0.721), AKT1 (0.706), RAF1 (0.704), and MAP2K2 (0.702). The majority of these genes recapitulate the extracellular signal-regulated MAP kinase pathway (RAF/MEK/ERK) that transmits signals from activated cell surface receptors to many cytoplasmic and nuclear targets. The nearest genes for the tumor suppressor APC were: CTNNB1 (0.659), PLK1 (0.639), CDKN1A (0.628), PTEN (0.620), CCNB1 (0.620), TP53 (0.620), AKT1 (0.620), and TGFB1 (0.614). These genes recapitulate the TGF-Beta signalling pathway, which regulates cell proliferation. Tumor classification results ---------------------------- Table \[tbl1\] provides the results of tumor classification. Our deep learning method outperforms the best-performing conventional machine learning method. Here, we provide results for the XGBoost boosted-trees algorithm [@xgb], since this method was the most competitive. We also ran variants of random forest and support vector classification methods, but their performance was worse. [lll]{}\ (r)[1-2]{} Method & Accuracy\ XGBoost & $51.2\%$\ ResNet 34 stage 1 & $73.2\%$\ ResNet 34 stage 2 & $78.3\%$\ \[tbl1\] TCGA tumor cohorts can be generally grouped into the following organ systems: central nervous system (GBMLGG), core gastrointestinal (STES, COADREAD), developmental gastrointestinal (LIHC, PAAD, CHOL), endocrine (THCA and ACC), gynecologic (OV, UCEC, CESC, BRCA), head and neck (HNSC), hematologic and lymphatic malignancies (LAML, DLBC, THYM), melanocytic (SKCM and UVM), neural-crest-derived tissues (PCPG), soft tissue (SARC and UCS), thoracic (LUAD, LUSC), urologic (BLCA, PRAD, TGCT, KIRC, KICH, KIRP)[^1]. We observed that our mis-classifications are primarily within the same organ systems: ovarian serous cystadenocarcinoma and breast carcinoma; cervical and endocervical cancer and breast carcinoma. We also observed that ovarian serous cystadenocarcinoma was the class with the most errors. This is not surprising, since this cancer type has important drivers in the space of non-point mutations, namely copy number variants. Discussion ========== Deep neural networks provide state-of-the-art performance in multiple domains such as images, text, and speech. However, in the health and particularly genomic sub-domains there are fewer such examples that outperform other machine learning methods (boosted trees, random forests, and support vector machines). In this paper we describe a way to encode genomic data as images in such a way that transfer learning and fine-tuning can be used to outperform other machine learning methods by a large margin. Our three main contributions are: 1. Generation of a biologically relevant encoding for genomic mutations leveraging pathway information, Gene2Vec embedding, spectral clustering and image creation. 2. An effective training protocol, adapted to the problem at hand; a similar training protocol was first introduced in [@ULMfit]. The protocol leverages state-of-the-art transfer learning and fine-tuning techniques. 3. 
Development of a state-of-the-art classifier for cancer primary site of origin. As part of the future work, we look forward to 1) improving our understanding of the genes and pathways that are recurrently mutated in cancer by developing better methods to discover significantly mutated genes, 2) integrating DNA copy number data to increase the power to detect new mutational patterns and cancer sub-types, 3) increasing accuracy and addressing more fine-tuned cancer sub-class classification. [9]{} Yuan, Y., Shi, Y., Li, C., Kim, J., Cai, W., Han, Z., & Feng, D.D. (2016) DeepGene: an advanced cancer type classifier based on deep learning and somatic point mutations. BMC Bioinformatics, [**17**]{} (Suppl 17):476, <https://doi.org/10.1186/s12859-016-1334-9>. Li, Y., Kang, K., Krahn, J.M., Croutwater, N., Lee, K., Umbach, D.M., & Li, L. (2017) A comprehensive genomic pan-cancer classification using The Cancer Genome Atlas gene expression data. BMC Genomics, [**18**]{} (1):508, <https://doi.org/10.1186/s12864-017-3906-0>. The Cancer Genome Atlas homepage, <http://cancergenome.nih.gov/abouttcga>. Grossman, Robert L., Heath, Allison P., Ferretti, Vincent, Varmus, Harold E., Lowy, Douglas R., Kibbe, Warren A., Staudt, Louis M. (2016) Toward a Shared Vision for Cancer Genomic Data. New England Journal of Medicine, [**375**]{}:12, 1109-1112, <https://gdc.cancer.gov/about-gdc>. Lawrence , M. S., Stojanov, P., Polak, P., Kryukov, G. V., et al. (2013) Mutational heterogenieity in cancer and the search for new cancer genes. Nature, [**499**]{} (7457):214-218, <https://doi.org/10.1038/nature12213>. Subramanian, Tamayo, et al. (2005) Molecular Signatures Database (MSigDB), PNAS, [**102**]{}:15545-15550, <http://software.broadinstitute.org/gsea/msigdb/index.jsp> Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., & Dean, J. (2013) Distributed representations of words and phrases and their compositionality. Neural information processing systems. Glorot, X., & Bengio, Y. (2010) Understanding the difficulty of training deep feedforward neural networks. In AISTATS, [**9**]{}: 249–256. Guntman, M., & Hyvarine,A. (2010) Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. AISTATS, [**9**]{}:297–304. Kingma, D.P., & Ba, J. (2014) Adam: A Method for Stochastic Optimization. ICLR, <https://arxiv.org/abs/1412.6980>. Yu, S. X., & Shi, J. (2003) Multiclass Spectral Clustering. <http://www1.icsi.berkeley.edu/~stellayu/publication/doc/2003kwayICCV.pdf>. He, K., Zhang, X., & Ren, S. (2016) Deep Residual Learning for Image Recognition. CVPR. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009) ImageNet: A Large-Scale Hierarchical Image Database. CVPR. Howard, J., & Ruder, S. (2018) Universal Language Model Fine-tuning for Text Classification. ACL, <https://arxiv.org/pdf/1801.06146.pdf>. Smith, L.N. (2017) Cyclical Learning Rates for Training Neural Networks. WACV, <https://arxiv.org/abs/1506.01186>. FastAi, <http://www.fast.ai/>. Smith, L.N., & Topin, N. (2017) Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates. <https://arxiv.org/pdf/1708.07120.pdf>. van der Maaten, L., & Hinton, G. (2008) Visualizing data using t-SNE. The Journal of Machine Learning Research, [**9**]{}:2579-2605. Chen, T., Guestrin, C. (2016) XGBoost: A Scalable Tree Boosting System. SIGKDD, 785-794. [^1]: See TCGA [@TCGA] for list of abbreviations used.
--- abstract: 'Exact solutions with torsion in Einstein-Gauss-Bonnet gravity are derived. These solutions have a cross product structure of two constant curvature manifolds. The equations of motion impose a relation on the coupling constants of the theory in order for solutions with nontrivial torsion to exist. This relation is not the Chern-Simons combination. One of the solutions has an $AdS_2\times S^3$ structure and is thus the purely gravitational analogue of the Bertotti-Robinson space-time, where the torsion can be seen as the dual of the covariantly constant electromagnetic field.' author: - | F. Canfora$^{1,2}$, A. Giacomini$^1$, S. Willison$^1$\ $^1$ Centro de Estudios Cientificos (CECS), Casilla 1469 Valdivia, Chile.\ $^2$ Istituto Nazionale di Fisica Nucleare, Sezione di Napoli, GC Salerno.\ e-mail: [email protected], [email protected], [email protected] title: 'Some exact solutions with torsion in 5-D Einstein-Gauss-Bonnet gravity' --- Keywords: Einstein-Gauss-Bonnet gravity, torsion. PACS: 04.50.+h, 04.20.Jb, 04.40.Nr Preprint: CECS-PHY-07/11 Introduction ============ It is a well-known fact that in four dimensions the Einstein-Hilbert action (plus a cosmological term) is the only functional built out of curvature invariants that leads to second order field equations. Higher order terms in the curvature invariants generically lead to higher order field equations, which at the quantum level would produce ghosts spoiling the unitarity of the theory [@Zwiebach:1985uq]. The Einstein-Hilbert action can, however, be generalized in a straightforward way to higher dimensions. Indeed, there exists a large class of theories containing higher powers of the curvature which still lead to second order equations for the metric, known as Lovelock theories [@Lovelock]. Now in the Einstein-Hilbert action, the vielbein $e^{a}$ and spin connection $\omega ^{ab}$ can be treated as independent fields. This is known as the first order formalism, since the field equations involve only first derivatives; such a formalism is mandatory when dealing with Fermionic fields. One of the characteristic features of the Einstein-Hilbert action (in $n$ dimensions) in the first order formalism is that its variation with respect to the spin connection gives equations of motion of the form $$\epsilon _{a_{1}\ldots a_{n}}T^{a_{1}}\wedge e^{a_{2}}\wedge \cdots \wedge e^{a_{n-2}}=0$$which simply imply the vanishing of the torsion two-form, $T^{a}=0$. However, in dimensions higher than four the Einstein-Hilbert action is no longer the unique possible first order action. In fact, the Lovelock theories also admit a first order formulation [@Zumino:1985dp] [^1]. In five dimensions, for example, one can add to the standard Einstein-Hilbert action with a cosmological constant an extra term, known as the Gauss-Bonnet term, which in familiar notation reads $$\int d^{5}x\sqrt{g}\left( R^{2}-4R_{\mu \nu }R^{\mu \nu }+R_{\mu \nu \rho \sigma }R^{\mu \nu \rho \sigma }\right) .$$Above, $R_{\mu \nu \rho \sigma }$ is the Riemann tensor, $R_{\mu \nu }$ the Ricci tensor and $R$ the Ricci scalar. We shall employ the differential form notation.
Introducing the curvature two-form, $R^{ab}$, the above term is equal to $$\int R^{ab}\wedge R^{cd}\wedge e^{e}\,\epsilon _{abcde}.$$Besides being a natural generalization of General Relativity to the five dimensional case, Einstein-Gauss-Bonnet theories appear to be quite compatible with the available astrophysical and cosmological experimental data (see, for an incomplete list of references, [@CEN06; @CN06; @GOT06; @GTS05; @KM06; @LN07; @NO07; @NOS05; @NOS06a; @STT05; @San06; @SB06]). The addition of Lovelock terms to the action affects the equations of motion in such a way that they no longer imply that the torsion vanishes. Instead, the torsion becomes a new propagating degree of freedom. The presence of torsion could have interesting phenomenological consequences (see, for instance, Ref. [@AD01]). Torsion has a deep geometrical meaning which could shed some light on non-perturbative features of gravitational theories that cannot be taken into account in the standard formalism. Generally the Lovelock equations of motion, combined with the Bianchi identities, give very strong constraints on the torsion [@Troncoso:1999pk]. In most cases one obtains an over-determined system of equations, making it extremely difficult to find exact solutions with non-vanishing torsion. There is a special case where solutions with torsion are known: in odd dimensions, for a certain tuning of the coupling constants, the Lovelock theory becomes equivalent to a Chern-Simons theory [@Chamseddine:1989nu]. Such theories have an enlarged local symmetry group which, roughly speaking, allows one to fit both the standard curvature and the torsion inside a bigger curvature for the AdS group, providing the mathematical structures needed to formulate a supersymmetric theory as well. Because of such local symmetry, the field equations are suitable for investigating non-trivial configurations such as black holes and wormholes (see, for instance, [@GOT07]). This combination of coefficients is unique in that the field equations do not place strong constraints on the torsion. A black hole with torsion was found in Ref. [@Ar06]. In that case, due to the enhanced gauge symmetry of the Chern-Simons theory, the solution was related to a torsion-free solution by a gauge transformation. Other solutions with torsion in Chern-Simons gravity are given in Refs. [@Banados]. In this paper we will exclude the Chern-Simons combination. The goal of this paper is to present some exact solutions with non-vanishing torsion in five dimensional Lovelock gravity in the non-Chern-Simons case. It seems that, until now, no explicit solutions have been presented in the literature (in Ref. [@Wheeler] there is a nice general discussion of spherically symmetric torsion, where it was shown that static black holes in higher dimensional Lovelock gravity generically have zero torsion). In order to overcome the difficulties explained above, we use an ansatz in which the space-time is a cross product of the form $N_{2}\times M_{3}$, where both submanifolds are of constant curvature and $M_{3}$ is spacelike. The usefulness of this ansatz is due to the fact that one can search for a torsion with components only in $M_{3}$. Because this submanifold is three dimensional, a totally anti-symmetric torsion tensor $T_{ijk}\propto \epsilon _{ijk}$ will respect the symmetry. Note also that such a torsion is proportional to the Hodge dual of the curvature two-form on $M_{3}$. In this sense, the ansatz is inspired by an analogy with BPS states in gauge theory, as will be explained later.
The equations of motion impose a relation between coupling constants which is *not* the Chern-Simons combination. One of the possible solutions is $AdS_{2}\times S_{3}$ which is analogous to the Bertotti-Robinson solution [@Bertotti:1959pf] where the role of the electromagnetic field is taken by a covariantly constant torsion. The analogy to a BPS state is clear here as the Bertotti-Robinson solution is indeed BPS. Because of this analogy one may wonder if also the solution presented here is a BPS state. However there is no obvious way to write a Killing spinor equation because, to the authors knowledge, the supersymmetric extension of the theory is not known (although some features of BPS states are indeed present as will be explained later). Anyway this BPS analogy proved to be extremely useful to find nontrivial solutions with torsion, which up to now has been an extremely difficult task. It will be shown that this solution has no zero torsion limit. On the other hand it is easy to see that the torsion free $AdS_2 \times S^3$ is a solution for Einstein-Gauss-Bonnet gravity. The new solution presented here seems to be therefore a topological excitation. It will be shown that also other solutions with the cross product structure of constant curvature manifolds exist. The structure of the paper will be as follows: In section 2 the Einstein-Gauss-Bonnet theory with torsion is reviewed. In section 3, the new solutions are described. In section 4, the analogy with BPS states is developed and some interesting features of the solutions are investigated. Section 5 is a summary of the main conclusions. Einstein-Gauss-Bonnet theory with torsion {#Results_Section} ========================================= Gravity with Torsion -------------------- Since the Kaluza-Klein idea and with the advent of string theories the possibilities to have extra dimensions comes into play. As mentioned previously, the Lovelock Lagrangian is the natural extension of GR to higher dimensions. Let us briefly review Lovelock gravity in first order formalism, the relation with the second order formalism will be described in the next subsection in the five dimensional case. The action has the form (for a detailed review, see Ref. [@TZ-CQG]): $$\begin{aligned} I_{D}& =\kappa \int \sum_{p=0}^{\left[ D/2\right] }\alpha _{p}L^{(D,p)}, \\ L^{(D,p)}& \equiv \varepsilon _{a_{1}\dots a_{D}}R^{a_{1}a_{2}}\wedge \cdots \wedge R^{a_{2p-1}a_{2p}}\wedge e^{a_{2p+1}}\wedge \cdots \wedge e^{a_{D}}.\end{aligned}$$Here $e^{a}=e_{\mu }^{a}dx^{\mu }$ is the vielbein, $\omega _{\ b}^{a}=\omega _{b\mu }^{a}dx^{\mu }$ the spin connection, $\eta _{ab}$ the Minkowski metric in the vielbein indices, $g_{\mu \nu }=\eta _{ab}e_{\mu }^{a}e_{\nu }^{b}$ is the spacetime metric. The curvature and torsion are: $$\begin{aligned} R_{\ b}^{a}& \equiv \left( d\omega +\omega \wedge \omega \right) _{\ b}^{a}, \\ T^{a}& \equiv De^{a}=de^{a}+\omega _{\ b}^{a}\wedge e^{b}\ .\end{aligned}$$It is manifest in this way that the vielbein indices $a,b,..$ behave as internal gauge indices. The Bianchi identities read$$\begin{aligned} DR_{\ b}^{a}& =dR_{\ b}^{a}+\omega _{\ c}^{a}\wedge R_{\ b}^{c}+\omega _{\ b}^{c}\wedge R_{\ c}^{a}=0, \\ DT^{a}& =R_{\ b}^{a}\wedge e^{b}.\end{aligned}$$ In $D=4$ the above tells that one is free to add a further term (the so called Gauss-Bonnet term) to $S_{EH}$ which, being a topological invariant, does not change the Euler-Lagrange equations. 
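For completeness, the identity $DT^{a}=R_{\ b}^{a}\wedge e^{b}$ quoted above follows in one line from the definitions of the curvature and the torsion, $$DT^{a}=d\left( de^{a}+\omega _{\ b}^{a}\wedge e^{b}\right) +\omega _{\ c}^{a}\wedge \left( de^{c}+\omega _{\ b}^{c}\wedge e^{b}\right) =\left( d\omega _{\ b}^{a}+\omega _{\ c}^{a}\wedge \omega _{\ b}^{c}\right) \wedge e^{b}=R_{\ b}^{a}\wedge e^{b}\, ,$$ since the two terms involving $de^{b}$ cancel and $d^{2}e^{a}=0$.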
In higher dimensions the situation changes: extremizing the Lovelock Lagrangian, one obtains the following equations: $$\begin{aligned} \sum_{p=0}^{\left[ D/2\right] }(D-2p)\alpha _{p}\, \Xi _{a}^{(p)} &= 0\, , \\ \sum_{p=0}^{\left[ D/2\right] }p(D-2p)\alpha _{p}\, \Xi _{ab}^{(p)} & = 0\, ,\end{aligned}$$ where $$\begin{aligned} \Xi _{a}^{(p)}& \equiv\varepsilon _{ab_{2}..b_{D}}R^{b_{2}b_{3}}..R^{b_{2p}b_{2p+1}}e^{b_{2p+2}} \dots e^{b_{D}}\, , \label{loveq1} \\ \Xi _{ab}^{(p)}& \equiv\varepsilon _{aba_{3}..a_{D}}R^{a_{3}a_{4}}..R^{a_{2p-1}a_{2p}} T^{a_{2p+1}}e^{a_{2p+2}}\dots e^{a_{D}}\, . \label{loveq2}\end{aligned}$$ In four dimensions $S_{EH}$ is essentially the only first order action[^2]. In this case the torsion plays no role, at least classically. In higher dimensions the torsion emerges as a natural geometrical object. Five dimensional case --------------------- In this paper, we consider the five dimensional Einstein-Gauss-Bonnet action, which in the familiar formalism reads:$$I=\kappa \int d^{5}x\sqrt{g}\left( R-2\Lambda +\alpha \left( R^{2}-4R_{\mu \nu }R^{\mu \nu }+R_{\alpha \beta \gamma \delta }R^{\alpha \beta \gamma \delta }\right) \right) \ , \label{Itensor}$$where $\kappa $ is related to the Newton constant, $\Lambda $ to the cosmological term, and $\alpha $ is the Gauss-Bonnet coupling. For later convenience, it is useful to express the action (\[Itensor\]) in terms of differential forms as [^3] $$I=\int \left( \frac{c_{0}}{5}e^{a}e^{b}e^{c}e^{d}e^{e}+\frac{c_{1}}{3}R^{ab}e^{c}e^{d}e^{e}+c_{2}R^{ab}R^{cd}e^{e}\right) \epsilon _{abcde} \label{action}$$where, as explained in the previous section, $e^{a}=e_{\mu }^{a}dx^{\mu }$ is the vielbein, and $R^{ab}=d\omega ^{ab}+\omega _{\text{ \ }f}^{a}\omega ^{fb}$ is the curvature $2$-form for the spin connection $\omega ^{ab}=\omega _{\ \mu }^{ab}dx^{\mu }$. The equations of motion obtained by varying the action with respect to the spin connection $\omega ^{ab}$ read $$\mathcal{E}_{ab}\equiv T^{c}\left( c_{1}e^{d}e^{e}+2c_{2}R^{de}\right) \epsilon _{abcde}=0 \label{equationtorsion}$$The equations of motion obtained by varying the action with respect to the vielbein $e^{a}$ read $$\mathcal{E}_{e}\equiv \left( c_{0}e^{a}e^{b}e^{c}e^{d}+c_{1}R^{ab}e^{c}e^{d}+c_{2}R^{ab}R^{cd}\right) \epsilon _{abcde}=0. \label{equationcurvature}$$When the coefficients of the theory satisfy a special fine-tuning, the above action turns out to be that of a Chern-Simons theory [@Chamseddine:1989nu]. In what follows the Chern-Simons case will be explicitly excluded by imposing the inequality $$c_{1}^{2}\neq 4c_{0}c_{2}\ ,$$or, equivalently, $$\frac{4\alpha \Lambda }{3}\neq -1.$$ Exact solutions with torsion ============================ The main idea is that an ansatz, inspired by BPS states in field theory, could allow one to find non-trivial solutions with non-vanishing torsion. As BPS states in field theory have non-trivial topological charges, in the present case such *vacuum solutions* of Einstein-Gauss-Bonnet gravity could manifest some properties of solutions in the presence of matter fields carrying charges. For a suitable choice of the coefficients, which *is not* the Chern-Simons one, a set of solutions will be constructed. One of these solutions is the vacuum analogue of the Bertotti-Robinson metric, in which the torsion plays the role of the electromagnetic field. This will be derived in the next subsection before proceeding to the more general solutions.
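As an aside, the two forms of the Chern-Simons exclusion stated above are indeed the same condition. Using the identifications $\alpha =c_{2}/2c_{1}$ and $\Lambda =-6c_{0}/c_{1}$ quoted in the footnote to Eq. (\[action\]), the following minimal sympy sketch (illustrative only) verifies the equivalence:

```python
# Check that the Chern-Simons tuning c1^2 = 4*c0*c2 is equivalent to
# 4*alpha*Lambda/3 = -1, using alpha = c2/(2*c1), Lambda = -6*c0/c1.
import sympy as sp

c0, c1, c2 = sp.symbols('c0 c1 c2', nonzero=True)
alpha = c2 / (2 * c1)
Lam = -6 * c0 / c1

expr = sp.simplify(4 * alpha * Lam / 3 + 1)
# expr equals (c1**2 - 4*c0*c2)/c1**2, so it vanishes exactly when c1**2 = 4*c0*c2.
assert sp.simplify(expr - (c1**2 - 4*c0*c2) / c1**2) == 0
```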
$AdS_{2}\times S_{3}$ solution {#AdS_Section} ------------------------------ We search for an $AdS_{2}\times S_{3}$ solution with torsion. The idea is to find the analogue of a Bertotti-Robinson metric in which the torsion plays the role of the electromagnetic field. Therefore the following ansatz for the metric is natural $$\begin{gathered} ds^2 =\frac{l^2}{x^2} \left( - dt^2 + dx^2\right) + \frac{r_0^2}{4}\left( d\phi^2 + d\theta^2 + d\psi^2 + 2 \cos\theta d\phi d\psi\right)\, .\end{gathered}$$ As vielbein we choose: $$e^{0}=\frac{l}{x}dt\;\;\;;\;\;\;e^{1}=\frac{l}{x}dx\;\;\;;\;\;\;e^{i}= r_0\tilde{e}^{i} \label{1ansatz1}$$where $r_0$, the radius of the 3-sphere, is a constant and $\tilde{e}^{i}$ is the intrinsic vielbein on the unit sphere. For definiteness, Poincaré coordinates have been used for the two dimensional Anti de Sitter space and Euler angles for the sphere. We make the following ansatz for the torsion, consistent with the spherical symmetry: $$T^{1}=0\;\;\;;\;\;\;T^{0}=0\;\;\;;\;\;\;T_{i}=\frac{H}{r_0}e^{j}e^{k}\epsilon _{ijk}\, , \label{1torsion1}$$ where $H$ is a constant. Note that on the unit three-sphere there exists a choice of intrinsic vielbein such that $\tilde{\omega}^{ij}=-\epsilon ^{ijk}\tilde{e}_{k}$, where $\tilde{\omega}^{ij}$ is the intrinsic Levi-Civita spin connection. With this choice, the 5-dimensional spin connection reads $$\omega^{01}=-\frac{1}{x}dt\, , \qquad \omega^{ij} =(H+1)\tilde{\omega}^{ij} = - (H+1)\epsilon ^{ijk}\tilde{e}_{k} \, . \label{2ansatz2}$$ Thanks to the geometric structure of the sphere, and in particular to the invariance of the tensor $\epsilon ^{ijk}$ on it, the torsion can be written in a way that is homogeneous and isotropic. The naturalness of this ansatz will be discussed further in section 4. The curvature turns out to be $$R^{01}=-\frac{1}{l^{2}}e^{0}e^{1};\;\;\;R^{ij}=\frac{1-H^{2}}{r_0^{2}}\,e^{i}e^{j} \label{1curvature1}$$One recovers the torsionless case by setting $H=0$. Inserting Eqs. (\[1ansatz1\]), (\[1torsion1\]) and (\[1curvature1\]) in the equations of motion, one obtains from the $(ij)$ component of (\[equationtorsion\]): $$c_{1}-\frac{2c_{2}}{l^{2}}=0\qquad \mathrm{or}\qquad H=0\,. \label{AdSxS_Solution_1}$$The other components of equation (\[equationtorsion\]) are automatically satisfied. From the $(0)$ and $(1)$ components of eq. (\[equationcurvature\]) we get $$4c_{0}+2c_{1}\frac{(1-H^{2})}{r_0^{2}}=0 \label{AdSxS_Solution_2}$$The $(i)$ component of eq. (\[equationcurvature\]) gives $$12c_{0}+2c_{1}\left( -\frac{1}{l^{2}}+\frac{1-H^{2}}{r_0^{2}}\right) -\frac{4c_{2}}{l^{2}}\frac{(1-H^{2})}{r_0^{2}}=0 \label{AdSxS_Solution_3}$$ It is worth stressing here that the form of the torsion in Eq. (\[1torsion1\]) greatly simplifies the equations of motion (which in the Einstein-Gauss-Bonnet theory are quite complicated). In particular, some key identities have been used in deriving the above equations. The first is that expressions like $$\epsilon _{ijk}T^{i}e^{j}e^{k}=0 \label{1I1}$$vanish identically: the torsion always contains two angular vielbeins, so such exterior products involve four angular vielbeins while only three independent ones exist. The second identity is that expressions like $$\epsilon _{ijk}T^{i}e^{j}e^{0}\approx \epsilon _{ijk}\left( \epsilon ^{imn}e_{m}e_{n}\right) e^{j}e^{0}=\left( \delta _{j}^{m}\delta _{k}^{n}-\delta _{k}^{m}\delta _{j}^{n}\right) e_{m}e_{n}e^{j}e^{0}=0 \label{2I2}$$also vanish, since there is always a wedge product of an angular vielbein with itself.
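The elimination carried out in the next paragraph can also be cross-checked symbolically. A minimal sympy sketch (illustrative only), written in terms of the combinations $X=1/l^{2}$ and $Y=(1-H^{2})/r_0^{2}$ and restricted to the torsionful branch of (\[AdSxS_Solution_1\]), is:

```python
# Cross-check of Eqs. (AdSxS_Solution_1)-(AdSxS_Solution_3) on the branch H != 0.
# X stands for 1/l^2 and Y for (1 - H^2)/r0^2.
import sympy as sp

c0, c1, c2 = sp.symbols('c0 c1 c2', positive=True)
X, Y = sp.symbols('X Y')

eq1 = sp.Eq(c1 - 2*c2*X, 0)                       # (ij) component, torsionful branch
eq2 = sp.Eq(4*c0 + 2*c1*Y, 0)                     # (0) and (1) components
eq3 = sp.Eq(12*c0 + 2*c1*(-X + Y) - 4*c2*X*Y, 0)  # (i) component

sol = sp.solve([eq1, eq2], [X, Y])                # X = c1/(2*c2), Y = -2*c0/c1
constraint = sp.simplify(eq3.lhs.subs(sol))
# The remaining condition is 12*c0 - c1**2/c2 = 0, i.e. c1**2 = 12*c0*c2.
assert sp.simplify(constraint - (12*c0 - c1**2/c2)) == 0
# With c0 = c1**2/(12*c2) one gets Y = -X/3, i.e. H^2 = 1 + r0^2/(3*l^2).
assert sp.simplify(sol[Y].subs(c0, c1**2/(12*c2)) + sol[X]/3) == 0
```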
Substituting equations (\[AdSxS\_Solution\_1\]) and (\[AdSxS\_Solution\_2\]) in (\[AdSxS\_Solution\_3\]) one finds that there exist solutions with torsion only if the coupling constant satisfy the following relation $$c_{1}^{2}=12c_{0}c_{2} \label{nocs}$$Note that this is *not* the Chern-Simons combination of the coupling constants. The $AdS_2$ length scale $l$ is completely determined by the coupling coefficients: $$\label{Ads_length} \frac{1}{l^2} = \frac{c_1}{2c_2}\, ,$$ The sphere radius $r_0$ and the torsion parameter $H$ are related by $$1- H^2 = - \frac{2c_{0}}{c_{1}}r_0^{2}\quad \Rightarrow \quad H^2 = 1 + \frac{r_0^2}{3l^2}\, .$$ In summary, the space-time $AdS_2\times S_3$ with vielbein given by ([1ansatz1]{}) and with torsion and curvature $$\begin{gathered} T_{i}=\pm \sqrt{\frac{1}{r_0^2} + \frac{1}{3l^2}}\ \epsilon _{ijk}\, e^{j}e^{k}\, , \label{Answer_Torsion} \\ R^{01}=-\frac{1}{l^2} e^{0}e^{1}\,, \qquad R^{ij}=-\frac{1}{3l^{2}}% \,e^{i}e^{j}\, ,\end{gathered}$$ with $l$ given by (\[Ads\_length\]), is a solution of the Einstein-Gauss-Bonnet theory, provided that the relation (\[nocs\]) among the coupling constants holds, with $c_2/c_1$ and $c_0/c_1$ positive. The torsion is bounded from below for finite sized sphere and AdS length scale. This means that there is no continuous zero torsion limit. Moreover, the torsion is fully antisymmetric. This allows an intriguing analogy with gravity in the presence of a constant electromagnetic field. It is natural to define a three form $T \equiv T_{ijk}e^i\wedge e^j \wedge e^k = 3! \frac{H}{r_0} \, e^2\wedge e^3 \wedge e^4$. Also we define the Hodge dual, the two form $*T = - 3! \frac{H}{r_0} e^0\wedge e^1$. Due to the Bianchi identities, these are covariantly constant. $$DT = 0,\qquad D *T =0.$$ The analogy with electromagnetic field is made by defining $F \equiv *T$. Thus $F$ is seen to obey the source-free Maxwell equations, making manifest the close resemblance with the electromagnetic field of the Bertotti-Robinson solution. Product of two constant curvature manifolds with torsion -------------------------------------------------------- In the previous section a product manifold was considered. It was seen that the torsion could be introduced on the three-sphere in a way that was consistent with spherical symmetry. Furthermore such a torsion satisfied the equations of motion, provided that there was a rather curious relation ([nocs]{}) between the coupling constants in the action. A key feature of that solution was that the torsion tensor $T_{ijk} \propto \epsilon_{ijk}$ is manifestly of a form which is homogeneous and isotropic in the three-dimensional subspace (in that case a sphere). It is natural to generalise this solution to more general product manifolds involving a three dimensional manifold with constant curvature, which could be positive, negative or zero. In this section we shall study solutions whose metric is the direct product of two manifolds of constant Riemannian curvature, with torsion living in a three-dimensional manifold. Let $N_2$ be a two-dimensional manifold with Minkowskian signature and constant curvature. The metric is given by $ds^2_N = - e^0 \otimes e^0 + e^1 \otimes e^1$, where $e^0$ and $e^1$ are the vielbeins with Levi-Civita connection $\hat{\omega}^{01}$ and curvature satisfying $$\begin{gathered} \label{curvature_01} \hat{R}^{01} = \Lambda_N\, e^0 \wedge e^1\, .\end{gathered}$$ Let $M_3$ be a three-dimensional manifold of constant curvature with Euclidean signature. 
The metric is $ds^2_M = \delta_{ij}\, e^i \otimes e^j$. The three-manifold has a Levi-Civita connection[^4] $\hat{\omega}% ^{ij}$ and corresponding curvature $\hat{R}^{ij}$ satisfying $$\begin{gathered} \label{curvature_ij} \hat{R}^{ij} = \Lambda_M\, e^i \wedge e^j\, .\end{gathered}$$ The five-dimensional spacetime will be a product space $N_2 \times M_3$, the metric being $$\begin{gathered} ds^2 = ds^2_N + ds^2_M\,.\end{gathered}$$ The torsion is introduced onto the $M_3$ in such a way as to respect the symmetry. This is guaranteed by the invariance property of the alternating tensor $\epsilon_{ijk}$. Let us make the ansatz: $$\begin{gathered} \label{symmetric_Torsion} T_i = \tau \, \epsilon_{ijk} e^j \wedge e^k\, .\end{gathered}$$ We shall further assume that $\tau$ is constant, which implies that $T^i$ is covariantly constant with respect to the Levi-Civita connection. In the appendix a more general ansatz is analysed where the symmetry of $N_2$ is relaxed and it is shown that the only solution is the one discussed here. The spin connection on $M_3$ now takes the form $\hat{\omega}^{ij} + K^{ij}$ where $K^{ij}$ is the contorsion 1-form. According to ([symmetric\_Torsion]{}) the contorsion is: $$\begin{gathered} \label{Symmetric_contorsion} K_{ij} = -\tau\, \epsilon_{ijk} e^k\, .\end{gathered}$$ Let $R^{ab}$ denote the five-dimensional curvature tensor. It is a sum of the torsion-free curvature, with components (\[curvature\_01\]) and ([curvature\_ij]{}), and a part which comes from the torsion. This can conveniently be obtained by expanding $R^{ab}=d(\hat{\omega}^{ab}+K^{ab})+(% \hat{\omega}_{\ c}^{a}+K_{\,c}^{a})\wedge (\hat{\omega}^{cb}+K^{cb})$ to give the well-known formula: $$\label{well_known} R^{ab}=\hat{R}^{ab}+\hat{D}K^{ab}+K_{\ c}^{a}\wedge K^{cb}\,.$$   From (\[Symmetric\_contorsion\]) it can be seen that the contorsion is covariantly constant with respect to the Levi-Civita connection, $\hat{D}% K^{ij}=\kappa \left( \epsilon _{ljk}\hat{\omega}_{\ i}^{l}+\epsilon _{ilk}% \hat{\omega}_{\ j}^{l}+\epsilon _{ijl}\hat{\omega}_{\ k}^{l}\right) \wedge e^{k}=0$. The non-vanishing components of the curvature are $$\label{Symmetric_Curvature} R^{01}=\Lambda _{N}e^{0}\wedge e^{1}\,,\qquad R^{ij}=\left( \Lambda _{M}-\tau ^{2}\right) e^{i}\wedge e^{j}\,.$$The effect of the homogeneous and isotropic torsion is to rescale the three-dimensional part of the curvature. Now it remains to check that the field equations are satisfied by torsion (\[symmetric\_Torsion\]) and curvature (\[Symmetric\_Curvature\]). First let us check equation (\[equationtorsion\]) coming from the variation w.r.t. the spin connection. Since $\epsilon_{ijk} T^j \wedge e^k = 0$ the only non-trivial component is: $$\begin{gathered} \label{dev1} 0 = \mathcal{E}_{ij} = 2\epsilon_{ijk} T^k \wedge \left(c_1 e^0 \wedge e^1 + 2c_2 R^{01}\right)\, .\end{gathered}$$ Now we check equation (\[equationcurvature\]) coming from the variation w.r.t. the vielbein. $$\begin{aligned} 0 = \mathcal{E}_0 & = \epsilon_{ijk} \left( 4 c_0 e^1 \wedge e^i \wedge e^j \wedge e^k + 2 c_1 e^1 \wedge e^i \wedge R^{jk}\right)\, , \label{dev2} \\ 0 = \mathcal{E}_1 & = \epsilon_{ijk} \left( 4 c_0 e^0 \wedge e^i \wedge e^j \wedge e^k + 2 c_1 e^0 \wedge e^i \wedge R^{jk}\right)\, , \label{dev3} \\ 0 = \mathcal{E}_i & = \epsilon_{ijk} \Big( 12 c_0 e^0 \wedge e^1 \wedge e^j \wedge e^k \notag \\ &\qquad\quad + 2 c_1 ( R^{01} \wedge e^j \wedge e^k + e^0\wedge e^1 \wedge R^{jk}) + 4 c_2 R^{01}\wedge R^{ij} \Big)\, . 
\label{dev4}\end{aligned}$$ Equation (\[dev1\]) implies: $$\begin{gathered} \Lambda_N = -\frac{c_1}{2c_2}\, .\end{gathered}$$ Equations (\[dev2\]) and (\[dev3\]) imply $$\begin{gathered} \Lambda_M - \tau^2 + \frac{2c_0}{c_1} = 0\, .\end{gathered}$$ Substituting these two equations in (\[dev4\]) gives the relation between the coupling constants $c_1^2 = 12 c_0c_2$. Summary of the solutions ------------------------ We have found solutions for the special class of Lovelock theories satisfying[^5] $c_1^2 = 12 c_0c_2$. Since the possibility of a vanishing Einstein-Hilbert term is excluded, we may normalise $c_1 = 1$ without loss of generality. The action is thus: $$\begin{gathered} \label{special_action} I=\int \left( \frac{1}{60c_2}e^{a}e^{b}e^{c}e^{d}e^{e}+\frac{1}{3}% R^{ab}e^{c}e^{d}e^{e}+c_{2}R^{ab}R^{cd}e^{e}\right) \epsilon _{abcde}\, .\end{gathered}$$ The metric is the product of $N_2 \times M_3$ with Riemannian curvature: $$\begin{gathered} \hat{R}^{01} = - \frac{1}{2c_2} e^0 \wedge e^1\, , \quad \hat{R}^{ij} = \Lambda_M e^i \wedge e^j\, .\end{gathered}$$ The curvature and torsion are: $$\begin{gathered} R^{01} = - \frac{1}{2c_2} e^0 \wedge e^1\, , \quad R^{ij} = - \frac{1}{6c_2} e^i \wedge e^j\, , \\ T_i = \pm \sqrt{ \Lambda_M + \frac{1}{6c_2} } \,\, \epsilon_{ijk}\, e^j \wedge e^k\, .\end{gathered}$$ We see that the full non-Riemannian curvature is completely determined by the coupling constant $c_2$. There is just one constant of integration $% \Lambda_M$, which characterizes both the Riemannian curvature of $M_3$ and the torsion. ----------------------------- ------------- ------------------------------ ----------------------- $\mathbb{R}_2 \times M_3$ No solution AdS$_2 \times S_3$ $c_2 > 0$ $\Lambda_M$ arbitrary No zero torsion limit AdS$_2 \times H_3$ $c_2 > 0$ $- 1/6c_2 \leq \Lambda_M <0$ Zero torsion limit AdS$_2 \times \mathbb{R}_3$ $c_2 > 0$ $\Lambda_M = 0$ No zero torsion limit dS$_2 \times S_3$ $c_2 < 0$ $1/6|c_2| \leq \Lambda_M $ Zero torsion limit dS$_2 \times H_3$ ($c_2 < 0$) No solution dS$_2 \times \mathbb{R}_3$ ($c_2 < 0$) No solution ----------------------------- ------------- ------------------------------ ----------------------- Note that the generalisation to the case that $N_2$ is Euclidean and with $% M_3$ Lorentzian is straightforward. The field equations for Einstein-Gauss-Bonnet can also be written as follows: $$\begin{aligned} & T^{c}\left( R^{de}+\frac{(\Lambda _{+}+\Lambda _{-})}{2}e^{d}e^{e}\right) \epsilon _{abcde}=0 \label{eto1} \\ & \left( R^{ab}+\Lambda _{+}e^{a}e^{b}\right) \left( R^{cd}+\Lambda _{-}e^{c}e^{d}\right) \epsilon _{abcde}=0 \label{eto2}\end{aligned}$$so that, assuming that the torsion vanishes there are two possible (A)dS vacua with different cosmological constant. Here we have found a third kind of solution with a high degree of symmetry: the product of maximally symmetric spaces with nonzero torsion. In the solutions we have found above, it is the average of the two cosmological constants which is important, $$\pm \frac{1}{l^{2}}=\frac{(\Lambda _{+}+\Lambda _{-})}{2},$$ because it determines the cosmological constant of the $N_2$. Field theoretical features in first-order gravitational theories ================================================================ In Ref. [@Ca07] some analogies were investigated between BPS states in field theory on the one hand and Gravitational theories with torsion on the other. Let us briefly revisit this subject in the light of the solutions found in section \[Results\_Section\]. 
For detailed reviews on BPS states in field theory see Refs. [@OW99; @To05]. We have focused on the Lovelock theories because they have a first order formalism. It is not within the scope of the present paper to analyze all the possible higher curvature corrections (which are expected on various theoretical grounds ranging from string theory to Kaluza-Klein reductions) to the standard Einstein-Hilbert action, because such corrections are higher order in derivatives. A detailed review on how generic higher curvature corrections may arise and on their interesting physical effects can be found in [@Sch06] and references therein. On the other hand, the formal analogy pointed out in [@Ca07] is based only on the geometrical roles of the Higgs field and the gauge connection on the one hand, and of the vielbein and the spin connection on the other; the detailed form of the field equations is not so important. Therefore, the present approach to analyzing the dynamical role of torsion could also work in some more general cases not belonging to the Lovelock class. In the Yang-Mills-Higgs theory, the BPS equations typically involve linear relations (such as higher dimensional self-duality conditions) among $D^{a}\phi $ \[the covariant derivative of the Higgs field\] and $F^{ab}$ \[the Yang-Mills field strength\], in which $\phi $ can enter quadratically (as, for instance, in the vortex case [@OW99; @To05]). Inspired by this, a natural ansatz for gravity with torsion is the linear relation $$T^{c}=f_{ab}^{c}\left( \alpha R^{ab}+\beta e^{a}e^{b}\right) \label{gravBPS1}$$where $f_{ab}^{c}$ is an appropriately chosen three index tensor and $\alpha $ and $\beta $ are two constants. Now there does not exist a genuine invariant tensor with three indices (that is, $f_{ab}^{c}$ in Eq. (\[gravBPS1\])) in the five-dimensional Lorentz group. A solution involving such a tensor would necessarily break some of the Lorentz symmetry. One of the features of topological defects is precisely that they partially break Lorentz invariance (the surviving generators are the ones leaving the defects invariant). For instance, when in quantum field theory one expands around the trivial vacuum (all the fields equal to zero), the Lorentz generators annihilate the vacuum (see, for instance, [@We96]). When expanding around non-trivial saddle points (that is, topological defects) this is not so, since the position and the structure of the topological defects make the action of the Lorentz generators on the vacuum non-trivial. In the case of our solutions, there is a three-dimensional submanifold of constant curvature. Thus it is natural to choose the tensor $f_{ijk}=\epsilon _{ijk}$, consistent with the unbroken Lorentz generators. Since our solutions are of the form (\[gravBPS1\]), there is some analogy between our solutions and BPS states. However, because of the very different structure of the Lovelock action, it is not easy to make this analogy precise; in particular, it is not easy to construct an energy functional from which to deduce a BPS bound. In spite of the fact that neither the supersymmetric extension nor a BPS bound is known, some interesting features of BPS states can be analyzed without the tools of SUSY. In particular, in the vacuum analogue of the Bertotti-Robinson solution the torsion is bounded from below for a finite sized sphere and AdS length scale. This means that there is no continuous zero torsion limit. Such a solution therefore appears as a topological excitation in which the torsion intrinsically *cannot be small*.
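Concretely, from the torsion given in the summary of the solutions, $$\tau ^{2}=\Lambda _{M}+\frac{1}{6c_{2}}\ \geq \ \frac{1}{6c_{2}}=\frac{1}{3l^{2}}\qquad \text{whenever}\ \Lambda _{M}\geq 0\, ,$$ so for the $S_3$ and flat cases the torsion can never be tuned continuously to zero, whereas for $AdS_2\times H_3$ the bound is saturated at $\Lambda _{M}=-1/6c_{2}$, where the torsion vanishes, in agreement with the table above.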
Another interesting feature is related to the rigidity of the solutions due to the presence of torsion. In the non Chern-Simons case, the equations of motions have to fulfil a strong compatibility condition: taking the covariant derivative of Eq. (\[equationcurvature\]) and comparing with Eq. (\[equationtorsion\]) it turns out that [@Troncoso:1999pk] $$\begin{aligned} e^{b}R^{cd}T^{e}\epsilon _{abcde} &=&0, \\ e^{b}e^{c}e^{d}T^{e}\epsilon _{abcde} &=&0.\end{aligned}$$Such conditions are absent in the Chern-Simons case because of the tuning of the coefficients. The rigidity of the above conditions makes clear why it is so difficult to find exact solutions with intrinsic torsion (when the torsion is absent the above conditions are trivial). It is then apparent that the “BPS-inspired" ansatz for the torsion proposed in [@Ca07] is quite good because it naturally provides one with a method to solve the above conditions due to the identities (\[1I1\]) and (\[2I2\]). As further evidence that the solution may be a topologically non-trivial vacuum, it is worth pointing out that the action (\[special\_action\]), evaluated on the solutions is zero (compare with Euclidean instantons which have finite action), as can be easily checked. In contrast, the action evaluated on the two $AdS_5$ solutions, does not vanish. Conclusions and outlook ======================= It has been shown that in five dimensional Einstein-Gauss-Bonnet gravity there are exact solutions with non vanishing intrinsic torsion. These were found for a special choice of coupling constants given by the action ([special\_action]{}), which *is not the Chern-Simons combination*. To the best of the authors knowledge, no solutions with torsion in non Chern-Simons Einstein-Gauss-Bonnet gravity have been found before. The analogies with field theory and the peculiar features of such states have been discussed. Among the solutions found there is an analogue of the Bertotti-Robinson metric in which the torsion is the dual of a covariantly constant electromagnetic field. However, without knowing the supersymmetric extension of this theory, there is no obvious way to write the Killing spinor equation. This solution is, in a sense a topological excitation in which there is no continuous zero torsion limit. Because of the rigidity of the Einstein-Gauss-Bonnet equations of motions in the presence of nontrivial torsion there are strong constraints on small excitations (besides the re-scaling of the physical parameters appearing in the solutions). It is reasonable to expect that, for this reason, the above exact solutions with the “BPS-inspired" torsion could be a topologically non-trivial vacuum state. In view of the analogy with the Bertotti-Robinson solution, one may also expect the existence of a solution analogue of the extreme AdS-Reissner-Nordstrom black hole which interpolates between the Bertotti-Robinson and the AdS metric. This is an open question for future research. Acknowledgements {#acknowledgements .unnumbered} ================ The authors would like to thank Ricardo Troncoso for many stimulating discussions and important bibliographic suggestions. We also thank E. Gravanis, H. Maeda, J. Oliva and J. Zanelli for many suggestions and continuous encouraging. This work has been partially supported by PRIN SINTESI 2007 and by Proy. 
FONDECYT N${{}^\circ}$3070055, 3070057, 3060016 and by institutional grants to CECS of the Millennium Science Initiative, Chile, and Fundaciòn Andes, and also benefits from the generous support to CECS by Empresas CMPC. On the generality of the ansatz =============================== Let us now argue that the solutions found in the previous section are quite general for a product metric involving a maximally symmetric three-manifold $% M_3$. Let us now assume that the metric is $N_2\times M_3$, where now $N_2$ is an arbitrary two-manifold. That is to say, we shall not specify the form of $\hat{R}^{01}$. We shall keep the same ansatz for the torsion on $M_3$ except that, since $% N_2$ need not be maximally symmetric, $\tau$ may depend on $(x,t)$, the co-ordinates of $N_2$. Thus we look for solutions of the form: $$\begin{gathered} T^0 = F(x,t)\, e^0 \wedge e^1\, , \quad T^1 = G(x,t)\, e^0 \wedge e^1\, ,\quad T_i = \tau(x,t)\, \epsilon_{ijk} e^j \wedge e^k\, .\end{gathered}$$ The contorsion is: $$\begin{gathered} K^{01} = F e^0 - G e^1\, , \qquad K_{ij} = -\tau\, \epsilon_{ijk} e^k\, ,\end{gathered}$$ The components of curvature along $M_3$ are found using the formula ([well\_known]{}). $$\begin{gathered} R^{ij} = \left(\Lambda_M - \tau^2 \right) e^i e^j - d\tau \,\epsilon^{ijk}\, e_k\, .\end{gathered}$$ Let us study the field equations (\[equationtorsion\]) and ([equationcurvature]{}). The component $\mathcal{E}_{01} =0$ tells us immediately that $d\tau$ vanishes: $$\begin{gathered} \tau = \text{constant}\, .\end{gathered}$$ The components $\mathcal{E}_{0i} =0$ and $\mathcal{E}_{1i} =0$ imply: $$\begin{gathered} \label{contradiction1} 1-\tau^2 +\frac{c_1}{2c_2} =0\qquad \text{or} \qquad F =0,\ G = 0\ .\end{gathered}$$ The component $\mathcal{E}_{0} = 0$ gives: $$\begin{gathered} \label{contradiction2} 1-\tau^2 + \frac{2 c_0}{c_1} =0\, .\end{gathered}$$ Comparing equations (\[contradiction1\]) and (\[contradiction2\]) we have either $4c_0c_2 = c_1^2$ or $F = G=0$. The first of these alternatives is precisely the Chern-Simons combination. So we conclude that, for $4c_0c_2 \neq c_1^2$, we have $F = G = 0$ and $\tau = $ constant. Finally, $\mathcal{E}_{ij} =0$ imposes that the curvature of $N_2$ is constant. Thus the solution reduces to that of the previous section. [99]{} B. Zwiebach, Phys. Lett. B **156** (1985) 315. D. Lovelock, *J. Math. Phys.* **12** (1971) 498. B. Zumino, Phys. Rept. **137** (1986) 109. A. Mardones and J. Zanelli, Class. Quant. Grav. **8**, 1545 (1991). G. Cognola, E. Elizalde, S. Nojiri, S. D. Odintsov and S. Zerbini, to appear in *Phys. Rev*. **D**, hep-th/0611198. B. Carter and I. Neupane, *JCAP* **0606** 004 (2006). Z. Guo, N. Ohta and S. Tsujikawa, *Phys.Rev.* **D75** (2007) 023520. G. Calcagni, S. Tsujikawa and M. Sami, *Class. Quant. Grav*. **22**, 3977 (2005). T. Koivisto and D. Mota, *Phys. Rev*. **D75** (2007) 023518; *Phys. Lett.* **B644** (2007) 104. B. Leith and I. Neupane, *JCAP* **0705** (2007) 019. S. Nojiri, S. D. Odintsov, P. V. Tretyakov,* Dark energy from modified F(R)-scalar-Gauss-Bonnet gravity*, hep-th/0704.2520. S. Nojiri, S. D. Odintsov, M. Sasaki, *Phys.Rev.* **D71** (2005) 123509. S. Nojiri, S. D. Odintsov and M. Sami, *Phys. Rev.* **D74**, 046004 (2006). M. Sami, A. Toporensky, P. Tretjakov and S. Tsujikawa, *Phys. Lett.* **B 619,** 193 (2005). A. Sanyal, *Phys. Lett*. **B645** (2007) 1. T. Sotiriou and E. Barausse, *Phys. Rev*. **D75**, 084007 (2007). M.Adak, T.Dereli, L.H.Ryder, *Class.Quant.Grav*. **18** (2001) 1503. R. Troncoso and J. 
Zanelli, Class. Quant. Grav.  **17**, 4451 (2000) \[arXiv:hep-th/9907109\]. A. H. Chamseddine, Phys. Lett. B **233** (1989) 291. Gustavo Dotti, Julio Oliva, Ricardo Troncoso, *Exact solutions for the Einstein-Gauss-Bonnet theory in five dimensions: Black holes, wormholes and spacetime horns*, hep-th/0706.1830. R. Aros, M. Contreras, *Phys.Rev*. **D73** (2006) 087501. M. Banados, Phys. Lett. B **579**, 13 (2004) \[arXiv:hep-th/0310160\]. R. Aros, M. Romo and N. Zamorano, arXiv:0705.1162 \[hep-th\]. J. T. Wheeler, Nucl. Phys. **B273**, 732 (1986). B. Bertotti, Phys. Rev. **116**, 1331 (1959). I. Robinson, Bull. Acad. Pol. Sci. Ser. Sci. Math. Astron. Phys. **7**, 351 (1959). R. Troncoso and J. Zanelli, Int. J. Theor. Phys. **38**, 1181 (1999) \[arXiv:hep-th/9807029\]. O. Chandia, J. Zanelli, *Phys.Rev.* **D55** (1997) 7580. F. Canfora, *Some solutions with torsion in Chern-Simons gravity and observable effects*, preprint CECS-PHY-07/12, gr-qc/0706.3538. D. I. Olive, P. C. West editors, *Duality and Supersymmetric Theories*, (Cambridge University Press, 1999). D. Tong, *TASI Lectures on Solitons*, hep-th/0509216. S. Weinberg, *The Quantum Theory of Fields*, Vol I and II, Cambridge University Press (1996). H.-J. Schmidt, *Fourth order gravity: equations, history, and application to cosmology*, *Int. J. Geom. Methods Physics* **4** (2007) in print; gr-qc/0602017. [^1]: As well as the Lovelock terms, one can also add to the action terms explicitly involving the torsion and Lorentz Chern-Simons terms related to the Pontryagin form [@Mardones:1990qc]. However, in this paper we focus on five dimensions, where no such terms exist. [^2]: It should be noted that there are more general actions with torsion that can be constructed if one does not insist on a first order theory. In four dimension the torsion plays an important geometrical role since the important topological invariant (constructed by Nieh and Yan) can be constructed $N=T^{a}T_{a}-e^{a}e^{b}R_{ab}$. Such an invariant appears in the anomalous term of the divergence of the chiral anomaly [@CZ97]. [^3]: The relationship between the constants appearing in Eqs (\[Itensor\]) and (\[action\]) is given by $\alpha =\frac{c_{2}}{2c_{1}}$, $\Lambda = -6\frac{% c_{0}}{c_{1}}$, $\kappa =2c_{1}$. [^4]: Since torsion shall be later introduced on $M_3$, the Levi-Civita connection shall be denoted by $\hat{\omega}^{ij}$. The symbol $\omega^{ij}$ is reserved for the full connection including the contorsion. [^5]: In terms of the notation of equation (\[Itensor\]), the coefficients satisfy $4\alpha \Lambda =-1$.
--- abstract: 'In a previous paper, we defined a space-level version ${\mathcal{X}_\mathit{Kh}}(L)$ of Khovanov homology. This induces an action of the Steenrod algebra on Khovanov homology. In this paper, we describe the first interesting operation, $\operatorname{Sq}^2{\colon}{\mathit{Kh}}^{i,j}(L)\to{\mathit{Kh}}^{i+2,j}(L)$. We compute this operation for all links up to $11$ crossings; this, in turn, determines the stable homotopy type of ${\mathcal{X}_\mathit{Kh}}(L)$ for all such links.' address: 'Department of Mathematics, Columbia University, New York, NY 10027' author: - Robert Lipshitz - Sucharit Sarkar bibliography: - 'Squares.bib' title: A Steenrod Square on Khovanov Homology --- [^1] [^2] Introduction ============ Khovanov homology, a categorification of the Jones polynomial, associates a bigraded abelian group ${\mathit{Kh}}^{i,j}_{{\mathbb{Z}}}(L)$ to each link $L\subset S^3$ [@Kho-kh-categorification]. In [@RS-khovanov] we gave a space-level version of Khovanov homology. That is, to each link $L$ we associated stable spaces (finite suspension spectra) ${\mathcal{X}_\mathit{Kh}}^j(L)$, well-defined up to stable homotopy equivalence, so that the reduced cohomology ${\widetilde}{H}^i({\mathcal{X}_\mathit{Kh}}^j(L))$ of these spaces is the Khovanov homology ${\mathit{Kh}}^{i,j}_{{\mathbb{Z}}}(L)$ of $L$. Another construction of such spaces has been given by [@HKK-Kh-htpy]. The space ${\mathcal{X}_\mathit{Kh}}^j(L)$ gives algebraic structures on Khovanov homology which are not (yet) apparent from other perspectives. Specifically, while the cohomology of a spectrum does not have a cup product, it does carry stable cohomology operations. The bulk of this paper is devoted to giving an explicit description of the Steenrod square $$\operatorname{Sq}^2{\colon}{\mathit{Kh}}^{i,j}_{{\mathbb{F}}_2}(L)\to {\mathit{Kh}}^{i+2,j}_{{\mathbb{F}}_2}(L)$$ induced by the spectrum ${\mathcal{X}_\mathit{Kh}}^j(L)$. First we give a combinatorial definition of this operation $\operatorname{Sq}^2$ in and then prove that it agrees with the Steenrod square coming from ${\mathcal{X}_\mathit{Kh}}^j(L)$ in . The description is suitable for computer computation, and we have implemented it in Sage. The results for links with $11$ or fewer crossings are given in . In particular, the operation $\operatorname{Sq}^2$ is nontrivial for many links, such as the torus knot $T_{3,4}$. This implies a nontriviality result for the Khovanov space: \[thm:not-moore\] The Khovanov homotopy type ${\mathcal{X}_\mathit{Kh}}^{11}(T_{3,4})$ is not a wedge sum of Moore spaces. Even simpler than $\operatorname{Sq}^2$ is the operation $\operatorname{Sq}^1{\colon}{\mathit{Kh}}^{i,j}_{{\mathbb{F}}_2}(L)\to {\mathit{Kh}}^{i+1,j}_{{\mathbb{F}}_2}(L)$. Let ${\beta}{\colon}{\mathit{Kh}}^{i,j}_{{\mathbb{F}}_2}(L)\to {\mathit{Kh}}^{i+1,j}_{{\mathbb{Z}}}(L)$ be the Bockstein homomorphism, and $r{\colon}{\mathit{Kh}}^{i,j}_{{\mathbb{Z}}}(L)\to{\mathit{Kh}}^{i,j}_{{\mathbb{F}}_2}(L)$ be the reduction mod $2$. Then $\operatorname{Sq}^1=r{\beta}$, and is thus determined by the integral Khovanov homology; see also . As we will discuss in , the operations $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$ together determine the Khovanov homotopy type ${\mathcal{X}_\mathit{Kh}}(L)$ whenever the Khovanov homology of $L$ has a sufficiently simple form. In particular, they determine ${\mathcal{X}_\mathit{Kh}}(L)$ for any link $L$ of $11$ or fewer crossings; these homotopy types are listed in (). 
The subalgebra of the Steenrod algebra generated by $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$ is $${\mathcal{A}}(1)=\frac{{\mathbb{F}}_2{\{{\operatorname{Sq}^1,\operatorname{Sq}^2}\}}}{(\operatorname{Sq}^1)^2,(\operatorname{Sq}^2)^2+\operatorname{Sq}^1\operatorname{Sq}^2\operatorname{Sq}^1}$$ where ${\mathbb{F}}_2{\{{\operatorname{Sq}^1,\operatorname{Sq}^2}\}}$ is the non-commuting extension of ${\mathbb{F}}_2$ by the variables $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$. By the Adem relations, the next Steenrod square $\operatorname{Sq}^3$ is determined by $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$, viz. $\operatorname{Sq}^3=\operatorname{Sq}^1\operatorname{Sq}^2$. Therefore, the next interesting Steenrod square to compute would be $\operatorname{Sq}^4{\colon}{\mathit{Kh}}^{i,j}_{{\mathbb{F}}_2}(L)\to {\mathit{Kh}}^{i+4,j}_{{\mathbb{F}}_2}(L)$. The Bockstein ${\beta}$ and the operation $\operatorname{Sq}^2$ are sometimes enough to compute Khovanov $K$-theory: in the Atiyah-Hirzebruch spectral sequence for $K$-theory, the $d_2$ differential is zero and the $d_3$ differential is the integral lift ${\beta}\operatorname{Sq}^2 r$ of $\operatorname{Sq}^3$ (see, for instance [@Adams-top-book Proposition 16.6] or [@Law-top-overflow]). For grading reasons, this operation vanishes for links with $11$ or fewer crossings; indeed, the Atiyah-Hirzebruch spectral sequence degenerates in these cases, and the Khovanov $K$-theory is just the tensor product of the Khovanov homology and $K^*({\mathrm{pt}})$. In principle, however, the techniques of this paper could be used to compute Khovanov $K$-theory in some interesting cases. Similarly, in certain situations, the Adams spectral sequence may be used to compute the real connective Khovanov $KO$-theory using merely the module structure of Khovanov homology ${\mathit{Kh}}_{{\mathbb{F}}_2}(L)$ over ${\mathcal{A}}(1)$. **Acknowledgements.** We thank C. Seed and M. Khovanov for helpful conversations. We also thank the referee for many helpful suggestions and corrections. The answer {#sec:answer} ========== Sign and frame assignments on the cube {#subsec:cube} -------------------------------------- Consider the $n$-dimensional cube ${\mathcal{C}}(n)=[0,1]^n$, equipped with the natural CW complex structure. For a vertex $v=(v_1,\dots,v_n)\in\{0,1\}^n$, let $|v|=\sum_i v_i$ denote the Manhattan norm of $v$. For vertices $u,v$, declare $v\leq u$ if for all $i$, $v_i\leq u_i$; if $v\leq u$ and $|u-v|=k$, we write $v\leq_k u$. For a pair of vertices $v\leq_k u$, let ${\mathcal{C}}_{u,v}={\{x\in [0,1]^n\mid\forall i{\colon}v_i\leq x_i\leq u_i\}}$ denote the corresponding $k$-cell of ${\mathcal{C}}(n)$. Let $C^*({\mathcal{C}}(n),{\mathbb{F}}_2)$ denote the cellular cochain complex of ${\mathcal{C}}(n)$ over ${\mathbb{F}}_2$. Let $1_k\in C^k({\mathcal{C}}(n),{\mathbb{F}}_2)$ denote the $k$-cocycle that sends all $k$-cells to $1$. The *standard sign assignment $s\in C^1({\mathcal{C}}(n),{\mathbb{F}}_2)$* (denoted $s_0$ in [@RS-khovanov Definition \[KhSp:def:sign-assignment\]]) is the following $1$-cochain. If $u=({\epsilon}_1,\dots,{\epsilon}_{i-1},1,{\epsilon}_{i+1},\dots,{\epsilon}_n)$ and $v=({\epsilon}_1,\dots,{\epsilon}_{i-1},0,{\epsilon}_{i+1},\dots,{\epsilon}_n)$, then $$s({\mathcal{C}}_{u,v})=({\epsilon}_1+\dots+{\epsilon}_{i-1})\pmod 2\in{\mathbb{F}}_2.$$ It is easy to see that ${\delta}s=1_2$. The *standard frame assignment $f\in C^2({\mathcal{C}}(n),{\mathbb{F}}_2)$* is the following $2$-cochain. 
If $u=({\epsilon}_1,\dots,{\epsilon}_{i-1},1,{\epsilon}_{i+1},\dots,{\epsilon}_{j-1},1,{\epsilon}_{j+1},\dots,{\epsilon}_n)$ and $v=({\epsilon}_1,\dots,{\epsilon}_{i-1},0,{\epsilon}_{i+1},\dots,{\epsilon}_{j-1},0,{\epsilon}_{j+1},\dots,\allowbreak{\epsilon}_n)$, then $$f({\mathcal{C}}_{u,v})=({\epsilon}_1+\dots+{\epsilon}_{i-1})({\epsilon}_{i+1}+\dots+{\epsilon}_{j-1})\pmod 2\in{\mathbb{F}}_2.$$ \[lem:frame-assignment-sum\] For any $v\leq_3 u$, $$({\delta}f)({\mathcal{C}}_{u,v}) = \sum_{w\in{\{w\midv{\leq_1}w\leq_2 u\}}} s({\mathcal{C}}_{w,v}).$$ Let $u=({\epsilon}_1,\dots,{\epsilon}_{i-1},1,{\epsilon}_{i+1},\dots,{\epsilon}_{j-1},1,{\epsilon}_{j+1},\dots,{\epsilon}_{k-1},1,{\epsilon}_{k+1},\dots,{\epsilon}_n)$ and $v=({\epsilon}_1,\dots,\allowbreak{\epsilon}_{i-1},\allowbreak 0,\allowbreak{\epsilon}_{i+1},\dots,{\epsilon}_{j-1},0,{\epsilon}_{j+1},\dots,{\epsilon}_{k-1},0,{\epsilon}_{k+1},\dots,{\epsilon}_n)$. Then, $$\begin{aligned} \sum_{w\in{\{w\midv{\leq_1}w\leq_2 u\}}} s({\mathcal{C}}_{w,v})&= ({\epsilon}_1+\dots+{\epsilon}_{i-1})+({\epsilon}_1+\dots+{\epsilon}_{j-1})+({\epsilon}_1+\dots+{\epsilon}_{k-1})\\ &=({\epsilon}_1+\dots+{\epsilon}_{i-1})+({\epsilon}_{j+1}+\dots+{\epsilon}_{k-1}). \end{aligned}$$ On the other hand, $$\begin{aligned} ({\delta}f)({\mathcal{C}}_{u,v})&=({\epsilon}_{1}+\dots+{\epsilon}_{i-1})({\epsilon}_{i+1}+\dots+{\epsilon}_{j-1})+({\epsilon}_{1}+\dots+{\epsilon}_{i-1})({\epsilon}_{i+1}+\dots+{\epsilon}_{j-1})\\ &\qquad{}+ ({\epsilon}_{1}+\dots+{\epsilon}_{i-1})({\epsilon}_{i+1}+\dots+{\epsilon}_{j-1}+0+{\epsilon}_{j+1}+\dots+{\epsilon}_{k-1})\\ &\qquad{}+ ({\epsilon}_{1}+\dots+{\epsilon}_{i-1})({\epsilon}_{i+1}+\dots+{\epsilon}_{j-1}+1+{\epsilon}_{j+1}+\dots+{\epsilon}_{k-1})\\ &\qquad{}+ ({\epsilon}_{1}+\dots+{\epsilon}_{i-1}+0+{\epsilon}_{i+1}+\dots+{\epsilon}_{j-1})({\epsilon}_{j+1}+\dots+{\epsilon}_{k-1})\\ &\qquad{}+({\epsilon}_{1}+\dots+{\epsilon}_{i-1}+1+{\epsilon}_{i+1}+\dots+{\epsilon}_{j-1})({\epsilon}_{j+1}+\dots+{\epsilon}_{k-1})\\ &=({\epsilon}_1+\dots+{\epsilon}_{i-1})+({\epsilon}_{j+1}+\dots+{\epsilon}_{k-1}), \end{aligned}$$ thus completing the proof. The Khovanov setup ------------------ In this subsection, we recall the definition of the Khovanov chain complex associated to an oriented link diagram $L$. Assume $L$ has $n$ crossings that have been ordered, and let $n_-$ denote the number of negative crossings in $L$. In what follows, we will usually work over ${\mathbb{F}}_2$, and we will always have a fixed $n$-crossing link diagram $L$ in the background. Hence, we will typically drop both ${\mathbb{F}}_2$ and $L$ from the notation, writing ${\mathit{KC}}={\mathit{KC}}_{{\mathbb{F}}_2}(L)$ for the Khovanov complex of $L$ with ${\mathbb{F}}_2$-coefficients and ${\mathit{KC}}_{{\mathbb{Z}}}$ for the Khovanov complex of $L$ with ${\mathbb{Z}}$-coefficients. Given a vertex $u\in\{0,1\}^n$, let ${D_{}(u)}$ be the corresponding complete resolution of the link diagram $L$, where we take the $0$ resolution at $i{^{\text{th}}}$ crossing if $u_i=0$, and the $1$-resolution otherwise. We usually view ${D_{}(u)}$ as a *resolution configuration* in the sense of [@RS-khovanov Definition \[KhSp:def:res-config\]]; that is, we add arcs at the $0$-resolutions to record the crossings. The set of circles ([resp. ]{}arcs) that appear in ${D_{}(u)}$ is denoted $Z({D_{}(u)})$ ([resp. ]{}$A({D_{}(u)})$). The *Khovanov generators* are of the form ${\mathbf{x}}=({D_{}(u)},x)$, where $x$ is a labeling of the circles in $Z({D_{}(u)})$ by elements of $\{x_+,x_-\}$. 
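Both the identity ${\delta}s=1_{2}$ and Lemma \[lem:frame-assignment-sum\] from the previous subsection are easy to confirm by direct enumeration on small cubes; the following Python sketch (purely illustrative, with a cell ${\mathcal{C}}_{u,v}$ encoded as the pair of vertex tuples $(u,v)$) checks both for $n\leq 5$:

```python
# Brute-force check, for small n, of (i) delta s = 1_2 on every 2-cell and
# (ii) Lemma [lem:frame-assignment-sum] relating delta f on a 3-cell to the
# sign assignment on the three edges leaving its bottom vertex v.
from itertools import product

def s(u, v):                       # standard sign assignment on a 1-cell C_{u,v}
    i = next(a for a in range(len(u)) if u[a] != v[a])
    return sum(v[:i]) % 2

def f(u, v):                       # standard frame assignment on a 2-cell C_{u,v}
    i, j = [a for a in range(len(u)) if u[a] != v[a]]
    return (sum(v[:i]) * sum(v[i + 1:j])) % 2

def faces(u, v):                   # codimension-1 faces of the cell C_{u,v}
    out = []
    for a in range(len(u)):
        if u[a] != v[a]:
            out.append((u[:a] + (0,) + u[a + 1:], v))   # face where x_a = 0
            out.append((u, v[:a] + (1,) + v[a + 1:]))   # face where x_a = 1
    return out

for n in range(2, 6):
    for v in product((0, 1), repeat=n):
        for u in product((0, 1), repeat=n):
            if any(u[a] < v[a] for a in range(n)):
                continue                       # require v <= u
            flips = [a for a in range(n) if u[a] != v[a]]
            if len(flips) == 2:                # delta s = 1 on every 2-cell
                assert sum(s(*c) for c in faces(u, v)) % 2 == 1
            elif len(flips) == 3:              # the lemma on every 3-cell
                lhs = sum(f(*c) for c in faces(u, v)) % 2
                rhs = sum(s(v[:a] + (1,) + v[a + 1:], v) for a in flips) % 2
                assert lhs == rhs
print("delta s = 1_2 and the frame-assignment lemma verified for n <= 5")
```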
Each Khovanov generator carries a *bigrading $({{\mathrm{gr}}_{h}},{{\mathrm{gr}}_{q}})$*; ${{\mathrm{gr}}_{h}}$ is called the homological grading and ${{\mathrm{gr}}_{q}}$ is called the quantum grading. The bigrading is defined by: $$\begin{aligned} {{\mathrm{gr}}_{h}}({D_{}(u)},x)&=-n_-+|u|\\ {{\mathrm{gr}}_{q}}({D_{}(u)},x)&=n-3n_-+|u|+\#{\{Z\in Z({D_{}(u)})\midx(Z)=x_+\}}\\ &\qquad\qquad{}-\#{\{Z\in Z({D_{}(u)})\midx(Z)=x_-\}}.\end{aligned}$$ The set of all Khovanov generators in bigrading $(i,j)$ is denoted ${\mathit{KG}}^{i,j}$. There is an obvious map ${\mathscr{F}}{\colon}{\mathit{KG}}\to\{0,1\}^n$ that sends $({D_{}(u)},x)$ to $u$. It is clear that if ${\mathbf{x}}\in{\mathit{KG}}^{i,j}$, then $|{\mathscr{F}}({\mathbf{x}})|=n_-+i$. The *Khovanov chain group* in bigrading $(i,j)$, ${\mathit{KC}}^{i,j}$, is the ${\mathbb{F}}_2$ vector space with basis ${\mathit{KG}}^{i,j}$; for ${\mathbf{x}}\in{\mathit{KG}}^{i,j}$, and ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$, we say ${\mathbf{x}}\in{\mathbf{c}}$ if the coefficient of ${\mathbf{x}}$ in ${\mathbf{c}}$ is $1$, and ${\mathbf{x}}\notin{\mathbf{c}}$ otherwise. The *Khovanov differential* ${\delta}$ maps ${\mathit{KC}}^{i,j}\to{\mathit{KC}}^{i+1,j}$, and is defined as follows. If ${\mathbf{y}}=({D_{}(v)},y)\in{\mathit{KG}}^{i,j}$ and ${\mathbf{x}}=({D_{}(u)},x)\in{\mathit{KG}}^{i+1,j}$, then ${\mathbf{x}}\in{\delta}{\mathbf{y}}$ if the following hold: 1. $v{\leq_1}u$, that is, ${D_{}(u)}$ is obtained from ${D_{}(v)}$ by performing an embedded $1$-surgery along some arc $A_1\in A({D_{}(v)})$. In particular, either, 1. \[case:split\] the endpoints of $A_1$ lie on the same circle, say $Z_1\in{D_{}(v)}$, which corresponds to two circles, say $Z_2,Z_3\in{D_{}(u)}$; or, 2. \[case:merge\] The endpoints of $A_1$ lie on two different circles, say $Z_1,Z_2\in{D_{}(v)}$, which correspond to a single circle, say $Z_3\in{D_{}(u)}$. 2. In [Case (\[case:split\])]{}, $x$ and $y$ induce the same labeling on ${D_{}(u)}{\setminus}\{Z_2,Z_3\}= {D_{}(v)}{\setminus}\{Z_1\}$; in [Case (\[case:merge\])]{}, $x$ and $y$ induce the same labeling on ${D_{}(u)}{\setminus}\{Z_3\}= {D_{}(v)}{\setminus}\{Z_1,Z_2\}$; 3. In [Case (\[case:split\])]{}, either $y(Z_1)=x(Z_2)=x(Z_3)=x_-$ or $y(Z_1)=x_+$ and $\{x(Z_2),x(Z_3)\}=\{x_+,x_-\}$; in [Case (\[case:merge\])]{}, either $y(Z_1)=y(Z_2)=x(Z_3)=x_+$ or $\{y(Z_1),y(Z_2)\}=\{x_+,x_-\}$ and $x(Z_3)=x_-$. It is clear that if ${\mathbf{x}}\in{\delta}{{\mathbf{y}}}$, then ${\mathscr{F}}({\mathbf{y}}){\leq_1}{\mathscr{F}}({\mathbf{x}})$. The *Khovanov homology* is the homology of $({\mathit{KC}},{\delta})$; the Khovanov homology in bigrading $(i,j)$ is denoted ${\mathit{Kh}}^{i,j}$. For a cycle ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$, let ${[{\mathbf{c}}]}\in{\mathit{Kh}}^{i,j}$ denote the corresponding homology element. A first look at the Khovanov space ---------------------------------- The Khovanov chain complex is actually defined over ${\mathbb{Z}}$, and the ${\mathbb{F}}_2$ versions is its mod $2$ reduction. The Khovanov chain group over ${\mathbb{Z}}$ in bigrading $(i,j)$, ${\mathit{KC}}_{{\mathbb{Z}}}$, is the free ${\mathbb{Z}}$-module with basis ${\mathit{KG}}^{i,j}$. 
The differential ${\delta}_{{\mathbb{Z}}}{\colon}{\mathit{KC}}_{{\mathbb{Z}}}^{i,j}\to {\mathit{KC}}_{{\mathbb{Z}}}^{i+1,j}$ is defined by $$\label{eq:integer-kh-diff} {\delta}_{{\mathbb{Z}}}{\mathbf{y}}=\sum_{{\mathbf{x}}\in{\delta}{\mathbf{y}}} (-1)^{s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})}{\mathbf{x}}.$$ In [@RS-khovanov Theorem \[KhSp:thm:kh-space\]], we construct Khovanov spectra ${\mathcal{X}_\mathit{Kh}}^j$ satisfying ${\widetilde}{H}^i({\mathcal{X}_\mathit{Kh}}^j)={\mathit{Kh}}^{i,j}_{{\mathbb{Z}}}$. Moreover, the spectrum ${\mathcal{X}_\mathit{Kh}}=\bigvee_j{\mathcal{X}_\mathit{Kh}}^j$ is defined as the suspension spectrum of a CW complex ${|{\mathscr{C}_K}|_{{}}}$, formally desuspended a few times [@RS-khovanov Definition \[KhSp:def:Kh-space\]] (this space is denoted $Y=\bigvee_jY_j$ in ). Furthermore, there is a bijection between the cells (except the basepoint) of ${|{\mathscr{C}_K}|_{{}}}$ and the Khovanov generators in ${\mathit{KG}}$, which induces an isomorphism between ${\widetilde}{C}^*({|{\mathscr{C}_K}|_{{}}})$, the reduced cellular cochain complex, and $({\mathit{KC}}_{{\mathbb{Z}}},{\delta}_{{\mathbb{Z}}})$. This allows us to associate homotopy invariants to Khovanov homology. Let ${\mathcal{A}}$ be the (graded) Steenrod algebra over ${\mathbb{F}}_2$, and let ${\mathcal{A}}(1)$ be the subalgebra generated by $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$. The Steenrod algebra ${\mathcal{A}}$ acts on the Khovanov homology ${\mathit{Kh}}$, viewed as the (reduced) cohomology of the spectrum ${\mathcal{X}_\mathit{Kh}}$. The (stable) homotopy type of ${\mathcal{X}_\mathit{Kh}}$ is a knot invariant, and therefore, the action of ${\mathcal{A}}$ on ${\mathit{Kh}}$ is a knot invariant as well. The ladybug matching {#subsec:ladybug-matching} -------------------- Let ${\mathbf{x}}\in{\mathit{KG}}^{i+2,j}$ and ${\mathbf{y}}\in{\mathit{KG}}^{i,j}$ be Khovanov generators. Consider the set of Khovanov generators between ${\mathbf{x}}$ and ${\mathbf{y}}$: $${\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}={\{{\mathbf{z}}\in{\mathit{KG}}^{i+1,j}\mid{\mathbf{x}}\in{\delta}{\mathbf{z}},{\mathbf{z}}\in{\delta}{\mathbf{y}}\}}.$$ Since ${\delta}$ is a differential, for all ${\mathbf{x}},{\mathbf{y}}$, there are an even number of elements in ${\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$. It is well-known that this even number is $0$, $2$ or $4$. Indeed: [[@RS-khovanov Lemma \[KhSp:lem:ind-2-res-config\]]]{}\[lem:ladybug-config\] Let ${\mathbf{x}}=({D_{}(u)},x)$ and ${\mathbf{y}}=({D_{}(v)},y)$. The set ${\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$ has $4$ elements if and only if the following hold. 1. $v\leq_2 u$, that is, ${D_{}(u)}$ is obtained from ${D_{}(v)}$ by doing embedded $1$-surgeries along two arcs, say $A_1,A_2\in A({D_{}(v)})$. 2. The endpoints of $A_1$ and $A_2$ all lie on the same circle, say $Z_1\in Z({D_{}(v)})$. Furthermore, their endpoints are linked on $Z_1$, so $Z_1$ gives rise to a single circle, say $Z_2$, in $Z({D_{}(u)})$. 3. $x$ and $y$ agree on $Z({D_{}(u)}){\setminus}\{Z_2\}=Z({D_{}(v)}){\setminus}\{Z_1\}$. 4. $y(Z_1)=x_+$ and $x(Z_2)=x_-$. In the construction of the Khovanov space, we made a global choice. 
This choice furnishes us with a *ladybug matching ${\mathfrak{l}_{{}}}$*, which is a collection $\{{\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}\}$, for ${\mathbf{x}},{\mathbf{y}}\in{\mathit{KG}}$ with $|{\mathscr{F}}({\mathbf{x}})|=|{\mathscr{F}}({\mathbf{y}})|+2$, of fixed point free involutions ${\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}{\colon}{\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}\to {\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$. The ladybug matching is defined as follows. Fix ${\mathbf{x}}=({D_{}(u)},x)$ and ${\mathbf{y}}=({D_{}(v)},y)$ in ${\mathit{KG}}$ with $|u|=|v|+2$; we will describe a fixed point free involution ${\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}$ of ${\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$. The only case of interest is when ${\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$ has $4$ elements; hence assume that we are in the case described in . Do an isotopy in $S^2$ so that ${D_{}(v)}$ looks like . (In the figure, we have not shown the circles in $Z({D_{}(v)}){\setminus}\{Z_1\}$ and the arcs in $A({D_{}(v)}){\setminus}\{A_1,A_2\}$.) shows the four generators in ${\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$ and the ladybug matching ${\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}$. (Once again, we have not shown the extra circles and arcs.) It is easy to check (cf.[@RS-khovanov Lemma \[KhSp:lem:in-out-preserved\]]) that this matching is well-defined, i.e., it is independent of the choice of isotopy and the numbering of the two arcs in $A({D_{}(v)}){\setminus}A({D_{}(u)})$ as $\{A_1,A_2\}$. \[lem:ladybug-changes-sign\] Let ${\mathbf{x}},{\mathbf{y}},{\mathbf{z}}$ be Khovanov generators with ${\mathbf{z}}\in{\mathcal{G}_{{{\mathbf{x}}},{{\mathbf{y}}}}}$. Let ${\mathbf{z}}'={\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}({\mathbf{z}})$. Then $$s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{z}})}) + s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{z}}),{\mathscr{F}}({\mathbf{y}})}) + s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{z}}')}) + s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{z}}'),{\mathscr{F}}({\mathbf{y}})}) =1.$$ Let $u={\mathscr{F}}({\mathbf{x}})$, $v={\mathscr{F}}({\mathbf{y}})$, $w={\mathscr{F}}({\mathbf{z}})$ and $w'={\mathscr{F}}({\mathbf{z}}')$. We have $v{\leq_1}w,w'{\leq_1}u$. It follows from the definition of ladybug matching () that $w\neq w'$. Therefore, $u,v,w,w'$ are precisely the four vertices that appear in the $2$-cell ${\mathcal{C}}_{u,v}$. Since ${\delta}s=1_2$, $${\delta}s({\mathcal{C}}_{u,v}) = s({\mathcal{C}}_{u,w})+s({\mathcal{C}}_{w,v})+s({\mathcal{C}}_{u,w'}) + s({\mathcal{C}}_{w',v})=1.\qedhere$$ The operation Sq1 {#subsec:sq1-describe} ----------------- Let ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ be a cycle in the Khovanov chain complex. For ${\mathbf{x}}\in{\mathit{KG}}^{i+1,j}$, let ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}={\{{\mathbf{y}}\in{\mathit{KG}}^{i,j}\mid{\mathbf{x}}\in{\delta}{\mathbf{y}}, {\mathbf{y}}\in{\mathbf{c}}\}}$. 
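Concretely, if the differential is stored as a map sending each generator ${\mathbf{y}}$ to the set ${\delta}{\mathbf{y}}$, the sets ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}$ can be read off directly. The sketch below is our own illustration (with made-up generator names); it also checks that the count is even, as it must be since ${\mathbf{c}}$ is a cycle.

```python
# Sketch: compute G_c(x) = { y : x in delta(y), y in c } from a differential
# stored as a dictionary  delta[y] = set of generators appearing in delta(y).
# Generator names ('y1', 'x1', ...) are placeholders for this illustration.

def G_c(delta, c, x):
    return {y for y in c if x in delta[y]}

# A tiny mod-2 complex: delta(y1) = delta(y2) = {x1}, and c = y1 + y2 is a cycle.
delta = {'y1': {'x1'}, 'y2': {'x1'}}
c = {'y1', 'y2'}

for x in ['x1']:
    G = G_c(delta, c, x)
    print(x, sorted(G), len(G) % 2 == 0)   # the count is even since c is a cycle
```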
\[def:boundary-matching\] A *boundary matching ${\mathfrak{m}}$ for ${\mathbf{c}}$* is a collection of pairs $({\mathfrak{b}_{{\mathbf{x}}}},{\mathfrak{s}_{{\mathbf{x}}}})$, one for each ${\mathbf{x}}\in{\mathit{KG}}^{i+1,j}$, where: - ${\mathfrak{b}_{{\mathbf{x}}}}$ is a fixed point free involution of ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}$, and - ${\mathfrak{s}_{{\mathbf{x}}}}$ is a map ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}\to{\mathbb{F}}_2$, such that for all ${\mathbf{y}}\in{\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}$, $$\{{\mathfrak{s}_{{\mathbf{x}}}}({\mathbf{y}}),{\mathfrak{s}_{{\mathbf{x}}}}({\mathfrak{b}_{{\mathbf{x}}}}({\mathbf{y}}))\}= \begin{cases} \{0,1\}&\text{if }s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})=s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathfrak{b}_{{\mathbf{x}}}}({\mathbf{y}}))})\\ \{0\}&\text{otherwise.} \end{cases}$$ Since ${\mathbf{c}}$ is a cycle, for any ${\mathbf{x}}$ there are an even number of elements in ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}$. Hence, there exists a boundary matching ${\mathfrak{m}}$ for ${\mathbf{c}}$. \[def:sq1\] Let ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ be a cycle. For any boundary matching ${\mathfrak{m}}=\{({\mathfrak{b}_{{\mathbf{x}}}},{\mathfrak{s}_{{\mathbf{x}}}})\}$ for ${\mathbf{c}}$, define the chain $\operatorname{sq}_{{\mathfrak{m}}}^1({\mathbf{c}})\in{\mathit{KC}}^{i+1,j}$ as $$\label{eq:sq1} \operatorname{sq}_{{\mathfrak{m}}}^1({\mathbf{c}})=\sum_{{\mathbf{x}}\in{\mathit{KG}}^{i+1,j}}\biggl(\sum_{{\mathbf{y}}\in{\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}}{\mathfrak{s}_{{\mathbf{x}}}}({\mathbf{y}})\biggr){\mathbf{x}}.$$ \[prop:sq1-agrees\] For any cycle ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ and any boundary matching ${\mathfrak{m}}$ for ${\mathbf{c}}$, $\operatorname{sq}_{{\mathfrak{m}}}^1({\mathbf{c}})$ is a cycle. Furthermore, $${[\operatorname{sq}_{{\mathfrak{m}}}^1({\mathbf{c}})]}=\operatorname{Sq}^1({[{\mathbf{c}}]}).$$ The first Steenrod square $\operatorname{Sq}^1$ is the Bockstein associated to the short exact sequence $$0\to{\mathbb{Z}}/2\to{\mathbb{Z}}/4\to{\mathbb{Z}}/2\to 0.$$ Since the differential in ${\mathit{KC}}_{{\mathbb{Z}}}$ is given by [Equation (\[eq:integer-kh-diff\])]{}, a chain representative for $\operatorname{Sq}^1({[{\mathbf{c}}]})$ is the following: $${\mathbf{b}}=\sum_{{\mathbf{x}}\in{\mathit{KG}}^{i+1,j}}\biggl(\frac{\#{\{{\mathbf{y}}\in{\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}\mid s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})=0\}}- \#{\{{\mathbf{y}}\in{\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}\mid s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})=1\}}}{2}\biggr){\mathbf{x}}.$$ It is easy to see that $$\sum_{{\mathbf{y}}\in{\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}}{\mathfrak{s}_{{\mathbf{x}}}}({\mathbf{y}})= \biggl(\frac{\#{\{{\mathbf{y}}\mid s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})=0\}}- \#{\{{\mathbf{y}}\mid s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})=1\}}}{2}\biggr) \pmod 2,$$ and hence ${\mathbf{b}}=\operatorname{sq}_{{\mathfrak{m}}}^1({\mathbf{c}})$.

The operation Sq2
-----------------

Let ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ be a cycle. Choose a boundary matching ${\mathfrak{m}}=\{({\mathfrak{b}_{{\mathbf{z}}}},{\mathfrak{s}_{{\mathbf{z}}}})\}$ for ${\mathbf{c}}$. 
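As a concrete illustration of the $\operatorname{sq}^1_{{\mathfrak{m}}}$ construction above (ours, not from [@RS-khovanov]), the following sketch builds one valid boundary matching for a cycle by pairing the elements of each ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}$ arbitrarily, assigns the ${\mathfrak{s}_{{\mathbf{x}}}}$-values according to whether the two signs agree, and then evaluates the defining formula for $\operatorname{sq}^1_{{\mathfrak{m}}}({\mathbf{c}})$. Generator names and sign values are made up for the illustration.

```python
# Sketch: one boundary matching and sq^1_m(c).
# delta[y] = set of generators in delta(y) (mod 2); s[(x, y)] = sign assignment
# value s(C_{F(x),F(y)}) in {0,1}.  All names and values here are illustrative.

def sq1(delta, s, c, next_generators):
    result = set()                                # the chain sq^1_m(c), as a set of generators
    for x in next_generators:
        G = [y for y in sorted(c) if x in delta[y]]   # G_c(x); even size since c is a cycle
        total = 0
        for y1, y2 in zip(G[0::2], G[1::2]):      # pair elements arbitrarily: this is b_x
            if s[(x, y1)] == s[(x, y2)]:          # equal signs: s_x takes values {0,1}
                total += 1                        # exactly one of the pair contributes
            # opposite signs: s_x takes values {0,0}, contributing nothing
        if total % 2 == 1:
            result.add(x)
    return result

delta = {'y1': {'x1'}, 'y2': {'x1'}}
c = {'y1', 'y2'}                                  # a cycle over F_2
print(sq1(delta, {('x1', 'y1'): 0, ('x1', 'y2'): 0}, c, ['x1']))  # {'x1'}: the Bockstein is nonzero
print(sq1(delta, {('x1', 'y1'): 0, ('x1', 'y2'): 1}, c, ['x1']))  # set(): the signs cancel over Z
```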
For ${\mathbf{x}}\in{\mathit{KG}}^{i+2,j}$, define $${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}={\{({\mathbf{z}},{\mathbf{y}})\in{\mathit{KG}}^{i+1,j}\times{\mathit{KG}}^{i,j}\mid{\mathbf{x}}\in{\delta}{\mathbf{z}}, {\mathbf{z}}\in{\delta}{\mathbf{y}},{\mathbf{y}}\in{\mathbf{c}}\}}.$$ Consider the edge-labeled graph ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}$, whose vertices are the elements of ${\mathcal{G}_{{\mathbf{c}}}({\mathbf{x}})}$ and whose edges are the following. 1. \[item:edge-1\]There is an unoriented edge between $({\mathbf{z}},{\mathbf{y}})$ and $({\mathbf{z}}',{\mathbf{y}})$, if the ladybug matching ${\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}$ matches ${\mathbf{z}}$ and ${\mathbf{z}}'$. This edge is labeled by $f({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})}) \in{\mathbb{F}}_2$, where $f$ denotes the standard frame assignment (). 2. \[item:edge-2\]There is an edge between $({\mathbf{z}},{\mathbf{y}})$ and $({\mathbf{z}},{\mathbf{y}}')$ if the matching ${\mathfrak{b}_{{\mathbf{z}}}}$ matches ${\mathbf{y}}$ with ${\mathbf{y}}'$. This edge is labeled by $0$. Furthermore, if ${\mathfrak{s}_{{\mathbf{z}}}}({\mathbf{y}})=0$ and ${\mathfrak{s}_{{\mathbf{z}}}}({\mathbf{y}}')=1$, then this edge is oriented from $({\mathbf{z}},{\mathbf{y}})$ to $({\mathbf{z}},{\mathbf{y}}')$; if ${\mathfrak{s}_{{\mathbf{z}}}}({\mathbf{y}})=1$ and ${\mathfrak{s}_{{\mathbf{z}}}}({\mathbf{y}}')=0$, then this edge is oriented from $({\mathbf{z}},{\mathbf{y}}')$ to $({\mathbf{z}},{\mathbf{y}})$; and if ${\mathfrak{s}_{{\mathbf{z}}}}({\mathbf{y}})={\mathfrak{s}_{{\mathbf{z}}}}({\mathbf{y}}')$, then the edge is unoriented. \[def:graph-f\] Let ${f({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}\in{\mathbb{F}}_2$ be the sum of all the edge-labels (of the Type [(\[item:edge-1\])]{} edges) in the graph ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}$. \[lem:even-cyc\] Each component of ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}$ is an even cycle. Furthermore, in each component, the number of oriented edges is even. Each vertex $({\mathbf{z}},{\mathbf{y}})$ of ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}$ belongs to exactly two edges: the Type [(\[item:edge-1\])]{} edge joining $({\mathbf{z}},{\mathbf{y}})$ and $({\mathfrak{l}_{{\mathbf{x}},{\mathbf{y}}}}({\mathbf{z}}),{\mathbf{y}})$; and the Type [(\[item:edge-2\])]{} edge joining $({\mathbf{z}},{\mathbf{y}})$ and $({\mathbf{z}},{\mathfrak{b}_{{\mathbf{z}}}}({\mathbf{y}}))$. This implies that each component of ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}$ is an even cycle. In order to prove the second part, vertex-label the graph as follows: To a vertex $({\mathbf{z}},{\mathbf{y}})$, assign the number $s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{z}})})+ s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{z}}),{\mathscr{F}}({\mathbf{y}})})\in{\mathbb{F}}_2$. implies that the Type [(\[item:edge-1\])]{} edges join vertices carrying opposite labels; and among the Type [(\[item:edge-2\])]{} edges, it is clear from the definition of boundary matching () that the oriented edges join vertices carrying the same label, and the unoriented edges join vertices carrying opposite labels. Therefore, each cycle must contain an even number of unoriented edges; since there are an even number of vertices in each cycle, we are done. This observation allows us to associate the following number ${g({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}\in{\mathbb{F}}_2$ to the graph. 
\[def:graph-g\]Partition the oriented edges of ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}$ into two sets, such that if two edges from the same cycle are in the same set, they are oriented in the same direction. Let ${g({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}$ be the number modulo $2$ of the elements in either set. \[def:sq2\] Let ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ be a cycle. For any boundary matching ${\mathfrak{m}}$ for ${\mathbf{c}}$, define the chain $\operatorname{sq}_{{\mathfrak{m}}}^2({\mathbf{c}})\in{\mathit{KC}}^{i+2,j}$ as $$\label{eq:sq2} \operatorname{sq}^2_{{\mathfrak{m}}}({\mathbf{c}})=\sum_{{\mathbf{x}}\in{\mathit{KG}}^{i+2,j}} \biggl(\#|{\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}| + {f({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}+{g({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}\biggr){\mathbf{x}}.$$ (Here, $\#|{\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})}|$ is the number of components of the graph, ${f({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}$ is defined in , and ${g({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}})})}$ is defined in .) We devote to proving the following. \[thm:sq2-agrees\] For any cycle ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ and any boundary matching ${\mathfrak{m}}$ for ${\mathbf{c}}$, $\operatorname{sq}_{{\mathfrak{m}}}^2({\mathbf{c}})$ is a cycle. Furthermore, $${[\operatorname{sq}_{{\mathfrak{m}}}^2({\mathbf{c}})]}=\operatorname{Sq}^2({[{\mathbf{c}}]}).$$ \[cor:commute-isom\] The operations $\operatorname{sq}_{{\mathfrak{m}}}^1$ and $\operatorname{sq}_{{\mathfrak{m}}}^2$ induce well-defined maps $$\operatorname{Sq}^1{\colon}{\mathit{Kh}}^{i,j}\to {\mathit{Kh}}^{i+1,j}\qquad\text{and}\qquad \operatorname{Sq}^2{\colon}{\mathit{Kh}}^{i,j}\to {\mathit{Kh}}^{i+2,j}$$ that are independent of the choices of boundary matchings ${\mathfrak{m}}$. Furthermore, these maps are link invariants, in the following sense: given any two diagrams $L$ and $L'$ representing the same link, there are isomorphisms $\phi_{i,j}{\colon}{\mathit{Kh}}^{i,j}(L)\to{\mathit{Kh}}^{i,j}(L')$ making the following diagrams commute: $$\xymatrix{ {\mathit{Kh}}^{i+1,j}(L)\ar[r]^{\phi_{i+1,j}} & {\mathit{Kh}}^{i+1,j}(L') & \qquad & {\mathit{Kh}}^{i+2,j}(L)\ar[r]^{\phi_{i+2,j}} & {\mathit{Kh}}^{i+2,j}(L')\\ {\mathit{Kh}}^{i,j}(L)\ar[u]_{\operatorname{Sq}^1}\ar[r]_{\phi_{i,j}} & {\mathit{Kh}}^{i,j}(L')\ar[u]_{\operatorname{Sq}^1} &\qquad & {\mathit{Kh}}^{i,j}(L)\ar[u]_{\operatorname{Sq}^2}\ar[r]_{\phi_{i,j}} & {\mathit{Kh}}^{i,j}(L')\ar[u]_{\operatorname{Sq}^2}. }$$ This is immediate from , , and invariance of the Khovanov spectrum [@RS-khovanov Theorem \[KhSp:thm:kh-space\]]. Indeed, we show in [@RS-rasmussen Theorem 4] that we can choose the isomorphisms in to be the canonical ones induced from an isotopy from $L$ to $L'$.

An example {#subsec:summary}
----------

For the reader’s convenience, we present an artificial example to illustrate and . Assume ${\mathit{KG}}^{i,j}=\{{\mathbf{y}}_1,\dots,{\mathbf{y}}_5\}$, ${\mathit{KG}}^{i+1,j}=\{{\mathbf{z}}_1,\dots,{\mathbf{z}}_6\}$ and ${\mathit{KG}}^{i+2,j}=\{{\mathbf{x}}_1,{\mathbf{x}}_2\}$, and the Khovanov differential ${\delta}$ has the following form. 
$$\xymatrix{ &{\mathbf{x}}_1&&{\mathbf{x}}_2&&\\ {\mathbf{z}}_1\ar[ur]\ar[urrr]&{\mathbf{z}}_2\ar[u]&{\mathbf{z}}_3\ar[ul]&{\mathbf{z}}_4 \ar[ull]\ar[u] &{\mathbf{z}}_5\ar[ul]&{\mathbf{z}}_6\ar[ull]\\ &{\mathbf{y}}_1\ar[ul]\ar[u]\ar[ur]\ar[urr]&{\mathbf{y}}_2\ar[ull]\ar[u]\ar[urr] &{\mathbf{y}}_3\ar[ull]\ar[u]\ar[ur] &{\mathbf{y}}_4\ar[u]\ar[ur]&{\mathbf{y}}_5\ar[u]\ar[ul]\\ }$$ Assume that the sign assignment and the frame assignment are as follows. $({\mathbf{x}},{\mathbf{y}})$ $s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})$ $({\mathbf{x}},{\mathbf{y}})$ $s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})$ $({\mathbf{x}},{\mathbf{y}})$ $s({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})$ $({\mathbf{x}},{\mathbf{y}})$ $f({\mathcal{C}}_{{\mathscr{F}}({\mathbf{x}}),{\mathscr{F}}({\mathbf{y}})})$ ----------------------------------- ------------------------------------------------------------------------------ ----------------------------------- ------------------------------------------------------------------------------ ----------------------------------- ------------------------------------------------------------------------------ ----------------------------------- ------------------------------------------------------------------------------ $({\mathbf{z}}_1,{\mathbf{y}}_1)$ 1 $({\mathbf{z}}_4,{\mathbf{y}}_3)$ 1 $({\mathbf{x}}_1,{\mathbf{z}}_3)$ 0 $({\mathbf{x}}_1,{\mathbf{y}}_1)$ 1 $({\mathbf{z}}_2,{\mathbf{y}}_1)$ 1 $({\mathbf{z}}_5,{\mathbf{y}}_3)$ 0 $({\mathbf{x}}_2,{\mathbf{z}}_3)$ 0 $({\mathbf{x}}_2,{\mathbf{y}}_1)$ 0 $({\mathbf{z}}_3,{\mathbf{y}}_1)$ 0 $({\mathbf{z}}_5,{\mathbf{y}}_4)$ 0 $({\mathbf{x}}_1,{\mathbf{z}}_4)$ 0 $({\mathbf{x}}_1,{\mathbf{y}}_2)$ 0 $({\mathbf{z}}_4,{\mathbf{y}}_1)$ 0 $({\mathbf{z}}_6,{\mathbf{y}}_4)$ 0 $({\mathbf{x}}_2,{\mathbf{z}}_4)$ 0 $({\mathbf{x}}_2,{\mathbf{y}}_2)$ 1 $({\mathbf{z}}_1,{\mathbf{y}}_2)$ 1 $({\mathbf{z}}_5,{\mathbf{y}}_5)$ 0 $({\mathbf{x}}_2,{\mathbf{z}}_5)$ 0 $({\mathbf{x}}_1,{\mathbf{y}}_3)$ 1 $({\mathbf{z}}_3,{\mathbf{y}}_2)$ 0 $({\mathbf{z}}_6,{\mathbf{y}}_5)$ 0 $({\mathbf{x}}_2,{\mathbf{z}}_6)$ 1 $({\mathbf{x}}_2,{\mathbf{y}}_3)$ 0 $({\mathbf{z}}_5,{\mathbf{y}}_2)$ 0 $({\mathbf{x}}_1,{\mathbf{z}}_1)$ 0 $({\mathbf{x}}_2,{\mathbf{y}}_4)$ 1 $({\mathbf{z}}_2,{\mathbf{y}}_3)$ 0 $({\mathbf{x}}_1,{\mathbf{z}}_2)$ 0 $({\mathbf{x}}_2,{\mathbf{y}}_5)$ 1 Finally, assume that the ladybug matching ${\mathfrak{l}_{{\mathbf{x}}_1,{\mathbf{y}}_1}}$ matches ${\mathbf{z}}_1$ with ${\mathbf{z}}_4$ and ${\mathbf{z}}_2$ with ${\mathbf{z}}_3$. Let us start with the cycle ${\mathbf{c}}\in{\mathit{KC}}^{i,j}$ given by ${\mathbf{c}}=\sum_{i=1}^5 {\mathbf{y}}_i$. In order to compute $\operatorname{Sq}^1({\mathbf{c}})$ and $\operatorname{Sq}^2({\mathbf{c}})$, we need to choose a boundary matching ${\mathfrak{m}}=\{({\mathfrak{b}_{{\mathbf{z}}_j}},{\mathfrak{s}_{{\mathbf{z}}_j}})\}$ for ${\mathbf{c}}$. Let us choose the following boundary matching. 
$j$ ${\mathfrak{b}_{{\mathbf{z}}_j}}$ ${\mathfrak{s}_{{\mathbf{z}}_j}}$ $j$ ${\mathfrak{b}_{{\mathbf{z}}_j}}$ ${\mathfrak{s}_{{\mathbf{z}}_j}}$ ----- ------------------------------------------------- ------------------------------------------- ----- ------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------- $1$ ${\mathbf{y}}_1{\leftrightarrow}{\mathbf{y}}_2$ ${\mathbf{y}}_1\to 0,{\mathbf{y}}_2\to 1$ $4$ ${\mathbf{y}}_1{\leftrightarrow}{\mathbf{y}}_3$ ${\mathbf{y}}_1,{\mathbf{y}}_3\to 0$ $2$ ${\mathbf{y}}_1{\leftrightarrow}{\mathbf{y}}_3$ ${\mathbf{y}}_1,{\mathbf{y}}_3\to 0$ $5$ ${\mathbf{y}}_2{\leftrightarrow}{\mathbf{y}}_3, {\mathbf{y}}_4{\leftrightarrow}{\mathbf{y}}_5$ ${\mathbf{y}}_2,{\mathbf{y}}_4\to 0,{\mathbf{y}}_3,{\mathbf{y}}_5\to 1$ $3$ ${\mathbf{y}}_1{\leftrightarrow}{\mathbf{y}}_2$ ${\mathbf{y}}_1\to 0,{\mathbf{y}}_2\to 1$ $6$ ${\mathbf{y}}_4{\leftrightarrow}{\mathbf{y}}_5$ ${\mathbf{y}}_4\to 0,{\mathbf{y}}_5\to 1$ Then, the cycle $\operatorname{sq}^1_{{\mathfrak{m}}}({\mathbf{c}})$ is given by $$\begin{aligned} \operatorname{sq}^1_{{\mathfrak{m}}}({\mathbf{c}})&=\sum_{j=1}^6 \biggl(\sum_{{\mathbf{y}}\in{\mathcal{G}_{{\mathbf{c}}}({\mathbf{z}}_j)}}{\mathfrak{s}_{{\mathbf{z}}_j}}({\mathbf{y}})\biggr) {\mathbf{z}}_j\\ &=\bigl({\mathfrak{s}_{{\mathbf{z}}_1}}({\mathbf{y}}_1)+{\mathfrak{s}_{{\mathbf{z}}_1}}({\mathbf{y}}_2)\bigr) {\mathbf{z}}_1+ \bigl({\mathfrak{s}_{{\mathbf{z}}_2}}({\mathbf{y}}_1)+{\mathfrak{s}_{{\mathbf{z}}_2}}({\mathbf{y}}_3)\bigr) {\mathbf{z}}_2+ \bigl({\mathfrak{s}_{{\mathbf{z}}_3}}({\mathbf{y}}_1)+{\mathfrak{s}_{{\mathbf{z}}_3}}({\mathbf{y}}_2)\bigr) {\mathbf{z}}_3\\ &\qquad{}+ \bigl({\mathfrak{s}_{{\mathbf{z}}_4}}({\mathbf{y}}_1)+{\mathfrak{s}_{{\mathbf{z}}_4}}({\mathbf{y}}_3)\bigr) {\mathbf{z}}_4+ \bigl({\mathfrak{s}_{{\mathbf{z}}_5}}({\mathbf{y}}_2)+{\mathfrak{s}_{{\mathbf{z}}_5}}({\mathbf{y}}_3)+ {\mathfrak{s}_{{\mathbf{z}}_5}}({\mathbf{y}}_4)+{\mathfrak{s}_{{\mathbf{z}}_5}}({\mathbf{y}}_5)\bigr) {\mathbf{z}}_5\\ &\qquad{}+ \bigl({\mathfrak{s}_{{\mathbf{z}}_6}}({\mathbf{y}}_4)+{\mathfrak{s}_{{\mathbf{z}}_6}}({\mathbf{y}}_5)\bigr) {\mathbf{z}}_6\\ &={\mathbf{z}}_1+{\mathbf{z}}_3+{\mathbf{z}}_6. \end{aligned}$$ In order to compute $\operatorname{sq}^2_{{\mathfrak{m}}}({\mathbf{c}})$, we need to study the graphs ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}}_1)}$ and ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}}_2)}$, which are the following: $$\xymatrix@C=0pt{ &({\mathbf{z}}_1,{\mathbf{y}}_1)\ar@{->}[rr]\ar@{.}[dl]^-1&&({\mathbf{z}}_1,{\mathbf{y}}_2)&& &&({\mathbf{z}}_1,{\mathbf{y}}_1)\ar@{->}[dr]\ar@{.}[dl]^-0& &&\\ ({\mathbf{z}}_4,{\mathbf{y}}_1)&&&&({\mathbf{z}}_3,{\mathbf{y}}_2) \ar@{<-}[d]\ar@{.}[ul]^-0 & &({\mathbf{z}}_4,{\mathbf{y}}_1)&&({\mathbf{z}}_1,{\mathbf{y}}_2) &({\mathbf{z}}_6,{\mathbf{y}}_5)\ar@{<-}[rr]\ar@{.}[d]^-1&&({\mathbf{z}}_6,{\mathbf{y}}_4)\\ ({\mathbf{z}}_4,{\mathbf{y}}_3)\ar@{-}[u]\ar@{.}[dr]^-1&&&&({\mathbf{z}}_3,{\mathbf{y}}_1)&\text{and}&({\mathbf{z}}_4,{\mathbf{y}}_3)\ar@{-}[u]\ar@{.}[dr]^-0 &&({\mathbf{z}}_5,{\mathbf{y}}_2)\ar@{->}[dl]\ar@{.}[u]^-1 &({\mathbf{z}}_5,{\mathbf{y}}_5)&&({\mathbf{z}}_5,{\mathbf{y}}_4).\ar@{->}[ll]\ar@{.}[u]^-1\\ &({\mathbf{z}}_2,{\mathbf{y}}_3)&&({\mathbf{z}}_2,{\mathbf{y}}_1)\ar@{-}[ll]\ar@{.}[ur]^-1&& &&({\mathbf{z}}_5,{\mathbf{y}}_3)& && }$$ The Type [(\[item:edge-1\])]{} edges are represented by the dotted lines; they are unoriented and are labeled by elements of ${\mathbb{F}}_2$. 
The Type [(\[item:edge-2\])]{} edges are represented by the solid lines; they are labeled by $0$ and are sometimes oriented. Therefore, the cycle $\operatorname{sq}^2_{{\mathfrak{m}}}({\mathbf{c}})$ is given by $$\begin{aligned} \operatorname{sq}^2_{{\mathfrak{m}}}({\mathbf{c}})&=\sum_{j=1}^2 \biggl(\#|{\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}}_j)}| + {f({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}}_j)})}+ {g({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{x}}_j)})}\biggr){\mathbf{x}}_j\\ &=(1+1+1){\mathbf{x}}_1+(0+1+1){\mathbf{x}}_2\\ &={\mathbf{x}}_1. \end{aligned}$$ Where the answer comes from {#sec:where-answer} =========================== This section is devoted to proving . The operation $\operatorname{Sq}^2$ on a CW complex $Y$ is determined by the sub-quotients $Y^{(m+2)}/Y^{(m-1)}$. (In we review an explicit description in these terms, due to Steenrod.) The space ${\mathcal{X}_\mathit{Kh}}(L)$ is a formal de-suspension of a CW complex $Y=Y(L)$. So, most of the work is in understanding combinatorially how the $m$-, $(m+1)$- and $(m+2)$-cells of $Y(L)$ are glued together. The description of $Y(L)$ from [@RS-khovanov] is in terms of a Pontrjagin-Thom type construction. To understand just $Y^{(m+2)}/Y^{(m-1)}$ involves studying certain framed points in ${{\mathbb{R}}}^m$ and framed paths in ${\mathbb{R}}\times{{\mathbb{R}}}^{m}$. We will draw these framings from a particular set of choices, described in . explains exactly how we assign framings from this set, and shows that these framings are consistent with the construction in [@RS-khovanov]. Finally, discusses how to go from these choices to $Y^{(m+2)}/Y^{(m-1)}$, and why the resulting operation $\operatorname{Sq}^2$ agrees with the operation from . Sq2 for a CW complex {#subsec:sq2-cw-complex} -------------------- We start by recalling a definition of $\operatorname{Sq}^2$. The discussion in this section is heavily inspired by [@Ste-top-operations Section 12]. Let $K_m=K({\mathbb{Z}}/2,m)$ denote the $m{^{\text{th}}}$ Eilenberg-MacLane space for the group ${\mathbb{Z}}/2$, so $\pi_m(K_m)={\mathbb{Z}}/2$ and $\pi_i(K_m)=0$ for $i\neq m$. Assume that $m$ is sufficiently large, say $m\geq 3$. We start by discussing a CW structure for $K_m$. Since $\pi_i(K_m)=0$ for $i<m$, we can choose the $m$-skeleton $K_m^{(m)}$ to be a single $m$-cell $e^m$ with the entire boundary ${\partial}e^m$ attached to the basepoint. To arrange that $\pi_m(K_m)={\mathbb{Z}}/2$ it suffices to attach a single $(m+1)$-cell via a degree $2$ map ${\partial}e^{m+1}\to K^{(m)}_m=S^m$. We show that the resulting $(m+1)$-skeleton $K_m^{(m+1)}$ has $\pi_{m+1}(K_m^{(m+1)})\cong {\mathbb{Z}}/2$. From the long exact sequence for the pair $(K_m^{m+1},S^m)$, $$\pi_{m+2}(K_m^{(m+1)},S^m)\to \pi_{m+1}(S^m)\to \pi_{m+1}(K_m^{(m+1)})\to \pi_{m+1}(K_m^{(m+1)},S^m)\to \pi_m(S^m).$$ By excision (since $m$ is large), $\pi_{m+1}(K_m^{(m+1)},S^m)\cong\pi_{m+1}(K_m^{(m+1)}/S^m)=\pi_{m+1}(S^{m+1})={\mathbb{Z}}$ and $\pi_{m+2}(K_m^{(m+1)},S^m)\cong \pi_{m+2}(S^{m+1})={\mathbb{Z}}/2$. The maps $\pi_{i+1}(K_m^{(m+1)},S^m)=\pi_{i+1}(S^{m+1})\to \pi_i(S^m)$ are twice the Freudenthal isomorphisms. So, this sequence becomes $${\mathbb{Z}}/2\stackrel{2}{\to} {\mathbb{Z}}/2\to \pi_{m+1}(K_m^{(m+1)})\to {\mathbb{Z}}\stackrel{2}{\to}{\mathbb{Z}}.$$ Thus, $\pi_{m+1}(K_m^{(m+1)})\cong{\mathbb{Z}}/2$, represented by the Hopf map $S^{m+1}\to S^m=K_m^{(m)}{\hookrightarrow}K_m^{(m+1)}$. Let $K_m^{(m+2)}$ be the result of attaching an $(m+2)$-cell $e^{m+2}$ to kill this ${\mathbb{Z}}/2$. 
This attaching map has degree $0$ as a map to the $(m+1)$-cell $e^{m+1}$ in $K_m^{(m+1)}$, so the $(m+2)$-skeleton of $K_m$ has cohomology: $$\begin{aligned} H^m(K_m^{(m+2)};{\mathbb{Z}})&=0 & H^{m+1}(K_m^{(m+2)};{\mathbb{Z}})&={\mathbb{F}}_2 & H^{m+2}(K_m^{(m+2)};{\mathbb{Z}})&={\mathbb{Z}}\\ H^m(K_m^{(m+2)};{\mathbb{F}}_2)&={\mathbb{F}}_2 & H^{m+1}(K_m^{(m+2)};{\mathbb{F}}_2)&={\mathbb{F}}_2 & H^{m+2}(K_m^{(m+2)};{\mathbb{F}}_2)&={\mathbb{F}}_2.\end{aligned}$$ Therefore, there are fundamental cohomology classes $\iota\in H^m(K_m;{\mathbb{F}}_2)$ and $\operatorname{Sq}^2(\iota)\in H^{m+2}(K_m^{(m+2)};{\mathbb{F}}_2)$. It turns out that the element $\operatorname{Sq}^2(\iota)$ survives to $H^{m+2}(K_m;{\mathbb{F}}_2)$. Now, consider a CW complex $Y$ and a cohomology class $c\in H^m(Y;{\mathbb{F}}_2)$. The element $c$ is classified by a map ${{\mathfrak{c}^{{}}}}{\colon}Y\to K_m$, so that ${{\mathfrak{c}^{{}}}}^*\iota=c$. We can arrange that the map ${{\mathfrak{c}^{{}}}}$ is cellular. So, we have an element $\operatorname{Sq}^2(c)={{\mathfrak{c}^{{}}}}^*\operatorname{Sq}^2(\iota)\in H^{m+2}(Y;{\mathbb{F}}_2)$. The element $\operatorname{Sq}^2(c)$ is determined by its restriction to $H^{m+2}(Y^{(m+2)};{\mathbb{F}}_2)$. So, to compute $\operatorname{Sq}^2(c)$ it suffices to give a cellular map $Y^{(m+2)}\to K_m^{(m+2)}$ so that $\iota$ pulls back to $c$. Then, $\operatorname{Sq}^2(c)$ is the cochain which sends an $(m+2)$-cell $f^{m+2}$ of $Y$ to the degree of the map $f^{m+2}/{\partial}f^{m+2}\to e^{m+2}/{\partial}e^{m+2}$. Equivalently, $\operatorname{Sq}^2(c)$ sends $f^{m+2}$ to the element ${{\mathfrak{c}^{{}}}}|_{{\partial}f^{m+2}}\in \pi_{m+1}(K_m^{(m+1)})=\pi_{m+1}(S^m)={\mathbb{Z}}/2$. (In other words, $\operatorname{Sq}^2(c)$ is the obstruction to homotoping ${{\mathfrak{c}^{{}}}}$ so that it sends the $(m+2)$-skeleton of $Y$ to the $(m+1)$-skeleton of $K_m$.) Since $K_m^{(m+2)}$ has no cells of dimension between $0$ and $m$, the map $Y^{(m+2)}\to K_m^{(m+2)}$ factors through $Y^{(m+2)}/Y^{(m-1)}$. To understand the operation $\operatorname{Sq}^2$ on Khovanov homology induced by the Khovanov homotopy type $Y$, it remains to explicitly give the map ${{\mathfrak{c}^{{}}}}$ on $Y^{(m+2)}/Y^{(m-1)}$. This will be done in , after we develop tools to understand the attaching maps for the $(m+2)$-cells. Frames in R3 {#subsec:frames-R3} ------------ As discussed in , the sub-quotients $Y^{(m+2)}/Y^{(m-1)}$ of the Khovanov space $Y=Y(L)$ are defined in terms of framed points in $\{0\}\times {{\mathbb{R}}}^m$ and framed paths in ${\mathbb{R}}\times{{\mathbb{R}}}^{m}$ connecting these points. A framing of a path $\gamma{\colon}[0,1]\to {{\mathbb{R}}}^{m+1}$ is a tuple $[v_1(t),\dots, v_m(t)]\in ({{\mathbb{R}}}^{m+1})^m$ of orthonormal vector fields along $\gamma$, normal to $\gamma$. A collection of $m$ orthonormal vectors $v_1,\dots, v_m$ in ${{\mathbb{R}}}^{m+1}$ specifies a matrix in $\operatorname{\mathit{SO}}(m+1)$, whose last $m$ columns are $v_1,\dots,v_m$ and whose first column is the cross product of $v_1,\dots,v_m$. Now, suppose that $p,q \in \{0\}\times{{\mathbb{R}}}^m$ and that we are given trivializations ${\varphi}_p, {\varphi}_q$ of the normal bundles in $\{0\}\times{{\mathbb{R}}}^m$ to $p$, $q$ (i.e., framings of $p$ and $q$). On the one hand, we can consider the set of isotopy classes of framed paths from $(p, {\varphi}_p)$ to $(q,{\varphi}_q)$. 
On the other hand, we can consider the homotopy classes of paths in $\operatorname{\mathit{SO}}(m+1)$ from ${\varphi}_p$ to ${\varphi}_q$. There is an obvious map from isotopy classes of framed paths in ${{\mathbb{R}}}^{m+1}$ to homotopy classes of paths in $\operatorname{\mathit{SO}}(m+1)$, by considering only the framing. This map is a surjection if $m\geq 2$ and a bijection if $m\geq 3$. In the case that $m\geq 3$, both sets have two elements. The upshot is that if we want to specify an isotopy classes of framed paths with given endpoints, and $m\geq 3$, then it suffices to specify a homotopy class of paths in $\operatorname{\mathit{SO}}(m+1)$. The framings on both the endpoints and the paths relevant to constructing $Y^{(m+2)}/Y^{(m-1)}$ will have a special form: they will be stabilizations of the $m=2$ case. Specifically, we will write $m=m_1+m_2$ with $m_i\geq 1$. Let $[{\overline}{e},e_{11},\dots,e_{1m_1},e_{21},\dots,e_{2m_2}]$ denote the standard basis for ${\mathbb{R}}\times{{\mathbb{R}}}^m$. Then all of the points will have framings of the form $$[v_1,e_{12},\dots, e_{1m_1}, v_2, e_{22},\dots, e_{2m_2}]\in \bigl(\{0\}\times {{\mathbb{R}}}^{m_1}\times {{\mathbb{R}}}^{m_2}\bigr)^m,$$ for some $v_1\in\{\pm e_{11}\}$, $v_2\in \{\pm e_{21}\}$. So, to describe isotopy classes of framed paths connecting these points it suffices to describe paths of the form $$[v_1(t),e_{12},\dots, e_{1m_1},v_2(t), e_{22},\dots, e_{2m_2}]\in \bigl({{\mathbb{R}}}\times {{\mathbb{R}}}^{m_1}\times {{\mathbb{R}}}^{m_2}\bigr)^{m}.$$ Therefore, to describe such paths, it suffices to work in ${{\mathbb{R}}}^3$ (i.e., the case $m=2$). Denote the standard basis for ${{\mathbb{R}}}^3$ by $[{\overline}{e},e_1,e_2]$. We will work with the four distinguished frames in $\{0\}\times{{\mathbb{R}}}^2$, $[e_1,e_2],[-e_1,e_2],[e_1,-e_2],[-e_1,-e_2]$, which we denote by the symbols ${{}^+_+}, {{}^+_-}, {{}^-_+}, {{}^-_-}$, respectively. By a *coherent system of paths* joining ${{}^+_+}, {{}^+_-}, {{}^-_+}, {{}^-_-}$ we mean a choice of a path ${\overline{{\varphi}_1{\varphi}_2}}$ in $\operatorname{\mathit{SO}}(3)$ from ${\varphi}_1$ to ${\varphi}_2$ for each pair of frames ${\varphi}_1,{\varphi}_2\in\{{{}^+_+}, {{}^+_-}, {{}^-_+}, {{}^-_-}\}$, satisfying the following cocycle conditions: 1. For all ${\varphi}\in\{{{}^+_+}, {{}^+_-}, {{}^-_+}, {{}^-_-}\}$, the loop ${\overline{{\varphi}{\varphi}}}$ is nullhomotopic; and 2. For all ${\varphi}_1,{\varphi}_2,{\varphi}_3\in\{{{}^+_+}, {{}^+_-}, {{}^-_+}, {{}^-_-}\}$, the path ${\overline{{\varphi}_1{\varphi}_2}}\cdot{\overline{{\varphi}_2{\varphi}_3}}$ is homotopic ([relative ]{}endpoints) to the path ${\overline{{\varphi}_1{\varphi}_3}}$. We make a particular choice of a coherent system of paths, as follows: - ${\overline}{{{}^+_+}{{}^+_-}}, {\overline}{{{}^+_-}{{}^+_+}}, {\overline}{{{}^-_+}{{}^-_-}}, {\overline}{{{}^-_-}{{}^-_+}}$: Rotate $180^{\circ}$ around the $e_2$-axis, such that the first vector equals ${\overline}{e}$ halfway through. - ${\overline}{{{}^+_+}{{}^-_+}}, {\overline}{{{}^-_+}{{}^+_+}}$: Rotate $180^{\circ}$ around the $e_1$-axis, such that the second vector equals ${\overline}{e}$ halfway through. - ${\overline}{{{}^+_-}{{}^-_-}}, {\overline}{{{}^-_-}{{}^+_-}}$: Rotate $180^{\circ}$ around the $e_1$-axis, such that the second vector equals $-{\overline}{e}$ halfway through. 
- ${\overline}{{{}^+_+}{{}^-_-}}, {\overline}{{{}^-_-}{{}^+_+}}, {\overline}{{{}^+_-}{{}^-_+}},{\overline}{{{}^-_+}{{}^+_-}}$: Rotate $180^{\circ}$ around the ${\overline}{e}$-axis, such that the second vector equals $-e_1$ halfway through. The above choice describes a coherent system of paths. We only need to check that each of the loops ${\overline}{{{}^+_+}{{}^+_-}}\cdot{\overline}{{{}^+_-}{{}^-_-}}\cdot{\overline}{{{}^-_-}{{}^+_+}}$, ${\overline}{{{}^+_+}{{}^-_+}}\cdot{\overline}{{{}^-_+}{{}^+_-}}\cdot{\overline}{{{}^+_-}{{}^+_+}}$, and ${\overline}{{{}^+_+}{{}^+_-}}\cdot{\overline}{{{}^+_-}{{}^-_-}}\cdot{\overline}{{{}^-_-}{{}^-_+}}\cdot{\overline}{{{}^-_+}{{}^+_+}}$ is null-homotopic. This is best checked with hand motions, as we have illustrated for the first loop in .

![**Null-homotopy of the loop ${\overline}{{{}^+_+}{{}^+_-}}\cdot{\overline}{{{}^+_-}{{}^-_-}}\cdot{\overline}{{{}^-_-}{{}^+_+}}$ in $\operatorname{\mathit{SO}}(3)$** (a sequence of twelve hand positions, hand1–hand12). Viewing the arm as $2$-dimensional, spanned by the tangent vector to the radius and the vector from the radius to the ulna, it traces out an extension of the map $S^1\to \operatorname{\mathit{SO}}(3)$ to a map ${\mathbb{D}}^2\to \operatorname{\mathit{SO}}(3)$.[]{data-label="fig:handmotion"}](hand1 "fig:"){width="7.00000%"}

Extending this slightly:

\[def:standard-frames\] Fix $m_1,m_2$, and let $m=m_1+m_2$. 
By the four *standard frames for ${{\mathbb{R}}}^m={\mathbb{R}}^{m_1}\times{\mathbb{R}}^{m_2}$* we mean the frames $$[\pm e_{11},e_{12},\dots, e_{1m_1},\pm e_{21}, e_{22},\dots,e_{2m_2}]\in \bigl(\{0\}\times{{\mathbb{R}}}^{m_1}\times {{\mathbb{R}}}^{m_2}\bigr)^{m}.$$ Up to homotopy, there are exactly two paths between any pair of frames. By the *standard frame paths in ${\mathbb{R}}\times{{\mathbb{R}}}^{m}$* we mean the one-parameter families of frames obtained by extending the coherent system of paths for $SO(3)$ specified above by the identity on ${{\mathbb{R}}}^{m-2}={{\mathbb{R}}}^{m_1-1}\times {{\mathbb{R}}}^{m_2-1}$. Abusing terminology, we will sometimes say that any frame path homotopic ([relative ]{}endpoints) to a standard frame path is itself a standard frame path. By a *non-standard frame path* we mean a frame path which is not homotopic ([relative ]{}endpoints) to one of the standard frame paths. Define $$\begin{aligned} {\mathfrak{r}}&{\colon}{{\mathbb{R}}}^m\to {{\mathbb{R}}}^m & {\mathfrak{r}}(x_1,\dots, x_m)&=(-x_1,x_2,\dots, x_m)\label{eq:rmap}\\ {\mathfrak{s}}&{\colon}{{\mathbb{R}}}^{m_1}\times{\mathbb{R}}^{m_2}\to {{\mathbb{R}}}^m & {\mathfrak{s}}(x_1,\dots,x_m)&=(x_1,\dots, x_{m_1},-x_{m_1+1}, x_{m_1+2},\dots,x_m).\label{eq:smap}\end{aligned}$$ \[lem:r-of-framing\] Suppose ${\varphi}_1,{\varphi}_2$ are oppositely-oriented standard frames. Then ${\mathfrak{r}}({\overline}{{\varphi}_1{\varphi}_2})$ is the non-standard frame path between ${\mathfrak{r}}({\varphi}_1)$ and ${\mathfrak{r}}({\varphi}_2)$. That is, ${\mathfrak{r}}$ takes standard frame paths between oppositely-oriented frames to non-standard frame paths. The map ${\mathfrak{s}}$ satisfies $${\overline}{{{}^+_+}{{}^-_+}} \stackrel{{\mathfrak{s}}}{\longleftrightarrow} {\overline}{{{}^-_+}{{}^+_+}} \qquad\qquad {\overline}{{{}^+_-}{{}^-_-}}\stackrel{{\mathfrak{s}}}{\longleftrightarrow} {\overline}{{{}^-_-}{{}^+_-}}.$$ In other words, ${\mathfrak{s}}$ takes the standard frame path $\overline{\begin{smallmatrix}+ & -\\ * & *\end{smallmatrix}}$ to the standard frame path $\overline{\begin{smallmatrix}- & +\\ * & *\end{smallmatrix}}$ for either $*\in\{+,-\}$. This is a straightforward verification from the definitions. The framed cube flow category {#subsec:frame-cube-flow-cat} ----------------------------- In this subsection, we describe certain aspects of the *flow category ${\mathscr{C}_C}(n)$* associated to the cube $[0,1]^n$. For a more complete account of the story, see [@RS-khovanov Section \[KhSp:sec:cube-flow\]]. The features of ${\mathscr{C}_C}(n)$ in which we are interested are the following: 1. To a pair of vertices $u,v\in\{0,1\}^n$ with $v\leq_k u$, ${\mathscr{C}_C}(n)$ associates a $(k-1)$-dimensional manifold with corners[^3] called the *moduli space* ${\mathcal{M}}_{{\mathscr{C}_C}(n)}(u,v)$. We drop the subscript if it is clear from the context. 2. For vertices $v<w<u$ in $\{0,1\}^n$, ${\mathcal{M}}(w,v)\times{\mathcal{M}}(u,w)$ is identified with a subspace of ${\partial}{\mathcal{M}}(u,v)$. 3. Fix vertices $v\leq_k u$ in $\{0,1\}^n$; let ${\overline}{0},{\overline}{1}\in\{0,1\}^k$ be the minimum and the maximum vertex, respectively. Then ${\mathcal{M}}_{{\mathscr{C}_C}(n)}(u,v)$ can be identified with ${\mathcal{M}}_{{\mathscr{C}_C}(k)}({\overline}{1},{\overline}{0})$. 4. ${\mathcal{M}}_{{\mathscr{C}_C}(n)}({\overline}{1},{\overline}{0})$ is a point, an interval and a hexagon, for $n=1,2,3$, respectively, cf. . 
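The combinatorics underlying items (1) and (2) of the list above is simple to tabulate: for $v\leq_k u$ the moduli space has dimension $k-1$, and its boundary contains one product stratum for each intermediate vertex $w$. Here is a small sketch of this bookkeeping (our own illustration; it records only the combinatorics of the cube, not the manifolds themselves).

```python
# Sketch: combinatorial shadow of the cube flow category C_C(n).
# Vertices of the cube are 0/1-tuples; v <= u means coordinatewise, and
# dim M(u, v) = |u| - |v| - 1 whenever v < u.

from itertools import product

def leq(v, u):
    return all(vi <= ui for vi, ui in zip(v, u))

def dim_moduli(u, v):
    assert leq(v, u) and u != v
    return sum(u) - sum(v) - 1

def boundary_strata(u, v, n):
    """Intermediate vertices w with v < w < u; each contributes a stratum
    M(w, v) x M(u, w) to the boundary of M(u, v)."""
    return [w for w in product((0, 1), repeat=n)
            if leq(v, w) and leq(w, u) and w not in (u, v)]

u, v, n = (1, 1, 1), (0, 0, 0), 3
print(dim_moduli(u, v))                  # 2: M(u, v) is the hexagon
print(len(boundary_strata(u, v, n)))     # 6: one boundary edge per intermediate vertex
```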
![**The hexagon ${\mathcal{M}}_{{\mathscr{C}_C}(3)}(111,000)$.** Each face corresponds to a product of lower-dimensional moduli spaces, as indicated.[]{data-label="fig:hexagon"}](hexagon){width="90.00000%"} The cube flow category is also *framed*. In order to define framings, one needs to embed the moduli spaces into Euclidean spaces; one does so by *neat embeddings* (see [@Lau-top-cobordismcorners Definition 2.1.4] or [@RS-khovanov Definition \[KhSp:def:neat-embedding\]]). Fix $d$ sufficiently large; for each $v\leq_k u$, ${\mathcal{M}}(u,v)$ is neatly embedded in ${\mathbb{R}}_+^{k-1}\times{\mathbb{R}}^{kd}$. These embeddings are coherent in the sense that for each $v\leq_k w\leq_l u$, ${\mathcal{M}}(w,v)\times{\mathcal{M}}(u,w){\subset}{\partial}{\mathcal{M}}(u,v)$ is embedded by the product embedding into ${\mathbb{R}}_+^{k-1}\times{\mathbb{R}}^{kd}\times{\mathbb{R}}_+^{l-1}\times{\mathbb{R}}^{ld}= {\mathbb{R}}_+^{k-1}\times\{0\}\times{\mathbb{R}}_+^{l-1}\times{\mathbb{R}}^{kd+ld}{\subset}{\partial}({\mathbb{R}}_+^{k+l-1}\times{\mathbb{R}}^{(k+l)d})$. The normal bundle to each of these moduli spaces is framed. These framings are also coherent in the sense that the product framing on ${\mathcal{M}}(w,v)\times{\mathcal{M}}(u,w)$ agrees with the framing induced from ${\mathcal{M}}(u,v)$. The framed cube flow category ${\mathscr{C}_C}(n)$ is needed in the construction of the Khovanov homotopy type. The cube flow category can be framed in multiple ways. However, all such framings lead to the same Khovanov homotopy type [@RS-khovanov Proposition \[KhSp:prop:choice-independent\]]; hence it is enough to consider a specific framing. Consider the following partial framing. \[def:0d-1d-moduli-framing\] Let $s\in C^1({\mathcal{C}}(n),{\mathbb{F}}_2)$ and $f\in C^2({\mathcal{C}}(n),{\mathbb{F}}_2)$ be the standard sign assignment and the standard frame assignment from . Fix $d$ sufficiently large. - Consider $v\leq_1 u$ in $\{0,1\}^n$. Embed the point ${\mathcal{M}}(u,v)$ in ${\mathbb{R}}^d$; let $[e_{1},\dots,e_{d}]$ be the standard basis in ${\mathbb{R}}^d$. For framing the point ${\mathcal{M}}(u,v)$, choose the frame $[e_{1},e_{2},\dots,e_{d}]$ if $s({\mathcal{C}}_{u,v})=0$, and choose the frame $[-e_{1},e_{2},\dots,e_{d}]$ if $s({\mathcal{C}}_{u,v})=1$. - Consider $v\leq_2 u$ in $\{0,1\}^n$; let $w_1$ and $w_2$ be the two other vertices in ${\mathcal{C}}_{u,v}$. Choose a proper embedding of the interval ${\mathcal{M}}(u,v)$ in ${\mathbb{R}}_+\times{\mathbb{R}}^{2d}$; let $[{\overline}{e},e_{11},\dots,e_{1d},e_{21},\dots,e_{2d}]$ be the standard basis for ${\mathbb{R}}\times{\mathbb{R}}^{2d}$. The two endpoints ${\mathcal{M}}(w_i,v)\times{\mathcal{M}}(u,w_i)$ of the interval ${\mathcal{M}}(u,v)$ are already framed in $\{0\}\times{\mathbb{R}}^{2d}$ by the product framings, say ${\varphi}_i$. Since $s$ is a sign assignment, the framings of the two endpoints, ${\varphi}_1$ and ${\varphi}_2$, are opposite, and hence can be extended to a framing on the interval. Any such extension can be treated as a path joining ${\varphi}_1$ and ${\varphi}_2$ in $\operatorname{\mathit{SO}}(2d+1)$, cf. . If $f({\mathcal{C}}_{u,v})=0$, choose an extension so that the path is a standard frame path; if $f({\mathcal{C}}_{u,v})=1$, choose an extension so that the path is a non-standard frame path. As we will see in the , in order to study the $\operatorname{Sq}^2$ action, one only needs to understand the framings of the $0$-dimensional and the $1$-dimensional moduli spaces. 
Therefore, the information encoded in is all we need in order to study the $\operatorname{Sq}^2$ action. However, before we proceed onto the next subsection, we need to check the following. \[lem:extend-framing\] The partial framing from can be extended to a framing of the entire cube flow category ${\mathscr{C}_C}(n)$. We frame the cube flow category in [@RS-khovanov Proposition \[KhSp:prop:cube-can-be-framed\]] inductively: We start with coherent framings of all moduli spaces of dimension less than $k$; after changing the framings in the interior of the $(k-1)$-dimensional moduli spaces if necessary, we extend this to a framing of all $k$-dimensional moduli spaces. Therefore, in order to prove this lemma, we merely need to check that the framings of the zero- and one-dimensional moduli spaces from can be extended to a framing of the two-dimensional moduli spaces. Fix $v\leq_3 u$, and fix a neat embedding of the hexagon ${\mathcal{M}}(u,v)$ in ${\mathbb{R}}_+^2\times{\mathbb{R}}^{3d}$. Let $[{\overline}{e}_1,{\overline}{e}_2,e_{11},\dots,e_{1d},e_{21},\dots,e_{2d},e_{31},\dots,e_{3d}]$ be the standard basis for ${\mathbb{R}}_+^2\times{\mathbb{R}}^{3d}$. The boundary $K$ is a framed $6$-gon embedded in ${\partial}({\mathbb{R}}_+^2\times{\mathbb{R}}^{3d})$. Let us flatten the corner in ${\partial}({\mathbb{R}}_+^2\times{\mathbb{R}}^{3d})$ so that $[{\overline}{e}_1=-{\overline}{e}_2,e_{11},\dots,e_{3d}]$ is the standard basis in the flattened ${\mathbb{R}}\times{\mathbb{R}}^{3d}$. After this flattening operation, we can treat $K$ as a framed $1$-manifold in ${\mathbb{R}}\times{\mathbb{R}}^{3d}$—see —which in turn represents some element $\eta_{u,v}\in\pi_4(S^3)={\mathbb{Z}}/2$ by the Pontrjagin-Thom correspondence. We want to show that $K$ is null-concordant, i.e., that $\eta_{u,v}=0$. $$\xymatrix{ \vcenter{\hbox{\psfrag{rp}{${\mathbb{R}}_+$}\psfrag{r3}{${\mathbb{R}}^{3d}$}\includegraphics[height=0.3\textwidth]{flattening-before}}}\ar[r]& \vcenter{\hbox{\psfrag{r}{${\mathbb{R}}$}\psfrag{r3}{${\mathbb{R}}^{3d}$}\includegraphics[height=0.3\textwidth]{flattening-after}}} }$$ As in , $K$ can also be treated as a loop in $\operatorname{\mathit{SO}}(3d+1)$, and thus represents some element $h_{u,v}\in H_1(\operatorname{\mathit{SO}}(3d+1);{\mathbb{Z}})={\mathbb{F}}_2$. The element $h_{u,v}$ is non-zero if and only if $\eta_{u,v}$ is zero; therefore, we want to show $h_{u,v}=1$. Let $t_1,t_2,t_3,w_1,w_2,w_3$ be the six vertices between $u$ and $v$ in the cube, with $w_1\leq_1 t_1,t_2$ and $w_2\leq_1 t_1,t_3$ and $w_3\leq_1 t_2,t_3$. For $i\in\{1,2,3\}$, let $s({\mathcal{C}}_{u,t_i})=a_i$, $s({\mathcal{C}}_{w_i,v})=c_i$, $f({\mathcal{C}}_{u,w_i})=f_i$ and $f({\mathcal{C}}_{t_i,v})=g_i$. Finally let $s({\mathcal{C}}_{t_1,w_1})=b_1$, $s({\mathcal{C}}_{t_1,w_2})=b_2$, $s({\mathcal{C}}_{t_2,w_1})=b_3$, $s({\mathcal{C}}_{t_2,w_3})=b_4$, $s({\mathcal{C}}_{t_3,w_2})=b_5$ and $s({\mathcal{C}}_{t_3,w_3})=b_6$. This information is encoded in the first part of . Consider the tri-colored planar graph $G$ in the second part of . The vertices represent frames in $\{0\}\times{\mathbb{R}}^{3d}$ as follows: if a vertex is labeled $cba$, then it represents the frame $$[(-1)^c e_{11},e_{12},\dots,e_{1d}, (-1)^b e_{21},e_{22},\dots,e_{2d},(-1)^a e_{31},e_{32},\dots,e_{3d}].$$ Each edge represents a frame path joining the frames at its endpoints as follows. - If the edge is colored black, i.e., if it is at the boundary of the hexagon, then it is one of the edges of the framed $6$-cycle $K$. 
- If the edge is colored blue (dashed), then it represents the image under flattening of the following path in $\{0\}\times{\mathbb{R}}_+\times{\mathbb{R}}^{d}\times{\mathbb{R}}^d\times{\mathbb{R}}^d$: it is constant on the first ${\mathbb{R}}^d$ and is a standard frame path on the remaining ${\mathbb{R}}_+\times{\mathbb{R}}^d\times{\mathbb{R}}^d$. - If the edge is colored red (dotted), then it represents the image under flattening of the following path in ${\mathbb{R}}_+\times\{0\}\times{\mathbb{R}}^d\times{\mathbb{R}}^d\times{\mathbb{R}}^d$: it is constant on the last ${\mathbb{R}}^d$ and is a standard frame path on the remaining ${\mathbb{R}}_+\times{\mathbb{R}}^d\times{\mathbb{R}}^d$. The element $h_{u,v}\in H_1(\operatorname{\mathit{SO}}(3d+1))$ is represented by the black $6$-cycle in $G$. In order to compute $h_{u,v}$, we will compute the homology classes of some other cycles in $G$. Consider the black-blue $5$-cycle joining $c_1b_1a_1$, $c_10a_1$, $c_100$, $c_10a_2$ and $c_1b_3a_2$. Modulo extending by the constant map on ${\mathbb{R}}^d$, the four blue edges represent standard frame paths in ${\mathbb{R}}\times{\mathbb{R}}^{2d}$ and the black edge is standard if and only if $f_1=0$. Therefore, this cycle represents the element $f_1\in H_1(\operatorname{\mathit{SO}}(3d+1))$. We denote this by writing $f_1$ in the pentagonal region bounded by this $5$-cycle in $G$. The homology classes represented by the other $5$-cycles are shown in . Next consider the red-blue $4$-cycle connecting $c_10a_1$, $00a_1$, $000$ and $c_100$. If $a_1=0$, then the blue edges represent the constant paths, and the two red edges represent the same path; therefore the cycle is null-homologous. Similarly, if $c_1=0$, the cycle is null-homologous as well. Finally, if $a_1=c_1=1$, it is easy to check from the definition of standard paths () that the cycle represents the generator of $H_1(\operatorname{\mathit{SO}}(3d+1))$. Therefore, the cycle represents the element $a_1c_1$. The contributions from such $4$-cycles are also shown in . Finally, consider the red-blue $2$-cycle connecting $c_1b_1a_1$ and $c_10a_1$. If $b_1=0$, then both the red and the blue edges represent the constant paths, and hence the cycle is null-homologous. So let us concentrate on the case when $b_1=1$. Let $$\begin{aligned} {\varphi}_1&=[(-1)^{c_1}e_{11},\dots,e_{1d},-e_{21},\dots,e_{2d},(-1)^{a_1}e_{31},\dots,e_{3d}]\text{ and}\\ {\varphi}_2&=[(-1)^{c_1}e_{11},\dots,e_{1d}, e_{21},\dots,e_{2d},(-1)^{a_1}e_{31},\dots,e_{3d}] \end{aligned}$$ be the two frames in $\{0\}\times{\mathbb{R}}^{3d}$. The blue edge represents the path from ${\varphi}_1$ to ${\varphi}_2$ where the $(d+2){^{\text{th}}}$ vector rotates $180^{\circ}$ in the $\langle{\overline}{e}_2,{e}_{21}\rangle$-plane and equals ${\overline}{e}_2=-{\overline}{e}_1$ halfway through. The red edge also represents a path where the $(d+2){^{\text{th}}}$ vector rotates $180^{\circ}$ in the $\langle{\overline}{e}_1,{e}_{21}\rangle$-plane. However, halfway through, it equals ${\overline}{e}_1$ if $c_1=0$, and it equals $-{\overline}{e}_1$ if $c_1=1$. Hence, when $b_1=1$, the red-blue $2$-cycle is null-homologous if and only if $c_1=1$. Therefore, the cycle represents the element $b_1(c_1+1)=b_1c_1+b_1$. These contributions are also shown in . We end the proof with a mild exercise in addition. 
The element $h_{u,v}$ is the sum of the labels of all the regions of $G$ computed above; adding these up shows that $h_{u,v}=1$, as desired.

Sq2 for the Khovanov homotopy type {#subsec:sq2-on-khspace}
----------------------------------

Fix a link diagram $L$ and an integer $\ell$, and let ${\mathcal{X}_\mathit{Kh}}^\ell(L)$ denote the Khovanov homotopy type constructed in [@RS-khovanov]. We want to study the Steenrod square $$\begin{aligned} \operatorname{Sq}^2&{\colon}{\widetilde}{H}^{\kappa}({\mathcal{X}_\mathit{Kh}}^{\ell}(L))\to{\widetilde}{H}^{\kappa+2}({\mathcal{X}_\mathit{Kh}}^{\ell}(L)).\\ \shortintertext{The spectrum ${\mathcal{X}_\mathit{Kh}}^\ell(L)$ is a formal de-suspension $\Sigma^{-N}Y_{\ell}$ of a CW complex $Y_{\ell}$ for some sufficiently large $N$. Therefore, we want to understand the Steenrod square} \operatorname{Sq}^2&{\colon}H^{N+\kappa}(Y_{\ell};{\mathbb{F}}_2)\to H^{N+\kappa+2}(Y_{\ell};{\mathbb{F}}_2).\end{aligned}$$ Before we get started, we give names to a few maps which will make regular appearances. Fix $m_1,m_2\geq 2$ and let $m=m_1+m_2$. First, recall that we have maps ${\mathfrak{r}}, {\mathfrak{s}}{\colon}{{\mathbb{R}}}^m={\mathbb{R}}^{m_1}\times{\mathbb{R}}^{m_2}\to {{\mathbb{R}}}^m$ given by Formulas  and , respectively. Next, let $$\Pi{\colon}{\mathbb{D}}^m\to K_m^{(m+1)}$$ be the composition of the projection map ${\mathbb{D}}^m\to {\mathbb{D}}^m/{\partial}{\mathbb{D}}^m=S^m$ and the inclusion $S^m\to K_m^{(m+1)}$. Let $$\Xi{\colon}[0,1]\times{\mathbb{D}}^m \to K_m^{(m+1)}$$ be the map induced by the identification of $[0,1]\times{\mathbb{D}}^m$ with the $(m+1)$-cell $e^{m+1}$ of $K_m^{(m+1)}$; the map $\Xi$ collapses $[0,1]\times {\partial}{\mathbb{D}}^m$ to the basepoint and maps each of $\{0\}\times{\mathbb{D}}^m$ and $\{1\}\times{\mathbb{D}}^m$ to the $m$-skeleton $S^m\subset K_m^{(m+1)}$ by $\Pi$ and $\Pi\circ {\mathfrak{r}}$, respectively. The map $\Xi$ factors through a map $$\overline{\Xi}{\colon}[0,1]\times S^m\to K_m^{(m+1)}.$$ We construct a CW complex $X$ as follows. Choose real numbers $\epsilon$ and $R$ with $0<\epsilon\ll R$. Then:

**Step 1:** Start with a unique $0$-cell $e^0$.

**Step 2:** For each Khovanov generator ${\mathbf{x}}_i\in {\mathit{KG}}^{\kappa,\ell}$, $X$ has a corresponding cell $$f_i^{m}=\{0\}\times\{0\}\times [-\epsilon,\epsilon]^{m_1}\times [-\epsilon,\epsilon]^{m_2}.$$ The boundary of $f_i^{m}$ is glued to $e^0$.

**Step 3:** For each Khovanov generator ${\mathbf{y}}_j\in {\mathit{KG}}^{\kappa+1,\ell}$, $X$ has a corresponding cell $$f_j^{m+1}=[0,R]\times\{0\}\times [-R,R]^{m_1}\times [-\epsilon,\epsilon]^{m_2}.$$ The boundary of $f_j^{m+1}$ is attached to $X^{(m)}$ as follows. If ${\mathbf{y}}_j$ occurs in ${\delta}_{{\mathbb{Z}}} {\mathbf{x}}_i$ with sign $\epsilon_{i,j}\in\{\pm 1\}$ then we embed $f_i^m$ in ${\partial}f_j^{m+1}$ by a map of the form $$\label{eq:fj-fi} \begin{split} (0,0,x_1,x_2,\dots,x_{m_1}&,y_1,\dots,y_{m_2})\\ &\mapsto (0,0,\epsilon_{i,j}x_1+a_1, x_2+a_2,\dots, x_{m_1}+a_{m_1}, y_1,\dots,y_{m_2}) \end{split}$$ for some vector $(a_1,\dots,a_{m_1})$. Call the image of this embedding ${\mathcal{C}_{i}(j)}$. Then ${\mathcal{C}_{i}(j)}$ is mapped to $f_i^m$ by the specified identification, and $({\partial}f_j^{m+1})\setminus \bigcup_i {\mathcal{C}_{i}(j)}$ is mapped to the basepoint $e^0$. We choose the vectors $(a_1,\dots,a_{m_1})$ so that the different ${\mathcal{C}_{i}(j)}$’s are disjoint. Write $p_{i,j}=(0,a_1,\dots,a_{m_1},0,\dots,0)\in f_j^{m+1}$. 
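The data needed for Steps 2 and 3 is just the integral differential: each nonzero coefficient $\epsilon_{i,j}$ produces one sub-cell ${\mathcal{C}_{i}(j)}$, one marked point $p_{i,j}$, and an attaching degree $\pm 1$, so the cellular cochain complex of $X$ in degrees $m$ and $m+1$ is exactly $({\mathit{KC}}_{{\mathbb{Z}}},{\delta}_{{\mathbb{Z}}})$ in homological gradings $\kappa$ and $\kappa+1$. A small sketch of this bookkeeping (ours; the generator names and signs are illustrative):

```python
# Sketch: cell bookkeeping for Steps 2-3.  dZ[x] is a dict {y: eps} recording
# that y occurs in delta_Z(x) with sign eps = +1 or -1.  Names are illustrative.

dZ = {'x1': {'y1': +1, 'y2': -1},
      'x2': {'y1': -1, 'y2': -1}}

def attaching_data(dZ):
    """For each (m+1)-cell f_j: the sub-cells C_i(j) in its boundary, the
    attaching degree eps_{i,j}, and hence the frame sign of the marked point
    p_{i,j} (eps = +1 gives [e_11, e_12, ...], eps = -1 gives [-e_11, e_12, ...])."""
    cells = {}
    for x, row in dZ.items():
        for y, eps in row.items():
            cells.setdefault(y, []).append((x, eps))
    return cells

for y, pieces in attaching_data(dZ).items():
    print(y, pieces)
# y1 [('x1', 1), ('x2', -1)]
# y2 [('x1', -1), ('x2', -1)]
```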
**Step 4:** For each Khovanov generator ${\mathbf{z}}_k\in {\mathit{KG}}^{\kappa+2,\ell}$, $X$ has a corresponding cell $$f_k^{m+2}=[0,R]\times[0,R]\times [-R,R]^{m_1}\times [-R,R]^{m_2}.$$ The boundary of $f_k^{m+2}$ is attached to $X^{(m+1)}$ as follows. First, we choose some auxiliary data: - If ${\mathbf{z}}_k$ occurs in ${\delta}_{{\mathbb{Z}}} {\mathbf{y}}_j$ with sign $\epsilon_{j,k}\in\{\pm 1\}$ then we embed $f_j^{m+1}$ in ${\partial}f_k^{m+2}$ by a map of the form $$\label{eq:fk-fj} \begin{split} (x_0,0,x_1,\dots,x_{m_1}&,y_1,\dots,y_{m_2})\\ &\mapsto (x_0,0,x_1, x_2,\dots, x_{m_1}, \epsilon_{j,k}y_1+b_1,\dots,y_{m_2}+b_{m_2}) \end{split}$$ for some vector $(b_1,\dots,b_{m_2})$. Call the image of this embedding ${\mathcal{C}_{j}(k)}$. Once again, we choose the vectors $(b_1,\dots,b_{m_2})$ so that the different ${\mathcal{C}_{j}(k)}$’s are disjoint. - Let ${\mathbf{x}}_i\in {\mathit{KG}}^{\kappa,\ell}$. If the set of generators ${\mathcal{G}_{{{\mathbf{z}}_k},{{\mathbf{x}}_i}}}$ between ${\mathbf{z}}_k$ and ${\mathbf{x}}_i$ is nonempty then ${\mathcal{G}_{{{\mathbf{z}}_k},{{\mathbf{x}}_i}}}$ consists of $2$ or $4$ points, and these points are identified in pairs (via the ladybug matching of ). Write ${\mathcal{G}_{{{\mathbf{z}}_k},{{\mathbf{x}}_i}}}=\{{\mathbf{y}}_{j_{{\beta}}}\}$. For each $j_{{\beta}}$, the cell ${\mathcal{C}_{i}(j_{{\beta}})}$ can be viewed as lying in the boundary of ${\mathcal{C}_{j_{{\beta}}}(k)}$. Consider the point $p_{i,j_{{\beta}}}$ in the interior of ${\mathcal{C}_{i}(j_{{\beta}})}\subset {\partial}{\mathcal{C}_{j_{{\beta}}}(k)}$. Each of the points $p_{i,j_{{\beta}}}$ inherits a framing, i.e., a trivialization of the normal bundle to $p_{i,j_{{\beta}}}$ in ${\partial}{\mathcal{C}_{j_{{\beta}}}(k)}$, from the map $f_i^m\to {\partial}f_k^{m+2}$, $$\begin{split} \qquad(0&,0,x_1,\dots,x_{m_1},y_1,\dots,y_{m_2})\\ &\qquad\mapsto (0,0, \epsilon_{i,j_{{\beta}}}x_1+a_1, x_2+a_2,\dots, x_{m_1}+a_{m_1}, \epsilon_{j_{{\beta}},k}y_1+b_1,\dots,y_{m_2}+b_{m_2}). \end{split}$$ Notice that the framing of $p_{i,j_{{\beta}}}$ is one of the standard frames for ${{\mathbb{R}}}^m={\mathbb{R}}^{m_1}\times{\mathbb{R}}^{m_2}$ (). The pair of generators $({\mathbf{x}}_i, {\mathbf{z}}_k)$ specifies a $2$-dimensional face of the hypercube ${\mathcal{C}}(n)$. Let ${\mathcal{C}}_{k,i}$ denote this face. The standard frame assignment $f$ of assigns an element $f({\mathcal{C}}_{k,i})\in {\mathbb{F}}_2$ to the face ${\mathcal{C}}_{k,i}$. The matching of elements of ${\mathcal{G}_{{{\mathbf{z}}_k},{{\mathbf{x}}_i}}}$ matches the points $p_{i,j_{{\beta}}}$ in pairs. Moreover, it follows from the definition of the sign assignment that matched pairs of points have opposite framings. For each matched pair of points choose a properly embedded arc $${\zeta}\subset \{0\}\times[0,R]\times [-R,R]^{m_1}\times [-R,R]^{m_2}{\subset}{\partial}f_k^{m+2}$$ connecting the pair of points. The endpoints of ${\zeta}$ are framed. Extend this to a framing of the normal bundle to ${\zeta}$ in ${\partial}f_k^{m+2}$. If $f({\mathcal{C}}_{k,i})=0$, then choose this framing to be isotopic [relative ]{}boundary to a standard frame path for $\{0\}\times{\mathbb{R}}\times{{\mathbb{R}}}^m$; if $f({\mathcal{C}}_{k,i})=1$, then choose this framing to be isotopic [relative ]{}boundary to a non-standard frame path for $\{0\}\times{\mathbb{R}}\times{{\mathbb{R}}}^m$. 
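The sign bookkeeping behind the assertion that matched points have opposite framings can be checked mechanically in the two-element case: there the two marked points acquire orientation signs $(-1)^{s}$ along the two edge-paths through the face ${\mathcal{C}}_{k,i}$, and the fact that ${\delta}s=1_2$ (the four edge values of a $2$-face sum to $1$ modulo $2$) forces the two orientations to differ; the four-element case is exactly the content of the ladybug lemma proved earlier. A small exhaustive check of the two-element case (our own illustration):

```python
# Sketch: in the two-element case, matched points have opposite framings.
# On a 2-face, a sign assignment satisfies s1 + s2 + s3 + s4 = 1 (mod 2), where
# (s1, s2) and (s3, s4) are the edge values along the two paths through the face.
# The orientation of the induced frame at a marked point is (-1)^(sum along its path).

from itertools import product

def opposite_framings_on_every_admissible_face():
    for s1, s2, s3, s4 in product((0, 1), repeat=4):
        if (s1 + s2 + s3 + s4) % 2 != 1:
            continue                      # not a valid sign assignment on this face
        o1 = (-1) ** (s1 + s2)            # orientation along the first path
        o2 = (-1) ** (s3 + s4)            # orientation along the second path
        if o1 == o2:
            return False
    return True

print(opposite_framings_on_every_admissible_face())   # True
```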
We call these arcs ${\zeta}$ *Pontrjagin-Thom arcs*, and denote the set of them by $\{{\zeta}_{i_1,1},\dots,\allowbreak{\zeta}_{i_A,n_A}\}$ where the arc ${\zeta}_{i_\alpha,\imath}$ comes from the generator ${\mathbf{x}}_{i_\alpha}\in{\mathit{KG}}^{\kappa,\ell}$. The choice of these auxiliary data is illustrated in . Now, the attaching map on ${\partial}f_k^{m+2}$ is given as follows: - The interior of ${\mathcal{C}_{j}(k)}$ is mapped to $f_j^{m+1}$ by (the inverse of) the identification in Formula . - A tubular neighborhood of each Pontrjagin-Thom arc $\operatorname{nbd}({\zeta}_{i_\alpha,\imath})$ is mapped to $f_{i_\alpha}^m$ as follows. The framing identifies $$\operatorname{nbd}({\zeta}_{i_\alpha,\imath})\cong {\zeta}_{i_\alpha,\imath}\times [-\epsilon,\epsilon]^{m_1+m_2}\cong {\zeta}_{i_\alpha,\imath}\times f_{i_\alpha}^m.$$ With respect to this identification, the map is the obvious projection to $f_{i_\alpha}^m$. - The rest of ${\partial}f_k^{m+2}$ is mapped to the basepoint $e^0$. (Figure: an illustration of the auxiliary data chosen above, with the generators ${\mathbf{x}}_1$, ${\mathbf{x}}_2$, ${\mathbf{x}}_3$, ${\mathbf{y}}_4$, ${\mathbf{y}}_5$, ${\mathbf{y}}_6$ and ${\mathbf{z}}_7$ labeled; the graphic itself is not reproduced here.) Let $X$ denote the space constructed above and $Y_{\ell}=Y_{\ell}(L)$ the CW complex from [@RS-khovanov] associated to $L$ in quantum grading $\ell$. Then $$\Sigma^{N+\kappa-m} X = Y^{(N+\kappa+2)}_{\ell}/Y^{(N+\kappa-1)}_{\ell}.$$ The construction of $X$ above differs from the construction of $Y_{\ell}$ in [@RS-khovanov] as follows: 1. We have collapsed all cells of dimension less than $m$ to the basepoint, and ignored all cells of dimension bigger than $m+2$. 2. In the construction above, we have suppressed the $0$-dimensional framed moduli spaces, instead speaking directly about the embeddings of cells that they induce. The $0$-dimensional moduli spaces correspond to the points $(a_1, \dots, a_{m_1})$ and $(b_1,\dots,b_{m_2})$ in Formulas  and , respectively. Their framings are induced by the maps in Formulas  and . 3. In the construction of [@RS-khovanov Definition \[KhSp:def:flow-gives-space\]], each of the cells above was multiplied by $[0,R]^{p_1}\times[-R,R]^{p_2}\times[-\epsilon,\epsilon]^{p_3}$ for some $p_1,p_2,p_3\in{\mathbb{N}}$ with $p_1+p_2+p_3=N+\kappa-m$; and the various multiplicands were ordered differently. This has the effect of suspending the space $X$ $(N+\kappa-m)$ times. 4. The framings in [@RS-khovanov] were given by an obstruction-theory argument [@RS-khovanov Proposition \[KhSp:prop:cube-can-be-framed\]], while the framings here are given explicitly by the standard sign assignment and standard frame assignment. This is justified by . Thus, up to stabilizing, the two constructions give the same space. Therefore, it is enough to study the Steenrod square $\operatorname{Sq}^2{\colon}H^m(X;{\mathbb{F}}_2)\to H^{m+2}(X;{\mathbb{F}}_2)$. Fix a cohomology class $[c]\in H^{m}(X;{\mathbb{F}}_2)$. Let $$c=\sum_{f_i^{m}} c_i(f_i^{m})^*$$ be a cocycle representing $[c]$. Here, the $c_i$ are elements of $\{0,1\}$. We want to understand the map ${{\mathfrak{c}^{{}}}}{\colon}X\to K_m^{(m+2)}$ corresponding to $c$. We start with ${{\mathfrak{c}^{(m)}}}{\colon}X^{(m)}\to K_m^{(m)}$: on $f_i^m$, this map is defined as follows: - the projection $\Pi$ composed with the identification $[-\epsilon,\epsilon]^{m}={\mathbb{D}}^{m}$ if $c_i=1$. - the constant map to the basepoint of $K_{m}^{(m)}$ if $c_i=0$.
To extend ${{\mathfrak{c}^{{}}}}$ to $X^{(m+1)}$ we need to make one more auxiliary choice: A *topological boundary matching* for $c$ consists of the following data for each $(m+1)$-cell $f_j^{m+1}$: a collection of disjoint, embedded, framed arcs ${\eta}_{j,\jmath}$ in $f_j^{m+1}$ connecting the points $$\coprod_{i\mid c_i=1} p_{i,j}\subset {\partial}f_j^{m+1}$$ in pairs, together with framings of the normal bundles to the ${\eta}_{j,\jmath}$. The normal bundle in $f_j^{m+1}$ to each of the points $p_{i,j}$ inherits a framing from Formula . Call an arc ${\eta}_{j,\jmath}$ *boundary-coherent* if the points $p_{i_1,j}$ and $p_{i_2,j}$ in ${\partial}{\eta}_{j,\jmath}$ have opposite framings, i.e., if $\epsilon_{i_1,j}=-\epsilon_{i_2,j}$, and *boundary-incoherent* otherwise. We require the following conditions on the framings for the ${\eta}_{j,\jmath}$: - Trivialize $Tf_j^{m+1}$ using the following inclusion $$\begin{aligned} \qquad f_j^{m+1}=[0,R]\times\{0\}\times[-R,R]^{m_1}\times[-{\epsilon},{\epsilon}]^{m_2}&\hookrightarrow {{\mathbb{R}}}^{m+1}\\ (t,0,x_1,\dots,x_{m_1},y_1,\dots,y_{m_2})&\mapsto(-t,x_1,\dots,x_{m_1},y_1,\dots,y_{m_2}). \end{aligned}$$ We require the framing of ${\eta}_{j,\jmath}$ to be isotopic [relative ]{} boundary to one of the standard frame paths for ${\mathbb{R}}\times {\mathbb{R}}^{m}$. - If ${\eta}_{j,\jmath}$ is boundary-coherent then the framing of ${\eta}_{j,\jmath}$ is compatible with the framing of its boundary. - If ${\eta}_{j,\jmath}$ is boundary-incoherent then the framing of one end of ${\eta}_{j,\jmath}$ agrees with the framing of the corresponding $p_{i_1,j}$ while the framing of the other end of ${\eta}_{j,\jmath}$ differs from the framing of $p_{i_2,j}$ by the reflection ${\mathfrak{r}}{\colon}{{\mathbb{R}}}^m\to{{\mathbb{R}}}^m$. Each boundary-incoherent arc in a topological boundary matching inherits an orientation: it is oriented from the endpoint $p_{i,j}$ at which the framings agree to the endpoint at which the framings disagree. A topological boundary matching for $c$ exists. Since $c$ is a cocycle, for each $(m+1)$-cell $f_j^{m+1}$ we have $$\sum_i \epsilon_{i,j}c_i = c(\sum_i \epsilon_{i,j}f_i^m) = c({\partial}f_j^{m+1})\equiv 0\pmod{2}.$$ Together with our condition on $m$, this ensures that a topological boundary matching for $c$ exists. The map ${{\mathfrak{c}^{(m+1)}}}{\colon}X^{(m+1)}\to K_{m}^{(m+1)}$ is defined using the topological boundary matching as follows. On $f_j^{m+1}$: - The map ${{\mathfrak{c}^{(m+1)}}}$ sends the complement of a neighborhood of the ${\eta}_{j,\jmath}$ to the basepoint. - If ${\eta}_{j,\jmath}$ is boundary-coherent then the framing of ${\eta}_{j,\jmath}$ identifies a neighborhood of the arc ${\eta}_{j,\jmath}$ with ${\eta}_{j,\jmath}\times {\mathbb{D}}^m$. With respect to this identification, the map ${{\mathfrak{c}^{(m+1)}}}$ is projection ${\eta}_{j,\jmath}\times{\mathbb{D}}^m\to {\mathbb{D}}^m\stackrel{\Pi}{\longrightarrow} K_m^{(m+1)}.$ Note that ${{\mathfrak{c}^{(m)}}}$ induces a map $({\partial}{\eta}_{j,\jmath})\times{\mathbb{D}}^m$, and that the compatibility condition of the framing of ${\eta}_{j,\jmath}$ with the framing of ${\partial}{\eta}_{j,\jmath}$ implies that the map ${{\mathfrak{c}^{(m+1)}}}$ extends the map ${{\mathfrak{c}^{(m)}}}$. - If ${\eta}_{j,\jmath}$ is boundary-incoherent then the orientation and framing identify a neighborhood of ${\eta}_{j,\jmath}$ with $[0,1]\times{\mathbb{D}}^m$. With respect to this identification, ${{\mathfrak{c}^{(m+1)}}}$ is given by the map $\Xi$. 
Again, ${{\mathfrak{c}^{(m)}}}$ induces a map $\{0,1\}\times{\mathbb{D}}^m$, and the compatibility condition of the framing of ${\eta}_{j,\jmath}$ with the framing of ${\partial}{\eta}_{j,\jmath}$ implies that the map ${{\mathfrak{c}^{(m+1)}}}$ extends the map ${{\mathfrak{c}^{(m)}}}$. Now, fix an $(m+2)$-cell $f^{m+2}$. We want to compute the element $${{\mathfrak{c}^{{}}}}|_{{\partial}f^{m+2}}\in \pi_{m+1}(K_{m}^{(m+1)})={\mathbb{Z}}/2.$$ As described above, $$\begin{aligned} {\partial}f^{m+2}&= \bigl(\{0\}\times [0,R]\times [-R,R]^{m}\bigr) \cup \bigl(\{R\}\times [0,R]\times [-R,R]^{m}\bigr)\\ &\qquad{}\cup \bigl([0,R]\times \{0\}\times [-R,R]^{m}\bigr) \cup \bigl([0,R]\times \{R\}\times [-R,R]^{m}\bigr)\\ &\qquad{}\cup \bigl([0,R]\times [0,1]\times {\partial}([-R,R]^m)\bigr)\end{aligned}$$ has corners. The map ${{\mathfrak{c}^{{}}}}|_{{\partial}f^{m+2}}$ will send $$\bigl(\{R\}\times[0,R]\times [-R,R]^{m}\bigr)\cup \bigl([0,R]\times \{R\}\times [-R,R]^{m}\bigr)\cup \bigl([0,R]\times [0,1]\times {\partial}([-R,R]^m)\bigr)$$ to the basepoint. We straighten the corner between the other two parts of ${\partial}f^{m+2}$ via the map $$\begin{aligned} \bigl(\{0\}\times [0,R]\times [-R,R]^{m}\bigr)\cup \bigl([0,R]\times \{0\}\times[-R,R]^{m}\bigr) &\to [-R,R]\times [-R,R]^{m}\\ (0,t,x_1,\dots, x_{m_1},y_1,\dots,y_{m_2})&\mapsto (t, x_1,\dots, x_{m_1},y_1,\dots, y_{m_2})\\ (t,0,x_1,\dots, x_{m_1},y_1,\dots,y_{m_2})&\mapsto (-t, x_1,\dots, x_{m_1},y_1,\dots, y_{m_2}).\end{aligned}$$ We will suppress this straightening from the notation in the rest of the section. Let ${\zeta}_1,\dots, {\zeta}_k\subset S^{m+1}={\partial}f^{m+2}$ be the Pontrjagin-Thom arcs corresponding to $f$. Let $\{{\tilde\eta}_{j,\jmath}\}$ be the preimages in $S^{m+1}={\partial}f^{m+2}$ of the topological boundary matching. The union $$\bigcup_{j,\jmath} {\tilde\eta}_{j,\jmath}\cup \bigcup_i {\zeta}_i$$ is a one-manifold in $S^{m+1}$. Each of the arcs ${\zeta}_i\subset {\partial}f^{m+2}$ comes with a framing. Each of the arcs ${\tilde\eta}_{j,\jmath}\subset {\partial}f^{m+2}$ also inherits a framing: the pushforward of the framing of ${\eta}_{j,\jmath}$ under the map of Formula . The map ${{\mathfrak{c}^{{}}}}|_{{\partial}f^{m+2}}{\colon}S^{m+1}\to K_m^{(m+1)}$ is induced from these framed arcs as follows: - A tubular neighborhood of each Pontrjagin-Thom arc ${\zeta}_i$ is mapped via $$\operatorname{nbd}({\zeta}_i)\cong {\zeta}_i\times{\mathbb{D}}^m\to {\mathbb{D}}^m\stackrel{\Pi}{\longrightarrow} S^m,$$ where the first isomorphism is induced by the framing. - A tubular neighborhood of each boundary-coherent ${\tilde\eta}_{j,\jmath}$ is mapped via $$\operatorname{nbd}({\tilde\eta}_{j,\jmath})\cong {\tilde\eta}_{j,\jmath}\times{\mathbb{D}}^m\to {\mathbb{D}}^m\stackrel{\Pi}{\longrightarrow}S^m,$$ where the first isomorphism is induced by the framing. - A tubular neighborhood of each boundary-incoherent ${\tilde\eta}_{j,\jmath}$ is mapped via $$\operatorname{nbd}({\tilde\eta}_{j,\jmath})\cong [0,1]\times {\mathbb{D}}^m\stackrel{\Xi}{\longrightarrow} K_m^{(m+1)}.$$ where the first isomorphism is induced by the orientation and framing. - The map ${{\mathfrak{c}^{{}}}}$ takes the rest of $S^{m+1}$ to the basepoint of $K_m^{(m+1)}$. Let $K$ be a component of $\bigcup_i {\zeta}_i\cup \bigcup_{j,\jmath} {\tilde\eta}_{j,\jmath}$. 
Relabeling, let $p_1,\dots, p_{2k}$ be the points $p_{i,j}$ on $K$, ${\tilde\eta}_1,\dots,{\tilde\eta}_k$ the sub-arcs of $K$ coming from the topological boundary matching and ${\zeta}_1,\dots,{\zeta}_k$ the sub-arcs of $K$ coming from the Pontrjagin-Thom data. Order these so that ${\partial}{\zeta}_i=\{p_{2i-1},p_{2i}\}$ and ${\partial}{\tilde\eta}_i=\{p_{2i},p_{2i+1}\}$. We define an isomorphism $\Phi{\colon}\operatorname{nbd}(K)\to K\times {\mathbb{D}}^m$ as follows. First, the framing of ${\zeta}_1$ induces an identification of the normal bundle $N_{p_1}K$ with ${\mathbb{D}}^m$. Second, the framing of each arc $\gamma\in \{{\zeta}_i, {\tilde\eta}_i\}$ induces a trivialization of the normal bundle $N\gamma$. Suppose that the framing of $K$ has already been defined at the endpoint $p_i$ of $\gamma$. Then the trivialization of $N\gamma$ allows us to transport the framing of $p_i$ along $\gamma$. This transported framing is the framing of $K$ along $\gamma$. Note that the framing of $K$ along $\gamma$ may not agree with the original framing of $\gamma$; but the two either agree or differ by the map $$K\times{\mathbb{D}}^m\to K\times{\mathbb{D}}^m\quad (x,v)\mapsto (x,{\mathfrak{r}}(v)),$$ depending on the parity of the number of boundary-incoherent arcs traversed from $p_1$. In particular, it is not *a priori* obvious that the framing we have defined is continuous at $p_1$; but this follows from the following lemma: \[lem:even-cyc-top\] An even number of the arcs in $K$ are boundary-incoherent. The proof is essentially the same as the proof of the second half of , and is left to the reader. Call an arc $\gamma$ in $K$ *${\mathfrak{r}}$-colored* if the original framing of $\gamma$ disagrees with the framing of $K$, and *$\operatorname{Id}$-colored* if the original framing of $\gamma$ agrees with the framing of $K$. Write $\Psi={{\mathfrak{c}^{{}}}}\circ \Phi^{-1}{\colon}K\times {\mathbb{D}}^m\to K_m^{(m+1)}$. Explicitly, the map $\Psi$ is given as follows: - If $\gamma_i$ is one of the Pontrjagin-Thom arcs or is a boundary-coherent topological boundary matching arc then a neighborhood of $\gamma_i$ in $K\times{\mathbb{D}}^m$ is mapped to $S^m\subset K_m^{(m+1)}$ by the map $$\begin{aligned} \gamma_i\times{\mathbb{D}}^m\ni (x,v)&\mapsto \Pi(v)\in K_m^{(m+1)} & &\text{if $\gamma_i$ is $\operatorname{Id}$-colored}\\ \gamma_i\times{\mathbb{D}}^m\ni (x,v)&\mapsto \Pi({\mathfrak{r}}(v)) \in K_m^{(m+1)} & &\text{if $\gamma_i$ is ${\mathfrak{r}}$-colored}. \end{aligned}$$ - If ${\tilde\eta}_i$ is boundary-incoherent then the framing of $K$ and the orientation of ${\tilde\eta}_i$ induce an identification $\operatorname{nbd}({\tilde\eta}_i)\cong [0,1]\times{\mathbb{D}}^m$. With respect to this identification, $\operatorname{nbd}({\tilde\eta}_i)$ is mapped to $K_m^{(m+1)}$ by the map $$\begin{aligned} (t,v)&\mapsto \Xi(t,v) & &\text{if ${\tilde\eta}_i$ is $\operatorname{Id}$-colored}\\ (t,v)&\mapsto \Xi(t,{\mathfrak{r}}(v)) & &\text{if ${\tilde\eta}_i$ is ${\mathfrak{r}}$-colored}. \end{aligned}$$ Let $\Psi'$ be the projection $K\times{\mathbb{D}}^m\to {\mathbb{D}}^m\stackrel{\Pi}{\longrightarrow} S^m$. These maps are summarized in the following diagram: $$\label{eq:nbd-K-maps} \xymatrix{ \operatorname{nbd}(K)\ar[r]^{\Phi}_\cong\ar@/^2pc/[rr]^{{{\mathfrak{c}^{{}}}}} & K\times {\mathbb{D}}^m\ar[r]^\Psi\ar[dr]_{\Psi'} & K_m^{(m+1)}\\ & & S^m\ar@{^(->}[u]_\iota }$$ It is immediate from the definitions that the top triangle commutes. 
Our next goal is to show that the other triangle commutes up to homotopy: \[prop:PsiPsiprime\] The map $\Psi$ is homotopic [relative ]{}$(K\times{\partial}{\mathbb{D}}^m)\cup (\{p_1\}\times{\mathbb{D}}^m)$ to $\iota\circ\Psi'$, i.e., the bottom triangle of Diagram (\[eq:nbd-K-maps\]) commutes up to homotopy [relative ]{}$(K\times{\partial}{\mathbb{D}}^m)\cup (\{p_1\}\times{\mathbb{D}}^m)$. The proof of uses a model computation. Consider the map $\overline{\Xi}{\colon}[0,1]\times S^m\to K_m^{(m+1)}$. Concatenation in $[0,1]$ endows $\operatorname{Hom}([0,1]\times S^m, K_m^{(m+1)})$ with a multiplication, which we denote by $*$. Let ${\mathfrak}{t}{\colon}[0,1]\to[0,1]$ be the reflection $\overline{f}(t,x)=f(1-t,x)$. Using ${\mathfrak}{t}$, we obtain a map $\overline{\Xi}\circ ({\mathfrak}{t}\times \operatorname{Id}){\colon}[0,1]\times S^m\to K_m^{(m+1)}$. Finally, using the map ${\mathfrak{r}}$ we obtain a map $\overline{\Xi}\circ (\operatorname{Id}\times {\mathfrak{r}}){\colon}[0,1]\times S^m\to K_m^{(m+1)}$. \[lem:frame-cancel\] Assume that $m\geq 3$. Then both ${\overline}{\Xi}*[\overline{\Xi}\circ({\mathfrak}{t}\times \operatorname{Id})]$ and ${\overline}{\Xi}* [{\overline}{\Xi}\circ (\operatorname{Id}\times {\mathfrak{r}})]$ are homotopic ([relative ]{}boundary) to the map $[0,1]\times S^m\to S^m \subset K_m^{(m+1)}$ given by $(t,x)\mapsto x$ (i.e., the constant path in $O(m+1)$ with value $\operatorname{Id}$). The statement about ${\overline}{\Xi}*[\overline{\Xi}\circ({\mathfrak}{t}\times \operatorname{Id})]$ is obvious. For the statement about ${\overline}{\Xi}* [{\overline}{\Xi}\circ (\operatorname{Id}\times {\mathfrak{r}})]$, let $H_m={\overline}{\Xi}* [{\overline}{\Xi}\circ (\operatorname{Id}\times {\mathfrak{r}})]{\colon}[0,1]\times S^m\to K_m^{(m+1)}$. We can view $H_m$ as an element of $\pi_1(\Omega^m K_m^{(m+1)})\cong \pi_{m+1}(K_m^{(m+1)})$. Moreover, the map $H_m$ is the $(m-1)$-fold suspension of the map $H_1{\colon}[0,1]\times S^1\to K_1^{(2)}={{\mathbb{R}}}P^2$. But the suspension map $$\Sigma^i {\colon}\pi_2({{\mathbb{R}}}P^2)\to \pi_{i+2}(\Sigma^i{{\mathbb{R}}}P^2)$$ is nullhomotopic for $i\geq 2$; see, for instance, [@Wu-top-proj-plane Proposition 6.5 and discussion before Proposition 6.11]. So, it follows from our assumption on $m$ that $H_1\in \pi_1(\Omega^m K_m^{(m+1)})$ is homotopically trivial. Keeping in mind that our loops are based at the identity map $S^m\to S^m\subset K_m^{(m+1)}$, this proves the result. As in the proof of , we can view $\Psi$ as an element of $\pi_1(\Omega^m K_m^{(m+1)}, \iota)$, i.e., a loop of maps $S^m\to K_m^{(m+1)}$ based at the map $\iota{\colon}S^m\to K_m^{(m+1)}$. From its definition, $\Psi$ decomposes as a product of paths, $$\Psi=\Psi_{\gamma_1}*\dots*\Psi_{\gamma_{2k}},$$ one for each arc $\gamma_i$ in $K$. Here, $\Psi_{\gamma_i}$ is an element of the fundamental groupoid of $\Omega^m K_m^{(m+1)}$, with endpoints in $\{\iota, \iota\circ {\mathfrak{r}}\}$. The path $\Psi_{\gamma_i}$ is: - The constant path based at either $\iota$ or $\iota\circ {\mathfrak{r}}$ if $\gamma_i$ is one of the Pontrjagin-Thom arcs or is a boundary-coherent topological boundary matching arc. - One of the paths $\Xi$, $\Xi\circ (\operatorname{Id}\times {\mathfrak{r}})$, $\Xi\circ({\mathfrak}{t}\times\operatorname{Id})$ or $\Xi\circ({\mathfrak}{t}\times\operatorname{Id})\circ (\operatorname{Id}\times {\mathfrak{r}})$ if $\gamma_i$ is a boundary-incoherent topological boundary matching arc. 
Contracting the constant paths, $\Psi$ can be expressed as $$\Psi = \Psi_{\eta_{i_1}}*\Psi_{\eta_{i_2}}*\dots*\Psi_{\eta_{i_A}}$$ where the $\eta_{i_{{\alpha}}}$ are boundary-incoherent. By , $A$ is even. Moreover, $\Psi_{\eta_{i_{{\alpha}}}}$ is either $\Xi$ or $\Xi\circ({\mathfrak}{t}\times\operatorname{Id})\circ (\operatorname{Id}\times {\mathfrak{r}})$ if ${\alpha}$ is odd, and is either $\Xi\circ({\mathfrak}{t}\times\operatorname{Id})$ or $\Xi\circ (\operatorname{Id}\times {\mathfrak{r}})$ if ${\alpha}$ is even. So, by , the concatenation $\Psi_{\eta_{i_{2{\alpha}-1}}}*\Psi_{\eta_{i_{2{\alpha}}}}$ is homotopic to the constant path $\iota$. The result follows. The pair $(K,\Phi)$ specifies a framed cobordism class $[K,\Phi]\in \Omega_1^{{\mathrm{fr}}}={\mathbb{Z}}/2$. \[prop:KPhi\] The element $[K,\Phi]\in{\mathbb{Z}}/2$ is given by the sum of: 1. \[item:one\] $1$. 2. \[item:framing\] The number of Pontrjagin-Thom arcs in $K$ with the non-standard framing. 3. \[item:arrows\] The number of arrows on $K$ which point in a given direction. First, exchanging the standard and non-standard framings on an arc changes the overall framing of $K$ by $1$. So, it suffices to prove the proposition in the case that all of the Pontrjagin-Thom arcs in $K$ have the standard framing. Second, the framing on each boundary-matching arc is standard if the corresponding $(m+1)$-cell occurs positively in ${\partial}f^{m+2}$, and differs from the standard framing by the map ${\mathfrak{s}}$ of if the $(m+1)$-cell occurs negatively in ${\partial}f^{m+2}$. In the notation of , the framings of the boundary matching arcs are among $\bigl\{{\overline}{{{}^+_+}{{}^-_+}}, {\overline}{{{}^+_-}{{}^-_-}}, {\overline}{{{}^-_+}{{}^+_+}}, {\overline}{{{}^-_-}{{}^+_-}}\bigr\}$. So, by , ${\mathfrak{s}}$ takes the standard frame path on a boundary-matching arc to a standard frame path. In sum, each of the arcs ${\tilde\eta}_i$ is framed by a standard frame path. So, by , the framing of $K$ at each arc $\gamma_i$ is standard if $\gamma_i$ is $\operatorname{Id}$-colored, and non-standard if $\gamma_i$ is ${\mathfrak{r}}$-colored. Thus, it suffices to show that the number of ${\mathfrak{r}}$-colored arcs agrees modulo $2$ with the number of arrows on $K$ which point in a given direction. Let ${\tilde\eta}_{i_1},\dots, {\tilde\eta}_{i_A}$ be the boundary-incoherent boundary matching arcs in $K$. Then: - There are an odd number of arcs strictly between ${\tilde\eta}_{i_{{\alpha}}}$ and ${\tilde\eta}_{i_{{\alpha}+1}}$. Moreover: - These arcs are all ${\mathfrak{r}}$-colored if ${\alpha}$ is odd. - These arcs are all $\operatorname{Id}$-colored if ${\alpha}$ is even. - If ${\tilde\eta}_{i_{\alpha}}$ and ${\tilde\eta}_{i_{{\alpha}+1}}$ are oriented in the same direction then exactly one of ${\tilde\eta}_{i_{\alpha}}$ and ${\tilde\eta}_{i_{{\alpha}+1}}$ is ${\mathfrak{r}}$-colored. - If ${\tilde\eta}_{i_{\alpha}}$ and ${\tilde\eta}_{i_{{\alpha}+1}}$ are oriented in opposite directions then either both of ${\tilde\eta}_{i_{\alpha}}$ and ${\tilde\eta}_{i_{{\alpha}+1}}$ are ${\mathfrak{r}}$-colored or both of ${\tilde\eta}_{i_{\alpha}}$ and ${\tilde\eta}_{i_{{\alpha}+1}}$ are $\operatorname{Id}$-colored. 
It follows that there are an even (respectively odd) number of ${\mathfrak{r}}$-colored arcs in the interval $[{\tilde\eta}_{i_{2{\alpha}-1}},{\tilde\eta}_{i_{2{\alpha}}}]$ if ${\tilde\eta}_{i_{2{\alpha}-1}}$ and ${\tilde\eta}_{i_{2{\alpha}}}$ are oriented in the same direction (respectively opposite directions); and all of the arcs in the interval $({\tilde\eta}_{i_{2{\alpha}}},{\tilde\eta}_{i_{2{\alpha}+1}})$ are $\operatorname{Id}$-colored. So, the number of ${\mathfrak{r}}$-colored arcs agrees with the number of arcs which point in a given direction. Finally, the contribution $1$ comes from the fact that the constant loop in $\operatorname{\mathit{SO}}(m)$ corresponds to the nontrivial element of $\Omega_1^{{\mathrm{fr}}}$. Let $\Phi_K$ and $\Psi_K$ be the maps associated above to each component $K$ of $\bigcup_i {\zeta}_i\cup \bigcup_{j,\jmath} {\tilde\eta}_{j,\jmath}$. Then $\Psi_K\circ \Phi_K$ induces an element $[\Psi_K\circ \Phi_K]$ of $\pi_{m+1}(K_m^{m+1})\cong{\mathbb{Z}}/2$ (by collapsing everything outside a neighborhood of $K$ to the basepoint). Let ${\mathbf{z}}$ be the generator of the Khovanov complex corresponding to the cell $f^{m+2}$ and ${\mathbf{c}}$ the element of the Khovanov chain group corresponding to the cocycle $c$. Then it suffices to show that the sum $$\sum_K [\Psi_K\circ \Phi_K]$$ agrees with the expression $$\label{eq:sq2-term} \biggl(\#|{\mathfrak{G}_{{\mathbf{c}}}({\mathbf{z}})}| + {f({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{z}})})}+{g({\mathfrak{G}_{{\mathbf{c}}}({\mathbf{z}})})}\biggr){\mathbf{z}},$$ from Formula . By , $$\xymatrix@R=0ex@C=1ex{ [K,\Phi] \ar@{|->}[rrrr] &&&&[\Psi_K\circ\Phi_K]\\ \rotatebox[origin=c]{270}{$\in$}&&&&\rotatebox[origin=c]{270}{$\in$}\\ \Omega_1^{{\mathrm{fr}}} \ar@{=}[r]& \pi_{m+1}(S^m) \ar[rrr]_-{\cong}^-{\iota_*} &&& \pi_{m+1}(K_m^{(m+1)}). }$$ The element $[K,\Phi_K]\in{\mathbb{Z}}/2$ is computed in , and it remains to match the terms in that proposition with the terms in Formula . By construction, the graph ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{z}})}$ is exactly $\bigcup_i {\zeta}_i\cup \bigcup_{j,\jmath} {\tilde\eta}_{j,\jmath}$, and the orientations of the oriented edges of ${\mathfrak{G}_{{\mathbf{c}}}({\mathbf{z}})}$ match up with the orientations of the boundary-incoherent ${\tilde\eta}_i$. So, the first term in Formula  corresponds to part (\[item:one\]) of , and the third term in Formula  corresponds to part (\[item:arrows\]) of . Finally, since the framings of the Pontrjagin-Thom arcs differ from the standard frame paths by the standard frame assignment $f$, the second term of Formula  corresponds to part (\[item:framing\]) of . This completes the proof. The Khovanov homotopy type of width three knots {#sec:width-two} =============================================== It is immediate from Whitehead’s theorem that if $\widetilde{H}_i(X)$ is trivial for all $i\neq m$ then $X$ is a Moore space; in particular, the homotopy type of $X$ is determined by the homology of $X$ in this case. This result can be extended to spaces with nontrivial homology in several gradings, if one also keeps track of the action of the Steenrod algebra. To determine the Khovanov homotopy types of links up to $11$ crossings we will use such an extension due to Whitehead [@Whitehead-top-exact] and Chang [@Chang-top-homotopy], which we review here. (For further discussion along these lines, as well as the next larger case, see [@Baues-top-handbook Section 11].) 
\[prop:characterise-quiver\] Consider quivers of the form $$\xymatrix{ A\ar[r]_{f}\ar@/^{1.5pc}/[rr]^{s} & B \ar[r]_{g}& C }$$ where $A$, $B$ and $C$ are ${\mathbb{F}}_2$-vector spaces, and $gf=0$. Such a quiver uniquely decomposes as a direct sum of the following quivers: $$\xymatrix@C=1ex{ \text{(S-1)}& {\mathbb{F}}_2&& 0 && 0&& \text{(S-2)}& 0 && {\mathbb{F}}_2 && 0&& \text{(S-3)}& 0 && 0 && {\mathbb{F}}_2\\ \text{(P-1)}& {\mathbb{F}}_2\ar[rr]_-{\operatorname{Id}} && {\mathbb{F}}_2 && 0&& \text{(P-2)}& 0 && {\mathbb{F}}_2 \ar[rr]_-{\operatorname{Id}}&& {\mathbb{F}}_2&& \text{(X-1)}& {\mathbb{F}}_2\ar@/^{1.5pc}/[rrrr]^{\operatorname{Id}} && 0&& {\mathbb{F}}_2\\ \text{(X-2)}& {\mathbb{F}}_2\ar[rr]_-{\operatorname{Id}}\ar@/^{1.5pc}/[rrrr]^{\operatorname{Id}} && {\mathbb{F}}_2 && {\mathbb{F}}_2&& \text{(X-3)}& {\mathbb{F}}_2\ar@/^{1.5pc}/[rrrr]^{\operatorname{Id}} && {\mathbb{F}}_2 \ar[rr]_-{\operatorname{Id}}&& {\mathbb{F}}_2&& \text{(X-4)}& {\mathbb{F}}_2\ar[rr]_-{\left(\begin{smallmatrix}\operatorname{Id}\\0\end{smallmatrix}\right)}\ar@/^{1.5pc}/[rrrr]^{\operatorname{Id}} && {\mathbb{F}}_2\oplus{\mathbb{F}}_2 \ar[rr]_-{(0,\operatorname{Id})}&& {\mathbb{F}}_2. }$$ We start with uniqueness. In such a decomposition, let $s_i$ be the number of (S-i) summands, $p_i$ be the number of (P-i) summands, and $x_i$ be the number of (X-i) summands. Consider the following nine pieces of data: - The dimensions of the ${\mathbb{F}}_2$-vector spaces $A$, $B$ and $C$, say $d_1,d_2,d_3$, respectively; - The ranks of the maps $f$ and $g$, say $r_f,r_g$, respectively; - The dimensions of the ${\mathbb{F}}_2$-vector spaces $\operatorname{im}(s)$, $\operatorname{im}({{s}|_{\ker(f)}})$, $\operatorname{im}(g)\cap\operatorname{im}(s)$ and $\operatorname{im}(g)\cap\operatorname{im}({{s}|_{\ker(f)}})$, say $r_1,r_2,r_3,r_4$, respectively. We have $$\begin{aligned} \begin{pmatrix} d_1\\ d_2\\ d_3\\ r_f\\ r_g\\ r_1\\ r_2\\ r_3\\ r_4 \end{pmatrix} &= \begin{pmatrix} 1&.&.&1&.&1&1&1&1\\ .&1&.&1&1&.&1&1&2\\ .&.&1&.&1&1&1&1&1\\ .&.&.&1&.&.&1&.&1\\ .&.&.&.&1&.&.&1&1\\ .&.&.&.&.&1&1&1&1\\ .&.&.&.&.&1&.&1&.\\ .&.&.&.&.&.&.&1&1\\ .&.&.&.&.&.&.&1&. \end{pmatrix} \begin{pmatrix} s_1\\ s_2\\ s_3\\ p_1\\ p_2\\ x_1\\ x_2\\ x_3\\ x_4 \end{pmatrix} \notag\\ \shortintertext{and therefore, the numbers $s_i$, $p_i$ and $x_i$ are determined as follows:} \begin{pmatrix} s_1\\ s_2\\ s_3\\ p_1\\ p_2\\ x_1\\ x_2\\ x_3\\ x_4 \end{pmatrix} &= \begin{pmatrix*}[r] 1& .& .&-1& .& .&-1& .& .\\ .& 1& .&-1&-1& .& .& .& .\\ .& .& 1& .&-1&-1& .& 1& .\\ .& .& .& 1& .&-1& 1& .& .\\ .& .& .& .& 1& .& .&-1& .\\ .& .& .& .& .& .& 1& .&-1\\ .& .& .& .& .& 1&-1&-1& 1\\ .& .& .& .& .& .& .& .& 1\\ .& .& .& .& .& .& .& 1&-1\\ \end{pmatrix*} \begin{pmatrix} d_1\\ d_2\\ d_3\\ r_f\\ r_g\\ r_1\\ r_2\\ r_3\\ r_4 \end{pmatrix} \label{eq:coeff-of-decomposition}. \end{aligned}$$ For existence of such a decomposition, we carry out a standard change-of-basis argument. Choose generators for $A$, $B$ and $C$, and construct the following graph. There are three types of vertices, $A$-vertices, $B$-vertices and $C$-vertices, corresponding to generators of $A$, $B$ and $C$ respectively. There are three types of edges, $f$-edges, $g$-edges and $s$-edges, corresponding to the maps $f$, $g$ and $s$ as follows: for $a$ an $A$-vertex and $b$ a $B$-vertex, if $b$ appears in $f(a)$ then there is an $f$-edge joining $a$ and $b$; the $g$-edges and $s$-edges are defined similarly. 
We will do a change of basis, which will change the graph, so that in the final graph, each vertex is incident to at most one edge of each type. This will produce the required decomposition of the quiver. We carry out the change of basis in the following sequence of steps. Each step accomplishes a specific simplification of the graph; it can be checked that the later steps do not undo the earlier simplifications. 1. \[step:simplify-1\] We ensure that no two $f$-edges share a common vertex. Fix an $f$-edge joining an $A$-vertex $a$ to a $B$-vertex $b$. Let $\{a_i\}$ be the other $A$-vertices that are $f$-adjacent to $b$ and $\{b_j\}$ be the other $B$-vertices that are $f$-adjacent to $a$. Then change basis by replacing each $a_i$ by $a_i+a$, and by replacing $b$ with $b+\sum_j b_j$. 2. By the same procedure as Step (\[step:simplify-1\]), we ensure that no two $g$-edges share a common vertex. Since $gf=0$, this ensures that no $B$-vertex is adjacent to both an $f$-edge and a $g$-edge. Call an $A$-vertex an $A_1$-vertex ([resp. ]{}$A_2$-vertex) if it is adjacent ([resp. ]{}non-adjacent) to an $f$-edge; similarly, call a $C$-vertex a $C_1$-vertex ([resp. ]{}$C_2$-vertex) if it is adjacent ([resp. ]{}non-adjacent) to a $g$-vertex. 3. Next, we isolate the $s$-edges that connect $A_2$-vertices to $C_2$-vertices. Fix an $s$-edge joining an $A_2$-vertex $a$ to a $C_2$-vertex $c$. If $\{a_i\}$ ([resp. ]{}$\{c_j\}$) are the other $A$-vertices ([resp. ]{}$C$-vertices) that are $s$-adjacent to $c$ ([resp. ]{}$a$), then change basis by replacing each $a_i$ by $a_i+a$ and by replacing $c$ with $c+\sum_j c_j$. 4. The next step is to isolate the $s$-edges that connect $A_1$-vertices to $C_2$-vertices. Once again, fix an $s$-edge joining an $A_1$-vertex $a$ to a $C_2$-vertex $c$. Let $\{a_i\}$ ([resp. ]{}$\{c_j\}$) be the other $A$-vertices ([resp. ]{}$C$-vertices) that are $s$-adjacent to $c$ ([resp. ]{}$a$). Let $b_i$ be the $B$-vertex that is $f$-adjacent to $a_i$ (observe, each $a_i$ is an $A_1$-vertex), and let $b$ be the $B$-vertex that is $f$-adjacent to $a$. Then change basis by replacing each $a_i$ by $a_i+a$, by replacing each $b_i$ by $b_i+b$ and by replacing $c$ with $c+\sum_j c_j$. 5. Similarly, we can isolate the $s$-edges that connect $A_2$-vertices to $C_1$-vertices. As before, fix an $s$-edge joining an $A_2$-vertex $a$ to a $C_1$-vertex $c$. Let $\{a_i\}$ ([resp. ]{}$\{c_j\}$) be the other $A$-vertices ([resp. ]{} $C$-vertices) that are $s$-adjacent to $c$ ([resp. ]{} $a$). Let $b_j$ be the $B$-vertex that is $g$-adjacent to $c_j$ and let $b$ be the $B$-vertex that is $g$-adjacent to $c$. Then change basis by replacing each $a_i$ by $a_i+a$, by replacing $b$ with $b+\sum_j b_j$ and by replacing $c$ with $c+\sum_j c_j$. 6. Finally, we have to isolate the $s$-edges that connect $A_1$-vertices to $C_1$-vertices. This can be accomplished by a combination of the previous two steps. We are interested in stable spaces $X$ satisfying the following conditions: - The only torsion in $H^*(X;{\mathbb{Z}})$ is $2$-torsion, and - ${\widetilde}{H}^i(X;{\mathbb{F}}_2)=0$ if $i\neq 0,1,2$. Then the quiver $$\xymatrix{ {\widetilde}{H}^0(X;{\mathbb{F}}_2)\ar[r]_{\operatorname{Sq}^1}\ar@/^2pc/[rr]^{\operatorname{Sq}^2} & {\widetilde}{H}^{1}(X;{\mathbb{F}}_2) \ar[r]_{\operatorname{Sq}^1}& {\widetilde}{H}^{2}(X;{\mathbb{F}}_2) }$$ is of the form described in . In Examples \[exam:first-subthin\]–\[exam:last-subthin\], we will describe nine such spaces whose associated quivers are the nine irreducible ones of . 
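The uniqueness computation in the proof above can also be checked mechanically. The following short script (plain Python, written for this presentation; it is not part of the Sage programs described in the Computations section below, and all names in it are ours) verifies that the two matrices of [Equation (\[eq:coeff-of-decomposition\])]{} are inverse to each other, so that the nine quantities $d_1,d_2,d_3,r_f,r_g,r_1,r_2,r_3,r_4$ do determine the multiplicities $s_i$, $p_i$ and $x_i$.

```python
# Check, over the integers, that the two matrices displayed in
# Equation (eq:coeff-of-decomposition) are inverse to each other.
# Rows/columns are ordered (d1, d2, d3, r_f, r_g, r1, r2, r3, r4) and
# (s1, s2, s3, p1, p2, x1, x2, x3, x4); the '.' entries of the display
# are written as 0.  Plain Python, no dependencies.

INVARIANTS_FROM_MULTIPLICITIES = [
    [1, 0, 0, 1, 0, 1, 1, 1, 1],   # d1
    [0, 1, 0, 1, 1, 0, 1, 1, 2],   # d2
    [0, 0, 1, 0, 1, 1, 1, 1, 1],   # d3
    [0, 0, 0, 1, 0, 0, 1, 0, 1],   # r_f
    [0, 0, 0, 0, 1, 0, 0, 1, 1],   # r_g
    [0, 0, 0, 0, 0, 1, 1, 1, 1],   # r1
    [0, 0, 0, 0, 0, 1, 0, 1, 0],   # r2
    [0, 0, 0, 0, 0, 0, 0, 1, 1],   # r3
    [0, 0, 0, 0, 0, 0, 0, 1, 0],   # r4
]

MULTIPLICITIES_FROM_INVARIANTS = [
    [1, 0, 0, -1,  0,  0, -1,  0,  0],   # s1
    [0, 1, 0, -1, -1,  0,  0,  0,  0],   # s2
    [0, 0, 1,  0, -1, -1,  0,  1,  0],   # s3
    [0, 0, 0,  1,  0, -1,  1,  0,  0],   # p1
    [0, 0, 0,  0,  1,  0,  0, -1,  0],   # p2
    [0, 0, 0,  0,  0,  0,  1,  0, -1],   # x1
    [0, 0, 0,  0,  0,  1, -1, -1,  1],   # x2
    [0, 0, 0,  0,  0,  0,  0,  0,  1],   # x3
    [0, 0, 0,  0,  0,  0,  0,  1, -1],   # x4
]

def mat_mul(A, B):
    """Product of two square integer matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

identity = [[int(i == j) for j in range(9)] for i in range(9)]
assert mat_mul(MULTIPLICITIES_FROM_INVARIANTS, INVARIANTS_FROM_MULTIPLICITIES) == identity
assert mat_mul(INVARIANTS_FROM_MULTIPLICITIES, MULTIPLICITIES_FROM_INVARIANTS) == identity
print("The two matrices of Equation (eq:coeff-of-decomposition) are mutually inverse.")
```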
\[exam:first-subthin\] The associated quivers of $S^0$, $S^1$ and $S^2$ are (S-1), (S-2) and (S-3), respectively. The associated quivers of $\Sigma^{-1}{\mathbb{R}\mathrm{P}}^2$ and ${\mathbb{R}\mathrm{P}}^2$ are (P-1) and (P-2), respectively. The space ${\mathbb{C}\mathrm{P}}^2$ has cohomology $$\xymatrix{ {\widetilde}{H}^4({\mathbb{C}\mathrm{P}}^2;{\mathbb{Z}}) & {\mathbb{Z}}&\qquad & {\widetilde}{H}^4({\mathbb{C}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\\ {\widetilde}{H}^3({\mathbb{C}\mathrm{P}}^2;{\mathbb{Z}}) & 0 & & {\widetilde}{H}^3({\mathbb{C}\mathrm{P}}^2;{\mathbb{F}}_2) & 0\\ {\widetilde}{H}^2({\mathbb{C}\mathrm{P}}^2;{\mathbb{Z}}) & {\mathbb{Z}}& & {\widetilde}{H}^2({\mathbb{C}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\ar@/_2pc/[uu]_{\operatorname{Sq}^2}. }$$ (The fact that $\operatorname{Sq}^2$ has this form follows from the fact that for $x\in H^n$, $\operatorname{Sq}^n(x)=x\cup x$.) Therefore, the stable space $X_1{\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =}\Sigma^{-2}{\mathbb{C}\mathrm{P}}^2$ has (X-1) as its associated quiver. The space ${\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2$ has cohomology $$\xymatrix{ {\widetilde}{H}^5({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2;{\mathbb{Z}}) & {\mathbb{Z}}&\qquad & {\widetilde}{H}^5({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\\ {\widetilde}{H}^4({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2;{\mathbb{Z}}) & {\mathbb{F}}_2 & & {\widetilde}{H}^4({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\\ {\widetilde}{H}^3({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2;{\mathbb{Z}}) & 0 & & {\widetilde}{H}^3({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\ar@/_2pc/[uu]_{\operatorname{Sq}^2}\ar[u]^{\operatorname{Sq}^1}. }$$ To see that $\operatorname{Sq}^2$ has the stated form, consider the inclusion map ${\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2\to {\mathbb{R}\mathrm{P}}^6/{\mathbb{R}\mathrm{P}}^2$. The map $\operatorname{Sq}^3{\colon}H^3({\mathbb{R}\mathrm{P}}^6/{\mathbb{R}\mathrm{P}}^2)\to H^6({\mathbb{R}\mathrm{P}}^6/{\mathbb{R}\mathrm{P}}^2)$ is an isomorphism (since it is just the cup square). By the Adem relations, $\operatorname{Sq}^3=\operatorname{Sq}^1\operatorname{Sq}^2$, so $\operatorname{Sq}^2{\colon}H^3({\mathbb{R}\mathrm{P}}^6/{\mathbb{R}\mathrm{P}}^2)\to H^5({\mathbb{R}\mathrm{P}}^6/{\mathbb{R}\mathrm{P}}^2)$ is nontrivial. So, the corresponding statement for ${\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2$ follows from naturality. Therefore, the stable space $X_2{\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =}\Sigma^{-3}({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2)$ has (X-2) as its associated quiver. 
The space ${\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1$ has cohomology $$\xymatrix{ {\widetilde}{H}^4({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1;{\mathbb{Z}}) & {\mathbb{F}}_2 &\qquad & {\widetilde}{H}^4({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1;{\mathbb{F}}_2) & {\mathbb{F}}_2\\ {\widetilde}{H}^3({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1;{\mathbb{Z}}) & 0 & & {\widetilde}{H}^3({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1;{\mathbb{F}}_2) & {\mathbb{F}}_2\ar[u]^{\operatorname{Sq}^1}\\ {\widetilde}{H}^2({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1;{\mathbb{Z}}) & {\mathbb{Z}}& & {\widetilde}{H}^2({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1;{\mathbb{F}}_2) & {\mathbb{F}}_2\ar@/_2pc/[uu]_{\operatorname{Sq}^2}. }$$ (The answer for $\operatorname{Sq}^2$ again follows from the fact that it is the cup square.) Therefore, the stable space $X_3{\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =}\Sigma^{-2}({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1)$ has (X-3) as its associated quiver. \[exam:last-subthin\] The space ${\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2$ has cohomology $$\xymatrix{ {\widetilde}{H}^4({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2;{\mathbb{Z}}) & {\mathbb{F}}_2 &\qquad & {\widetilde}{H}^4({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\ar@{<-}[d]!R(.5)^{\operatorname{Sq}^1}\ar@{<-}[d]!L(.5)_{\operatorname{Sq}^1}\\ {\widetilde}{H}^3({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2;{\mathbb{Z}}) & {\mathbb{F}}_2 & & {\widetilde}{H}^3({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\oplus{\mathbb{F}}_2\\ {\widetilde}{H}^2({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2;{\mathbb{Z}}) & 0 & & {\widetilde}{H}^2({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2;{\mathbb{F}}_2) & {\mathbb{F}}_2\ar[u]!R(.5)_{\operatorname{Sq}^1}\ar[u]!L(.5)^{\operatorname{Sq}^1}\ar@/_3pc/[uu]_{\operatorname{Sq}^2}. }$$ (The answer for $\operatorname{Sq}^2$ follows from the product formula: $\operatorname{Sq}^2(a\wedge b)=a\wedge\operatorname{Sq}^2(b)+\operatorname{Sq}^1(a)\wedge\operatorname{Sq}^1(b)+\operatorname{Sq}^2(a)\wedge b$.) Therefore, the quiver associated to the stable space $X_4{\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =}\Sigma^{-2}({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2)$ is isomorphic to (X-4). The following is a classification theorem from [@Baues-top-handbook Theorems 11.2 and 11.7]. \[prop:space-classify\] Let $X$ be a simply connected CW complex such that: - The only torsion in the cohomology of $X$ is $2$-torsion. - There exists $m$ sufficiently large so that the reduced cohomology $\widetilde{H}^i(X;{\mathbb{F}}_2)$ is trivial for $i\neq m, m+1,m+2$. Then the homotopy type of $X$ is determined by the isomorphism class of the quiver $$\xymatrix{ H^m(X;{\mathbb{F}}_2)\ar[r]_{\operatorname{Sq}^1}\ar@/^2pc/[rr]^{\operatorname{Sq}^2} & H^{m+1}(X;{\mathbb{F}}_2) \ar[r]_{\operatorname{Sq}^1}& H^{m+2}(X;{\mathbb{F}}_2) }$$ as follows: Decompose the quiver as in ; let $s_i$ be the number of (S-i) summands, $1\leq i\leq 3$; let $p_i$ be the number of (P-i) summands, $1\leq i\leq 2$; and let $x_i$ be the number of (X-i) summands, $1\leq i\leq 4$. 
Then $X$ is homotopy equivalent to $$Y{\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =}(\bigvee_{i=1}^3\bigvee_{j=1}^{s_i}S^{m+i-1})\vee (\bigvee_{i=1}^2\bigvee_{j=1}^{p_i}\Sigma^{m+i-2}{\mathbb{R}\mathrm{P}}^2)\vee(\bigvee_{i=1}^4\bigvee_{j=1}^{x_i}\Sigma^mX_i).$$ In light of , the following seems a natural link invariant. \[def:st\] For any link $L$, the function ${\mathit{St}}(L){\colon}{\mathbb{Z}}^2\to{\mathbb{N}}^4$ is defined as follows: Fix $(i,j)\in{\mathbb{Z}}^2$; for $k\in\{i,i+1\}$, let $\operatorname{Sq}^1_{(k)}$ denote the map $\operatorname{Sq}^1{\colon}{\mathit{Kh}}^{k,j}(L)\to{\mathit{Kh}}^{k+1,j}(L)$. Let $r_1$ be the rank of the map $\operatorname{Sq}^2{\colon}{\mathit{Kh}}^{i,j}(L)\to{\mathit{Kh}}^{i+2,j}(L)$; let $r_2$ be the rank of the map ${{\operatorname{Sq}^2}|_{\ker \operatorname{Sq}^1_{(i)}}}$; let $r_3$ be the dimension of the ${\mathbb{F}}_2$-vector space $\operatorname{im}\operatorname{Sq}^1_{(i+1)}\cap\operatorname{im}\operatorname{Sq}^2$; and let $r_4$ be the dimension of the ${\mathbb{F}}_2$-vector space $\operatorname{im}\operatorname{Sq}^1_{(i+1)}\cap\operatorname{im}({{\operatorname{Sq}^2}|_{\ker\operatorname{Sq}^1_{(i)}}})$. Then, $${\mathit{St}}(i,j){\mathrel{\vcenter{\baselineskip0.5ex \lineskiplimit0pt \hbox{\scriptsize.}\hbox{\scriptsize.}}} =}(r_2-r_4,r_1-r_2-r_3+r_4,r_4,r_3-r_4).$$ \[cor:width-3-determined\] Suppose that the Khovanov homology ${\mathit{Kh}}_{{\mathbb{Z}}}(L)$ satisfies the following properties: - ${\mathit{Kh}}^{i,j}_{{\mathbb{Z}}}(L)$ lies on three adjacent diagonals, say $2i-j={\sigma},{\sigma}+2,{\sigma}+4$. - There is no torsion other than $2$-torsion. - There is no torsion on the diagonal $2i-j={\sigma}$. Then the homotopy types of the stable spaces ${\mathcal{X}_\mathit{Kh}}^j(L)$ are determined by ${\mathit{Kh}}_{{\mathbb{Z}}}(L)$ and ${\mathit{St}}(L)$ as follows: Fix $j$; let $i=\frac{j+{\sigma}}{2}$; let ${\mathit{St}}(i,j)=(x_1,x_2,x_3,x_4)$; then ${\mathcal{X}_\mathit{Kh}}^j(L)$ is stably homotopy equivalent to the wedge sum of $$(\bigvee^{x_1}{\Sigma}^{i-2}{\mathbb{C}\mathrm{P}}^2)\vee(\bigvee^{x_2}{\Sigma}^{i-3}({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2))\vee (\bigvee^{x_3}{\Sigma}^{i-2}({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1))\vee(\bigvee^{x_4}{\Sigma}^{i-2}({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2))$$ and a wedge of Moore spaces. In particular, ${\mathcal{X}_\mathit{Kh}}^j(L)$ is a wedge of Moore spaces if and only if $x_1=x_2=x_3=x_4=0$. The first part is immediate from . To wit, if one decomposes the quiver $$\xymatrix{ {\mathit{Kh}}^{i,j}_{{\mathbb{F}}_2}\ar[r]_{\operatorname{Sq}^1}\ar@/^2pc/[rr]^{\operatorname{Sq}^2} & {\mathit{Kh}}^{i+1,j}_{{\mathbb{F}}_2} \ar[r]_{\operatorname{Sq}^1}& {\mathit{Kh}}^{i+2,j}_{{\mathbb{F}}_2} }$$ as a direct sum of the nine quivers of , [Equation (\[eq:coeff-of-decomposition\])]{} implies that the number of (X-i) summands will be $x_i$. The ‘if’ direction of the second part follows from the first part. For the ‘only if’ direction, observe that the rank of $\operatorname{Sq}^2{\colon}{\mathit{Kh}}_{{\mathbb{F}}_2}^{i,j}\to{\mathit{Kh}}_{{\mathbb{F}}_2}^{i+2,j}$ is $x_1+x_2+x_3+x_4$; therefore, if ${\mathcal{X}_\mathit{Kh}}^j(L)$ is a wedge of Moore spaces, $\operatorname{Sq}^2=0$, and hence $x_1=x_2=x_3=x_4=0$. Computations {#sec:computations} ============ It can be checked from the databases [@KAT-kh-knotatlas] that all prime links up to $11$ crossings satisfy the conditions of . 
Therefore, their homotopy types are determined by the Khovanov homology ${\mathit{Kh}}_{{\mathbb{Z}}}$ and the function ${\mathit{St}}$ of . In , we present the values of ${\mathit{St}}$. To save space, we only list the links $L$ for which the function ${\mathit{St}}(L)$ is not identically $(0,0,0,0)$; and for such links, we only list tuples $(i,j)$ for which ${\mathit{St}}(i,j)\neq (0,0,0,0)$. For the same reason, we do not mention ${\mathit{Kh}}_{{\mathbb{Z}}}(L)$ in the table; the Khovanov homology data can easily be extracted from [@KAT-kh-knotatlas]. After collecting the data for the PD-presentations from [@KAT-kh-knotatlas], we used several Sage programs for carrying out this computation. (To get more information about Sage, visit <http://www.sagemath.org/>.) All the programs and computations are available at any of the following locations: <http://math.columbia.edu/~sucharit/programs/KhovanovSteenrod/> <https://github.com/sucharit/KhovanovSteenrod> \[rem:mirror-sum\] Let ${m(L)}$ denote the mirror of $L$. In [@RS-khovanov Conjecture \[KhSp:conj:mirror\]] we conjecture that ${\mathcal{X}_\mathit{Kh}}^j({m(L)})$ and ${\mathcal{X}_\mathit{Kh}}^{-j}(L)$ are Spanier-Whitehead dual. In particular, the action of $\operatorname{Sq}^i$ on ${\mathit{Kh}}^{*,j}(L)$ and ${\mathit{Kh}}^{*,-j}({m(L)})$ should be transposes of each other. This conjecture provides some justification for the fact that does not list the results for both mirrors of chiral knots. For disjoint unions, we conjecture in [@RS-khovanov Conjecture \[KhSp:conj:disjoint-union\]] that ${\mathcal{X}_\mathit{Kh}}(L_1\amalg L_2)$ is the smash product of ${\mathcal{X}_\mathit{Kh}}(L_1)$ and ${\mathcal{X}_\mathit{Kh}}(L_2)$. So, only lists non-split links. The expected behavior of ${\mathcal{X}_\mathit{Kh}}$ under connected sums is more complicated: in [@RS-khovanov Conjecture \[KhSp:conj:unred-con-sum\]] we conjecture that ${\mathcal{X}_\mathit{Kh}}(L_1\# L_2)\simeq {\mathcal{X}_\mathit{Kh}}(L_1)\otimes_{{\mathcal{X}_\mathit{Kh}}(U)}{\mathcal{X}_\mathit{Kh}}(L_2)$, where $\otimes$ denotes the tensor product of module spectra. So, like for Khovanov homology itself, the Khovanov homotopy type of a connected sum of links is not determined by the Khovanov homotopy types of the individual links: the module structures are required. Nonetheless, we have restricted to prime links. From we see that for $T_{3,4}=8_{19}$, ${\mathit{St}}(2,11)=(0,1,0,0)$. Therefore, by , ${\mathcal{X}_\mathit{Kh}}^{11}(T_{3,4})$ is not a wedge sum of Moore spaces. \[exam:10\_145\] Consider the knot $K=10_{145}$. From [@KAT-kh-knotatlas], we know its Khovanov homology: $$\begin{array}{c|cccccccccc} &-9&-8&-7&-6&-5&-4&-3&-2&-1&0\\ \hline -3&.&.&.&.&.&.&.&.&.&{\mathbb{Z}}\\ -5&.&.&.&.&.&.&.&.&.&{\mathbb{Z}}\\ -7&.&.&.&.&.&.&{\mathbb{Z}}&{\mathbb{Z}}&.&.\\ -9&.&.&.&.&.&.&{\mathbb{F}}_2&{\mathbb{F}}_2&.&.\\ -11&.&.&.&.&{\mathbb{Z}}&{\mathbb{Z}}^2&{\mathbb{Z}}&.&.&.\\ -13&.&.&.&{\mathbb{Z}}&{\mathbb{F}}_2&{\mathbb{F}}_2&.&.&.&.\\ -15&.&.&.&{\mathbb{Z}}\oplus{\mathbb{F}}_2&{\mathbb{Z}}&.&.&.&.&.\\ -17&.&{\mathbb{Z}}&{\mathbb{Z}}&.&.&.&.&.&.&.\\ -19&.&{\mathbb{F}}_2&.&.&.&.&.&.&.&.\\ -21&{\mathbb{Z}}&.&.&.&.&.&.&.&.&. \end{array}$$ and from , we know the function ${\mathit{St}}(K)$: $$\begin{aligned} {\mathit{St}}(-4,-9)&=(0,0,0,1)\\ {\mathit{St}}(-6,-13)&=(0,0,1,0)\\ {\mathit{St}}(-7,-15)&=(0,1,0,0). 
\end{aligned}$$ Therefore, via , we can compute Khovanov homotopy types: $$\begin{aligned} {\mathcal{X}_\mathit{Kh}}^{-3}(K)&\sim S^0 &{\mathcal{X}_\mathit{Kh}}^{-5}(K)&\sim S^0\\ {\mathcal{X}_\mathit{Kh}}^{-7}(K)&\sim {\Sigma}^{-3}(S^0\vee S^1) &{\mathcal{X}_\mathit{Kh}}^{-9}(K)&\sim {\Sigma}^{-6}({\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2)\\ {\mathcal{X}_\mathit{Kh}}^{-11}(K)&\sim {\Sigma}^{-5}(S^0\vee S^1\vee S^1\vee S^2) &{\mathcal{X}_\mathit{Kh}}^{-13}(K)&\sim {\Sigma}^{-8}({\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1\vee{\Sigma}{\mathbb{R}\mathrm{P}}^2)\\ {\mathcal{X}_\mathit{Kh}}^{-15}(K)&\sim {\Sigma}^{-10}({\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2\vee S^4) &{\mathcal{X}_\mathit{Kh}}^{-17}(K)&\sim {\Sigma}^{-8}(S^0\vee S^1)\\ {\mathcal{X}_\mathit{Kh}}^{-19}(K)&\sim {\Sigma}^{-10}{\mathbb{R}\mathrm{P}}^2 &{\mathcal{X}_\mathit{Kh}}^{-21}(K)&\sim {\Sigma}^{-9}S^0. \end{aligned}$$ The Kinoshita-Terasaka knot $K_1=\text{K}11n42$ and its Conway mutant $K_2=\text{K}11n34$ have identical Khovanov homology. From , we see that ${\mathit{St}}(K_1)={\mathit{St}}(K_2)$. Therefore, by , they have the same Khovanov homotopy type. The Kinoshita-Terasaka knot and its Conway mutant is an example of a pair of links that are not distinguished by their Khovanov homologies. In an earlier version of the paper, we asked: \[ques:is-ours-interesting\] Does there exist a pair of links $L_1$ and $L_2$ with ${\mathit{Kh}}_{{\mathbb{Z}}}(L_1)={\mathit{Kh}}_{{\mathbb{Z}}}(L_2)$, but ${\mathcal{X}_\mathit{Kh}}(L_1)\not\sim {\mathcal{X}_\mathit{Kh}}(L_2)$? We provided the following partial answer: The links $L_1=\text{L}11n383$ and $L_2=\text{L}11n393$ have isomorphic Khovanov homology in quantum grading $(-3)$: ${\mathit{Kh}}^{-2,-3}_{{\mathbb{Z}}}={\mathbb{Z}}^3$, ${\mathit{Kh}}^{-1,-3}_{{\mathbb{Z}}}={\mathbb{Z}}^3\oplus{\mathbb{F}}_2^4$, ${\mathit{Kh}}^{0,-3}_{{\mathbb{Z}}}={\mathbb{Z}}^2$, [@KAT-kh-knotatlas]. However, ${\mathit{St}}(L_1)(-2,-3)=(0,2,0,0)$ and ${\mathit{St}}(L_2)(-2,-3)=(0,1,0,0)$ (); therefore, ${\mathcal{X}_\mathit{Kh}}^{-3}(L_1)$ is not stably homotopy equivalent to ${\mathcal{X}_\mathit{Kh}}^{-3}(L_2)$. Since the first version of this paper, C. Seed has independently computed the $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$ action on the Khovanov homology of knots, and has answered in the affirmative in [@See-kh-squares]. We conclude with an observation and a question. Since all prime links up to $11$ crossings satisfy the conditions of , their Khovanov homotopy types are wedges of various suspensions of $S^0$, ${\mathbb{R}\mathrm{P}}^2$, ${\mathbb{C}\mathrm{P}}^2$, ${\mathbb{R}\mathrm{P}}^5/{\mathbb{R}\mathrm{P}}^2$, ${\mathbb{R}\mathrm{P}}^4/{\mathbb{R}\mathrm{P}}^1$ and ${\mathbb{R}\mathrm{P}}^2\wedge{\mathbb{R}\mathrm{P}}^2$; and this wedge sum decomposition is unique since it is determined by the Khovanov homology ${\mathit{Kh}}_{{\mathbb{Z}}}$ and the function ${\mathit{St}}$. already exhibits all but one of these summands; it does not have a ${\mathbb{C}\mathrm{P}}^2$ summand. A careful look at reveals that neither does any other link up to $11$ crossings. The conspicuous absence of ${\mathbb{C}\mathrm{P}}^2$ naturally leads to the following question. Does there exist a link $L$ for which ${\mathcal{X}_\mathit{Kh}}^j(L)$ contains ${\Sigma}^m{\mathbb{C}\mathrm{P}}^2$ in some[^4] wedge sum decomposition, for some $j,m$? 
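For reference, the rank bookkeeping in the definition of ${\mathit{St}}$ can be carried out in a few lines of code. The sketch below (plain Python, written for this exposition; it is not the Sage code linked above, and all function names are ours) computes ${\mathit{St}}(i,j)$ from matrices representing $\operatorname{Sq}^1$ and $\operatorname{Sq}^2$ over ${\mathbb{F}}_2$, using $\dim(U\cap W)=\dim U+\dim W-\dim(U+W)$ for the two intersection ranks.

```python
# Sketch of the computation of St(i,j): matrices are lists of rows over F_2
# and act on column vectors.  Illustration only; the data in the table below
# were computed with the Sage programs referenced above.

def rank_f2(mat):
    """Rank over F_2, by Gaussian elimination on a copy of the matrix."""
    rows = [row[:] for row in mat]
    ncols = len(rows[0]) if rows else 0
    rank = 0
    for col in range(ncols):
        hit = next((r for r in range(rank, len(rows)) if rows[r][col] % 2), None)
        if hit is None:
            continue
        rows[rank], rows[hit] = rows[hit], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] % 2:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank

def kernel_basis_f2(mat, ncols):
    """Matrix with ncols rows whose columns form a basis of the kernel of
    `mat`, viewed as a linear map on column vectors of length ncols."""
    rows = [row[:] for row in mat]
    pivots = []                       # (row, column) positions of the pivots
    for col in range(ncols):
        prow = len(pivots)            # next free pivot row
        hit = next((r for r in range(prow, len(rows)) if rows[r][col] % 2), None)
        if hit is None:
            continue
        rows[prow], rows[hit] = rows[hit], rows[prow]
        for r in range(len(rows)):
            if r != prow and rows[r][col] % 2:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[prow])]
        pivots.append((prow, col))
    pivot_cols = {c for (_, c) in pivots}
    basis = []
    for j in range(ncols):            # one basis vector per free column
        if j in pivot_cols:
            continue
        v = [0] * ncols
        v[j] = 1
        for (r, c) in pivots:
            v[c] = rows[r][j] % 2
        basis.append(v)
    return [[v[i] for v in basis] for i in range(ncols)]

def mul_f2(A, B):
    """Matrix product over F_2; A is m-by-k and B is k-by-n, as lists of rows."""
    k, n = len(B), (len(B[0]) if B else 0)
    return [[sum(row[t] * B[t][j] for t in range(k)) % 2 for j in range(n)]
            for row in A]

def hstack(A, B):
    """Place two matrices with equally many rows side by side."""
    return [ra + rb for ra, rb in zip(A, B)]

def st_tuple(sq1_i, sq2, sq1_i1, dim_i):
    """St(i,j) = (r2-r4, r1-r2-r3+r4, r4, r3-r4), where sq1_i represents
    Sq^1: Kh^{i,j} -> Kh^{i+1,j}, sq2 represents Sq^2: Kh^{i,j} -> Kh^{i+2,j},
    sq1_i1 represents Sq^1: Kh^{i+1,j} -> Kh^{i+2,j}, and dim_i is the
    F_2-dimension of Kh^{i,j}."""
    r1 = rank_f2(sq2)
    ker = kernel_basis_f2(sq1_i, dim_i)      # columns span the kernel of Sq^1_(i)
    sq2_ker = mul_f2(sq2, ker)               # Sq^2 restricted to that kernel
    r2 = rank_f2(sq2_ker)
    rk1 = rank_f2(sq1_i1)
    r3 = rk1 + r1 - rank_f2(hstack(sq1_i1, sq2))       # image intersection ranks
    r4 = rk1 + r2 - rank_f2(hstack(sq1_i1, sq2_ker))
    return (r2 - r4, r1 - r2 - r3 + r4, r4, r3 - r4)

# Toy check of the conventions (not an actual link): one-dimensional groups in
# degrees i, i+1, i+2 with Sq^1 = 0 and Sq^2 an isomorphism give St = (1,0,0,0),
# the pattern of a single (X-1) summand, i.e. a suspension of CP^2.
assert st_tuple(sq1_i=[[0]], sq2=[[1]], sq1_i1=[[0]], dim_i=1) == (1, 0, 0, 0)
```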
(Table: the values of ${\mathit{St}}(L)$, listed by link $L$, for the prime links with at most $11$ crossings for which ${\mathit{St}}(L)$ is not identically $(0,0,0,0)$; the two columns are $L$ and ${\mathit{St}}(L)$. The table data is not reproduced here.)

[^1]: RL was supported by an NSF grant number DMS-0905796 and a Sloan Research Fellowship.

[^2]: SS was supported by a Clay Mathematics Institute Research Fellowship.

[^3]: It is also a $\langle k-1\rangle$-manifold in the sense of [@Lau-top-cobordismcorners].

[^4]: Wedge sum decompositions are in general not unique.
--- abstract: | As the groupoid model of Hofmann and Streicher proves, identity proofs in intensional Martin-Löf type theory cannot generally be shown to be unique. Inspired by a theorem by Hedberg, we give some simple characterizations of types that do have unique identity proofs. A key ingredient in these constructions are weakly constant endofunctions on identity types. We study such endofunctions on arbitrary types and show that they always factor through a propositional type, the *truncated* or *squashed* domain. Such a factorization is impossible for weakly constant functions in general (a result by Shulman), but we present several non-trivial cases in which it can be done. Based on these results, we define a new notion of anonymous existence in type theory and compare different forms of existence carefully. In addition, we show possibly surprising consequences of the judgmental computation rule of the truncation, in particular in the context of homotopy type theory. All the results have been formalized and verified in the dependently typed programming language Agda. address: - '[a]{}University of Nottingham, School of Computer Science, Nottingham NG8 1BB, UK' - '[b]{}University of Birmingham, School of Computer Science, Birmingham B15 2TT, UK' - '[c]{}Chalmers University, Department of Computer Science and Engineering, SE-412 96 Göteborg, Sweden' author: - Nicolai Kraus$^a$ - Martín Hötzel Escardó$^b$ - Thierry Coquand$^c$ - Thorsten Altenkirch$^d$ bibliography: - 'hjReferences.bib' title: Notions of Anonymous Existence --- [^1] [^2] [^3]

Introduction {#sec1:introduction}
============

Although the identity type ${\ensuremath{\mathsf{Id}_{}(a,b)}\xspace}$ is defined as an inductive type with only one single constructor ${\ensuremath{\mathsf{refl}_{}}\xspace}$, it is a concept in Martin-Löf type theory [@Martin-Lof-1972] [@Martin-Lof-1973] [@Martin-Lof-1979] that is hard to get intuition for. The reason is that it is, as a type family, parametrized twice over the same type, while the constructor only expects one argument: ${\ensuremath{\mathsf{refl}_{a}}\xspace} : {\ensuremath{a =_{} a}\xspace}$, where ${\ensuremath{a =_{} b}\xspace}$ is an alternative notation for ${\ensuremath{\mathsf{Id}_{}(a,b)}\xspace}$. In fact, it is the simplest and most natural occurrence of this phenomenon. A result by Hofmann and Streicher [@hofmannStreicher_groupoids] is that we cannot prove ${\ensuremath{\mathsf{refl}_{a}}\xspace}$ to be the only inhabitant of the type ${\ensuremath{a =_{} a}\xspace}$, that is, the principle of *unique identity proofs* (UIP) is not derivable. Some time later, Hedberg [@hedberg1998coherence] formulated a sufficient condition on a type to satisfy UIP, namely that its equality is decidable. The core argument of the proof by Hofmann and Streicher is that types can be interpreted as groupoids, i.e. categories of which all morphisms are invertible. Their conjecture that the construction could also be performed using higher groupoids was only made precise more than ten years later. Awodey and Warren [@awodeyWarren_HTmodelsOfIT] as well as, independently, Voevodsky [@voevodsky_equivalenceAndUnivalence] explained that types can be regarded as, roughly speaking, topological spaces. Consequently, an exciting new direction of constructive formal mathematics attracted researchers from originally very separated areas of mathematics, and *homotopy type theory* [@HoTTbook] was born.
The current article is not only on homotopy type theory, but on Martin-Löf type theory in general, even though we expect that the results are most interesting in the context of homotopy type theory. We start with Hedberg’s Theorem [@hedberg1998coherence] and describe multiple simple ways of strengthening it, one of them involving *propositional truncation* [@HoTTbook], also known as *bracket types* [@awodeyBauer_bracketTypes] or *squash types* [@Con85]. Propositional truncation is a concept that provides a sequel to the *Propositions-as-Types* paradigm [@Howard80]. If we regard a type as the correspondent of a mathematical statement, a *proposition*, and its inhabitants as proofs thereof, we have to notice that there is a slightly unsatisfactory aspect. A proof of a proposition in mathematics is usually not thought to contain any information apart from the fact that the proposition is true; however, a type can have any number of inhabitants, and therefore any number of witnesses of its truth. Hence it seems natural to regard only *some* types as propositions, namely those which have at most one inhabitant. The notion of propositional truncation assigns to a type the proposition that this type is inhabited. To make the connection clearer, these types are even called *propositions*, or *h-propositions*, in homotopy type theory. With this in mind, we want to be able to say that a type is inhabited without having to reveal an inhabitant explicitly. This is exactly what propositional truncation ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}} : {\ensuremath{\mathcal{U}}\xspace}\to {\ensuremath{\mathcal{U}}\xspace}$ (where we write ${\ensuremath{\mathcal{U}}\xspace}$ for the universe of types) makes possible. On the other hand, should $A$ have only one inhabitant up to the internal equality, this inhabitant can be constructed from an inhabitant of ${{\mathopen{}\left\Vert A\right\Vert_{}\mathclose{}}}$. This is a crucial difference between propositional truncation and double negation. We consider a weak version of ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}}$ which does not have judgmental computation properties. After discussing direct generalizations of Hedberg’s Theorem, we attempt to transfer the results from the original setting, where they talk about equality types (or *path spaces*), to arbitrary types. This leads to a broad discussion of *weakly constant functions*: we say that $f: A \to B$ is *weakly constant* if it maps any two elements of $A$ to equal elements of $B$. The attribute *weakly* comes from the fact that we do not require these actual equality proofs to fulfil further conditions, and a weakly constant function does not necessarily appear to be constant in the topological models. For exactly this reason, it is in general not possible to factor the function $f$ through ${{\mathopen{}\left\Vert A\right\Vert_{}\mathclose{}}}$; however, we can do it in certain special cases, and we analyze why. This has, for example, the consequence that the truncated sum of two propositions already has the universal property of their *join*, which is defined as a *higher inductive type* in homotopy type theory. Particularly interesting are weakly constant *endofunctions*. We show that these can always be factored through the propositional truncation, based on the observation that the type of fixed points of such a function is a proposition. This allows us to define a new notion of existence which we call *populatedness*.
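In symbols, using the $\Sigma$/$\Pi$-notation introduced in Section \[sec2:preliminaries\] (the names $\mathsf{wconst}$ and $\mathsf{fix}$ are merely local shorthand for this overview), weak constancy of a function $f : A \to B$ and the type of fixed points of an endofunction $f : A \to A$ are the types $$\mathsf{wconst}(f) \;{\vcentcolon\equiv}\; \Pi_{x,y:A}\, f(x) =_{B} f(y) \qquad\text{and}\qquad \mathsf{fix}(f) \;{\vcentcolon\equiv}\; \Sigma_{x:A}\, x =_{A} f(x).$$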
We say that $A$ is populated if any weakly constant endofunction on $A$ has a fixed point. This property is propositional and behaves very similarly to ${{\mathopen{}\left\Vert A\right\Vert_{}\mathclose{}}}$, but we show that it is strictly weaker. On the other hand, it is strictly stronger than the double negation $\neg\neg A$, another notion of existence which, however, is often not useful as it generally only allows one to prove negative statements. It is worth emphasizing that our populatedness is not a component that has to be *added* to type theory, but a notion that can be *defined* internally. We strongly suspect that this is not the case for even the weak version of propositional truncation, but we lack a formal proof. It turns out to be interesting to consider the assumption that every type has a weakly constant endofunction. The empty type has a trivial such endofunction, and so does a type of which we know an explicit inhabitant; however, from the assumption that a type has a weakly constant endofunction, we have no way of knowing in which case we are. In a minimalistic theory, we do not think that this assumption implies excluded middle. However, it implies that all equalities are decidable, i.e. a strong version of excluded middle holds for equalities. Finally, we show that the judgmental computation rule of the propositional truncation, if it is assumed, does have some interesting consequences for the theory. One of our observations is that we can construct a term ${\operatorname{\mathsf{myst}}}_{\ensuremath{\mathbb{N}}\xspace}$ such that ${\operatorname{\mathsf{myst}}}_{\ensuremath{\mathbb{N}}\xspace}({{\mathopen{}\left|n\right|_{}^{}\mathclose{}}})$ is judgmentally equal to $n$ for any natural number $n$, which shows that the projection map ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : {\ensuremath{\mathbb{N}}\xspace}\to {{\mathopen{}\left\Vert {\ensuremath{\mathbb{N}}\xspace}\right\Vert_{}\mathclose{}}}$ does not lose meta-theoretic information, in a certain sense. Some parts of Sections \[sec3:hedbergs-theorem\], \[sec4:coll\], \[sec5:populatedness\] and \[sec6:taboos\] of this article have been published in our previous conference paper [@krausgeneralizations].

Formalization {#formalization .unnumbered}
-------------

We have formalized [@krausEscardoEtAll_existenceFormalisation] all of our results in the dependently typed programming language and proof assistant *Agda* [@Norell2007Towards]. It is available in browser-viewable format and as plain source code on the first-named author’s academic homepage. All proofs type-check in Agda version 2.4.2.5. As most of our results are internal statements in type theory, they can be formalized directly in a readable way, understandable even for readers who do not have any experience with the specific proof assistant or formalized proofs in general. We have tried our best and would like to encourage the reader to have a look at the accompanying formalization.

Contents {#contents .unnumbered}
--------

In Section \[sec2:preliminaries\], we specify the type theory that we work in, a standard version of Martin-Löf type theory. We also state basic definitions, but we try to use standard notation and we hope that all notions are as intuitive as possible. We then revisit Hedberg’s Theorem in Section \[sec3:hedbergs-theorem\] and formulate several generalizations. Next, we move on to explore weakly constant functions between general types.
We show that a weakly constant endofunction has a propositional type of fixed points and factors through $\left\Vert - \right\Vert$ in Section \[sec4:coll\]. It is known that the factorization cannot always be done for functions between different types, but we discuss some cases in which it is possible in Section \[sec4c:factorizing\]. Section \[sec5:populatedness\] is devoted to *populatedness*, a new definable notion of anonymous existence in type theory, based on our previous observations about weakly constant endofunctions. We carefully examine the differences between inhabitance, populatedness, propositional truncation and double negation, all of which are notions of existence, in Section \[sec6:taboos\]. In particular, we show that if every type has a weakly constant endofunction, then all equalities are decidable. Finally, Section \[sec9:judgm-beta\] discusses consequences of the judgmental computation rule of propositional truncation, and Section \[sec10:open\] presents a summary and questions to which we do not know the answer.

Preliminaries {#sec2:preliminaries}
=============

Our setting is a standard version of intensional Martin-Löf type theory (MLTT) with type universes that are closed under coproducts, dependent sums, dependent products and identity types. We give a very rough specification of these constructions below. For a rigorous treatment, we refer to our main reference [@HoTTbook Appendix A.1 or A.2]. We use standard notation whenever it is available. If it improves the readability, we allow ourselves to implicitly uncurry functions and write $f(x,y)$ instead of $f(x)(y)$ or $f\,x\,y$.

*Type Universes.* MLTT usually comes equipped with a hierarchy $\mathcal{U}_0, \mathcal{U}_1, \mathcal{U}_2, \ldots$ of universes, where $\mathcal{U}_{n+1}$ is the type of $\mathcal{U}_n$. With very few exceptions, we only need one universe $\mathcal{U}$ and therefore omit the index. $\mathcal{U}$ can be understood as a generic universe or, for simplicity, as the lowest universe $\mathcal{U}_0$. If we say that $X$ is a type, we mean $X:\mathcal{U}$, possibly in some context.

*Coproducts.* If $X$ and $Y$ are types, then so is $X+Y$. If we have $x:X$ or $y:Y$, we get $\mathsf{inl}\, x : X+Y$ or $\mathsf{inr}\, y : X+Y$, respectively. To prove a statement for all elements in $X+Y$, it is enough to consider those that are of one of these two forms.

*Dependent Pairs.* If $X$ is a type and $Y : X \to \mathcal{U}$ a family of types, indexed over $X$, then $\Sigma_{X} Y$ is the corresponding *dependent pair type*, sometimes called a *dependent sum* or just *$\Sigma$-type*. For $x:X$ and $y:Y(x)$, we have $(x,y): \Sigma_{X} Y$, and to eliminate out of $\Sigma_{X} Y$, it is enough to consider elements of this form. We prefer to write $\Sigma_{x:X} Y(x)$ instead of $\Sigma_{X} Y$, hoping to increase readability. Instead of $\Sigma_{x_1:X} \Sigma_{x_2:X} Y(x_1,x_2)$, we write $\Sigma_{x_1,x_2:X} Y(x_1,x_2)$. In the special case that $Y$ does not depend on $X$, it is standard to write $X \times Y$.

*Dependent Functions.* Given $X: \mathcal{U}$ and $Y:X \to \mathcal{U}$ as before, we have the type $\Pi_{X} Y$, called the *dependent function type* or $\Pi$-type.
It is sometimes also referred to as the *dependent product type*, although that notion can be confusing as it would fit for $\Sigma$-types as well. If, for any given $x:X$, the term $t$ is an element of $Y(x)$, we have $\lambda x .\, t : \Pi_{X} Y$. As for $\Sigma$-types, we write $\Pi_{x:X} Y(x)$ instead of $\Pi_{X} Y$, and, if $Y$ does not depend on $X$, we write $X \to Y$. Instead of $\Pi_{x_1:X} \Pi_{x_2:X} Y(x_1,x_2)$, we write $\Pi_{x_1,x_2:X} Y(x_1,x_2)$.

*Identity Types.* Given a type $X$ with elements $x,y:X$, we have the *identity type* or the type of *equalities*, written $x =_{X} y$. An inhabitant $p : x =_{X} y$ is thus called an *equality*, an *equality proof*, or, having the interpretation of a type as a space in mind, a *path* from $x$ to $y$. Similarly, $x =_{X} y$ is called a *path space*. In the past, $p$ was often called a *propositional* equality. We avoid this terminology and reserve the word "propositional" for types with at most one element, as explained in the introduction and in Definition \[def:generalnotions\]. The only introduction rule for the identity types is that, for any $x:X$, there is $\mathsf{refl}_{x} : x =_{X} x$. The elimination rule (called *J*) says that, if $P : (\Sigma_{x,y:X} \; x =_{X} y) \to \mathcal{U}$ is a type family, it suffices to construct an inhabitant of $\Pi_{x:X} P(x,x,\mathsf{refl}_{x})$ in order to get an element of $P(p)$ for any $p : \Sigma_{x,y:X} \; x =_{X} y$. We explicitly do not assume other elimination rules such as *Streicher's K* or *uniqueness of identity proofs (UIP)* [@Streicher93]. If the common type of $x,y$ can be inferred or is unimportant, we write $x = y$ instead of $x =_{X} y$.

In contrast to the identity type, *definitional* (also called *judgmental*) equality is a meta-level concept. It refers to two terms, rather than two (hypothetical) elements, with the same $\beta$ (and, sometimes, $\eta$ in a restricted sense) normal form. Recently, it has become standard to use the symbol ${\equiv}$ for judgmental equality in order to use ${{=}}$ solely for the type of equalities [@HoTTbook]. Note that the introduction rule of the latter says precisely that we have a canonical equality proof for any two judgmentally equal terms, viewed as elements of some type. For definitions, we use the notation ${\vcentcolon\equiv}$.

Applying the eliminator *J* is also referred to as *path induction* [@HoTTbook]. A variant of *J* that is sometimes more useful is due to Paulin-Mohring [@Moh93]: given a point $x:X$ and a type family $P : (\Sigma_{y:X} \; x =_{X} y) \to \mathcal{U}$, it is enough to construct an inhabitant of $P(x,\mathsf{refl}_{x})$ in order to construct an inhabitant of $P(y,q)$ for any pair $(y,q)$. This elimination principle, called *based path induction*, is equivalent to *J*. As a basic example, we show that equality proofs satisfy the *groupoid laws* [@hofmannStreicher_groupoids], where reflexivity plays the role of identity morphisms.
If we have $p : x =_{X} y$ and $q : y =_{X} z$, we can construct a path $p \mathbin{\centerdot} q : x =_{X} z$ (the *composition* of $p$ and $q$): by based path induction, it is enough to do this under the assumption that $(z,q) : \Sigma_{z:X} \; y =_{X} z$ is $(y, \mathsf{refl}_{y})$. But in that case, the composition $p \mathbin{\centerdot} q$ is given by $p$. Similarly, for $p : x =_{X} y$, there is $p^{-1} : y =_{X} x$. It is easy to see (again by path induction) that the types $p \mathbin{\centerdot} \mathsf{refl}_{y} = p$ and $\mathsf{refl}_{x} \mathbin{\centerdot} p = p$ as well as $p \mathbin{\centerdot} p^{-1} = \mathsf{refl}_{x}$ are inhabited, and similarly, so are all the other types that are required to give a type the structure of a groupoid.

An important special case of the eliminator *J* is *substitution* or *transport*: if $P : X \to \mathcal{U}$ is a family of types and $x,y : X$ are two elements (or points) that are equal by $p : x =_{X} y$, then an element $e : P(x)$ can be *transported along the path $p$* to get an element of $P(y)$, written $$p_{*}(e) : P(y).$$ Another useful function, similarly easily derived from *J*, is the following: if $f : X \to Y$ is a function and $p : x =_{X} y$ a path, we get an inhabitant of $f(x) = f(y)$ in $Y$, $$\mathsf{ap}_{f}\, p : f(x) = f(y).$$ Note that we omit the arguments $x$ and $y$ in the notation of $\mathsf{ap}_{f}$. Identity types also enable us to talk about isomorphism, or (better) *equivalence*, of types.
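Before turning to equivalences, it may be useful to record how these operations look in the proof assistant. The following self-contained fragment is our own minimal sketch (module and lemma names are hypothetical and not excerpted from the accompanying formalization); pattern matching on `refl` plays the role of (based) path induction.

```agda
{-# OPTIONS --without-K #-}
module PathAlgebra where

-- Identity type with its single constructor refl.
data _≡_ {A : Set} (x : A) : A → Set where
  refl : x ≡ x

-- Composition of paths, by induction on the second argument.
_∙_ : {A : Set} {x y z : A} → x ≡ y → y ≡ z → x ≡ z
p ∙ refl = p

-- Inverse of a path.
_⁻¹ : {A : Set} {x y : A} → x ≡ y → y ≡ x
refl ⁻¹ = refl

-- Transport (substitution) along a path.
transport : {A : Set} (P : A → Set) {x y : A} → x ≡ y → P x → P y
transport P refl e = e

-- Action on paths of a function f (written ap_f in the text).
ap : {A B : Set} (f : A → B) {x y : A} → x ≡ y → f x ≡ f y
ap f refl = refl

-- One of the groupoid laws: refl is a (judgmental) right unit.
∙-unit-right : {A : Set} {x y : A} (p : x ≡ y) → (p ∙ refl) ≡ p
∙-unit-right p = refl
```

The remaining groupoid laws are obtained by the same kind of one-line pattern matches.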
We say that $X$ and $Y$ are equivalent, written $X \simeq Y$, if there are functions in both directions that are inverse to each other, $$\begin{aligned} &f : X \to Y \label{eq:f-part-of-equivalence} \\ &g : Y \to X \label{eq:g-part-of-equivalence} \\ &p : \Pi_{x:X} \; g(f(x)) =_{X} x \\ &q : \Pi_{y:Y} \; f(g(y)) =_{Y} y.\end{aligned}$$ Technically, $(f,g,p,q)$ only constitute what is usually called a *type isomorphism*, but from any such isomorphism, an equivalence (in the sense of homotopy type theory) can be constructed; and the only difference is that an equivalence requires a certain coherence between the components $p$ and $q$, which will not be important for us. In this sense, we do not distinguish between isomorphisms and equivalences, and simply prefer the latter terminology. For details, we refer to [@HoTTbook Chapter 4]. We call types *logically equivalent*, written $X \Leftrightarrow Y$, if there are functions in both directions (that is, we only have the components \[eq:f-part-of-equivalence\] and \[eq:g-part-of-equivalence\]). We write $X \Leftrightarrow Y \Leftrightarrow Z$ if $X,Y,Z$ are pairwise logically equivalent, and $X \Rightarrow Y \Rightarrow Z$ as a shorthand notation for $(X \to Y) \times (Y \to Z)$.

Equivalent types share all internalizable properties. In fact, Voevodsky's univalence axiom (e.g. [@HoTTbook], [@voevodsky_equivalenceAndUnivalence]) has the consequence that equivalent types are equal. For most of this article, we do not need to assume the univalence axiom; however, it will play some role in Section \[sec9:judgm-beta\]. We sometimes use other additional principles (namely function extensionality and propositional truncation, as introduced later). However, we treat them as assumptions rather than parts of the core theory and state clearly in which cases they are used.

In order to support the presentation from the next section on, we define a couple of notions. Our hope is that all of these are as intuitive as possible, if not already known. The only notion that is possibly ambiguous is *weak constancy*, meaning that a function maps any pair of possible arguments to equal values.

\[def:generalnotions\] We say that a type $X$ is *propositional*, or is a *proposition*, if all its inhabitants are equal: $$\operatorname{\mathsf{isProp}} X \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; x = y.$$ It is a well-known fact that the path spaces of a propositional type are not only inhabited but also propositional themselves.
This stronger property is called *contractible*, $$\operatorname{\mathsf{isContr}} X \; {\vcentcolon\equiv}\; X \times \operatorname{\mathsf{isProp}} X.$$ It is easy to see that any contractible type is equivalent to the unit type. An important well-known lemma is that *singletons*, sometimes called *path-to*/*path-from* types, are contractible: for any $a_0 : A$, the type $$\Sigma_{a:A} \; a_0 = a$$ is contractible, as any inhabitant is by based path induction easily seen to be equal to $(a_0, \mathsf{refl}_{a_0})$.

Further, $X$ satisfies *UIP*, or is a *set*, if its path spaces are all propositional: $$\operatorname{\mathsf{isSet}} X \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; \operatorname{\mathsf{isProp}} (x = y).$$ $X$ is *decidable* if it is either inhabited or empty, $$\mathsf{decidable}\, X \; {\vcentcolon\equiv}\; X + \neg X.$$ Accordingly, we say that $X$ has *decidable equality* if the equality type of any two inhabitants of $X$ is decidable. Based on the terminology in [@mines_constAlgebra], we also call a type with decidable equality *discrete*: $$\mathsf{isDiscrete}\, X \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; \mathsf{decidable}\, (x = y).$$ A function (synonymously, a map) $f : X \to Y$ is *weakly constant*, or *1-constant*, if it maps any two elements of $X$ to equal inhabitants of $Y$: $$\operatorname{\mathsf{wconst}} f \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; f(x) = f(y).$$ As weak (or 1-) constancy is the only notion of constancy that we consider in this article (if we ignore factorizability through $\left\Vert - \right\Vert$), we call such a function $f$ just *constant* for simplicity. However, note that this notion is indeed very weak as soon as we consider functions into types that are not sets, as we will see later. It will be interesting to consider the type of constant endomaps on a given type: $$\operatorname{\mathsf{constEndo}} X \; {\vcentcolon\equiv}\; \Sigma_{f: X \to X} \; \operatorname{\mathsf{wconst}} f.$$ Finally, we may say that $X$ has constant endomaps on all path spaces: $$\operatorname{\mathsf{pathConstEndo}} X \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; \operatorname{\mathsf{constEndo}} (x = y).$$ For some statements, but only if clearly indicated, we use *function extensionality*.
This principle says that two functions $f, g$ of the same type $\Pi_{X} Y$ are equal as soon as they are pointwise equal: $$\label{eq:naive-funext} \left(\Pi_{x : X} \; f(x) = g(x)\right) \to f = g.$$ An important equivalent formulation due to Voevodsky [@voe_coqLib] is that the class of propositional types is closed under $\Pi$; more precisely, $$\label{eq:voe-funext} \left(\Pi_{x : X} \; \operatorname{\mathsf{isProp}} \left(Y \, x\right)\right) \, \to \, \operatorname{\mathsf{isProp}} \left(\Pi_{X} Y\right).$$ In the case of non-dependent functions, this means that $X \to Y$ is propositional as soon as $Y$ is.

A principle that we do not assume, but which will appear in some of our discussions, is the *law of excluded middle*, in one form for propositions and in another for general types [@HoTTbook Chapter 3.4]. The first form says that every proposition is decidable, while the second says the same without the restriction to propositions: $$\begin{aligned} &\mathsf{LEM} \; {\vcentcolon\equiv}\; \Pi_{P : \mathcal{U}} \; (\operatorname{\mathsf{isProp}} P) \to P + \neg P \\ &\mathsf{LEM}_{\infty} \; {\vcentcolon\equiv}\; \Pi_{X : \mathcal{U}} \; X + \neg X.\end{aligned}$$ Note that $\mathsf{LEM}_{\infty}$ can be considered the natural formulation under the *Propositions-as-Types* view. However, the view that we adopt in this work (as in homotopy type theory) is that only type-theoretic propositions in the sense of Definition \[def:generalnotions\] really represent mathematical propositions; general types carry more structure. In particular, $\mathsf{LEM}_{\infty}$ includes a very strong form of choice which is inconsistent with the univalence axiom of homotopy type theory. Therefore, we consider $\mathsf{LEM}$ the "correct" formulation in our work.
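For readers who want to follow the development in the proof assistant, the notions of Definition \[def:generalnotions\] can be transcribed almost verbatim. The following fragment is our own sketch on top of the Agda standard library (module and definition names are ours and need not coincide with the accompanying formalization):

```agda
{-# OPTIONS --without-K #-}
module BasicNotions where

open import Relation.Binary.PropositionalEquality using (_≡_)
open import Data.Product using (Σ; _×_)
open import Data.Sum using (_⊎_)
open import Relation.Nullary using (¬_)

-- Propositions: all inhabitants are equal.
isProp : Set → Set
isProp X = (x y : X) → x ≡ y

-- Contractible types: inhabited propositions.
isContr : Set → Set
isContr X = X × isProp X

-- Sets (UIP): all path spaces are propositional.
isSet : Set → Set
isSet X = (x y : X) → isProp (x ≡ y)

-- Decidability, and decidable equality (discreteness).
decidable : Set → Set
decidable X = X ⊎ ¬ X

isDiscrete : Set → Set
isDiscrete X = (x y : X) → decidable (x ≡ y)

-- Weak constancy, constant endomaps, and constant endomaps on path spaces.
wconst : {X Y : Set} → (X → Y) → Set
wconst {X} f = (x y : X) → f x ≡ f y

constEndo : Set → Set
constEndo X = Σ (X → X) (λ f → wconst f)

pathConstEndo : Set → Set
pathConstEndo X = (x y : X) → constEndo (x ≡ y)
```

Here `_⊎_` plays the role of the coproduct $+$ and `¬_` that of negation; universe polymorphism is omitted to keep the sketch small.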
We do not explicitly use this fact, but it may be helpful to note that, assuming function extensionality, all of the above definitions that are called "is…" ($\operatorname{\mathsf{isProp}} X$, $\operatorname{\mathsf{isContr}} X$, $\operatorname{\mathsf{isSet}} X$, $\mathsf{isDiscrete}\, X$) are propositional in the sense of Definition \[def:generalnotions\]. For $\operatorname{\mathsf{isProp}} X$, $\operatorname{\mathsf{isContr}} X$, $\operatorname{\mathsf{isSet}} X$, this is proved in [@HoTTbook Theorem 7.1.10], and for $\mathsf{isDiscrete}\, X$, this is a consequence of Hedberg's Theorem that we discuss in Section \[sec3:hedbergs-theorem\]. It will also follow that $\operatorname{\mathsf{pathConstEndo}} X$ is propositional. The statements of $\mathsf{LEM}$ and \[eq:voe-funext\] are propositional as well, while $\operatorname{\mathsf{wconst}} f$, $\operatorname{\mathsf{constEndo}} X$, \[eq:naive-funext\], and $\mathsf{LEM}_{\infty}$ are in general not propositional.

Hedberg's Theorem {#sec3:hedbergs-theorem}
=================

Before discussing possible generalizations, we want to state Hedberg's Theorem.

\[thm:hedberg\] Every discrete type satisfies UIP, $$\mathsf{isDiscrete}\, X \to \operatorname{\mathsf{isSet}} X.$$ We briefly give Hedberg's original proof, consisting of two steps.
\[lem:discr2pathcoll\] If a type has decidable equality, its path spaces have constant endofunctions: $$\mathsf{isDiscrete}\, X \to \operatorname{\mathsf{pathConstEndo}} X.$$ Given inhabitants $x$ and $y$ of $X$, we get by assumption either an inhabitant of $x = y$ or an inhabitant of $\neg (x = y)$. In the first case, we construct the required constant function $(x = y) \to (x = y)$ by mapping everything to this given path. In the second case, we have a proof of $\neg (x = y)$, so that the path space is empty and the canonical function on it is constant automatically.

\[lem:pathcoll2set\] If the path spaces of a type have constant endomaps, the type satisfies UIP: $$\operatorname{\mathsf{pathConstEndo}} X \to \operatorname{\mathsf{isSet}} X.$$ Assume $f$ is a parametrized constant endofunction on the path spaces, meaning that, for any $x,y:X$, we have a constant function $f_{x,y} : x = y \to x = y$. Let $p$ be a path from $x$ to $y$. We claim that $$\label{eq:pathcoll-set-proof} p = (f_{x,x}(\mathsf{refl}_{x}))^{-1} \mathbin{\centerdot} f_{x,y}(p).$$ By path induction, we only have to give a proof if the triple $(x,y,p)$ is in fact $(x, x, \mathsf{refl}_{x})$, in which case \[eq:pathcoll-set-proof\] is one of the groupoid laws that equality satisfies. Using the fact that $f$ is constant on every path space, the right-hand side of the above equality is independent of $p$, and in particular, equal to any other path of the same type.

Hedberg's proof [@hedberg1998coherence] is just the concatenation of the two lemmata. A slightly more direct proof can be found in the HoTT Coq repository [@hott_coqLib] and in a post by the first-named author on the HoTT blog [@nicolai:blog]. Let us analyse the ingredients of the original proof. Lemma \[lem:discr2pathcoll\] uses the rather strong assumption of decidable equality. In contrast, the assumption of Lemma \[lem:pathcoll2set\] is logically equivalent to its conclusion, so that there is no space for a strengthening. We include a proof of this simple claim in Theorem \[tfae\] below and concentrate on weakening the assumption of Lemma \[lem:discr2pathcoll\].

Let us first introduce the notions of *stability* and *separatedness*. For a type $X$, define $$\begin{aligned} &\operatorname{\mathsf{stable}} X \; {\vcentcolon\equiv}\; \neg\neg X \to X, \\ &\operatorname{\mathsf{separated}} X \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; \operatorname{\mathsf{stable}} (x = y).\end{aligned}$$ We can see $\operatorname{\mathsf{stable}} X$ as a classical condition, similar to $\mathsf{decidable}\, X {\equiv}X + \neg X$, but strictly weaker.
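One half of this comparison is a one-line exercise: every decidable type is stable, while the converse cannot be expected to hold in general. A minimal Agda-style sketch of the easy direction (ours; names hypothetical):

```agda
{-# OPTIONS --without-K #-}
module DecidableToStable where

open import Data.Sum using (_⊎_; inj₁; inj₂)
open import Data.Empty using (⊥; ⊥-elim)
open import Relation.Nullary using (¬_)

stable : Set → Set
stable X = ¬ ¬ X → X

-- A decidable type is stable: either we already have an inhabitant,
-- or the double-negation hypothesis refutes the refutation.
decidable-to-stable : {X : Set} → X ⊎ ¬ X → stable X
decidable-to-stable (inj₁ x)  _   = x
decidable-to-stable (inj₂ nx) nnx = ⊥-elim (nnx nx)
```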
Indeed, we get a first strengthening of Hedberg's Theorem as follows:

\[lemmaSepExtUip\] Assuming function extensionality, any separated type is a set, $$\operatorname{\mathsf{separated}} X \to \operatorname{\mathsf{isSet}} X.$$ There is, for any $x, y : X$, a canonical map $(x = y) \to \neg \neg (x = y)$. Composing this map with the proof that $X$ is separated yields an endofunction on the path spaces. With function extensionality, the first map has a propositional codomain, implying that the endofunction is constant and thereby fulfilling the requirements of Lemma \[lem:pathcoll2set\]. We remark that full function extensionality is actually not needed here. Instead, a weaker version that only works with the empty type is sufficient. Similar statements hold true for all further applications of extensionality in this paper.

In a constructive setting, the question of how to express that "there exists something" in a type $X$ is very subtle. One possibility is to ask for an inhabitant of $X$, but in many cases, this is too strong to be fulfilled. A second possibility, which corresponds to our above definition of *separated*, is to ask for a proof of $\neg \neg X$. Then again, this is very weak, and often too weak, as one can in general only prove negative statements from double-negated assumptions. This fact has inspired the introduction of *squash types* (Constable [@Con85]) and, similarly, *bracket types* (Awodey and Bauer [@awodeyBauer_bracketTypes]). These lie in between the two extremes mentioned above. In our intensional setting, we talk of *propositional truncations*, or *$-1$-truncations* [@HoTTbook Chapter 3.7]. For any type $X$, we postulate that there is a type $\left\Vert X \right\Vert$ that is a *proposition*, representing the statement that $X$ is inhabited. The rules are that if we have a proof of $X$, we can, of course, get a proof of $\left\Vert X \right\Vert$, and from $\left\Vert X \right\Vert$, we can conclude the same statements as we can conclude from $X$, but only if the actual representative of $X$ does not matter:

\[def:htruncation\] We say that a type theory has *weak propositional truncations* if, for every type $X$, we have a type $\left\Vert X \right\Vert : \mathcal{U}$ which satisfies the following properties: 1. $\left|-\right| : X \to \left\Vert X \right\Vert$ \[item:trunc1\] 2. $\operatorname{\mathsf{h_{tr}}}: \operatorname{\mathsf{isProp}} (\left\Vert X \right\Vert)$ \[item:trunc2\] 3.
$\operatorname{\mathsf{rec_{tr}}}: \Pi_{P:\mathcal{U}} \; \operatorname{\mathsf{isProp}} P \to (X \to P) \to \left\Vert X \right\Vert \to P.$\[item:trunc3\] Note that this amounts to saying that the operator $\left\Vert - \right\Vert$ is left adjoint to the inclusion of the subcategory of propositions into the category of all types. Therefore, it can be seen as the *propositional reflection*. For $x,y:\left\Vert X \right\Vert$, we will write $\operatorname{\mathsf{h_{tr}}}_{x,y}$ for the proof of $x =_{\left\Vert X \right\Vert} y$ that we get from $\operatorname{\mathsf{h_{tr}}}$.

In contrast to other sources [@HoTTbook], we do *not* assume the judgmental $\beta$-rule $$\operatorname{\mathsf{rec_{tr}}}(P, h, f, \left|x\right|) \; {\equiv}_\beta \; f(x) \label{eq:jdgm-beta-1}$$ as it is simply not necessary for our results and we do not want to make the theory stronger than required. This is the reason why we use the attribute *weak*. We do think that \[eq:jdgm-beta-1\] is often useful, but we also think it is interesting to make clear in which sense \[eq:jdgm-beta-1\] makes the theory actually stronger, rather than more convenient. We will discuss this in Section \[sec9:judgm-beta\]. A practical advantage of not assuming \[eq:jdgm-beta-1\] is that the truncation can be implemented in existing proof assistants more easily. Of course, the $\beta$-rule holds propositionally as both sides of the equation inhabit the same proposition. Adopting the terminology of [@HoTTbook Chapter 3.10], we say that $X$ is *merely* inhabited if $\left\Vert X \right\Vert$ is inhabited. We may also say that $X$ *merely* holds. However, we try to always be precise by giving the formal type expression to support the informal statement.

The *non-dependent eliminator* (or *recursion principle*, see [@HoTTbook Chapter 5.1]) $\operatorname{\mathsf{rec_{tr}}}$ lets us construct the *dependent* one (the *induction principle*):

(see [@HoTTbook Exercise 3.17]) \[lem:ind-from-rec\] The propositional truncation admits the following induction principle: Given a type $X$, a family $P : \left\Vert X \right\Vert \to \mathcal{U}$ with $h : \Pi_{z: \left\Vert X \right\Vert} \; \operatorname{\mathsf{isProp}} (P(z))$, a term $f : \Pi_{x:X} \; P(\left|x\right|)$ gives rise to an inhabitant of $\Pi_{z: \left\Vert X \right\Vert} \; P(z)$. We have a map $j : X \to \Sigma_{z:\left\Vert X \right\Vert} P(z)$ by $\lambda x .\, (\left|x\right|, f(x))$. Observe that the codomain of $j$ is a proposition, combining the fact that $\left\Vert X \right\Vert$ is one with $h$.
Therefore, we get $\left\Vert X \right\Vert \to \Sigma_{z:\left\Vert X \right\Vert} P(z)$, and this is sufficient, using that $y =_{\left\Vert X \right\Vert} z$ for any $y,z:\left\Vert X \right\Vert$. In analogy to the notation $\operatorname{\mathsf{rec_{tr}}}$, we may write $\operatorname{\mathsf{ind_{tr}}}$ for the term witnessing this induction principle. However, most of our further developments will not require the induction principle and will be proved with $\operatorname{\mathsf{rec_{tr}}}$.

Note that $\left\Vert - \right\Vert$ is functorial in the sense that any function $f : X \to Y$ gives rise to a function $\left\Vert f \right\Vert : \left\Vert X \right\Vert \to \left\Vert Y \right\Vert$, although the proof of $\left\Vert g \circ f \right\Vert = \left\Vert g \right\Vert \circ \left\Vert f \right\Vert$ requires function extensionality. It is easy to see that $\left\Vert - \right\Vert$ is a modality (an idempotent monad) in the sense of [@HoTTbook Chapter 7.7]. In particular, we have $\left\Vert \left\Vert X \right\Vert \right\Vert {\simeq}\left\Vert X \right\Vert$.

It is well-known that there is a type expression which is logically equivalent to the propositional truncation:

\[hinhabitedLargeSmall\] For any given $X : \mathcal{U}$, we have the logical equivalence $$\label{eq:hinhabitedLargeSmall} \left\Vert X \right\Vert \; \Leftrightarrow \; \Pi_{P : \mathcal{U}} \; \operatorname{\mathsf{isProp}} P \to (X \to P) \to P.$$ Under the assumption of function extensionality, the expression on the right-hand side of \[eq:hinhabitedLargeSmall\] is propositional, and the logical equivalence ($\Leftrightarrow$) is thus an actual equivalence. A potential problem with this expression is that it does not live in the universe $\mathcal{U}$. This size issue is the only thing that keeps us from using it as the definition for $\left\Vert X \right\Vert$. All other properties of the above Definition \[def:htruncation\] are satisfied, at least under the assumption of function extensionality. Voevodsky [@voe_coqLib] suggests *resizing rules* to resolve the issue.

The direction "$\rightarrow$" of the statement is no more than a rearrangement of the assumptions of property \[item:trunc3\] in the definition of $\left\Vert X \right\Vert$. For the other direction, we only need to instantiate $P$ with $\left\Vert X \right\Vert$ and observe that the properties \[item:trunc1\] and \[item:trunc2\] are exactly what is needed. With this definition at hand, we can provide an even stronger variant of Hedberg's Theorem.
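Before doing so, it may help to fix the interface of Definition \[def:htruncation\] in Agda notation. The sketch below is ours (the accompanying formalization may set this up differently): the truncation is postulated, and, as remarked above, the $\beta$-rule is then derivable propositionally rather than assumed judgmentally.

```agda
{-# OPTIONS --without-K #-}
module WeakTruncation where

open import Relation.Binary.PropositionalEquality using (_≡_)

isProp : Set → Set
isProp X = (x y : X) → x ≡ y

-- The data of Definition [def:htruncation], postulated.
postulate
  ∥_∥    : Set → Set
  ∣_∣    : {X : Set} → X → ∥ X ∥
  h-tr   : {X : Set} → isProp ∥ X ∥
  rec-tr : {X P : Set} → isProp P → (X → P) → ∥ X ∥ → P

-- The β-rule holds propositionally, since both sides inhabit the
-- proposition P; no judgmental computation rule is needed for this.
rec-tr-β : {X P : Set} (h : isProp P) (f : X → P) (x : X) →
           rec-tr h f ∣ x ∣ ≡ f x
rec-tr-β h f x = h (rec-tr h f ∣ x ∣) (f x)
```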
Completely analogously to the notions of stability and separatedness, we define what it means to say that a type has *split support* and is *h-separated*:

\[def:hsep\] For a type $X$, define $$\begin{aligned} &\operatorname{\mathsf{splitSup}} X \; {\vcentcolon\equiv}\; \left\Vert X \right\Vert \to X, \\ &\operatorname{\mathsf{hSeparated}} X \; {\vcentcolon\equiv}\; \Pi_{x,y:X} \; \operatorname{\mathsf{splitSup}} (x = y).\end{aligned}$$ We observe that $\operatorname{\mathsf{hSeparated}} X$ is a weaker condition than $\operatorname{\mathsf{separated}} X$. Not only can we conclude $\operatorname{\mathsf{isSet}} X$ from $\operatorname{\mathsf{hSeparated}} X$, but the converse holds as well. In the following theorem, we also include the simple fact that having constant endomaps on path spaces is equivalent to these statements.

\[tfae\] For a type $X$ in MLTT with propositional truncation, the following are equivalent: 1. $X$ is a set \[item:tfae-1\] 2. $X$ has constant endomaps on its path spaces \[item:tfae-2\] 3. $X$ is h-separated. \[item:tfae-3\] Further, each of the three types is propositional.

We first show the logical equivalence of the three types. "\[item:tfae-2\] $\Rightarrow$ \[item:tfae-1\]" is simply Lemma \[lem:pathcoll2set\]. "\[item:tfae-1\] $\Rightarrow$ \[item:tfae-3\]" simply uses the definition of the propositional truncation: given $x, y : X$, the fact that $X$ is a set tells us exactly that $x = y$ is propositional, implying that we have a map $\left\Vert x = y \right\Vert \to (x = y)$. Concerning "\[item:tfae-3\] $\Rightarrow$ \[item:tfae-2\]", it is enough to observe that the composition of $\left|-\right| : (x = y) \to \left\Vert x = y \right\Vert$ and the map $\left\Vert x = y \right\Vert \to (x = y)$, provided by the fact that $X$ is h-separated, is a parametrized constant endofunction. \[item:tfae-1\] is known to be a proposition. If \[item:tfae-2\] or \[item:tfae-3\] are inhabited, then $X$ is a set, implying that \[item:tfae-2\] and \[item:tfae-3\] are propositions.

We observe that using propositional truncation in some cases makes it unnecessary to appeal to function extensionality. In Lemma \[lemmaSepExtUip\], we have given a proof for the simple statement that separated types are sets in the context of function extensionality. Let us now drop function extensionality and assume instead that propositional truncation is available. Every separated type is h-separated; more generally, we have $$\label{eq:negnegXsep} (\neg\neg X \to X) \to (\left\Vert X \right\Vert \to X)$$ for any type $X$. Moreover, every h-separated type is a set. Notice that $\neg X \to \neg \left\Vert X \right\Vert$ and thus also $\left\Vert X \right\Vert \to \neg \neg X$ and \[eq:negnegXsep\] do not require function extensionality.
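The point of the last remark is that the only propositionality fact these maps need is that the empty type is a proposition, which holds outright. A small Agda-style sketch (ours; the truncation interface is postulated again so that the fragment is self-contained, names hypothetical):

```agda
{-# OPTIONS --without-K #-}
module StableToSplitSup where

open import Relation.Binary.PropositionalEquality using (_≡_)
open import Data.Empty using (⊥; ⊥-elim)
open import Relation.Nullary using (¬_)

isProp : Set → Set
isProp X = (x y : X) → x ≡ y

postulate
  ∥_∥    : Set → Set
  rec-tr : {X P : Set} → isProp P → (X → P) → ∥ X ∥ → P

-- The empty type is a proposition, with no function extensionality needed.
⊥-isProp : isProp ⊥
⊥-isProp x y = ⊥-elim x

-- ∥ X ∥ → ¬ ¬ X, by eliminating into the proposition ⊥.
trunc-to-neg-neg : {X : Set} → ∥ X ∥ → ¬ ¬ X
trunc-to-neg-neg t nx = rec-tr ⊥-isProp nx t

-- The implication (eq:negnegXsep): ¬¬-stable types have split support.
stable-to-splitSup : {X : Set} → (¬ ¬ X → X) → ∥ X ∥ → X
stable-to-splitSup s t = s (trunc-to-neg-neg t)
```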
Therefore, the mere availability of propositional truncation suffices to fill a gap that would otherwise require function extensionality. In Section \[subsec:beta-funext\] below, we will see that propositional truncation with the judgmental $\beta$-rule makes it possible to derive function extensionality.

A variant of Theorem \[tfae\], more precisely of the direction "\[item:tfae-3\] $\Rightarrow$ \[item:tfae-1\]", can be formulated without propositional truncation. We say that a *reflexive propositionally-valued relation* on $X$ is a family $R : X \times X \to \mathcal{U}$ such that $R(x,y)$ is always propositional and $R(x,x)$ always contractible. If $R$ implies identity, that is $\Pi_{x,y:X} \; R(x,y) \to x = y$, then $X$ is a set. This is a statement given in the standard textbook on homotopy type theory [@HoTTbook Theorem 7.2.2] and is sometimes called "Rijke's Theorem".

To conclude this part of the article, we want to mention that there is a slightly stronger version of Hedberg's Theorem which applies to types where equality might only be decidable *locally*. In fact, nearly everything we stated or proved can be done locally, and thus made stronger. In the proof of Lemma \[lem:discr2pathcoll\], we have not made use of the fact that we were dealing with path spaces at all: any decidable type trivially has a constant endofunction. Concerning Lemma \[lem:pathcoll2set\], we observe: A type $X$ that locally has constant endomaps on path spaces does locally satisfy UIP. That means, for any $x_0 : X$, we have $$(\Pi_{y : X} \; \operatorname{\mathsf{constEndo}} (x_0 = y)) \to \Pi_{y : X} \; \operatorname{\mathsf{isProp}} (x_0 = y).$$ The proof is identical to the one of Lemma \[lem:pathcoll2set\], with the only difference that we need to apply based path induction instead of path induction. This enables us to prove the local variant of Hedberg's Theorem: A locally discrete type $X$ is locally a set, i.e. for any $x_0 : X$, $$(\Pi_{y : X} \; \mathsf{decidable}\, (x_0 = y)) \to \Pi_{y : X} \; \operatorname{\mathsf{isProp}} (x_0 = y).$$ In the same simple way, we immediately get that the assumption of local separatedness is sufficient. Under the assumption of function extensionality, a locally separated type is locally a set, i.e. for any $x_0 : X$, $$(\Pi_{y : X} \; \operatorname{\mathsf{stable}} (x_0 = y)) \to \Pi_{y : X} \; \operatorname{\mathsf{isProp}} (x_0 = y).$$ Similarly, the local forms of the characterizations of Theorem \[tfae\] are still equivalent.
For a type $X$ in MLTT with propositional truncation with a point $x_0:X$, the following are equivalent: 1. for all $y:X$, the type ${\ensuremath{x_0 =_{} y}\xspace}$ is propositional \[item:tfae-local-1\] 2. for all $y:X$, the type ${\ensuremath{x_0 =_{} y}\xspace}$ has a constant endomap \[item:tfae-local-2\] 3. for all $y:X$, the type ${\ensuremath{x_0 =_{} y}\xspace}$ has split support. \[item:tfae-local-3\] Note that most of our arguments can be generalized to higher truncation levels [@HoTTbook Chapter 7] in a reasonable and straightforward way. Details can be found in the first-named author’s PhD thesis [@nicolai:thesis]. Split Support from Constant Endofunctions {#sec4:coll} ========================================= If we unfold the definitions in the statements of Theorem \[tfae\], they all involve the path spaces over some type $X$: 1. ${{\Pi_{x,y:X} }} { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } ({\ensuremath{x =_{} y}\xspace})$ \[item:tfae-again-1\] 2. ${{\Pi_{x,y:X} }} {\operatorname{\mathsf{constEndo}}}{({\ensuremath{x =_{} y}\xspace})}$ \[item:tfae-again-2\] 3. ${{\Pi_{x,y:X} }} {\operatorname{\mathsf{splitSup}}}({\ensuremath{x =_{} y}\xspace})$. \[item:tfae-again-3\] We have proved that these statements are logically equivalent. It is a natural question to ask whether this is true for types that are not necessarily path spaces. The possibilities that path spaces offer are very powerful and we have used them heavily. Indeed, if we formulate the above properties for an arbitrary type $A$ instead of path types, 1. ${ \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } A$ \[item:tfae-aga-1-prime\] 2. ${\operatorname{\mathsf{constEndo}}}{A}$ \[item:tfae-aga-2-prime\] 3. ${\operatorname{\mathsf{splitSup}}}A$, \[item:tfae-aga-3-prime\] we notice immediately that \[item:tfae-aga-1-prime\] is significantly and strictly stronger than the other two properties. \[item:tfae-aga-1-prime\] says that $A$ has at most one inhabitant, \[item:tfae-aga-2-prime\] says that there is a constant endofunction on $A$, and \[item:tfae-aga-3-prime\] gives us a possibility to get an explicit inhabitant of A from the proposition that A has an anonymous inhabitant. A propositional type has the other two properties trivially, while the converse is not true. In fact, as soon as we know an inhabitant $a : A$, we can very easily construct proofs of \[item:tfae-aga-2-prime\] and \[item:tfae-aga-3-prime\], while it does not help at all with \[item:tfae-aga-1-prime\]. 
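Both constructions from a known inhabitant are indeed immediate, as the following sketch shows (a fragment of ours with a postulated truncation; names hypothetical): the inhabitant is simply returned regardless of the argument.

```agda
{-# OPTIONS --without-K #-}
module PointGivesBoth where

open import Relation.Binary.PropositionalEquality using (_≡_; refl)
open import Data.Product using (Σ; _,_)

postulate
  ∥_∥ : Set → Set

-- A known inhabitant a : A yields a constant endomap ...
const-endo-from-point : {A : Set} → A → Σ (A → A) (λ f → ∀ x y → f x ≡ f y)
const-endo-from-point a = (λ _ → a) , (λ _ _ → refl)

-- ... and split support, again by ignoring the argument.
split-sup-from-point : {A : Set} → A → (∥ A ∥ → A)
split-sup-from-point a _ = a
```

Neither construction says anything about the propositionality of $A$, which is the point of the remark above.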
The implication \[item:tfae-aga-3-prime\] $\Rightarrow$ \[item:tfae-aga-2-prime\] is also simple: if we have $h : {{\mathopen{}\left\Vert A\right\Vert_{}\mathclose{}}} \to A$, the composition $h \circ {{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : A \to A$ is constant, as for any $a, b : A$, we have ${\ensuremath{{{\mathopen{}\left|a\right|_{}^{}\mathclose{}}} =_{} {{\mathopen{}\left|b\right|_{}^{}\mathclose{}}}}\xspace}$ and therefore ${\ensuremath{h({{\mathopen{}\left|a\right|_{}^{}\mathclose{}}}) =_{} h({{\mathopen{}\left|b\right|_{}^{}\mathclose{}}})}\xspace}$. In summary, we have \[item:tfae-aga-1-prime\] $\Rightarrow$ \[item:tfae-aga-3-prime\] $\Rightarrow$ \[item:tfae-aga-2-prime\] and we know that the first implication cannot be reversed. What is less clear is the reversibility of the second implication: If we have a constant endofunction on $A$, can we get a map ${{\mathopen{}\left\Vert A\right\Vert_{}\mathclose{}}} \to A$? Put differently, what does it take to get out of ${{\mathopen{}\left\Vert A\right\Vert_{}\mathclose{}}}$? Of course, a proof that $A$ has split support is fine for that, but does a constant endomap on $A$ also suffice? Surprisingly, the answer is positive, and there are interesting applications (Section \[sec5:populatedness\]). The main ingredient of our proof, and of much of the rest of the paper, is the following crucial lemma about fixed points: \[fixedpoint\] Given a constant endomap $f$ on a type $X$, the type of its fixed points is propositional, where this type is defined by $${\operatorname{\mathsf{fix}}}f {\vcentcolon\equiv}{\Sigma_{x:X} }{{\ensuremath{x =_{} f(x)}\xspace}}.$$ Before we can give the proof, we first need to formulate two observations. Both of them are simple on their own, but important insights for the Fixed Point Lemma. Let $X$ and $Y$ be two types. \[one\] Assume $h, k: X \to Y$ are two functions and $t : {\ensuremath{x =_{} y}\xspace}$ as well as $p : {\ensuremath{h(x) =_{} k(x)}\xspace}$ are paths. Then, transporting along $t$ into $p$ can be expressed as a composition of paths: $${\ensuremath{{\ensuremath{{t}_{*}\mathopen{}\left({p}\right)\mathclose{}}\xspace} =_{} {\mathord{{({\ensuremath{\mathsf{ap}_{h}}\xspace} t)}^{-1}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }p { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{\ensuremath{\mathsf{ap}_{k}}\xspace} t}\xspace}.$$ This is immediate by path induction on $t$. Even if the latter proof is trivial, the statement is essential. In the proof of Lemma \[fixedpoint\], we need a special case where $x$ and $y$ are the same. However, this special version cannot be proved directly. We consider the second observation a key insight for the Fixed Point Lemma: \[two\] If $f : X \to Y$ is constant and $x_1,x_2 : X$ are points, then ${\ensuremath{\mathsf{ap}_{f}}\xspace} : {\ensuremath{x_1 =_{X} x_2}\xspace} \to {\ensuremath{f(x_1) =_{Y} f(x_2)}\xspace}$ is constant. In particular, ${\ensuremath{\mathsf{ap}_{f}}\xspace}$ maps every loop around $x$ (that is, path from $x$ to $x$) to ${\ensuremath{\mathsf{refl}_{{f(x)}}}\xspace}$. 
If $c$ is the proof of $\operatorname{\mathsf{wconst}} f$, then $\mathsf{ap}_{f}$ maps a path $p : x = y$ to $(c(x,x))^{-1} \mathbin{\centerdot} c(x,y)$. This is easily seen to be correct for $(x, x, \mathsf{refl}_{x})$, which is enough to apply path induction. As the expression is independent of $p$, the function $\mathsf{ap}_{f}$ is constant. The second part follows from the fact that $\mathsf{ap}_{f}$ maps $\mathsf{refl}_{x}$ to $\mathsf{refl}_{f(x)}$.

With these lemmata at hand, we give a proof of the Fixed Point Lemma: Assume $f : X \to X$ is a function and $c : \operatorname{\mathsf{wconst}} f$ is a proof that it is constant. For any two pairs $(x, p)$ and $(x', p') : \operatorname{\mathsf{fix}} f$, we need to construct a path connecting them. First, we simplify the situation by showing that we can assume that $x$ and $x'$ are the same: by composing $p : x = f\, x$ with $c(x,x') : f(x) = f(x')$ and $(p')^{-1} : f(x') = x'$, we get a path $p'' : x = x'$. By a standard lemma [@HoTTbook Theorem 2.7.2], a path between two pairs corresponds to two paths: one path between the first components, and one between the second, where transporting along the first path is needed. We therefore now get that $(x, (p'')^{-1} \mathbin{\centerdot} p')$ and $(x', p')$ are equal: $p''$ is a path between the first components, which makes the second component trivial. Write $q$ for the term $(p'')^{-1} \mathbin{\centerdot} p'$.

We are now in the (nicer) situation that we have to construct a path between $(x, p)$ and $(x, q) : \operatorname{\mathsf{fix}} f$. Again, such a path can be constructed from two paths for the two components. Let us assume that we use some path $t : x = x$ for the first component. We then have to show that $t_{*}(p)$ equals $q$. In the situation with $(x,p)$ and $(x', p')$, it might have been tempting to use $p''$ as a path between the first components, and that would correspond to choosing $\mathsf{refl}_{x}$ for $t$. However, one quickly convinces oneself that this cannot work in the general case. By Auxiliary Lemma \[one\], with the identity for $h$ and $f$ for $k$, the first of the two terms, i.e.
${\ensuremath{{t}_{*}\mathopen{}\left({p}\right)\mathclose{}}\xspace}$, corresponds to ${{\mathord{{t}^{-1}}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{p} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{{\ensuremath{\mathsf{ap}_{f}}\xspace} t}$. With Auxiliary Lemma \[two\], that term can be further simplified to ${{\mathord{{t}^{-1}}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{p}$. What we have to prove is now just ${\ensuremath{{{\mathord{{t}^{-1}}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{p} =_{} q}\xspace}$, so let us just choose ${p} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{{\mathord{{q}^{-1}}}}$ for $t$, thereby making it into a straightforward application of the standard lemmata. A more elegant but possibly less revealing proof of the Fixed Point Lemma was given by Christian Sattler: Given $f : X \to X$ and $c : {\operatorname{\mathsf{wconst}}}f$ as before, assume $(x_0, p_0) : {\operatorname{\mathsf{fix}}}f$. For any $x:X$, we have an equivalence of types, $${\ensuremath{f(x) =_{} x}\xspace} \;\, \simeq \;\, {\ensuremath{f(x_0) =_{} x}\xspace},$$ given by precomposition with $c(x_0,x)$. Therefore, we also have the equivalence $${\Sigma_{x:X} } {\ensuremath{f(x) =_{} x}\xspace} \;\, \simeq \;\, {\Sigma_{x:X} } {\ensuremath{f(x_0) =_{} x}\xspace}.$$ The second of these types is a singleton and thus contractible, while the first is just ${\operatorname{\mathsf{fix}}}f$. This shows that any other inhabitant of ${\operatorname{\mathsf{fix}}}f$ is indeed equal to $(x_0, p_0)$. We will exploit Lemma \[fixedpoint\] in different ways. For the following corollary note that, given an endomap $f$ on $X$ with constancy proof $c$, we have a canonical projection $${\mathsf{fst}}: {\operatorname{\mathsf{fix}}}f \to X$$ and a function $$\begin{aligned} &\epsilon : X \to {\operatorname{\mathsf{fix}}}f \label{eq:fixf-X-equiv} \\ &\epsilon (x) {\vcentcolon\equiv}\left(f(x) \, , \, c(x,f(x))\right). \label{eq:def-epsilon}\end{aligned}$$ \[cor:fixistrunc\] In basic MLTT, for a type $X$ with a constant endofunction $f$, the type ${\operatorname{\mathsf{fix}}}f$ is a proposition that is logically equivalent to $X$. In particular, ${\operatorname{\mathsf{fix}}}f$ satisfies the conditions \[item:trunc1\]–\[item:trunc3\] of Definition \[def:htruncation\]. Therefore, for a type with a constant endomap, the weak propositional truncation is actually definable. 
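Spelled out in Agda notation, the passage from a constant endomap to split support is short once the Fixed Point Lemma is available. In the sketch below (ours; names hypothetical), both the weak truncation interface and the Fixed Point Lemma are postulated rather than proved, in order to isolate the construction of the map $\left\Vert X \right\Vert \to X$.

```agda
{-# OPTIONS --without-K #-}
module SplitSupportFromConstEndo where

open import Relation.Binary.PropositionalEquality using (_≡_)
open import Data.Product using (Σ; _,_; proj₁)

isProp : Set → Set
isProp X = (x y : X) → x ≡ y

wconst : {X Y : Set} → (X → Y) → Set
wconst {X} f = (x y : X) → f x ≡ f y

-- Fixed points of an endomap, as in Lemma [fixedpoint].
fix : {X : Set} → (X → X) → Set
fix {X} f = Σ X (λ x → x ≡ f x)

postulate
  -- Weak propositional truncation (Definition [def:htruncation]).
  ∥_∥    : Set → Set
  ∣_∣    : {X : Set} → X → ∥ X ∥
  rec-tr : {X P : Set} → isProp P → (X → P) → ∥ X ∥ → P
  -- The Fixed Point Lemma, taken as given here.
  fix-isProp : {X : Set} (f : X → X) → wconst f → isProp (fix f)

-- The map ε from the text: every point of X gives a fixed point of f.
ε : {X : Set} (f : X → X) (c : wconst f) → X → fix f
ε f c x = f x , c x (f x)

-- Theorem [thm:maintheorem], the interesting direction: a constant
-- endomap yields split support, by eliminating into the proposition fix f.
splitSup-from-constEndo : {X : Set} (f : X → X) → wconst f → ∥ X ∥ → X
splitSup-from-constEndo f c t = proj₁ (rec-tr (fix-isProp f c) (ε f c) t)
```

The recursion principle is used only with the proposition $\operatorname{\mathsf{fix}} f$ itself, which is exactly why the weak truncation becomes definable for types with a constant endomap.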
If $\left\Vert - \right\Vert$ is part of the theory, $\left\Vert X \right\Vert$ and $\operatorname{\mathsf{fix}} f$ are equivalent, $\left\Vert X \right\Vert {\simeq}\operatorname{\mathsf{fix}} f$. We are now in a position to prove the statement that we have announced at the beginning of the section.

\[thm:maintheorem\] A type $X$ has a constant endomap if and only if it has split support in the sense that $\left\Vert X \right\Vert \to X$. As already mentioned earlier, the "if"-part is simple: given $\left\Vert X \right\Vert \to X$, we just need to compose it with $\left|-\right| : X \to \left\Vert X \right\Vert$ to get a constant endomap. The other direction is an immediate consequence of Corollary \[cor:fixistrunc\].

We want to add the remark that $\operatorname{\mathsf{constEndo}} X$ can be replaced by a seemingly weaker assumption. The following statement (together with Theorem \[thm:maintheorem\]) shows that it is enough to have an $f : X \to X$ which is *merely* constant:

\[thm:const-proof-hidden\] For a type $X$, the following are logically equivalent: 1. $X$ has a constant endomap \[item:trunc-const-1\] 2. $X$ has an endomap $f$ with a proof $\left\Vert \operatorname{\mathsf{wconst}} f \right\Vert$. \[item:trunc-const-3\] The first direction is trivial, but its reversibility is interesting. We do *not* think that $\left\Vert \operatorname{\mathsf{wconst}} f \right\Vert$ allows us to construct an element of $\operatorname{\mathsf{wconst}} f$. Assume $f$ is an endofunction on $X$.
From Lemma \[fixedpoint\], we know that $${\operatorname{\mathsf{wconst}}}f \to { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } ({\operatorname{\mathsf{fix}}}f).$$ Using the recursion principle with the fact that the statement ${ \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } ({\operatorname{\mathsf{fix}}}f)$ is a proposition itself yields $$\label{eq:const-trunc-shows-fix-prop} {{\mathopen{}\left\Vert {\operatorname{\mathsf{wconst}}}f\right\Vert_{}\mathclose{}}} \to { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } ({\operatorname{\mathsf{fix}}}f).$$ Previously, we have constructed a map $${\operatorname{\mathsf{wconst}}}f \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\operatorname{\mathsf{fix}}}f.$$ Let us write this function as $${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\operatorname{\mathsf{wconst}}}f \to {\operatorname{\mathsf{fix}}}f.$$ This makes it trivial to define a function $${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \times {{\mathopen{}\left\Vert {\operatorname{\mathsf{wconst}}}f\right\Vert_{}\mathclose{}}} \to {\operatorname{\mathsf{wconst}}}f \to {\operatorname{\mathsf{fix}}}f.$$ We assume ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \times {{\mathopen{}\left\Vert {\operatorname{\mathsf{wconst}}}f\right\Vert_{}\mathclose{}}}$. From , we conclude that ${\operatorname{\mathsf{fix}}}f$ is a proposition. Therefore, we may apply the recursion principle of the truncation and get $${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \times {{\mathopen{}\left\Vert {\operatorname{\mathsf{wconst}}}f\right\Vert_{}\mathclose{}}} \to {{\mathopen{}\left\Vert {\operatorname{\mathsf{wconst}}}f\right\Vert_{}\mathclose{}}} \to {\operatorname{\mathsf{fix}}}f,$$ which, of course, gives us $$\label{eq:brck-fix} {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\operatorname{\mathsf{fix}}}f$$ under the assumption \[item:trunc-const-3\] of the theorem. Composing ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$ with and with the first projection, we get a constant function $g : X \to X$. It seems to be impossible to show that the constructed function $g$ is equal to $f$. On the other hand, it is easy to prove the truncated version of this statement: $${{\mathopen{}\left\Vert {\Pi_{x:X} } {\ensuremath{f x =_{} g x}\xspace}\right\Vert_{}\mathclose{}}}.$$ The detailed proof can be found in our formalization [@krausEscardoEtAll_existenceFormalisation]. Factoring weakly constant Functions {#sec4c:factorizing} =================================== In Theorem \[thm:maintheorem\] we have seen that a type $X$ with a constant function $f : X \to X$ always has split support. 
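As a complement, here is a self-contained Agda sketch of the non-trivial direction of Theorem \[thm:maintheorem\]. The small prelude from the previous sketch is repeated so that the fragment stands alone; the weak propositional truncation and the Fixed Point Lemma are *postulated* (the latter is exactly Lemma \[fixedpoint\], whose proof we do not repeat), and the names `Trunc`, `trunc-in`, `Trunc-rec` and `fix-isProp` are ad-hoc choices, not part of the development above.

```agda
{-# OPTIONS --without-K #-}
module SplitSupportSketch where

-- prelude repeated from the previous sketch
data _==_ {A : Set} (x : A) : A -> Set where
  refl : x == x

record Sigma (A : Set) (B : A -> Set) : Set where
  constructor _,_
  field
    fst : A
    snd : B fst
open Sigma

wconst : {X Y : Set} -> (X -> Y) -> Set
wconst {X} f = (x y : X) -> f x == f y

fix : {X : Set} -> (X -> X) -> Set
fix {X} f = Sigma X (\ x -> f x == x)

isProp : Set -> Set
isProp P = (p q : P) -> p == q

-- Postulated interface: a weak propositional truncation with its
-- recursion principle, and the Fixed Point Lemma (proved in the text).
postulate
  Trunc      : Set -> Set
  trunc-in   : {X : Set} -> X -> Trunc X
  Trunc-rec  : {X P : Set} -> isProp P -> (X -> P) -> Trunc X -> P
  fix-isProp : {X : Set} (f : X -> X) -> wconst f -> isProp (fix f)

-- epsilon as before: every point of X yields a fixed point of f.
epsilon : {X : Set} (f : X -> X) -> wconst f -> X -> fix f
epsilon f c x = f x , c (f x) x

-- The non-trivial direction of Theorem [thm:maintheorem]: a weakly
-- constant endomap on X yields split support for X.
splitSupport : {X : Set} (f : X -> X) -> wconst f -> Trunc X -> X
splitSupport f c t = fst (Trunc-rec (fix-isProp f c) (epsilon f c) t)
```

Note that only the recursion principle of the truncation is used here, matching the minimal interface of Definition \[def:htruncation\].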
In fact, what we have done is actually slightly more: the constructed map $\overline f : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$ has the property that the triangle \(A) at (0,0) [$X$]{}; (B) at (30,0) [$X$]{}; (P) at (15,-7) [${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$]{}; (A) to node \[above\] [$f$]{} (B); (A) to node \[below left\] [${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$]{} (P); (P) to node \[below right\] [$\overline f$]{} (B); commutes pointwise (in the sense that we have a family of equality proofs). It seems a natural question to ask whether the fact that $f$ is an endofunction is required: given a (weakly) constant function $f : X \to Y$, can it be factored in this sense through ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$? With Theorem \[thm:maintheorem\] in mind, it may be surprising that the answer is negative. In the presence of univalence, Shulman has constructed a family of weakly constant functions such that it is impossible that all of them factor [@shulman:wconst]. From another result by the first-named author, it follows that functions ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ can be constructed from *coherently* constant functions $X \to Y$, where the proof of weak constancy comes with a tower of coherence conditions [@kraus_generaluniversalproperty]. However, there are special cases in which the factorization is possible only assuming weak constancy, and some of these are discussed in the current section. Let us start by giving a precise definition. Given a function $f : X \to Y$ between two types, we say that $f$ *factors* through a type $Z$ if there are functions $f_1 : X \to Z$ and $f_2 : Z \to Y$ such that $${\Pi_{x:X} } \; {\ensuremath{f_2(f_1(x)) =_{Y} f(x)}\xspace}.$$ In particular, we say that $f$ factors through ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ if there is a function $\overline f : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ such that $${\Pi_{x:X} } \; {\ensuremath{\overline{f} ({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}) =_{Y} f(x)}\xspace}.$$ As we will discuss later, assuming judgmental computation for ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}}$, a factorization in the above sense allows us to construct a judgmental factorization (see Section \[sec9:judgm-beta\]). A related known result is that any function $f : X \to Y$ factors through its *image* (see [@HoTTbook Chapter 7.6]), where the image ${\mathsf{im}}(f)$ is defined as $${\mathsf{im}}(f) {\vcentcolon\equiv}{\Sigma_{y : Y} } {{\mathopen{}\left\Vert {\Sigma_{x:X} } {\ensuremath{f(x) =_{Y} y}\xspace}\right\Vert_{}\mathclose{}}}.$$ If ${\mathsf{im}}(f)$ is propositional, this answers positively the question that we want to discuss. We will see that this is what happens if $Y$ is a set and $f$ is constant (Theorem \[thm:factor-set\]). However, in general, ${\mathsf{im}}(f)$ is not necessarily propositional even if $f$ is constant: One can check easily that $Y$ is a set if and only if all the functions ${\ensuremath{\mathbf{1}}\xspace}\to X$ have a propositional image (which of course means that all those images are contractible). Constructing a function out of the propositional truncation of a type is somewhat tricky. A well-known [@HoTTbook Chapter 3.9] strategy for defining a map ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ is to construct a proposition $P$ together with functions $X \to P$ and $P \to Y$. 
We have already implicitly done this in previous sections. We can make this method slightly more convenient to use if we observe that $P$ does not need to be a proposition, but it only needs to be a proposition under the assumption that $X$ is inhabited: \[lifting-principle\] Let $X, Y$ be two types. Assume $P$ is a type such that $P \to Y$. If $X$ implies that $P$ is contractible, then there is a function ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$. In particular, if $f : X \to Y$ is a function that factors through $P$, then $f$ factors through ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. Let us briefly justify this principle. Assume that $P$ has the assumed property. Utilizing that the statement that $P$ is contractible is propositional itself, we see that ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is sufficient to conclude that $P$ is a proposition. This allows us to prove ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \times P$ to be propositional. The map $P \to Y$ clearly gives rise to a map ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \times P \to Y$, and the map $X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \times P$ is given by ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$ and the fact that $P$ is contractible under the assumption $X$. There are several situations in which this principle can be applied. The following theorem does not need it as it is mostly a restatement of our previous result from Section \[sec4:coll\]. \[thm:factorendo\] A weakly constant function $f : X \to Y$ factors through ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ in any one of the following cases, of which the equivalent \[item:3\] and \[item:4\] generalize all others: 1. $X$ is empty, i.e. $X \to {\ensuremath{\mathbf{0}}\xspace}$ \[item:1\] 2. $X$ is inhabited, i.e. ${\ensuremath{\mathbf{1}}\xspace}\to X$ \[item:2\] 3. $X$ has split support, i.e. ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$ \[item:3\] 4. $X$ has a weakly constant endofunction, i.e. ${\Sigma_{f:X \to X} }{\operatorname{\mathsf{wconst}}}f$ \[item:4\] 5. we have any function $g : Y \to X$. \[item:5\] Each of \[item:1\] and \[item:2\] let us conclude \[item:3\]. Further, \[item:5\] gives us \[item:4\] as the composition $g \circ f$ is a constant endofunction on $X$. The logical equivalence of \[item:3\] and \[item:4\] is Theorem \[thm:maintheorem\]. Thus, it is sufficient to prove the statement for \[item:3\], so assume $s : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$. The required conclusion is then immediate as $f$ is pointwise equal to the composition of ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ and $f \circ s$. Our next statement implies what we mentioned at the beginning of Section \[sec4c:factorizing\]: under the assumption of unique identity proofs, the factorization is always possible. \[thm:factor-set\] Let $X, Y$ be again two types and $f :X \to Y$ a constant function. If $Y$ is a set, then $f$ factors through ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. The crucial observation is that the image of $f$ is propositional in the considered case. In detail, we proceed as follows. 
We define $P$ to be the image of $f$, that is, $$P {\vcentcolon\equiv}{\Sigma_{y : Y} } {{\mathopen{}\left\Vert {\Sigma_{x:X} } {\ensuremath{f(x) =_{Y} y}\xspace}\right\Vert_{}\mathclose{}}}.$$ In order to apply Principle \[lifting-principle\], we need to know that $f$ factors through $P$. This is obvious from the following diagram: \(A) at (0,0) [$X$]{}; (B) at (30,0) [$Y$]{}; (P) at (15,-7) [$P$]{}; (A) to node \[above\] [$f$]{} (B); (A) to node \[below left\] [${\lambda x .} (f(x), {{\mathopen{}\left|x,{\ensuremath{\mathsf{refl}_{f(x)}}\xspace}\right|_{}^{}\mathclose{}}})$]{} (P); (P) to node \[below right\] [${\mathsf{fst}}$]{} (B); We need to prove that $P$ is propositional. That is, given two elements $(y_1, p_1)$ and $(y_2, p_2)$ in $P$, we want to show that they are equal. Let us once more construct the equality via giving a pair of paths. For the second component, there is nothing to do as $p_1$ and $p_2$ live in propositional types. To show ${\ensuremath{y_1 =_{Y} y_2}\xspace}$, observe that this type is propositional as $Y$ is a set and we may thus assume that we have inhabitants $(x_1, q_1) : {\Sigma_{x_1:X} } {\ensuremath{f(x_1) =_{Y} y_1}\xspace}$ and $(x_2, q_2) : {\Sigma_{x_2:X} } {\ensuremath{f(x_2) =_{Y} y_2}\xspace}$ instead of $p_1$ and $p_2$. But ${\ensuremath{f(x_1) =_{} f(x_2)}\xspace}$ by constancy, and therefore ${\ensuremath{y_1 =_{} y_2}\xspace}$. The maps $X \to P$ and $P \to Y$ are the obvious ones and the claim follows by Principle \[lifting-principle\] (or rather the preceding comment, the strengthened version is not needed). It is not hard to see that, assuming function extensionality, the implication of Theorem \[thm:factor-set\] gives rise to an equivalence $${\ensuremath{\left( {\Sigma_{f:X\to Y} } {\operatorname{\mathsf{wconst}}}f \right) {\enspace {\simeq}\enspace}\left( {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y\right) }\xspace},$$ where we use in particular that ${\operatorname{\mathsf{wconst}}}f$ is propositional under the given conditions. This is the simplest non-trivial special case of the result that functions ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ correspond to *coherently* constant functions $X \to Y$ [@kraus_generaluniversalproperty]. Our last example of a special case in which the factorization can be done is more involved. However, it is worth the effort as it provides valuable intuition and an interesting application, as we will discuss below. The proof we give benefits hugely from a simplification by Sattler who showed to us how reasoning with type equivalences can be applied here. \[thm:factor-coprod\] Assume that function extensionality holds. If $f: X \to Y$ is constant and $X$ is the coproduct of two propositions, then $f$ factors through ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. Assume $X {\equiv}Q + R$, where $Q$ and $R$ are propositions, and assume that $c : {\operatorname{\mathsf{wconst}}}f$ is the witness of constancy. 
Define $P$ to be the following $\Sigma$-type with four components: $$\begin{alignedat}{2} \label{eq:P-type} P \; {\vcentcolon\equiv}\; & \Sigma && \left( y : Y \right) \\ & \Sigma && \left( s : {\Pi_{q : Q} } \; {\ensuremath{y =_{} f({\mathsf{inl}\xspace}\, q)}\xspace} \right) \\ & \Sigma && \left( t : {\Pi_{r : R} } \; {\ensuremath{y =_{} f({\mathsf{inr}\xspace}\, r)}\xspace} \right) \\ & && \left( {\Pi_{q : Q} } {\Pi_{r : R} } \; {\ensuremath{{\mathord{{s(q)}^{-1}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{t(r)} \, =_{} \, c({\mathsf{inl}\xspace}\, q \, , \, {\mathsf{inr}\xspace}\, r)}\xspace} \right) \end{alignedat}$$ In order to apply Principle \[lifting-principle\] we need to construct a function $P \to Y$ and a proof that $X$ implies that $P$ is contractible. The function $P \to Y$ is, of course, given by a simple projection. For the other part, let a point of $X$ be given. Without loss of generality, we assume that this inhabitant is ${\mathsf{inl}\xspace}\, q_0$ with $q_0 : Q$. It would be possible to construct a point in $P$ and show that this point is equal to any other point. However, constructing a chain of equivalences yields a more elegant proof. This strategy was proposed to us by Christian Sattler. As $Q$ is contractible with center $q_0$, it suffices to only consider $q_0$ instead of quantifying over all elements of $Q$. Applying this twice shows that $P$ is equivalent to the following type: $$\begin{alignedat}{2} \phantom{P \; {\vcentcolon\equiv}\;} & \Sigma && \left( y : Y \right) \\ & \Sigma && \left( s : {\ensuremath{y =_{} f({\mathsf{inl}\xspace}\, q_0)}\xspace} \right) \\ & \Sigma && \left( t : {\Pi_{r : R} } \; {\ensuremath{y =_{} f({\mathsf{inr}\xspace}\, r)}\xspace} \right) \\ & && \left( {\Pi_{r : R} } \; {\ensuremath{{\mathord{{s}^{-1}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{t(r)} \, =_{} \, c({\mathsf{inl}\xspace}\, q_0 \, , \, {\mathsf{inr}\xspace}\, r)}\xspace} \right). \end{alignedat}$$ The first two $\Sigma$-components together have the shape of a singleton, showing that this part is contractible with the canonical inhabitant $(f({\mathsf{inl}\xspace}\, q_0) , {\ensuremath{\mathsf{refl}_{}}\xspace})$. We may thus remove these $\Sigma$-components (see [@HoTTbook Theorem 3.11.9 (ii)]) and the above type further simplifies to $$\begin{alignedat}{2} \phantom{P \; {\vcentcolon\equiv}\;} & \Sigma && \left( t : {\Pi_{r : R} } \; {\ensuremath{f({\mathsf{inl}\xspace}\, q_0) =_{} f({\mathsf{inr}\xspace}\, r)}\xspace} \right) \\ & && \left( {\Pi_{r : R} } \; {\ensuremath{{\mathord{{{\ensuremath{\mathsf{refl}_{}}\xspace}}^{-1}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{t(r)} \, =_{} \, c({\mathsf{inl}\xspace}\, q_0 \, , \, {\mathsf{inr}\xspace}\, r)}\xspace} \right). 
\end{alignedat}$$ We apply the distributivity principle of $\Pi$ and $\Sigma$ (see [@HoTTbook Theorem 2.15.7]), together with standard simplifications, to further simplify to $$\begin{alignedat}{2} \phantom{P \; {\vcentcolon\equiv}\;} & {\Pi_{r : R} } && \Sigma \left( t : {\ensuremath{f({\mathsf{inl}\xspace}\, q_0) =_{B} f({\mathsf{inr}\xspace}\, r)}\xspace} \right) \\ & && \phantom{\Sigma} \left( {\ensuremath{{t} \, =_{} \, c({\mathsf{inl}\xspace}\, q_0 \, , \, {\mathsf{inr}\xspace}\, r)}\xspace} \right). \end{alignedat}$$ For any $r:R$, the dependent pair part is contractible as it is, once more, a singleton, and function extensionality allows us to conclude the stated result. Theorem \[thm:factor-coprod\] was inspired by a discussion on the *homotopy type theory mailing list* [@hott:mailinglist]. Shulman observed that, for two propositions $Q$ and $R$, their *join* $Q*R$ [@HoTTbook Chapter 6.8], defined as the (homotopy) pushout of the diagram $Q \xleftarrow{{\mathsf{fst}}} Q \times R \xrightarrow{{\mathsf{snd}}} R$, is equivalent to ${{\mathopen{}\left\Vert Q + R\right\Vert_{}\mathclose{}}}$. This means that, in the presence of *higher inductive types* [@HoTTbook Chapter 6], the type ${{\mathopen{}\left\Vert Q+R\right\Vert_{}\mathclose{}}}$ has the (seemingly) stronger elimination rule of the join. The second named author then asked whether higher inductive types do really improve the elimination properties of ${{\mathopen{}\left\Vert Q+R\right\Vert_{}\mathclose{}}}$ in this sense. This was discussed shortly before we could answer the question negatively with the result of Theorem \[thm:factor-coprod\]: its statement about ${{\mathopen{}\left\Vert Q+R\right\Vert_{}\mathclose{}}}$ corresponds exactly to the elimination property of $Q * R$. Thus, the join of two propositions already exists in a minimalistic setting that involves truncation but no other higher inductive types. Populatedness {#sec5:populatedness} ============= In this section we discuss a notion of *anonymous existence*, similar to, but weaker (see Section \[sec72:pop-inh\]) than propositional truncation. It crucially depends on the Fixed Point Lemma \[fixedpoint\]. Let us start by discussing another perspective on what we have explained in Section \[sec4:coll\]. Trivially, for a type $X$, we can prove the statement $$\label{trivial} {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to ({{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X) \to X.$$ By Lemma \[thm:maintheorem\], this is equivalent to $${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\operatorname{\mathsf{constEndo}}}X \to X,$$ and hence $$\label{lesstrivial} {\operatorname{\mathsf{constEndo}}}X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X,$$ which can be read as: If we have a constant endomap on $X$ and we wish to get an inhabitant of $X$ (or, equivalently, a fixed point of the endomap), then ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is sufficient to do so. We can additionally ask whether it is also necessary: can we replace the first assumption ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ by something weaker? Looking at formula \[trivial\], it would be natural to conjecture that this is not the case, but it is. In this section, we discuss what it can be replaced by, and in Section \[sec72:pop-inh\], we give a proof that it is indeed weaker. 
For answering the question what is needed to get from ${\operatorname{\mathsf{splitSup}}}X$ to $X$, let us define the following notion: \[populatedness\] For a given type $X$, we say that $X$ is populated, written ${\langle \! \langle X \rangle \! \rangle}$, if every constant endomap on $X$ has a fixed point: $${\langle \! \langle X \rangle \! \rangle} {\vcentcolon\equiv}{{\Pi_{f:X \to X} }} {\operatorname{\mathsf{wconst}}}f \to {\operatorname{\mathsf{fix}}}f,$$ where ${\operatorname{\mathsf{fix}}}f$ is the type of fixed points, defined as in Lemma \[fixedpoint\]. The notion of populatedness (which, to add a caveat, is not functorial; see Theorem \[tfae2\]) allows us to comment on the question raised above. If ${\langle \! \langle X \rangle \! \rangle}$ has an element and $X$ has a constant endomap, then $X$ has an inhabitant, as such an inhabitant can be extracted from the type of fixed points by projection. Hence, ${\langle \! \langle X \rangle \! \rangle}$ instead of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ in \[lesstrivial\] would be sufficient as well. Therefore, $$\label{eq:pophstable} {\langle \! \langle X \rangle \! \rangle} \to ({{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X) \to X.$$ At this point, we have to ask ourselves whether  is an improvement over . But indeed, we have the following property: \[thm:trunc-to-pop\] Any merely inhabited type is populated. That is, for a type $X$, we have $$\label{eq:trunc-to-pop} {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\langle \! \langle X \rangle \! \rangle}.$$ Assume $f$ is a constant endofunction on $X$. The claim follows directly from Corollary \[cor:fixistrunc\]. In Section \[sec6:taboos\] we will see that ${\langle \! \langle X \rangle \! \rangle}$ is in fact strictly weaker than ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. In the presence of propositional truncation, we can give an alternative characterization of populatedness. Recall that we indicate propositional truncation with the attribute *merely*. \[lem:popishstabletoh\] In MLTT with propositional truncation, a type is populated if and only if the statement that it merely has split support implies that it is merely inhabited, or equivalently, if and only if the statement that $X$ has split support allows the construction of an element of $X$. Formally, the following types are logically equivalent: 1. ${\langle \! \langle X \rangle \! \rangle}$ \[item:popchar-1\] 2. ${{\bigl\Vert {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X\bigr\Vert_{}}} \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ \[item:popchar-2\] 3. $({{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X) \to X$. \[item:popchar-3\] We have already discussed \[item:popchar-1\] $\Rightarrow$ \[item:popchar-3\] above, see . \[item:popchar-3\] $\Rightarrow$ \[item:popchar-2\] follows from the functoriality of the truncation operator. For \[item:popchar-2\] $\Rightarrow$ \[item:popchar-1\], assume we have a constant endofunction $f$ on $X$. Hence, we have a function ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$, thus ${{\bigl\Vert {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X\bigr\Vert_{}}}$ and, by assumption, ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. But ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is enough to construct a fixed point of $f$ by Corollary \[cor:fixistrunc\]. 
A nice feature of the notion of populatedness is that it is definable in MLTT, and it can thus be used without making further assumptions. For the rest of this section, let us explicitly not assume that the type theory has propositional truncations. We can give one more characterization of populatedness, and a strong parallel to mere inhabitance, as follows: \[populatedLargeSmall\] In MLTT, a type $X$ is populated if and only if any proposition that is logically equivalent to it holds, $${\langle \! \langle X \rangle \! \rangle} \; \Leftrightarrow \; {{\Pi_{P : {\ensuremath{\mathcal{U}}\xspace}} }} { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } P \to (P \to X) \to (X \to P) \to P.$$ Note that the only difference to the type expression in Theorem \[hinhabitedLargeSmall\] is that we only quantify over *sub-propositions* of $X$, i.e. over those that satisfy $P \to X$, while we quantify over all propositions in the case of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. This again shows that ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$, if it exists, is at least as strong as ${\langle \! \langle X \rangle \! \rangle}$. Let us first prove the direction “$\rightarrow$”. Assume a proposition $P$ is given, together with functions $X \to P$ and $P \to X$. Composition of these gives us a constant endomap on $X$, exactly as in the proof of Theorem \[tfae\]. But then ${\langle \! \langle X \rangle \! \rangle}$ makes sure that this constant endomap has a fixed point, which is (or allows us to extract) an inhabitant of $X$. Using $X \to P$ again, we get $P$. For the direction “$\leftarrow$”, assume we have a constant endomap $f$. We need to construct an inhabitant of ${\operatorname{\mathsf{fix}}}f$. In the expression on the right-hand side, choose $P$ to be ${\operatorname{\mathsf{fix}}}f$, and everything follows from Corollary \[cor:fixistrunc\]. The similarities between ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ and ${\langle \! \langle X \rangle \! \rangle}$ do not stop here. The following statement, together with the direction “$\rightarrow$” of the theorem that we have just proved, should be compared to the definition of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ (that is, Definition \[def:htruncation\]): \[thm:pop-like-trunc\] For any $X$, the type ${\langle \! \langle X \rangle \! \rangle}$ has the following properties: 1. $X \to {\langle \! \langle X \rangle \! \rangle}$ \[item:x-to-pop\] 2. ${ \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } ({\langle \! \langle X \rangle \! \rangle})$ (if function extensionality holds). The first point can be shown using the map $\epsilon$ as defined in . For the second, we use that ${\operatorname{\mathsf{fix}}}f$ is a proposition (Lemma \[fixedpoint\]). By function extensionality, a (dependent) function type is propositional if the codomain is (see Section \[sec2:preliminaries\]) and we are done. 
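The definition of populatedness and the observations just made can be rendered directly in Agda. In the following self-contained sketch, `Populated` is our ad-hoc name for ${\langle \! \langle - \rangle \! \rangle}$ and the prelude is again repeated for self-containment; the last function is the map $X \to {\langle \! \langle X \rangle \! \rangle}$ of Theorem \[thm:pop-like-trunc\], which needs no function extensionality.

```agda
{-# OPTIONS --without-K #-}
module PopulatedSketch where

-- prelude as in the previous sketches
data _==_ {A : Set} (x : A) : A -> Set where
  refl : x == x

record Sigma (A : Set) (B : A -> Set) : Set where
  constructor _,_
  field
    fst : A
    snd : B fst
open Sigma

wconst : {X Y : Set} -> (X -> Y) -> Set
wconst {X} f = (x y : X) -> f x == f y

fix : {X : Set} -> (X -> X) -> Set
fix {X} f = Sigma X (\ x -> f x == x)

-- Populatedness: every weakly constant endomap has a fixed point
-- (Definition [populatedness]); note that this lives in the same
-- universe as X and mentions no truncation.
Populated : Set -> Set
Populated X = (f : X -> X) -> wconst f -> fix f

-- If X is populated and has a weakly constant endomap, a point of X
-- is obtained by projecting out of the fixed point.
fromPopulated : {X : Set} (f : X -> X) -> wconst f -> Populated X -> X
fromPopulated f c pop = fst (pop f c)

-- Any point of X populates X (Theorem [thm:pop-like-trunc], item 1);
-- this is the map epsilon in disguise.
toPopulated : {X : Set} -> X -> Populated X
toPopulated x f c = f x , c (f x) x
```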
The following result, shown without using propositional truncation, is the analog to Theorem \[thm:maintheorem\]. \[thm:populated-coll\] Let $X$ be a type. If we have a constant endomap on $X$, then $({\langle \! \langle X \rangle \! \rangle} \to X)$. Assuming function extensionality, this implication can be reversed. If $X$ is propositional, then ${\operatorname{\mathsf{constEndo}}}X$ and $({\langle \! \langle X \rangle \! \rangle} \to X)$ are both inhabited (not requiring function extensionality). Given a constant endofunction $f$ on $X$, an inhabitant of ${\langle \! \langle X \rangle \! \rangle}$ gives us ${\operatorname{\mathsf{fix}}}f$ and thus $X$ by projection. For the other direction, if we have $({\langle \! \langle X \rangle \! \rangle} \to X)$, then the composition with (Theorem \[thm:pop-like-trunc\].\[item:x-to-pop\]) gives a constant endofunction on $X$. If $X$ is propositional, then the identity is clearly constant. As remarked above, ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}}$ is an idempotent monad in an appropriate sense, while ${\langle \! \langle - \rangle \! \rangle}$ is not even functorial (see Theorem \[tfae2\]). However, we do have the following: Assuming function extensionality, the notion of populatedness is idempotent in the sense that, for a type $X$, we have an equivalence $${\ensuremath{{\langle \! \langle {\langle \! \langle X \rangle \! \rangle} \rangle \! \rangle} {\simeq}{\langle \! \langle X \rangle \! \rangle}}\xspace}.$$ Theorem \[thm:pop-like-trunc\] shows that both sides are propositional and that there is a map “$\leftarrow$”. A map “$\rightarrow$” is given by Theorem \[thm:populated-coll\]. Taboos and Counter-Models {#sec6:taboos} ========================= In this section we look at the differences between the various notions of (anonymous) inhabitance we have encountered. We have, for a type $X$, the following chain of implications: $$X \, \Rightarrow \, {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \, \Rightarrow \, {\langle \! \langle X \rangle \! \rangle} \, \Rightarrow \, \neg\neg X. \label{eq:chain}$$ The first implication is trivial and the second is given by Theorem \[thm:trunc-to-pop\]. Maybe somewhat surprisingly, the last implication does not require function extensionality, as we do not need to prove that $\neg\neg X$ is propositional: to show $${\langle \! \langle X \rangle \! \rangle} \to \neg\neg X \; ,$$ let us assume $f : \neg X$. But then, $f$ can be composed with the unique function from the empty type into $X$, yielding a constant endomap on $X$, and obviously, this function cannot have a fixed point in the presence of $f$. Therefore, the assumption of ${\langle \! \langle X \rangle \! \rangle}$ would lead to a contradiction, as required. Under the assumption of ${\ensuremath{\mathsf{LEM}_{}}\xspace}$, all implications of the chain  except the first can be reversed as it is easy to show $${{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} ({{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} + \neg {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}) \to \neg\neg X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}.$$ Constructively, none of the implications of  should be reversible. To make that precise, we use what we call *taboos*, showing that the provability of a statement would imply the provability of another better understood statement which is known to be not provable. 
A taboo is essentially a type-theoretic *Brouwerian counterexample* (“constructive taboo”) or a homotopical analog (“homotopical taboo”). In this section, we present the following discussions: 1. We start by assuming that the first implication can be reversed, i.e. that we have a function ${{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$. It is easy to see that this assumption implies that all types are sets. We show the more interesting result that all equalities are decidable. As an additional argument, if every type has split support, a form of choice that does not belong to type theory is implied. Moreover, we observe that ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$ can be read as “the map ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is a split epimorphism” (where the latter notion must be read with care), and we show that already the weaker assumption that it is an epimorphism implies that all types are sets. 2. It would be nice if the second implication could be reversed, as this would imply that propositional truncation is definable in MLTT. However, this is logically equivalent to a certain weak version of the axiom of choice discussed below, which is not provable (but holds under ${\ensuremath{\mathsf{LEM}_{}}\xspace}$). 3. Assuming function extensionality, the last implication can be reversed if and only if ${\ensuremath{\mathsf{LEM}_{}}\xspace}$ holds. Inhabited and Merely Inhabited {#subsec:pure-trunc} ------------------------------ We first examine the question whether the first part of the chain  can be reversed. If $X$ is a type, it is weaker to have an inhabitant of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ than to have an inhabitant of $X$. It is unreasonable to expect that we can show in type theory that every type has split support, but it is interesting to see what it would imply. First of all, if we assume that all types have split support, then this in particular holds for path spaces, and by Theorem \[tfae\], every type is a set. This assumption also implies the axiom of choice [@HoTTbook Chapter 3.8]. If we have univalence for propositions and set quotients, this allows us to use Diaconescu’s proof of ${\ensuremath{\mathsf{LEM}_{}}\xspace}$ ([@Diaconescu], see [@HoTTbook Theorem 10.1.14]). We want to present a similar construction in the much more minimalistic theory that we consider in the current article. Using Theorem \[thm:maintheorem\], we can formulate the assumption that all types have split support without using truncations as “every type has a constant endofunction”, $$\label{eq:global-coll} {{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {\operatorname{\mathsf{constEndo}}}X.$$ From a constructive point of view, this is an interesting assumption. It clearly follows from ${\ensuremath{\mathsf{LEM}_{\infty}}\xspace}$: if we know an inhabitant of a type, we can immediately construct a constant endomap, and for the empty type, considering the identity function is sufficient. The assumption  contains some form of choice, but we do not expect that the general principle ${\ensuremath{\mathsf{LEM}_{\infty}}\xspace}$ can be derived in our setting. Hence, we may understand  as a weak form of ${\ensuremath{\mathsf{LEM}_{\infty}}\xspace}$. However, what we can derive is ${\ensuremath{\mathsf{LEM}_{\infty}}\xspace}$ for all path spaces, i.e. 
that all types are discrete, see Lemma \[martin:all-collapsible-discrete\] and Theorem \[theorem:coll-discrete\] below. \[martin:all-collapsible-discrete\] In basic MLTT (without function extensionality and without propositional truncations), let $A$ be a type and $a_0, a_1 : A$ two points. If the type ${({\ensuremath{a_0 =_{} x}\xspace}) + ({\ensuremath{a_1 =_{} x}\xspace})}$ has a constant endomap for all $x : A$ , then ${\ensuremath{a_0 =_{} a_1}\xspace}$ is decidable. As we will see in the proof, we need to know ${{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\not=_{{\ensuremath{\mathbf{2}}\xspace}} {{1_{{\ensuremath{\mathbf{2}}\xspace}}}}$ for Lemma \[martin:all-collapsible-discrete\], which can be proved using a universe. If we assume ${{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\not=_{{\ensuremath{\mathbf{2}}\xspace}} {{1_{{\ensuremath{\mathbf{2}}\xspace}}}}$, the lemma is true in an even weaker setting without a type universe. Before giving the proof of Lemma \[martin:all-collapsible-discrete\], we state an immediate corollary: \[theorem:coll-discrete\] If every type has a constant endofunction then every type has decidable equality, $$({{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {\operatorname{\mathsf{constEndo}}}X) \to {{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {\mathsf{isDiscrete}}X.$$ For (technical and conceptual) convenience, we regard the elements $a_0, a_1$ as a single map $$a : {\ensuremath{\mathbf{2}}\xspace}\to A$$ and we use $$E_x {\vcentcolon\equiv}{\Sigma_{i : {\ensuremath{\mathbf{2}}\xspace}} } \; {\ensuremath{a_i =_{} x}\xspace}$$ in place of the type ${({\ensuremath{a_0 =_{} x}\xspace}) + ({\ensuremath{a_1 =_{} x}\xspace})}$. In a theory with propositional truncation, the *image* of $a$ can be defined to be ${\Sigma_{x:A} } {{\mathopen{}\left\Vert E_x\right\Vert_{}\mathclose{}}}$ [@HoTTbook Definition 7.6.3]. By assumption, we have a family of constant endofunctions $f_x$ on $E_x$, and by the discussion above, we can essentially regard the type $$E {\vcentcolon\equiv}{\Sigma_{x:A} } {\operatorname{\mathsf{fix}}}f_x,$$ which can be unfolded to $${\Sigma_{x:A} } {\Sigma_{(i,p) : E_x} } {\ensuremath{f_x(i,p) =_{} (i,p)}\xspace},$$ as the image of $a$. It is essentially the observation that we can define this image that allows us to mimic Diaconescu’s argument. Recall from  that $\epsilon$ is the canonical function that maps a point of a type to a fixed point of a given endofunction on that type. Clearly, $a$ induces a map $$\begin{aligned} &r : {\ensuremath{\mathbf{2}}\xspace}\to E \\ &r(i) {\vcentcolon\equiv}(a_i , \epsilon(i, {\ensuremath{\mathsf{refl}_{a_i}}\xspace})).\end{aligned}$$ Using that the second component is an inhabitant of a proposition, we have $$\label{eq:ar} {\ensuremath{r(i) =_{} r(j)}\xspace} \, \Leftrightarrow \, {\ensuremath{a_i =_{} a_j}\xspace}.$$ The type $E$ can be understood as the quotient of ${\ensuremath{\mathbf{2}}\xspace}$ by the equivalence relation $\sim$, given by $i \sim j {\equiv}{\ensuremath{a_i =_{} a_j}\xspace}$. If $E$ was the image of $a$ in the ordinary sense [@HoTTbook Definition 7.6.3], the axiom of choice would be necessary to find a section of $r$ (see [@HoTTbook Theorem 10.1.14]). In our situation, this section is given by a simple projection, $$\begin{aligned} &s : E \to {\ensuremath{\mathbf{2}}\xspace}\\ &s(x, ((i,p) , q)) {\vcentcolon\equiv}i.\end{aligned}$$ It is easy to see that $s$ is indeed a section of $r$ in the sense of ${{\Pi_{e:E} }}{\ensuremath{r(s(e)) =_{} e}\xspace}$. 
Given $(x, ((i,p),q)) : E$, applying first $s$, then $r$ leads to $(a_i, \epsilon(i, {\ensuremath{\mathsf{refl}_{a_i}}\xspace}))$. Equality of these expressions is equality of the first components due to the propositional second component. But $p$ is a proof of ${\ensuremath{a_i =_{} x}\xspace}$. From that property, we can conclude that, for any $e_0, e_1:E$, $$\label{eq:ere} {\ensuremath{e_0 =_{} e_1}\xspace} \, \Leftrightarrow \, {\ensuremath{s(e_0) =_{} s(e_1)}\xspace}.$$ Combining and yields $${\ensuremath{a_i =_{} a_j}\xspace} \, \Leftrightarrow \, {\ensuremath{s(r(i)) =_{} s(r(j))}\xspace},$$ where the right-hand side is an equality in ${\ensuremath{\mathbf{2}}\xspace}$ and thus always decidable. In particular, ${\ensuremath{a_0 =_{} a_1}\xspace}$ is hence decidable. Another consequence of the assumption  is a form of choice that does not belong to intuitionistic type theory. In order to formulate and prove this, we need a few definitions. We say that a relation $R : X \times X \to {\ensuremath{\mathcal{U}}\xspace}$ is *propositionally valued* if $${{{\Pi_{x , y:X} }}} { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } (R(x,y)).$$ The $R$-*image* of a point $x : X$ is $$R_x {\vcentcolon\equiv}{\Sigma_{y:X} } {R(x,y)}.$$ We say that $R$ is *functional* if its point-images are all propositions: $${{\Pi_{x:X} }} { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } R_x.$$ We say that two relations $R,S : X \times X \to {\ensuremath{\mathcal{U}}\xspace}$ *have the same domain* if $${{\Pi_{x:X} }} R_x \Leftrightarrow S_x,$$ and that $S$ is a *subrelation* of $R$ if $${{{\Pi_{x , y:X} }}} S(x,y) \to R(x,y).$$ If all types have constant endofunctions, then every binary relation has a functional, propositionally valued subrelation with the same domain. Assume that $R : X \times X \to U$ is given. For $x:X$, let $k_x : R_x \to R_x$ be the constant map given by the assumption  that all types have constant endofunctions. Define further $$S (x,y) {\vcentcolon\equiv}{\Sigma_{a:R(x,y)} } {{\ensuremath{(y, a) =_{} k_x(y , a)}\xspace}}.$$ Then $S$ is a subrelation of $R$ by construction. We observe that $S_x$ is equivalent to ${\operatorname{\mathsf{fix}}}(k_x)$ and therefore propositional (by Lemma \[fixedpoint\]), proving that $S$ is functional. Together with Corollary \[cor:fixistrunc\], this further shows $$R_x \Leftrightarrow {\operatorname{\mathsf{fix}}}{k_x} \Leftrightarrow S_x,$$ showing that $R$ and $S$ have the same domain. What remains to show is that $S(x,y)$ is always a proposition. Let $s, s' : S(x,y)$. As $S_x$ is propositional we know ${\ensuremath{(y,s) =_{S_x} (y,s')}\xspace}$. 
By the standard lemma this type corresponds to a dependent pair type with components $$\begin{aligned} &p : {\ensuremath{y =_{X} y}\xspace} \\ &q : {\ensuremath{{\ensuremath{{p}_{*}\mathopen{}\left({s}\right)\mathclose{}}\xspace} =_{S(x,y)} s'}\xspace}.\end{aligned}$$ In our case, as every type is a set, we have ${\ensuremath{p =_{} {\ensuremath{\mathsf{refl}_{y}}\xspace}}\xspace}$, and $q$ gives us the required proof of ${\ensuremath{s =_{S(x,y)} s'}\xspace}$. Instead of the logically equivalent formulation , let us now assume the original assumption that ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$ can be reversed, that is, $$\label{eq:all-h-stable-2} {{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X.$$ Note that a map $h : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$ is automatically a section of ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ in the sense of $${{\Pi_{z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} }} {\ensuremath{{{\mathopen{}\left|h(z)\right|_{}^{}\mathclose{}}} =_{} z}\xspace}$$ as any two inhabitants of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ are equal. Therefore, we may read  as: $$\text{For any type $X$, the map ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is a \emph{split epimorphism}.}$$ We want to consider a weaker assumption, namely $$\text{For any type $X$, the map ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is an \emph{epimorphism},}$$ where we call $e : U \to V$ an *epimorphism* if, for any type $W$ and any two functions $f,g : V \to W$, we have $$\label{eq:e-is-epi-def} ({{\Pi_{u : U} }} {\ensuremath{f(e \, u) =_{} g(e \, u)}\xspace}) \to {{\Pi_{v : V} }} {\ensuremath{f \, v =_{} g \, v}\xspace}.$$ Of course, under function extensionality, $e$ is an epimorphism if and only if, for all $W,f,g$, we have $${\ensuremath{f \circ e =_{} g \circ e}\xspace} \to {\ensuremath{f =_{} g}\xspace}.$$ A caveat is required. Our definition of *epimorphism* is the direct naive translation of the usual $1$-categorical notion into type theory. However, the category of types and functions with the type of equalities is not only an ordinary category, but rather an $(\infty,1)$-category. The definition  makes sense in the category of sets [@HoTTbook Chapter 10.1], where equalities are propositional. However, the property of being an epimorphism in our sense is not propositional and it could rightfully be argued that it might not be the “correct” definition in a context where not every type is a set, similarly to how we argued that ${\ensuremath{\mathsf{LEM}_{\infty}}\xspace}$ is a problematic version of the principle of excluded middle. Despite this, we use the notion as we think that it helps providing an intuitive meaning to the plain type expression . \[lem:global-epi-to-set\] Let $Y$ be a type. If the map ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : ({\ensuremath{y_1 =_{} y_2}\xspace}) \to {{\mathopen{}\left\Vert {\ensuremath{y_1 =_{} y_2}\xspace}\right\Vert_{}\mathclose{}}}$ is an epimorphism for any points $y_1, y_2 : Y$, then $Y$ is a set. Assume $Y, y_1, y_2$ are given. 
Define two functions $$f, g : {{\mathopen{}\left\Vert {\ensuremath{y_1 =_{} y_2}\xspace}\right\Vert_{}\mathclose{}}} \to Y$$ by $$\begin{aligned} &f(q) {\vcentcolon\equiv}y_1,\\ &g(q) {\vcentcolon\equiv}y_2,\end{aligned}$$ that is, $f$ and $g$ are constant at $y_1$ and $y_2$, respectively. With these concrete choices, our assumption \[eq:e-is-epi-def\] with $e \equiv {{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$ becomes $$\left({{\ensuremath{y_1 =_{} y_2}\xspace}} \to {\ensuremath{y_1 =_{} y_2}\xspace}\right) \to \left({{\mathopen{}\left\Vert {\ensuremath{y_1 =_{} y_2}\xspace}\right\Vert_{}\mathclose{}}} \to {\ensuremath{y_1 =_{} y_2}\xspace}\right)$$ which, of course, gives us a function $${{\mathopen{}\left\Vert {\ensuremath{y_1 =_{} y_2}\xspace}\right\Vert_{}\mathclose{}}} \to {\ensuremath{y_1 =_{} y_2}\xspace}.$$ The statement of the lemma then follows from Theorem \[tfae\]. The following result summarizes the statements of Theorem \[theorem:coll-discrete\] and Lemma \[lem:global-epi-to-set\]: In basic MLTT with weak propositional truncation,

1. if ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is a split epimorphism for every $X$, then all types have decidable equality;

2. if ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is an epimorphism for every $X$, then all types are sets.

The first part is a reformulation of Theorem \[theorem:coll-discrete\], while the second part is a corollary of Lemma \[lem:global-epi-to-set\].

Merely Inhabited and Populated {#sec72:pop-inh}
------------------------------

Assume that the second step in \[eq:chain\] can be reversed, meaning that we have $${{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {\langle \! \langle X \rangle \! \rangle} \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}.$$ Repeated use of the Fixed Point Lemma leads to a couple of interesting logically equivalent statements. In the previous subsection, we have discussed that we cannot show that every type has split support. However, a weaker version of this is provable: \[pophstable\] For every type $X$, the statement that it has split support is populated, $${\langle \! \langle {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X \rangle \! \rangle}.$$ To demonstrate the different possibilities that the logically equivalent formulations of populatedness offer, we want to give more than one proof. The first one uses Definition \[populatedness\]: Assume we are given a constant endofunction $f$ on ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$. We need to construct a fixed point of $f$ or, correspondingly, any inhabitant of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$. By Theorem \[thm:maintheorem\], a constant function $g : X \to X$ is enough for this. Given $x:X$, we may apply $f$ to the function that is everywhere $x$, yielding an inhabitant of ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$. Applying it to ${{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}$ gives an element of $X$, and we define $g(x)$ to be this element. The proof that $f$ is constant immediately translates to a proof that $g$ is constant. Alternatively, we can use the logically equivalent formulation of populatedness, proved in Theorem \[populatedLargeSmall\]: Assume $P$ is a proposition and we have a proof of $$P \; \Leftrightarrow \; ({{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X).$$ We need to show $P$.
The logical equivalence above immediately provides an inhabitant of $X \to P$, and, by the rules of the propositional truncation, therefore ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to P$. Assume ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. We get $P$, thus ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$ with the above equivalence, and therefore $X$ (using the assumed ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ again). This shows ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$, and consequently, $P$. Finally, we can also use that ${\langle \! \langle - \rangle \! \rangle}$ can be written in terms of ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}}$: Using Lemma \[lem:popishstabletoh\]\[item:popchar-3\], the statement that needs to be shown becomes $$\left({{\bigl\Vert {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X\bigr\Vert_{}}} \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X\right) \to \left({{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X\right),$$ which is immediate. The assumption that populatedness and mere inhabitance are equivalent has a couple of “suspicious” consequences, as we want to show now. \[tfae2\] In MLTT with weak propositional truncation, the following are logically equivalent: 1. every populated type is merely inhabited, \[tfae:1\] $${{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {\langle \! \langle X \rangle \! \rangle} \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$$ 2. every type merely has split support, \[tfae:2\] $${{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {{\bigl\Vert {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X\bigr\Vert_{}}}$$ 3. every proposition is *projective* in the following sense: \[tfae:3\] $${{\Pi_{P : {\ensuremath{\mathcal{U}}\xspace}} }} { \edef\a{\compare-2-1\empty\empty} \if\a1 {\ensuremath{\operatorname{\mathsf{isContr}}}}\else \edef\b{\compare-1-1\empty\empty} \if\b1 {\ensuremath{\operatorname{\mathsf{isProp}}}}\else \edef\c{-1} \if0\c {\ensuremath{\operatorname{\mathsf{isSet}}}}\else \mathsf{is}\mbox{-}{-1}\mbox{-}\mathsf{type} \fi\fi\fi } P \to {{\Pi_{Y:P \to {\ensuremath{\mathcal{U}}\xspace}} }} ({\Pi_{p : P} } {{\mathopen{}\left\Vert Y(p)\right\Vert_{}\mathclose{}}}) \to {{\mathopen{}\left\Vert {\Pi_{P} } Y\right\Vert_{}\mathclose{}}}$$ (note that this is the *axiom of choice* [@HoTTbook Chapter 3.8] for propositions, without the requirement that $Y$ is a family of sets) 4. ${\langle \! \langle - \rangle \! \rangle} : {\ensuremath{\mathcal{U}}\xspace}\to {\ensuremath{\mathcal{U}}\xspace}$ is functorial in the sense that \[tfae:4\] $${{{\Pi_{X , Y:{\ensuremath{\mathcal{U}}\xspace}} }}} (X \to Y) \to ({\langle \! \langle X \rangle \! \rangle} \to {\langle \! \langle Y \rangle \! \rangle}),$$ where this naming is justified at least in the presence of function extensionality which implies that ${\langle \! \langle X \rangle \! \rangle} \to {\langle \! \langle Y \rangle \! \rangle}$ is propositional, ensuring ${\ensuremath{{\langle \! \langle g \circ f \rangle \! \rangle} =_{} {\langle \! \langle g \rangle \! \rangle} \circ {\langle \! \langle f \rangle \! \rangle}}\xspace}$. Further, \[tfae:4\] can be formulated in MLTT without assumptions on the availability of propositional truncation. If it holds, then ${\langle \! \langle - \rangle \! \rangle}$ satisfies the recursion principle of the weak propositional truncation. Additionally assuming function extensionality, ${\langle \! \langle - \rangle \! 
\rangle}$ can then serve as an implementation of the weak propositional truncation. Let us first show the final claim. If $Y$ is propositional, then ${\langle \! \langle Y \rangle \! \rangle} \to Y$ by Theorem \[thm:populated-coll\]. Together with \[tfae:4\], this gives the claimed recursion principle. The rest of the properties of the weak propositional truncation is given by Theorem \[thm:pop-like-trunc\]. Let us show the logical equivalence of the four types. The above observation immediately implies \[tfae:4\] $\Rightarrow$ \[tfae:1\]. The direction \[tfae:1\] $\Rightarrow$ \[tfae:4\] is also immediate by functoriality of ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}}$. The logical equivalence of the first two points follows easily from what we already know. \[tfae:1\] $\Rightarrow$ \[tfae:2\] is an application of Lemma \[pophstable\], while \[tfae:2\] $\Rightarrow$ \[tfae:1\] follows from Lemma \[lem:popishstabletoh\]. Let us now show \[tfae:1\] $\Rightarrow$ \[tfae:3\]. Let $P$ be some proposition and $Y : P \to {\ensuremath{\mathcal{U}}\xspace}$ some family of types. If we assume \[tfae:1\], it is then enough to prove $$\left({\Pi_{p : P} } {{\mathopen{}\left\Vert Y(p)\right\Vert_{}\mathclose{}}}\right) \to {\langle \! \langle {\Pi_{P} } Y \rangle \! \rangle}.$$ By Lemma \[lem:popishstabletoh\], it is enough to show $$\label{eq:auxtfa} \left({\Pi_{p : P} } {{\mathopen{}\left\Vert Y(p)\right\Vert_{}\mathclose{}}}\right) \to ({{\mathopen{}\left\Vert {\Pi_{P} } Y\right\Vert_{}\mathclose{}}} \to {\Pi_{P} } Y) \to {\Pi_{P} } Y.$$ Under several assumptions, one of them being that some $p_0 : P$ is given, we need to construct an inhabitant of $Y(p_0)$. Recall the principle of the *neutral contractible exponent* that we used in the proof of Theorem \[thm:factor-coprod\]. Here, it allows us to replace ${\Pi_{P} } Y$ by $Y(p_0)$ and ${\Pi_{p:P} } {{\mathopen{}\left\Vert Y(p)\right\Vert_{}\mathclose{}}}$ by ${{\mathopen{}\left\Vert Y(p_0)\right\Vert_{}\mathclose{}}}$, and the type  becomes $${{\mathopen{}\left\Vert Y(p_0)\right\Vert_{}\mathclose{}}} \to ({{\mathopen{}\left\Vert Y(p_0)\right\Vert_{}\mathclose{}}} \to Y(p_0)) \to Y(p_0),$$ which is obvious. \[tfae:3\] $\Rightarrow$ \[tfae:2\] can be seen easily by taking $P$ to be ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ and $Y$ to be constantly $X$. Consider the third of the four statements in Theorem \[tfae2\]. When $Y(p)$ is a set with exactly two elements for every $p : P$, this amounts to *the world’s simplest axiom of choice* [@wsac], which fails in some toposes. We expect that this makes it possible to show that, in MLTT with weak propositional truncation, ${{{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} {\langle \! \langle X \rangle \! \rangle} \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}}$ is not derivable. Populated and Non-Empty ----------------------- If we can reverse the last implication of the chain, we have $${{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} \neg\neg X \to {\langle \! \langle X \rangle \! \rangle}.$$ To show that this cannot be provable, we show that it is equivalent to ${\ensuremath{\mathsf{LEM}_{}}\xspace}$, a constructive taboo. With function extensionality, we have the following (logical) equivalence: $$\left( {{\Pi_{X : {\ensuremath{\mathcal{U}}\xspace}} }} \neg\neg X \to {\langle \! \langle X \rangle \! \rangle} \right) \; \Leftrightarrow \; {\ensuremath{\mathsf{LEM}_{}}\xspace}.$$ The direction “$\leftarrow$” is easy: from $X \to {\langle \! \langle X \rangle \! 
\rangle}$, we get $\neg\neg X \to \neg\neg {\langle \! \langle X \rangle \! \rangle}$. As ${\langle \! \langle X \rangle \! \rangle}$ is propositional, ${\ensuremath{\mathsf{LEM}_{}}\xspace}$ gives us $\neg\neg {\langle \! \langle X \rangle \! \rangle} \to {\langle \! \langle X \rangle \! \rangle}$. For the direction “$\rightarrow$”, assume that $P$ is a proposition. Thus, the type $P + \neg P$ is a proposition as well, and hence, the identity function on $P + \neg P$ is constant. It is straightforward to construct a proof of $\neg\neg \left(P + \neg P\right)$. By the assumption, this means that $P + \neg P$ is populated, i.e. every constant endomap on it has a fixed point. Therefore, we can construct a fixed point of the identity function, which is equivalent to proving $P + \neg P$. Propositional Truncation with Judgmental Computation Rule {#sec9:judgm-beta} ========================================================= Propositional truncation is often defined to satisfy the judgmental computation rule [@HoTTbook Chapter 3.7], $${\operatorname{\mathsf{rec_{tr}}}}(P, h, f, {{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})\; {\equiv}_\beta \; f(x) \label{eq:jdgm-beta}$$ for any function $f : X \to P$ where $x:X$ and $P$ is propositional. In our discussion, we did not assume it to hold so far. We certainly do not want to argue that a theory without this judgmental equation is to be preferred, we simply did not need it. We agree with the very common view (see the introduction of [@HoTTbook Chapter 6]) that judgmental computation rules are often advantageous, not only for truncations, but for *higher inductive types* [@HoTTbook Chapter 6] in general. Without them, some expressions will need to involve a ridiculous amount of transporting, just to make them type-check, and the “computation” will have to be done manually in order to simplify terms. If  is assumed, it suggests itself to also assume a judgmental computation rule for the induction principle, that is $${\operatorname{\mathsf{ind_{tr}}}}(P, h, f, {{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})\; {\equiv}_\beta \; f(x), \label{eq:jdgm-beta-dep}$$ where $P : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\ensuremath{\mathcal{U}}\xspace}$ might now be a type family and $f: {\Pi_{z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} }P(z)$ is a dependent function rather than a simple function. Interestingly, it does not seem to be possible to construct ${\operatorname{\mathsf{ind_{tr}}}}$ from ${\operatorname{\mathsf{rec_{tr}}}}$ such that  holds if  holds. In particular, the term constructed in Lemma \[lem:ind-from-rec\] does not have the expected judgmental computation rule. Having said this, the judgmental $\beta$-rules do have some other noteworthy consequences. Unlike the previous results, the statements in this part of our article do need the computation rules to hold judgmentally. So far, all our lemmata and theorems have been internal to type theory. This is only partially the case for the results from this section, as any statement that some equality holds judgmentally is a meta-theoretic property. We thus can not implement such a statement as a type in a proof assistant such as Agda, but we can still use Agda to check our claims; for example, if $$\begin{aligned} &p : {\ensuremath{x =_{} y}\xspace} \\ &p {\vcentcolon\equiv}{\ensuremath{\mathsf{refl}_{x}}\xspace}\end{aligned}$$ type-checks, we may conclude that the equality does hold judgmentally. 
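As a toy illustration of this technique, consider the following self-contained Agda fragment. The example with natural number addition is not part of the development in this article and serves only to show how `refl` certifies a judgmental equality, while a merely propositional equality requires an actual proof.

```agda
{-# OPTIONS --without-K #-}
module ReflCheck where

data _==_ {A : Set} (x : A) : A -> Set where
  refl : x == x

data Nat : Set where
  zero : Nat
  suc  : Nat -> Nat

_+_ : Nat -> Nat -> Nat
zero  + n = n
suc m + n = suc (m + n)

-- zero + n reduces to n by the first defining clause, so refl
-- type-checks: the equality holds judgmentally.
left-unit : (n : Nat) -> (zero + n) == n
left-unit n = refl

-- n + zero is only propositionally equal to n: refl is rejected
-- here, and a proof by induction is needed instead.
ap-suc : {m n : Nat} -> m == n -> suc m == suc n
ap-suc refl = refl

right-unit : (n : Nat) -> (n + zero) == n
right-unit zero    = refl
right-unit (suc n) = ap-suc (right-unit n)
```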
The Interval ------------ The interval ${\mathbb I}$ as a higher inductive type [@HoTTbook Chapter 6.3] is a type in homotopy type theory that consists of two points $i_0, i_1 : {\mathbb I}$ and a path ${\operatorname{\mathsf{seg}}}: {\ensuremath{i_0 =_{{\mathbb I}} i_1}\xspace}$ between them. Its *recursion*, or *non-dependent elimination* principle says: Given $$\begin{aligned} &Y : {\ensuremath{\mathcal{U}}\xspace}\label{eq:i-ass-1} \\ &y_0 : Y \\ &y_1 : Y \\ &p : {\ensuremath{y_0 =_{} y_1}\xspace}, \label{eq:i-ass-4}\end{aligned}$$ there exists a function $f : {\mathbb I}\to Y$ such that $$\begin{aligned} &f(i_0) {\equiv}y_0 \label{eq:i-point-1} \\ &f(i_1) {\equiv}y_1 \label{eq:i-point-2} \\ &{\ensuremath{{\ensuremath{\mathsf{ap}_{f}}\xspace} ({\operatorname{\mathsf{seg}}}) =_{} p}\xspace}. \label{eq:i-seg}\end{aligned}$$ For the interval’s induction principle, we refer to [@HoTTbook Chapter 6.3]. The interval is a contractible type and as such equivalent to the unit type. However, this does not make it entirely boring; it is the *judgmental* equalities that matter. Note that the *computation rules* for the *points* are judgmental (\[eq:i-point-1\],\[eq:i-point-2\]), while the rule for the path  is only given by an equality proof. We will now show that ${{\mathopen{}\left\Vert {\ensuremath{\mathbf{2}}\xspace}\right\Vert_{}\mathclose{}}}$ can be regarded as the interval. \[thm:bool-is-interval\] For the type ${{\mathopen{}\left\Vert {\ensuremath{\mathbf{2}}\xspace}\right\Vert_{}\mathclose{}}}$, the recursion principle of the interval (including the computational behavior) is derivable using , and the induction principle follows from . We only show that the recursion principle is derivable, which will be sufficient for the subsequent developments. The induction principle can be derived very similarly. We need to show that, under the assumptions (\[eq:i-ass-1\]-\[eq:i-ass-4\]), there is a function $f : {{\mathopen{}\left\Vert {\ensuremath{\mathbf{2}}\xspace}\right\Vert_{}\mathclose{}}} \to Y$ such that $$\begin{aligned} &f({{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}) {\equiv}y_0 \label{eq:bool-point-1} \\ &f({{\mathopen{}\left|{{1_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}) {\equiv}y_1 \label{eq:bool-point-2} \\ &{\ensuremath{{\ensuremath{\mathsf{ap}_{f}}\xspace} ({\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}, {{\mathopen{}\left|{{1_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}}) =_{} p}\xspace}. \label{eq:bool-seg}\end{aligned}$$ We define $$\begin{aligned} &g : {\ensuremath{\mathbf{2}}\xspace}\to {\Sigma_{y : Y} } {\ensuremath{y_0 =_{} y}\xspace} \\ &g ({{0_{{\ensuremath{\mathbf{2}}\xspace}}}}) {\vcentcolon\equiv}(y_0, {\ensuremath{\mathsf{refl}_{}}\xspace}) \\ &g ({{1_{{\ensuremath{\mathbf{2}}\xspace}}}}) {\vcentcolon\equiv}(y_1, p).\end{aligned}$$ As ${\Sigma_{y : Y} } {\ensuremath{y_0 =_{} y}\xspace}$ is contractible, $g$ can be extended to a function $\overline g : {{\mathopen{}\left\Vert {\ensuremath{\mathbf{2}}\xspace}\right\Vert_{}\mathclose{}}} \to {\Sigma_{y : Y} } {\ensuremath{y_0 =_{} y}\xspace}$, and we define $f {\vcentcolon\equiv}{\mathsf{fst}}\circ \overline g$. It is easy to check that $f$ has indeed the required judgmental properties  and . 
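For instance, the first of these two equalities can be checked by unfolding definitions, assuming that $\overline g$ is the canonical extension of $g$ obtained from ${\operatorname{\mathsf{rec_{tr}}}}$ and that the judgmental rule \[eq:jdgm-beta\] holds: $$f({{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}) \;{\equiv}\; {\mathsf{fst}}\big(\overline g({{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}})\big) \;{\equiv}\; {\mathsf{fst}}\big(g({{0_{{\ensuremath{\mathbf{2}}\xspace}}}})\big) \;{\equiv}\; {\mathsf{fst}}(y_0, {\ensuremath{\mathsf{refl}_{}}\xspace}) \;{\equiv}\; y_0,$$ where the last step is the judgmental projection rule for pairs; the equality for ${{\mathopen{}\left|{{1_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}$ is analogous.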
The equality \[eq:bool-seg\] is only slightly more difficult: First, using the definition of $f$ and a standard functoriality property of ${\ensuremath{\mathsf{ap}_{}}\xspace}$ [@HoTTbook Lemma 2.2.2 (iii)], we observe that ${\ensuremath{\mathsf{ap}_{f}}\xspace} ({\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}, {{\mathopen{}\left|{{1_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}})$ may be written as $${\ensuremath{\mathsf{ap}_{{\mathsf{fst}}}}\xspace} ({\ensuremath{\mathsf{ap}_{\overline g}}\xspace} ({\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}, {{\mathopen{}\left|{{1_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}})).$$ But here, the path ${\ensuremath{\mathsf{ap}_{\overline g}}\xspace} ({\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|{{0_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}, {{\mathopen{}\left|{{1_{{\ensuremath{\mathbf{2}}\xspace}}}}\right|_{}^{}\mathclose{}}}})$ is an equality in the contractible type ${\ensuremath{(y_0, {\ensuremath{\mathsf{refl}_{}}\xspace}) =_{} (y_1 , p)}\xspace}$ (note that both terms inhabit a contractible type themselves) and thereby unique. In particular, it is equal to the path which is built out of two components, the first of which can be chosen to be $p$ (the second component can then be taken to be a canonically constructed inhabitant of ${\ensuremath{{\ensuremath{{p}_{*}\mathopen{}\left({{\ensuremath{\mathsf{refl}_{}}\xspace}}\right)\mathclose{}}\xspace} =_{} p}\xspace}$). Function Extensionality {#subsec:beta-funext} ----------------------- It is known that the interval ${\mathbb I}$ with its judgmental computation rules implies function extensionality. We may therefore conclude that propositional truncation is sufficient as well. \[lem:int-funext\] In a type theory with ${\mathbb I}$ and the judgmental $\eta$-law for functions (which we assume), function extensionality is derivable. Assume $X,Y$ are types and $f,g : X \to Y$ are functions, and $h : {{\Pi_{x : X} }} {\ensuremath{f(x) =_{} g(x)}\xspace}$ a proof that they are pointwise equal. Using the recursion principle of ${\mathbb I}$, we may then define a family $$k : X \to {\mathbb I}\to Y$$ of functions, indexed over $X$, such that $k (x , i_0) {\equiv}f(x)$ and $k (x , i_1) {\equiv}g(x)$ for all $x:X$; of course, we use $h(x)$ as the required family of paths. Switching the arguments gives a function $$k' : {\mathbb I}\to X \to Y$$ with the property that $k' (i_0) {\equiv}f$ and $k' (i_1) {\equiv}g$ (by $\eta$ for functions), and thereby ${\ensuremath{\mathsf{ap}_{k'}}\xspace} ({\operatorname{\mathsf{seg}}}) : {\ensuremath{f =_{} g}\xspace}$. The combination of Theorem \[thm:bool-is-interval\] and Lemma \[lem:int-funext\] implies: In type theory with propositional truncation that satisfies the judgmental computation rule, function extensionality can be derived. Judgmental Factorization ------------------------ The judgmental computation rule of ${{\mathopen{}\left\Vert -\right\Vert_{}\mathclose{}}}$ also allows us to factor any function *judgmentally* through the propositional truncation as soon as it can be factored in any way. This observation is inspired by and a generalization of the fact that ${{\mathopen{}\left\Vert {\ensuremath{\mathbf{2}}\xspace}\right\Vert_{}\mathclose{}}}$ satisfies the judgmental properties of the interval (Theorem \[thm:bool-is-interval\]).
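To see the connection concretely, consider (as a sketch, with notation as in Theorem \[thm:bool-is-interval\]) the map $f : {\ensuremath{\mathbf{2}}\xspace}\to Y$ with $f({{0_{{\ensuremath{\mathbf{2}}\xspace}}}}) {\vcentcolon\equiv}y_0$ and $f({{1_{{\ensuremath{\mathbf{2}}\xspace}}}}) {\vcentcolon\equiv}y_1$. It factors, up to pointwise equality, through ${{\mathopen{}\left\Vert {\ensuremath{\mathbf{2}}\xspace}\right\Vert_{}\mathclose{}}}$ via the constant map $\overline f {\vcentcolon\equiv}{\lambda z .} y_0$, witnessed by $h({{0_{{\ensuremath{\mathbf{2}}\xspace}}}}) {\vcentcolon\equiv}{\ensuremath{\mathsf{refl}_{}}\xspace}$ and $h({{1_{{\ensuremath{\mathbf{2}}\xspace}}}}) {\vcentcolon\equiv}{\mathord{{p}^{-1}}}$. The judgmentally factored map provided by Theorem \[thm:jdg-factor-nondep\] then recovers exactly the point computation rules \[eq:bool-point-1\] and \[eq:bool-point-2\]; the path rule \[eq:bool-seg\] still requires the argument given in the proof of Theorem \[thm:bool-is-interval\].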
\[thm:jdg-factor-nondep\] Any (non-dependent) function that factors through the propositional truncation can be factored judgmentally: assume types $X,Y$ and a function $f : X \to Y$ between them. Assume that there is $\overline f : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ such that $$h : {{\Pi_{x:X} }} {\ensuremath{f(x) =_{} \overline f({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})}\xspace}.$$ Then, we can construct a function $f' : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ such that, for all $x:X$, we have $$f(x) {\equiv}f'({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}),$$ which means that the type ${{\Pi_{x:X} }} {\ensuremath{f(x) =_{} f'({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})}\xspace}$ is inhabited by the function that is constantly ${\ensuremath{\mathsf{refl}_{}}\xspace}$. We define a function $$\begin{aligned} &g : X \to {\Pi_{z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } {\Sigma_{y:Y} } {\ensuremath{y =_{} \overline f(z)}\xspace} \label{eq:jdgm-fact-g-type} \\ &g(x) {\vcentcolon\equiv}{\lambda z .} \left( f(x), h(x) { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{\ensuremath{\mathsf{ap}_{\overline f}}\xspace} ({\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}, z}) \right)\end{aligned}$$ By function extensionality and the fact that singletons are contractible, the codomain of $g$ is contractible, and thus, we can extend $g$ and get $$\overline g : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\Pi_{z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } {\Sigma_{y:Y} } {\ensuremath{y =_{} \overline f(z)}\xspace}. \label{eq:jdgm-fact-lift-g-type}$$ We define $$\label{eq:f'-for-judgmental} f' {\vcentcolon\equiv}{\lambda z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} .} {\mathsf{fst}}(\overline g \, z \, z)$$ and it is immediate to check that $f'$ has the required properties. Note that in the above argument we have only used . We have avoided  by introducing the variable $z$ in , which is essentially a duplication of the first argument of the function, as it becomes apparent in . Furthermore, we have assumed that $f$ is a non-dependent function. The question does not make sense if $f$ is dependent in the sense of $f : {\Pi_{x:X} }Y(x)$; however, it does for $f: {\Pi_{x : X} }Y({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})$. In this case, it seems to be unavoidable to use , but the above proof still works with minimal adjustments. We state it for the sake of completeness. Let $X$ be a type and $Y : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\ensuremath{\mathcal{U}}\xspace}$ a type family. 
Assume we have functions $$\begin{aligned} & f : {\Pi_{x:X} }{Y({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})} \\ & \overline f : {\Pi_{z : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } Y(z) \end{aligned}$$ such that $$h : {{\Pi_{x:X} }} {\ensuremath{f(x) =_{Y({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})} \overline f({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})}\xspace}.$$ Then, we can construct a function $f' : {\Pi_{z : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} }Y(z)$ with the property that for any $x:X$, we have the judgmental equality $$f(x) {\equiv}f'({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}).$$ Because we allow ourselves to use \[eq:jdgm-beta-dep\], the proof actually becomes simpler than the one above. This time, we can define $$\begin{aligned} &g : {\Pi_{x : X} } {\Sigma_{y:Y} } {\ensuremath{y =_{} \overline f({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})}\xspace} \\ &g(x) {\vcentcolon\equiv}\left( f(x), h(x) \right).\end{aligned}$$ Using \[eq:jdgm-beta-dep\], we get $$\overline g : {\Pi_{z : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } {\Sigma_{y:Y} } {\ensuremath{y =_{} \overline f(z)}\xspace}.$$ Then, $${\mathsf{fst}}\circ \overline g$$ fulfils the required condition. An Invertibility Puzzle ----------------------- For a type $X$, the function ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ turns an element $x : X$ into an anonymous inhabitant ${{\mathopen{}\left|x\right|_{}^{}\mathclose{}}} : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. It is thus reasonable to think of ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$ as a function that hides information. However, as we will demonstrate, this interpretation is only justified as long as we think of internal properties. We will show that the function ${{\mathopen{}\left|-\right|_{}^{}\mathclose{}}}$ does not erase any meta-theoretical information in the following sense: Assume $z : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$ is defined to be ${{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}$ for some $x : X$. Without looking at this definition, we can recover $x$ (e.g. in a proof assistant, $z$ could be imported from another file; then, we do not need to open that file in order to find out $x$). To do this, we only need to observe how $z$ computes in a suitable environment. To be precise, we construct a term ${\operatorname{\mathsf{myst}}}_X$ such that, for any $z$ as above, the expression ${\operatorname{\mathsf{myst}}}_X (z)$ is judgmentally equal to $x$. The meta-theoretic statement that we can recover $x$ from $z$ is true, but in general, ${\operatorname{\mathsf{myst}}}_X$ might not be a closed term (i.e. could depend on some assumptions which do not influence the computation). However, assuming the univalence axiom, ${\operatorname{\mathsf{myst}}}_X$ can be constructed without any further assumptions for a non-trivial class of types including the natural numbers.
That is, in MLTT with propositional truncations and the univalence axiom, we can construct a term ${\operatorname{\mathsf{myst}}}_{{\ensuremath{\mathbb{N}}\xspace}}$ such that $$\begin{aligned} &\mathsf{id'} : {\ensuremath{\mathbb{N}}\xspace}\to {\ensuremath{\mathbb{N}}\xspace}\\ &\mathsf{id'} {\vcentcolon\equiv}{\lambda n .} {\operatorname{\mathsf{myst}}}_{\ensuremath{\mathbb{N}}\xspace}({{\mathopen{}\left|n\right|_{}^{}\mathclose{}}})\end{aligned}$$ type-checks and $\mathsf{id'}$ is the identity function on ${\ensuremath{\mathbb{N}}\xspace}$, with a proof $$\begin{aligned} &\mathsf{p} : {\Pi_{n : {\ensuremath{\mathbb{N}}\xspace}} } {\ensuremath{\mathsf{id'}(n) =_{} n}\xspace} \\ &\mathsf{p} {\vcentcolon\equiv}{\lambda n .} {\ensuremath{\mathsf{refl}_{n}}\xspace}.\end{aligned}$$ We think that the possibility to do this is counter-intuitive and surprising. In particular it may seem that we could apply ${\ensuremath{\mathsf{ap}_{{\operatorname{\mathsf{myst}}}_{\ensuremath{\mathbb{N}}\xspace}}}\xspace}$ on the canonical inhabitant of ${\ensuremath{{{\mathopen{}\left|0\right|_{}^{}\mathclose{}}} =_{{{\mathopen{}\left\Vert {\ensuremath{\mathbb{N}}\xspace}\right\Vert_{}\mathclose{}}}} {{\mathopen{}\left|1\right|_{}^{}\mathclose{}}}}\xspace}$ to conclude ${\ensuremath{0 =_{{\ensuremath{\mathbb{N}}\xspace}} 1}\xspace}$. However, this would only work if the type of ${\operatorname{\mathsf{myst}}}_{\ensuremath{\mathbb{N}}\xspace}$ was ${{\mathopen{}\left\Vert {\ensuremath{\mathbb{N}}\xspace}\right\Vert_{}\mathclose{}}} \to {\ensuremath{\mathbb{N}}\xspace}$, which it is not; it is a $\Pi$-type that is not easier to write down than the full definition of its inhabitant ${\operatorname{\mathsf{myst}}}_{\ensuremath{\mathbb{N}}\xspace}$ itself. In the following, we show the full construction. For further discussion, see the homotopy type theory blog entry by the first named author [@kraus:pseudoinverse], where this result was presented originally. First, let us state two useful general definitions: A *pointed type* is a pair $(X,x)$ of a type $X : {\ensuremath{\mathcal{U}}\xspace}$ and an inhabitant $x:X$. We write ${{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}$ for the type of pointed types, $${{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}{\vcentcolon\equiv}{\Sigma_{X: {\ensuremath{\mathcal{U}}\xspace}} }X.$$ We say a type $X$ is *transitive* and write ${\operatorname{\mathsf{isTransitive}}}X$ if it satisfies $${{{\Pi_{x , y:X} }}} {\ensuremath{(X,x) =_{{{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} (X,y)}\xspace}.$$ This is, of course, where univalence comes into play. It gives us the principle that a type $X$ is transitive if, and only if, for every pair $(x,y):X \times X$ there is an automorphism $e_{xy} : X \to X$ such that ${\ensuremath{e_{xy}(x) =_{} y}\xspace}$. We have the following examples of transitive types: Every type with decidable equality is transitive. This is because decidable equality on $X$ lets us define an endofunction on $X$ which swaps $x$ and $y$, and leaves everything else constant. Instances for this example include all contractible and, more generally, propositional types, but also our main candidate, the natural numbers ${\ensuremath{\mathbb{N}}\xspace}$. For any pointed type $X$ with elements $x_1, x_2 : X$, the identity type ${\ensuremath{x_1 =_{X} x_2}\xspace}$ is transitive. In particular, the *loop space* $\Omega^n(X)$ [@HoTTbook Definition 2.1.8] is transitive for any pointed type $X$. 
Here, it is enough to observe that, for $p_1, p_2 : {\ensuremath{x_1 =_{X} x_2}\xspace}$, the function ${\lambda q .} q { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{\mathord{{p_1}^{-1}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{p_2}$ is an equivalence with the required property. As mentioned by Andrej Bauer in a discussion on this result [@kraus:pseudoinverse], we also have the following: Any group [@HoTTbook Definition 6.11.1] is a transitive type. As for equality types, the reason is that there is an inverse operation, such that the automorphism ${\lambda c .} c { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }{\mathord{{a}^{-1}}} { \mathchoice{\mathbin{\raisebox{0.5ex}{$\displaystyle\centerdot$}}} {\mathbin{\raisebox{0.5ex}{$\centerdot$}}} {\mathbin{\raisebox{0.25ex}{$\scriptstyle\,\centerdot\,$}}} {\mathbin{\raisebox{0.1ex}{$\scriptscriptstyle\,\centerdot\,$}}} }b$ maps $a$ to $b$. If $X$ is any type and $Y : X \to {\ensuremath{\mathcal{U}}\xspace}$ is a family of transitive types, then ${\Pi_{x:X} }{Y(x)}$ is transitive. In particular, $\times$ and $\to$ preserve transitivity of types. We are now ready to construct ${\operatorname{\mathsf{myst}}}$: Assume that we are given a type $X$. We can define a map $$\begin{aligned} & f : X \to {{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}\label{eq:f-for-myst} \\ & f (x) {\vcentcolon\equiv}(X,x). \end{aligned}$$ If we know a point $x_0 : X$, we may further define $$\begin{aligned} & \overline f : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}\\ & \overline f (z) {\vcentcolon\equiv}(X, x_0).\end{aligned}$$ If $X$ is transitive, we have $${{\Pi_{x:X} }} {\ensuremath{f(x) =_{} \overline f({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})}\xspace}.$$ By Theorem \[thm:jdg-factor-nondep\], there is then a function $$f' : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}$$ such that, for any $x:X$, we have $$f'({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}) {\equiv}f(x) {\equiv}(X,x).$$ Let us define $$\begin{aligned} & {\operatorname{\mathsf{myst}}}_X : {\Pi_{z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } {\mathsf{fst}}(f'(z)) \\ & {\operatorname{\mathsf{myst}}}_X {\vcentcolon\equiv}{\mathsf{snd}}\circ f'. \label{eq:myst-definition}\end{aligned}$$ Note that while the type of ${\operatorname{\mathsf{myst}}}_X$ is *not* simply ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to X$, we have that, for any $x:X$, the type of ${\operatorname{\mathsf{myst}}}_X({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}})$ is judgmentally equal to $X$, and we have ${\operatorname{\mathsf{myst}}}_X({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}) {\equiv}x$. This already proves the following: \[thm:myst\] Let $X$ be an inhabited transitive type. 
Then, there is a term ${\operatorname{\mathsf{myst}}}_X$ such that the (dependent) composition $$\begin{aligned} & {\operatorname{\mathsf{myst}}}_X \circ {{\mathopen{}\left|-\right|_{}^{}\mathclose{}}} : X \to X \intertext{type-checks and is equal to the identity, where the proof} & p : {{\Pi_{x:X} }} {\ensuremath{{\operatorname{\mathsf{myst}}}_X({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}) =_{X} x}\xspace} \\ & p(x) {\vcentcolon\equiv}{\ensuremath{\mathsf{refl}_{x}}\xspace} \end{aligned}$$ it trivial. It is tempting to unfold the type expression ${\Pi_{z:{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } {\mathsf{fst}}(f'(z))$ in order to better understand it. Unfortunately, this is not feasible as this plain type expression involves the whole proof term $f'$, which, in turn, includes the complete construction of Theorem \[thm:jdg-factor-nondep\]. We want to emphasize again that, while we do have ${\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|x\right|_{}^{}\mathclose{}}},{{\mathopen{}\left|y\right|_{}^{}\mathclose{}}}} : {\ensuremath{{{\mathopen{}\left|x\right|_{}^{}\mathclose{}}} =_{{{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} {{\mathopen{}\left|y\right|_{}^{}\mathclose{}}}}\xspace}$ for any $x,y:X$, we cannot conclude ${\ensuremath{{\operatorname{\mathsf{myst}}}_X({{\mathopen{}\left|x\right|_{}^{}\mathclose{}}}) =_{X} {\operatorname{\mathsf{myst}}}_X({{\mathopen{}\left|y\right|_{}^{}\mathclose{}}})}\xspace}$ as the expression ${\ensuremath{\mathsf{ap}_{{\operatorname{\mathsf{myst}}}_X}}\xspace} ({\operatorname{\mathsf{h_{tr}}}}_{{{\mathopen{}\left|x\right|_{}^{}\mathclose{}}},{{\mathopen{}\left|y\right|_{}^{}\mathclose{}}}})$ does not type-check. Finally, we want to remark that the construction of ${\operatorname{\mathsf{myst}}}$ does not need the full strength of Theorem \[thm:jdg-factor-nondep\]. The weaker version in which $\overline f : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ is replaced by a fixed $y_0 : Y$ is sufficient: in this case, $\overline f$ can be understood to be *constant at $y_0$*. This leads to a simplification as the dependent function types in  and  can be replaced by their codomains. It may be helpful to see the whole definition of ${\operatorname{\mathsf{myst}}}$ explicitly in this variant, which is also how it was explained originally by the first named author [@kraus:pseudoinverse]: We define $$\begin{aligned} & \mathsf f : X \to {\Sigma_{A : {{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} } \; {\ensuremath{A =_{{{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} (X,x_0)}\xspace} \\ & \mathsf f (x) {\vcentcolon\equiv}((X,x) , \mathsf{transitive}_X(x,x_0)),\end{aligned}$$ where $\mathsf{transitive}_X$ is the proof that $X$ is transitive. The function $f$ in  is then simply the composition ${\mathsf{fst}}\circ \mathsf f$. As the codomain of $\mathsf f$ is a singleton, it is contractible (see Definition \[def:generalnotions\]) and thereby propositional (let us write $h$ for the proof thereof). 
Hence, we get $$\begin{aligned} & \mathsf{f'} : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to {\Sigma_{A : {{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} } \; {\ensuremath{A =_{{{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} (X,x_0)}\xspace} \\ & \mathsf{f'} {\vcentcolon\equiv}{\operatorname{\mathsf{rec_{tr}}}}\left({\Sigma_{A : {{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} } \; {\ensuremath{A =_{{{\ensuremath{\mathcal{U}}\xspace}_{\bullet}}} (X,x_0)}\xspace}\right) \; h \; \mathsf f .\end{aligned}$$ We could now define ${\operatorname{\mathsf{myst}}}'_X$ to be $$\begin{aligned} &{\operatorname{\mathsf{myst}}}'_X : {\Pi_{z : {{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}} } {\mathsf{fst}}({\mathsf{fst}}(\mathsf{f'}(z))) \\ &{\operatorname{\mathsf{myst}}}'_X {\vcentcolon\equiv}{\mathsf{snd}}\circ {\mathsf{fst}}\circ \mathsf{f'}\end{aligned}$$ which has the same property as \[eq:myst-definition\], even though it is not judgmentally the same term. Conclusion and Open Problems {#sec10:open} ============================ In this article, generalizations of Hedberg’s Theorem have led us to an exploration of what we call *weakly constant functions*. The attribute *weakly* indicates that higher coherence conditions of such a constancy proof are missing. As a consequence, it is not possible to derive a function ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}} \to Y$ from a weakly constant function $X \to Y$, but we have shown how to do this in several non-trivial special cases. Most interesting is certainly the case of endofunctions. A weakly constant endofunction can always be factored through the propositional truncation of its domain. Further, for a given $X$, the type which says that every constant endofunction on $X$ has a fixed point is propositional, enabling us to use it as a notion of anonymous inhabitance ${\langle \! \langle X \rangle \! \rangle}$, and we have argued that it lies strictly between $\neg\neg X$ and ${{\mathopen{}\left\Vert X\right\Vert_{}\mathclose{}}}$. There are two questions for which we have not given an answer. The first is: Is weak propositional truncation definable in Martin-Löf type theory? This is commonly believed not to be the case. However, the standard models do have propositional truncation, making it hard to find a concrete proof. Moreover, populatedness, a similar notion of anonymous existence, is definable. Our second question is about the consequences of the assumption that weakly constant functions factor in general. By Shulman’s result [@shulman:wconst], we know that this is inconsistent with the univalence axiom. Is it possible to strengthen this result further? In particular, does it imply UIP for all types? We leave these questions open. Acknowledgements {#acknowledgements .unnumbered} ================ The first named author would like to thank Paolo Capriotti, Ambrus Kaposi, Nuo Li and especially Christian Sattler for many fruitful discussions. We are grateful to the anonymous referees for numerous helpful suggestions and remarks. We also thank Nils Anders Danielsson for his careful reading of our draft and for pointing out several typos. [^1]: [a]{}Supported by the EPSRC grant EP/M016994/1. [^2]: [c]{}Supported by the ERC project 247219, and grants of The Ellentuck and The Simonyi Fund. [^3]: [d]{}Supported by the EPSRC grants EP/G03298X/1 and EP/M016994/1 and by a grant of the Institute for Advanced Study, as well as by USAF, Air Force Office of Scientific Research, award FA9550-16-1-0029.
--- abstract: 'Neural networks have recently become good at engaging in dialog. However, current approaches are based solely on verbal text, lacking the richness of a real face-to-face conversation. We propose a neural conversation model that aims to read and generate facial gestures alongside with text. This allows our model to adapt its response based on the “mood” of the conversation. In particular, we introduce an RNN encoder-decoder that exploits the movement of facial muscles, as well as the verbal conversation. The decoder consists of two layers, where the lower layer aims at generating the verbal response and coarse facial expressions, while the second layer fills in the subtle gestures, making the generated output more smooth and natural. We train our neural network by having it “watch” 250 movies. We showcase our joint face-text model in generating more natural conversations through automatic metrics and a human study. We demonstrate an example application with a face-to-face chatting avatar.' author: - | Hang Chu$^{1,2}$   Daiqing Li$^{1}$   Sanja Fidler$^{1,2}$\ $^1$University of Toronto   $^2$Vector Institute\ [{chuhang1122, daiqing, fidler}@cs.toronto.edu]{} bibliography: - 'egbib.bib' title: 'A Face-to-Face Neural Conversation Model' ---
--- abstract: 'We extend the Eliashberg-Thurston theorem on approximations of taut oriented $C^2$-foliations of 3-manifolds by both positive and negative contact structures to a large class of taut oriented $C^0$-foliations. These $C^0$-foliations can therefore be approximated by weakly symplectically fillable, universally tight, contact structures. This allows applications of $C^2$-foliation theory to contact topology and Floer theory to be generalized and extended to constructions of $C^0$-foliations.' address: - 'Department of Mathematics, University of Georgia, Athens, GA 30602' - 'Department of Mathematics, Washington University, St. Louis, MO 63130' author: - 'William H. Kazez' - Rachel Roberts title: 'Approximating $C^0$-foliations' --- [^1] Introduction ============ In [@ET], Eliashberg and Thurston introduce the notion of [*confoliation*]{} and prove that when $k\ge 2$, taut, transversely oriented $C^k$-foliations can be $C^k$-approximated by a pair of $C^k$ contact structures, one positive and one negative. It follows that any contact structure sufficiently close to the plane field of such a foliation is weakly symplectically fillable and universally tight. For the most part, they restrict attention to confoliations which are at least $C^1$. For their main theorem, they restrict attention to confoliations which are at least $C^2$. Their assumption that 2-plane fields and 1-forms are $C^1$ is necessary for it to be possible to take derivatives. Their assumption that 2-plane fields and 1-forms are $C^2$ is necessary for Sacksteder’s Theorem ([@S], see also Theorem 1.2.5 of [@ET]) to apply. A weakening of this $C^2$ assumption in the neighborhood of compact leaves is used by Kronheimer, Mrowka, Ozsváth, and Szabó in [@KMOS] to show that the methods of [@ET] apply to those foliations constructed by Gabai in [@G1; @G2; @G3] which are $C^{\infty}$ except along torus leaves. In this paper we show that many of the techniques of Eliashberg and Thurston extend to transversely oriented, taut, $C^0$-foliations satisfying a natural transitivity condition. Eliashberg and Thurston’s proof that sufficiently smooth transversely oriented taut foliations of 3-manifolds can be perturbed to weakly symplectically fillable contact structures gives a direct connection between foliation theory and symplectic topology via contact topology. This connection has been most spectacularly exploited by Kronheimer and Mrowka [@KrMr] in their proof of the Property P Conjecture. Since the contact structures produced by Eliashberg-Thurston are weakly symplectically fillable and universally tight, their theorem is important in contact topology. To apply their theorem, of course one must start with a taut oriented foliation. The issue that we seek to address is that most new constructions of foliations produce foliations that are not smooth enough to apply the Eliashberg-Thurston theorem. Constructions of foliations which fail even to be $C^1$ can be found in [@DL; @DR; @G1; @g1; @g2; @g3; @G2; @G3; @Ga; @KRo; @KR2; @Li; @Li2; @LR; @R; @R1; @R2]. In each of these papers, foliations are constructed using branched surfaces. Foliations constructed using branched surfaces will have smooth leaves but will often vary only continuously in a transverse direction because of the role and nonsmooth nature of leaf blow-up, Operation 2.1.1 of [@Ga]. 
(To obtain the smooth foliations found in [@G1; @g1; @g2; @g3; @G2; @G3], Gabai avoids appealing to Operation 2.1.1 and instead takes advantage of the fact that the branched surfaces involved in his construction are finite depth.) The main results of this paper are, roughly, that $C^0$-foliations satisfying a simple transitivity condition can be approximated by weakly symplectically fillable contact structures. Such foliations include most of those found in [@DL; @DR; @g1; @g2; @g3; @G3; @Ga; @KRo; @KR2; @LR; @R; @R1; @R2] and many of those found in [@G1; @G2; @Li; @Li2]. In proving this, we give new methods for both initiating and propagating contact structures when beginning with a foliation. Before giving a more precise indication of our results, we give a brief description of some of the main ideas in Eliashberg and Thurston’s proof. Eliashberg and Thurston interpolate between the notions of foliation and contact structure by introducing confoliations. This structure restricts to a contact structure on an open set, but is equal to the original foliation elsewhere. Their first step is to create a contact zone in the manifold, that is, to approximate the foliation so that it is a contact structure on a non-empty set. One place they accomplish this is in a neighborhood of a curve with attracting holonomy. We obtain a related result, Theorem \[attracting\], for the larger class of curves for which the holonomy has a contracting interval. We also introduce the notion of $L$-bracketed foliation in Definition \[transtranv\] and show how a contact zone can be naturally introduced about a regular neighborhood of the corresponding link $L$. This has applications to spines of open book decompositions, and more generally to manifolds obtained by surgery or filling. Their next step is to propagate the contact structure throughout the manifold using smooth foliation charts. See Lemma \[ETlemma\], and Corollary \[approximate\]. We use a similar strategy, but we rely on the local existence of smooth approximations to a given $C^0$-foliation. To prove that the contact structure they produce is weakly symplectically fillable, they use the crucial structure of a smooth volume preserving flow $\Phi$ transverse to the taut foliation. Such a smooth flow is the starting point for our work, for they exist even for $C^0$-foliations. There are several notions that describe the relationship between a foliation $\mathcal F$ and “nearby” contact structures $\xi$. If there is a continuous family of contact structures $\xi_t$ such that $\xi_t \to \mathcal F$, we say $\mathcal F$ can be [*perturbed*]{} or [*deformed*]{} to a contact structure. A weaker notion is the existence of a sequence, $\xi_n \to \mathcal F$, in which case $\mathcal F$ can be [*approximated*]{} by contact structures. Both notions of convergence can be refined by defining convergence using a $C^k$ norm on tangent planes for values of $k$ ranging from 0 to $\infty$. We use the flow $\Phi$ transverse to $\mathcal F$ to define a weaker, topological notion of approximation. \[Phiapprox\] Given an oriented foliation $\mathcal F$ and a positively transverse flow $\Phi$, we say an oriented 2-plane field $\chi$, typically a contact structure or a confoliation, is a [*$\Phi$-approximation*]{} of $\mathcal F$ if $\chi$ is positively transverse to $\Phi$. We say that two oriented 2-plane fields are [*$\Phi$-close*]{} if both are positively transverse to $\Phi$. One application of our work is to complete the proof of a theorem of [@HKM2]. 
Honda, Kazez, and Matić show that sufficiently large fractional Dehn twisting for an open book decomposition with connected binding implies that the canonically associated contact structure is weakly symplectically fillable. Their proof requires the existence of contact structures approximating $C^0$-foliations constructed by Roberts in [@R2], and thus needs a stronger version of the Eliashberg-Thurston theorem. Another application of our work is to prove that 3-manifolds containing taut, oriented $C^0$-foliations satisfying our transitivity condition cannot be L-spaces, thus extending the result of Theorem 1.4 in [@OS]. (See also Theorem 2.1 in [@KMOS] and Theorem 41.4.1 in [@KM].) Our result also strengthens a result of Baldwin and Etnyre [@BE]. They give a set of examples showing that when an open book decomposition has multiple binding components, no fixed lower bounds on fractional Dehn twisting can guarantee weak symplectic fillabilty. This can now be viewed as a non-existence theorem for taut, oriented $C^0$-foliations satisfying our transitivity condition. We thank Larry Conlon, John Etnyre, and Ko Honda for many helpful conversations. We would also like to thank the referee for several helpful suggestions and corrections. An overview =========== The transition from taut foliations to tight contact structures involves two auxiliary structures, volume preserving flows and symplectic topology. We summarize the results we need from each field as follows. (Theorem II.20, [@Sullivan]; see also Theorem A1, [@Hass]) Suppose $\mathcal F$ is a taut codimension-1 $C^0$-foliation of a smooth closed Riemannian 3-manifold $M$. Then there is a volume-preserving smooth flow $\Phi$ transverse to $\mathcal F$. For clarity, we break the theorem found in [@ET] into two statements: \[weaklysymplectic\] (Corollaries 3.2.2, 3.2.4 and 3.2.8, [@ET]; see also Theorem 41.3.2, [@KM]) Let $M$ be a smooth closed Riemannian 3-manifold with a volume preserving flow $\Phi$. Suppose there exist a smooth positive contact structure $\xi_+$ and a smooth negative contact structure $\xi_-$, both of which are transverse to $\Phi$. Then each of $\xi_{\pm}$ is weakly symplectically fillable and universally tight. Moreover, if $\xi$ is any smooth (positive or negative) contact structure transverse to $\Phi$, then $\xi$ is weakly symplectically fillable and universally tight. [**Remark:**]{} The statement of Theorem \[weaklysymplectic\] is meant to emphasize two things. First, given a smooth positive (respectively, negative) contact structure transverse to a volume preserving flow, it is sufficient to produce a negative (respectively, positive) contact structure also transverse to the flow to conclude both are weakly symplectically fillable and universally tight. Next, once such $\xi_+$ and $\xi_-$ are shown to exist, any contact structure $\xi$ transverse to $\Phi$ is necessarily weakly symplectically fillable and universally tight. \[weaklysymplectic2\] (Corollary 3.2.5, [@ET]; see also Theorem 41.3.2, [@KM]) Let $M$ be a smooth closed oriented 3-manifold which contains a taut, oriented $C^2$-foliation $\mathcal F$. There exist a smooth positive contact structure $\xi_+$ and a smooth negative contact structure $\xi_-$, both $C^0$-close to $\mathcal F$. After giving background definitions and facts about foliations in § \[foliation basics\], we describe in §\[transitive flow boxes\] how flow boxes can be organized with an eye towards spreading an initial contact structure throughout the ambient manifold. 
Contact structures are propagated from one flow box to the next via a collection of local extension theorems described in §\[basicsection\]. This leads to an inductive construction of the desired contact structure in §\[theconstructionsection\]. Throughout, it is helpful to keep in mind the following [**Guiding Principle:**]{} Constructions must be kept transverse to the flow. Moreover, when constructing a positive contact structure, the slope of the characteristic foliation of a partially constructed confoliation must be greater than or equal the slope of the intersection of the given foliation of $\mathcal F$ and vertical boundary of our flow boxes, with equality allowed only where $\mathcal F$ is smooth. When constructing a negative contact structure, the slope inequality is reversed. To explain this principle more formally, suppose that a closed oriented 3-manifold $M$ is expressed as a union of smooth submanifolds $V$ and $W$, possibly with corners, with $\partial V=\partial W$. Suppose, moreover, that $W$ admits a codimension-1 foliation $\mathcal F_W$. These submanifolds will be chosen so that their common boundary decomposes into [*horizontal*]{} and [*vertical*]{} portions, that is, portions tangent and transverse, respectively, to $\mathcal F_W$. If a confoliation $\xi_V$ has been constructed on $V$ so that it is tangent to the horizontal portion of $\partial V$, transverse to the vertical portion, and contact on certain prescribed portions of $V$, then we call $V$ a [*contact zone*]{}. The Guiding Principle is a statement that the confoliation $\xi_V$ on $V$ must [*dominate*]{} the foliation $\mathcal F_W$ along the vertical boundary (see Definition \[compatible\]) both for an initial choice of $V$, and also for subsequent choices as $V$ is expanded to all of $M$, and $W$ is shrunk correspondingly. To expand a contact zone $V$ to the entire manifold $M$, we use the following structure. A foliation $\mathcal F_W$ is [*$V$-transitive*]{} if every point in $W$ can be connected by a path in a leaf of $\mathcal F$ to a point of $V$. We will see in Theorem \[main1\] that the following structure is very useful. \[tridecomposition\] A closed 3-manifold $M$ admits a [*positive (respectively, negative) $(\xi_V,\mathcal F_W,\Phi)$ decomposition*]{} if $M$ can be decomposed as a union $$M=V\cup W,$$ where the horizontal portion of $\partial W$ is tangent to $\mathcal F_W$ and the vertical portion of $\partial V$ is tangent to $\Phi$, and 1. $\mathcal F_W$ is a $V$-transitive oriented foliation of $W$, 2. $\xi_V$ is a smooth contact structure defined on $V$ which positively (respectively, negatively) dominates $\mathcal F_W$, 3. for some choice of Riemannian metric, $M$ admits a volume preserving flow $\Phi$ transverse to both $\xi_V$ and $\mathcal F_W$. Note that the existence of a $(\xi_V,\mathcal F_W,\Phi)$ decomposition does not require the existence of a codimension-1 foliation defined on all of $M$. [**Theorem \[main1\].**]{} [*If $M$ admits a positive $(\xi_V,\mathcal F_W,\Phi)$ decomposition, then $M$ admits a smooth positive contact structure which agrees with $\xi_V$ on $V$ and is $\Phi$-close to $\mathcal F_W$ on $W$. The analogous result holds if $M$ admits a negative $(\xi_{V'},\mathcal F_{W'},\Phi)$ decomposition. 
If $M$ admits both a positive $(\xi_V,\mathcal F_W,\Phi)$ decomposition and a negative $(\xi_{V'},\mathcal F_{W'},\Phi)$ decomposition, then these contact structures are weakly symplectically fillable.*]{} If a closed oriented 3-manifold admits both a positive $(\xi_V,\mathcal F_W,\Phi)$ and a negative $(\xi_{V'},\mathcal F_{W'},\Phi)$ decomposition, does it contain a taut oriented foliation transverse to $\Phi$? Given a splitting $M=V\cup W$ and a flow $\Phi$, an oriented, codimension-1 foliation $\mathcal F$ is [*compatible*]{} with $(V, W, \Phi)$ if $\mathcal F$ is transverse to $\Phi$, and the common boundary $\partial V=\partial W$ decomposes into subsurfaces which are either horizontal or vertical with respect to $\mathcal F$. An oriented codimension-1 foliation $\mathcal F$ of a 3-manifold $M$ is [*bracketed*]{} if, for some volume preserving flow $\Phi$, 1. $\mathcal F$ is compatible with some $(V,W,\Phi)$ decomposition of $M$ for which there exist $\mathcal F_W$ and $\xi_V$ such that $(\xi_V,\mathcal F_W,\Phi)$ is a positive decomposition, and 2. $\mathcal F$ is compatible with some $(V',W',\Phi)$ decomposition of $M$ for which there exist $\mathcal F_{W'}$ and $\xi_{V'}$ such that $(\xi_{V'},\mathcal F_{W'},\Phi)$ is a negative decomposition. When we wish to specify the flow $\Phi$, $\mathcal F$ is called [*$\Phi$-bracketed*]{}. Let $\mathcal F$ be an oriented codimension-1 foliation of a 3-manifold $M$ which is $\Phi$-bracketed. Then there exist a smooth positive contact structure $\xi_+$ and a smooth negative contact structure $\xi_-$, both $\Phi$-close to $\mathcal F$. Sometimes $\mathcal F_W$ and $\mathcal F_{W'}$ are obtained by restricting $\mathcal F$ to $W$ and $W'$ respectively, and sometimes they are not. Very roughly speaking, when $W=W'$, we think of the restriction of $\mathcal F$ to $W$ as being [*bracketed by*]{} $\mathcal F_{W'}$ and $\mathcal F_W$ as a generalization of the situation in which the slope of $\mathcal F$ along boundary components of $W$ lies between the corresponding boundary slopes of $\mathcal F_{W'}$ and $\mathcal F_W$. As noted in Corollary \[smoothisbracketed\], all taut, oriented $C^2$-foliations apart from the product foliation $S^1\times S^2$ are bracketed. In Example \[s1timess2\], we show that the product foliation of $S^1\times S^2$ is not bracketed. In this paper, we show that many taut, oriented $C^0$-foliations are bracketed. For each bracketed foliation considered in this paper, it is possible to choose $V=V'$ and $W=W'$. \[bracketconj\] Let $\mathcal F$ be a taut oriented $C^0$-foliation of a closed oriented 3-manifold $M\ne S^1\times S^2$. Then $\mathcal F$ is bracketed. We have the following two closely related questions. Suppose $\mathcal F$ is a taut, oriented foliation with no torus leaf. Is $\mathcal F$ $C^0$-close to a taut, oriented smooth foliation? Suppose $\mathcal F$ is a taut, oriented foliation with no torus leaf. Is $\mathcal F$ $\Phi$-close to a taut, oriented smooth foliation for some volume preserving flow $\Phi$? Establishing the initial contact zone is of fundamental importance. In the context of $C^0$-foliation theory we introduce, in §\[attracting holonomy\], the notion of [*holonomy with a contracting interval*]{} and define what we mean by [*attracting neighborhood*]{}. This is significantly weaker than the more familiar notion of linear attracting holonomy, yet it suffices to build an initial contact zone. The precise definition appears as Definition \[contracting\]. 
As a corollary to Theorem \[main1\], we obtain: [**Theorem \[attracting\].**]{} [*Let $\mathcal F$ be a taut $C^0$-foliation transverse to a flow $\Phi$. If $V$ is a disjoint union of attracting neighborhoods, and $\mathcal F$ is $V$-transitive, then $\mathcal F$ is bracketed and hence can be $\Phi$-approximated by a pair of weakly symplectically fillable and universally tight, contact structures, one positive and one negative.* ]{} When working with $C^0$-foliations, it can be difficult, or even impossible, to establish the existence of sufficient attracting holonomy. Therefore, we introduce a different way of creating an initial contact zone. Roughly speaking, instead of looking for loops tangent to the foliation and satisfying a nice property, we look for loops *transverse* to the foliation and satisfying a nice property. We make this precise in Definition  \[transtranv\], where we define [*$L$-bracketed foliation*]{}. As a corollary to Theorem \[main1\], we obtain: [**Theorem \[transitivemain\].**]{} [*Suppose $\mathcal F$ is a taut oriented codimension-1 foliation in $M$, and that $\mathcal F$ is $L$-bracketed for some link $L$. Then $\mathcal F$ is bracketed and hence can be $\Phi$-approximated by a pair of smooth contact structures $\xi_{\pm}$, one positive and one negative. These contact structures are necessarily weakly symplectically fillable and universally tight.* ]{} In § \[OBresults\], we consider the important special case that $\mathcal F$ is transverse to a flow $\Phi$ that has been obtained by removing a fibred link $L$ and doing a Dehn filling of a volume preserving suspension flow. In this case, $L$ forms the binding of an open book decomposition $(S,h)$ of $M$ and the contact structure $\xi_{(S,h)}$ compatible with $(S,h)$ is $\Phi$-close to $\mathcal F$. In [@HKM2], Honda, Kazez and Matić introduced the use of foliations $\Phi$-close to $\xi_{(S,h)}$ as a way of establishing universal tightness of $\xi_{(S,h)}$. In particular, they appealed to $C^0$-foliations constructed in [@R1; @R2] to claim that $\xi_{(S,h)}$ is universally tight whenever the binding of $(S,h)$ is connected and the fractional Dehn twist coefficient at least one. Although the foliations constructed in [@R1; @R2] are not smooth, and therefore the proof in [@HKM2] contained a gap, they are $L$-bracketed, and hence Theorem \[transitivemain\] reveals that the conclusions of [@HKM2] are correct. In §\[Open book\] we also include some background material relating language arising in the theory of open books with language arising in the theory of foliations. In particular, we give a translation between coordinates used in each subject together with a summary of our results related to open book decompositions. To make the paper more self-contained there is an appendix containing an overview of the relationship between volume preserving flows and closed dominating 2-forms, and giving some standard definitions from symplectic topology. Most of this material is present either implicitly or explicitly in [@ET]. We close this section with an application of Theorem \[main1\] to the study of L-spaces. (Definition 1.1, [@OS2]) A closed three-manifold $Y$ is called an [*L-space*]{} if $H_1(Y;\mathbb Q)=0$ and $\widehat{HF}(Y)$ is a free abelian group of rank $|H_1(Y;\mathbb Z)|$. (Theorem 1.4, [@OS]) An L-space has no symplectic semi-filling with disconnected boundary; and all its symplectic fillings have $b_2^+(W)=0$. In particular, $Y$ admits no taut smooth foliation. 
In other words, Ozsváth and Szabó show that if $Y$ is an L-space then there is no symplectic manifold $(X,\omega)$ with weakly convex boundary such that $|\partial X|>1$ and $Y$ is one of the boundary components. So an L-space cannot contain a pair of $\Phi$-close contact structures, $\xi_+$ positive and $\xi_-$ negative, where $\Phi$ is a volume preserving flow. (For details, see the Appendix.) Theorem \[main1\] thus implies the following. An L-space $Y$ admits no bracketed foliation. In particular, the foliations constructed in [@KRo; @LR; @R1; @R2] never exist in an L-space. Foliation basics {#foliation basics} ================ \[folndefn\] Let $M$ be a smooth closed 3-manifold, and let $k$ be a non-negative integer or infinity. A [*$C^k$ codimension-1 foliation*]{} $\mathcal F$ of (or in) $M$ is a union of disjoint connected surfaces $L_i$, called the [*leaves*]{} of $\mathcal F$, such that: 1. $\cup_i L_i = M$, and 2. there exists a $C^k$ atlas $\mathcal A$ on $M$ which contains all $C^{\infty}$ charts and with respect to which $\mathcal F$ satisfies the following local product structure: - for every $p\in M$, there exists a coordinate chart $(U,(x,y,z))$ in $\mathcal A$ about $p$ such that $U\approx \mathbb R^3$ and the restriction of $\mathcal F$ to $U$ is the union of planes given by $z = $ constant. When $k=0$, the tangent plane field $T\mathcal F$ is required to be $C^0$. The extra hypothesis in the $k=0$ case, that $T\mathcal F$ is $C^0$, implies that there is a continuous, and hence there is a smooth, 1-dimensional foliation transverse to $\mathcal F$. It follows (see Proposition \[transcont\]) that the leaves of $\mathcal F$ are smoothly immersed in $M$. Such foliations are called $C^{0+}$ in [@CC]. It follows from Proposition \[transcont\] that for $k\ge 1$, the condition that $\mathcal F$ is $C^k$ is equivalent to the condition that $T\mathcal F$ is $C^k$. A frequently used technique for constructing foliations is to start with a branched surface embedded in $M$ that has product complementary regions. Since the embedding may be smoothed, a foliation resulting from thickening the branched surface and extending across the complementary regions can be constructed to be $C^0$. Definition \[folndefn\] extends in an obvious way to define a codimension-1 foliation on a compact oriented smooth 3-manifold with non-empty boundary, where we insist that for each torus boundary component $T$, either $T$ is a leaf of $\mathcal F$, or $\mathcal F$ is everywhere transverse to $T$, and that any non-torus boundary component is a leaf of $\mathcal F$. Recall that a smooth structure with corners on a topological 3-manifold $M$ with nonempty boundary is a maximal collection of smoothly compatible charts with corners whose domains cover $M$, where a chart with corners is an open set diffeomorphic to one of $\mathbb R^3$, $\{(x,y,z) \mid z\ge 0\}$, $\{(x,y,z) \mid y,z\ge 0\}$, or $\{(x,y,z) \mid x,y, z\ge 0\}$. Notice that the boundary of a manifold with corners naturally admits a stratification as a disjoint union of 0-, 1-, and 2-dimensional manifolds. The 0- and 1-manifolds of this stratification are referred to as the corners of $M$.
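For instance (a sketch of the model case used repeatedly below), if $D$ is a compact polygon in the plane, then $D\times I$ is a smooth 3-manifold with corners: the 2-dimensional strata of its boundary are the interiors of the horizontal faces $D\times\{0\}$ and $D\times\{1\}$ and of the vertical faces $e\times I$, where $e$ ranges over the edges of $\partial D$, while the corners consist of the edges of $\partial D\times\partial I$, the vertical segments $\{v\}\times I$ over the vertices $v$ of $D$, and the points of $\{v\}\times\partial I$.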
Definition \[folndefn\] extends further in an obvious way to define a codimension-1 foliation on a compact smooth 3-manifold $M$ with corners, where we insist that $\partial M$ can be written as a union of two compact piecewise linear surfaces $\partial_v M$ and $\partial_h M$, where the intersection $\partial_v M\cap\partial_h M$ is a union of corners of $M$, the components of $\partial_h M$ are contained in leaves of $\mathcal F$, and $\partial_v M$ is everywhere transverse to $\mathcal F$. A [*flow*]{} is an oriented 1-dimensional foliation of $M$; namely, a decomposition $\Phi$ of a smooth compact 3-manifold $M$ into a disjoint union of connected 1-manifolds, called the [*flow curves*]{} of $\Phi$, such that there exists a $C^k$ atlas $\mathcal A$ on $M$ which contains all $C^{\infty}$ charts and with respect to which $\Phi$ satisfies the following local product structure: - for every $p\in M$, there exists a coordinate chart $(U,(x,y,z))$ in $\mathcal A$ about $p$ such that $U\approx \mathbb R^3$, and the restriction of $\Phi$ to $U$ is the union of lines given by $(x,y) = $ constant. When $M$ has boundary a disjoint union of tori, we insist that for each torus boundary component $T$, either $\Phi$ is everywhere tangent to $T$ or $\Phi$ is everywhere transverse to $T$. When $M$ is smooth with corners, we insist that $\partial M$ can be written as a union of two compact piecewise linear surfaces $\partial_v M$ and $\partial_h M$, where the intersection $\partial_v M\cap\partial_h M$ is a union of corners of $M$, $\partial_h M$ is everywhere transverse to $\Phi$, and $\partial_v M$ is everywhere tangent to $\Phi$. Flows and oriented codimension-1 foliations coexist in interesting ways. A good overview can be found in [@CC]. In particular, given an oriented $C^k$-foliation $\mathcal F$, for any $k$, of an oriented 3-manifold $M$, possibly with non-empty boundary and possibly with corners, there is a $C^{\infty}$ flow everywhere transverse to $\mathcal F$. From this we have the following: \[Proposition 5.1.4 of [@CC]\]\[transcont\] Let $M$ be a smooth compact oriented 3-manifold, possibly with corners. Let $R$ denote any one of $\mathbb R^2$, the closed upper half-plane in $\mathbb R^2$, or the closed upper right quadrant of $\mathbb R^2$. Given an oriented codimension-1 $C^k$-foliation $\mathcal F$ and smooth flow $\Phi$ transverse to $\mathcal F$, there exists a smooth biregular cover for $(M,\mathcal F,\Phi)$; namely, for every $p\in M$ there is a smooth coordinate chart $(U,(x,y,z))$, where $U\approx R\times \mathbb R$, and 1. the restriction of $\Phi$ to $U$ is the union of lines given by $(x,y) = $ constant, and 2. the restriction of $\mathcal F$ to $U$ is a $C^k$ family of $C^{\infty}$ graphs over $R$. The second condition emphasizes the fact that $C^0$-foliations are leafwise smooth and transversely $C^0$. When the oriented foliation $\mathcal F$ is taut, and $M$ is Riemannian, the smooth transverse flow can be chosen to be volume preserving. (Theorem II.20, [@Sullivan]; see also Theorem A1, [@Hass])\[volpreserve\] Let $\mathcal F$ be a codimension-1, taut $C^0$-foliation of a closed smooth Riemannian 3-manifold $M$. Then there is a volume-preserving smooth flow everywhere transverse to $\mathcal F$. Equivalently, there is a smooth closed 2-form dominating $\mathcal F$. Given a 3-manifold $M$ containing a taut $C^0$-foliation, one can ask whether there is a closely related $C^{\infty}$-foliation. 
Interpreting ‘closely related’ to mean any of $C^0$-$\epsilon$-close for some fixed $\epsilon>0$, $\Phi$-close, or topologically conjugate results in questions for which the answers are very little understood. There are certainly 3-manifolds which contain Reebless $C^0$-foliations but not Reebless $C^{\infty}$-foliations (Theorem D, [@BNR]). The existence of a taut sutured manifold hierarchy guarantees the existence of two types of foliation, one $C^0$ and finite depth and the other $C^{\infty}$ ([@G1; @G2; @G3]). Fixing a Riemannian metric and some $\epsilon>0$, these two types of foliation are not necessarily $C^0$-$\epsilon$-close. However, since they are carried by a common transversely oriented branched surface, they are $\Phi$-close. We will take advantage of the fact that it is always possible [*locally*]{} to $C^0$-approximate $(\mathcal F,T\mathcal F)$ by $(\tilde{\mathcal F}, T\tilde{\mathcal F})$, for some locally defined smooth foliation $\tilde{\mathcal F}$. \[smoothapprox\] Let $D$ be a smooth disk with corners and let $\mathcal F$ be a $C^0$-foliation of $D\times [0,1]$ which is positively transverse to the smooth 1-dimensional foliation by intervals $\{ (x,y)\}\times [0,1],\,\, (x,y)\in D$. Given any $\epsilon>0$, there is a smooth foliation $\tilde{\mathcal F}$ which is positively transverse to the smooth 1-dimensional foliation by intervals $\{ (x,y)\}\times [0,1],\,\, (x,y)\in D$, and is such that $(\tilde{\mathcal F},T\tilde{\mathcal F})$ is $C^0$ $\epsilon$-close to $(\mathcal F, T\mathcal F)$. Moreover, if $\mathcal F$ is smooth on some compact $\mathcal F$-saturated subset, then we may choose $\tilde{\mathcal F}$ to equal $\mathcal F$ on this subset. By identifying $D$ with a subset of the plane, a point $p$ in a leaf of $\mathcal F$ determines both a point in $\mathbb R^3$, and by choosing a unit vector perpendicular to the tangent plane of the leaf, a point $\bf{u}_p$ in $T\mathbb R^3$. The standard metric on $T\mathbb R^3 = \mathbb R^6$ is used to measure the distance between two leaves of $\mathcal F$ as follows. By Proposition \[transcont\], we may assume that the leaves of $\mathcal F$ are given by the graphs of $z=f_{\theta}(x,y)$, for some continuous family of smooth functions $f_{\theta} : D\to [0,1]$, $0\le\theta\le 1$. For any two such leaves, $L_1$ and $L_2$ say, given by $z=f_{\theta_1}(x,y)$ and $z=f_{\theta_2}(x,y)$ respectively, define the distance $d(L_1,L_2)$ between them to be the maximum distance, computed in $T\mathbb R^3$, between $(x,y,f_{\theta_1}(x,y),{\bf u}_{(x,y,f_{\theta_1}(x,y))})$ and $(x,y,f_{\theta_2}(x,y),{\bf u}_{(x,y,f_{\theta_2}(x,y))})$ for $(x,y) \in D$. Since $D$ is compact, uniform continuity guarantees that $d$ is continuous and hence a metric on the leaf space of $\mathcal F$. For any $\theta \in [0,1]$, let $U_{\theta}$ denote the subset of $D\times [0,1]$ which is the union of all leaves $z=f_{\theta'}(x,y)$, $\theta'\in[0,1]$, which are of $d$-distance strictly less than $\epsilon/2$ from the leaf $z = f_{\theta}(x,y)$. Since $U_{\theta}$ is the pullback of an $\epsilon/2$ $d$-neighborhood in the leaf space, $U_{\theta}$ is open in $D\times[0,1]$. Pick a finite cover of $D\times[0,1]$ by $U_{\theta_0},U_{\theta_1},\cdots,U_{\theta_r}$ for some $r\ge 0$ and $0=\theta_0<\theta_1<\cdots <\theta_r = 1$.
Now let $\tilde{\mathcal F}$ be the foliation of $D\times[0,1]$ which includes the leaves given by the graphs of $f_{\theta_i}$ and, for each $i, 0\le i\le r-1$, the leaves given by the graphs of a damped straight line homotopy between $f_{\theta_i}$ and $f_{\theta_{i+1}}$. Thus if $g$ is a smooth homeomorphism of $[0,1]$ with derivatives at $0$ and $1$ vanishing to infinite order, the leaves of $\tilde{\mathcal F}$ are $z=(1-g(t))f_{\theta_i}(x,y) + g(t)f_{\theta_{i+1}}(x,y)$, $0\le t\le 1$, on the subset of $D\times [0,1]$ bounded by the graphs of $f_{\theta_i}$ and $f_{\theta_{i+1}}$. By construction, $\tilde{\mathcal F}$ is smooth. Moreover, $(\tilde{\mathcal F},T\tilde{\mathcal F})$ and $(\mathcal F,T\mathcal F)$ are $\epsilon$-close. To see this, recall that a normal vector to a graph $z=f(x,y)$ is given by ${\bf n}_f = \langle -f_x, -f_y, 1\rangle$, and a straight-line homotopy between $f_{\theta_1}$ and $f_{\theta_2}$ induces a straight-line homotopy between ${\bf n}_{f_{\theta_1}}$ and ${\bf n}_{f_{\theta_2}}$. Normalizing this straight-line homotopy of normal vectors gives a geodesic on the unit sphere joining ${\bf u}_{f_{\theta_1}}$ and ${\bf u}_{f_{\theta_2}}$. Since the leaves given by $z = f_{\theta_i}(x,y)$ and $z = f_{\theta_{i+1}}(x,y)$ are of $d$-distance at most $\epsilon/2$, it follows immediately from the triangle inequality that the leaves given by $z=f_{(1-g(t))\theta_i + g(t)\theta_{i+1}}(x,y)$ and $z=(1-g(t))f_{\theta_i}(x,y) + g(t)f_{\theta_{i+1}}(x,y)$ are of $d$-distance strictly less than $\epsilon$. So$(\tilde{\mathcal F},T\tilde{\mathcal F})$ and $(\mathcal F,T\mathcal F)$ are $\epsilon$-close. If $\mathcal F$ is smooth on some compact $\mathcal F$-saturated subset $A$ of $D\times [0,1]$, each component of $A$ is bounded by graphs of the form $f_{\theta}$. By compactness of $A$, $\partial A$ contains only finitely many such $f_{\theta}$. For each $z=f_{\theta}$ in $\partial A$, include $\theta$ in the list $\theta_0,\theta_1,\cdots,\theta_r$ and modify $\mathcal F$ only on the complement of $A$. Next we recall Operation 2.1.1 of [@Ga]. Let $L_1,\dots, L_m$ be distinct leaves of a $C^0$-foliation $\mathcal F$. Modify $\mathcal F$ by thickening each of the leaves $L_j$. Thus, each $L_j$ is blown up to an $I$-bundle $L_j\times [-1,1]$. Let $\mathcal F'$ denote the resulting foliation. We highlight the following observation. \[blowup\] The leaves $L_1, \dots, L_m$ may be thickened so that the foliation $\mathcal F'$ is $C^0$ and the restriction of $\mathcal F'$ to $$L_j\times (-1,1)\subset M$$ is a smooth foliation for each $j$. Transitive flow box decompositions {#transitive flow boxes} ================================== \[flowboxdefn\] A [*flow box*]{}, $F$, for a $C^0$-foliation $\mathcal F$ and smooth transverse flow $\Phi$, is a smooth chart with corners that is of the form $D\times I$, where $D$ is a polygon (a disk with at least three corners), $\Phi$ intersects $F$ in the arcs $\{(x,y)\}\times I$, and $\mathcal F$ intersects $F$ in disks which are everywhere transverse to $\Phi$ and hence can be thought of as graphs over $D$. In particular, $D\times \partial I$ lies in leaves of $\mathcal F$, each component of $\mathcal F\cap F$ is a smoothly embedded disk, and these disks vary continuously in the $I$ direction. The [*vertical boundary*]{} of $F$, denoted $\partial_v F$, is $\partial D \times I$. The [*horizontal boundary*]{} of $F$ is $D \times \partial I$ and is denoted $\partial_h F$. 
An arc in $M$ is [*vertical*]{} if it is a subarc of a flow line and [*horizontal*]{} if it is contained in a leaf of $\mathcal F$. It is often useful to view the disk $D$ as a 2-cell with $\partial D$ the cell complex obtained by letting the vertices correspond exactly to the corners of $D$. Similarly, it is useful to view the flow box $F$ as a 3-cell possessing the product cell complex structure of $D\times I$. Then the horizontal boundary $\partial_h F$ is a union of two (horizontal) 2-cells and the vertical boundary $\partial_v F$ is a union of $c$ (vertical) 2-cells, where $c$ is the number of corners of $D$. A subset $R$ of $F$ is called a [*vertical rectangle*]{} if it has the form $\alpha\times [a,b]$, where $0\le a<b\le 1$ and $\alpha$ is either a 1-cell of $\partial D$ or else a properly embedded arc in $D$ connecting distinct vertices of $D$. A subset $e$ of $F$ is called an [*edge*]{} if it is a compact interval contained in a 1-cell of $F$. Given a vector $\vec w$ tangent to $\partial_v F$, we choose a [*slope*]{} convention such that the leaves of $\mathcal F \cap \partial_v F$ have slope $0$, the flow lines have slope $\infty$, and the sign of the slope of $\vec w$ is computed as viewed from outside of $F$. Given a codimension-1 leafwise smooth foliation $\mathcal F$ and transverse smooth flow $\Phi$, let $V$ be a compact codimension-0 sub-manifold of $M$, with $\partial V = \partial_v V \cup\partial_h V$, where $\partial_v V$ is a union of flow arcs or circles, and $\partial_h V$ is a union of subsurfaces of leaves of $\mathcal F$. In the case that $\partial V = \partial_v V$, $\mathcal F$ and $\Phi$ need only be defined on the complement of $V$. A [*flow box decomposition*]{} of $M$ [*rel*]{} $V$ is a decomposition $M = V\cup (\cup_i F_i)$, expressing $M\setminus \text{int}\, V$ as a finite union of flow boxes $F_i$, where 1. Each $F_i$ is a standard flow box for $\mathcal F$. 2. If $i \neq j$, the interiors of $F_i$ and $F_j$ are disjoint. 3. If $F_i$ and $F_j$ are different flow boxes, then their intersection is connected and either empty, a 0-cell, an edge, a vertical rectangle, or a subdisk of $\partial_h F_i \cap \partial_h F_j$. \[transitive\] We call a flow box decomposition $M= V\cup F_1 \cup \dots \cup F_n$ [*transitive*]{} if, setting $V_0=V$ and $V_i = V_{i-1} \cup F_i$, the following hold for $i=1,\dots, n$: 1. each 2-cell of $\partial_v F_i$ has interior disjoint from $\partial_h F_j$ for all $j<i$, 2. $V_{i-1}\cap F_i$ is a union of horizontal subsurfaces and vertical 2-cells of $F_i$, together possibly with some 0- and 1-cells, and 3. $V_{i-1}\cap F_i$ contains a vertical 2-cell of $F_i$. \[transitiveflowbox\] If $\mathcal F$ is $V$-transitive, then there is a transitive flow box decomposition of $M$ rel $V$. Since $\mathcal F$ is $V$-transitive, for each point $x\in M\backslash V$, there exists an embedded arc $\gamma$ in the leaf containing $x$ that connects $x$ to $V$. By taking a regular neighborhood of $\gamma$ in its leaf and flowing along it, create a flow box $F$ with the property that it has one vertical 2-cell contained in $\partial_v V$. The point $x$ may or may not be in the interior of $F$. Using compactness of $M$, pick a finite collection of flow boxes, of the sort just described, $F_1, F_2,\dots, F_r$ that cover $M\backslash V$. Assume that no proper subcollection of the $F_i$, $1\le i\le r$, covers. Next, let $L_1, L_2, \dots, L_m$ be the collection of leaves of $\mathcal F$ that contain the horizontal boundaries of all $F_i$.
We proceed by induction to show that $V\cup F_1\cup\dots\cup F_i$ admits a transitive flow box decomposition with respect to $V$ for every $i, 1\le i\le r$. Certainly, $V\cup F_1$ does. So suppose that $V\cup F_1\cup\dots\cup F_{i-1}$ admits a transitive flow box decomposition with respect to $V$. After renaming and reindexing as necessary, we assume that this flow box decomposition is given by $V\cup F_1\cup\dots\cup F_{i-1}$. We show that $V_i = V\cup F_1\cup\dots\cup F_{i-1}\cup F_i$ also admits a transitive flow box decomposition with respect to $V$. Begin by slightly increasing the size of $F_i$ in $M\setminus \text{int} V$, as necessary, so that $F_i$ is still a flow box and, for all $j<i$, $\partial_v F_j$ and $\partial_v F_i$ are transverse away from $V$ (along $V$ they may overlap tangentially). Notice that this ensures that $V\cup F_1\cup\dots\cup F_{i-1}\cup F_i$ is a codimension-0 submanifold with corners and piecewise vertical and horizontal boundary. Also, cut $F_i$ open along those (horizontal disk) components of $(\cup_{k=1}^{m} L_k)\cap F_i$ which have non-empty intersection with $\partial_h F_j$ for some $j<i$. Denote the resulting flow boxes by $F_i^1,\dots,F_i^s$; so $F_i = F_i^1\cup\dots\cup F_i^s$. Consider $F_j\cap F_i^1$ for some $j<i$. Since $\partial_v F_j$ and $\partial_v F_i^1$ are transverse away from $V$, each component of $F_j\cap F_i^1$ is a flow box. Consider any such component, $X$ say, from the point of view of the flow box $F_i^1=D_i\times [c,d]$. Notice that $X=D\times [c,d]$, where $D$ is a subdisk (with corners) of $D_i$, and $X\cap \partial_v F_i^1$ is a non-empty union of vertical 2-cells. Now, for all $j<i$, remove $F_j\cap F_i^1$ from $F_i^1$. Taking the closure of the result we get a union $G_1\cup\dots\cup G_b$ of flow boxes, where each $G_k\cap V_{i-1}$ is a union of horizontal subsurfaces and vertical 2-cells of $G_k$, together possibly with some 0- and 1-cells, and contains a vertical 2-cell of $G_k$. Notice that $G_k\cap G_l=\emptyset$ if $k\ne l$ and, by subdividing each $G_k$ along finitely many vertical rectangles as necessary, we may assume $G_k\cap F_j$ is connected for all $j<i$. The resulting union $$V\cup F_1\cup\dots\cup F_{i-1}\cup G_1\cup\dots\cup G_b$$ is then a transitive flow box decomposition of $V\cup F_1\cup\dots\cup F_{i-1}\cup F_i^1$ with respect to $V$. Repeat this process for each $F_i^a$, $2\le a\le s$, to obtain a transitive flow box decomposition of $V\cup F_1\cup\dots\cup F_i$ with respect to $V$. Basic extension results {#basicsection} ======================= In this section, we collect together an assortment of confoliation extension results important for the inductive construction to be described in Section \[theconstructionsection\]. For the most part, it will be possible to restrict attention to flow boxes diffeomorphic to one of the flow boxes $F$, $G$ or $H$, where $F$, $G$ and $H$ are defined as follows. Let $F$ denote the flow box given by $$F=\{|x| \le 1, 0 \le y \le 1, |z| \le 1\}.$$ Let $G$ denote the flow box given by $\Delta \times [-1,1]$, where $\Delta$ is the region in the $xy$-plane bounded by the triangle with vertices $$(-3/2,1), (3/2,1) \mbox{ and } (0,-1/2).$$ Let $\Delta^{(0)}$ denote the $0$-skeleton $$\Delta^{(0)} = \{(-3/2,1), (3/2,1), (0,-1/2)\}.$$ Let $H=F\cap G$, a flow box with hexagonal horizontal cross-section. Notice that $H$ is diffeomorphic to the complement in $G$ of an open neighborhood of the $1$-skeleton of $\partial_v G$. We begin with the following elementary, and very useful, observations of Eliashberg and Thurston [@ET].
(Proposition 1.1.5, [@ET])\[ETlemma\] Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ and domain $F$, given by a 1-form $dz - a(x,y,z) dx$. Then $$\dfrac {\partial a}{\partial y}(x,y,z) \ge 0$$ at all points of $F$, and $$\dfrac {\partial a}{\partial y}(x,y,z) > 0$$ where $\eta$ is contact. Eliashberg and Thurston use this lemma to approximate the confoliation by a contact structure, as in the following corollary. (Lemma 2.8.2, [@ET])\[approximate\] Let $\eta$ be a $C^k$-confoliation, with $k \ge 1$ and domain $F$, given by a 1-form $dz - a(x,y,z) dx$. If $\eta$ is contact in a neighborhood of $y=1$ in $F$, then $\eta$ can be approximated by a confoliation $\hat\eta$ which coincides with $\eta$ together with all of its derivatives along the boundary $\partial F$ and which is contact inside $F$. It is enough to approximate $a(x,y,z)$ along each interval $\{x\} \times [0,1] \times \{z\}$ by a function $\hat{a}(x,y,z)$ that is strictly increasing in $y$ for $(x,z)\in(-1,1) \times (-1,1)$ but is damped to agree smoothly with $a(x,y,z)$ on $\partial F$. \[dominate\] If $\alpha$ and $\beta$ are families of curves transverse to $\partial/\partial z$ and contained in a vertical 2-cell $R$ of the vertical boundary of a flow box, we say $\alpha$ [*strictly dominates*]{} $\beta$ along $A\subset R$ if at every $p\in A$, the slope of the tangent to $\alpha$ at $p$ is greater than the slope of the tangent to $\beta$ at $p$. It must be specified whether the comparison of slopes is made from inside or outside of the flow box. If $\alpha$ strictly dominates $\beta$ along $R$, and $\alpha$ and $\beta$ are the characteristic foliations of 2-plane fields $\xi_1$ and $\xi_2$ respectively, we also say that $\xi_1$ strictly dominates $\xi_2$ along $R$. If $\xi_2 = T\mathcal F$ for some codimension-1 foliation $\mathcal F$, we also say that $\xi_1$ strictly dominates $\mathcal F$ along $R$. The statement of Lemma \[ETlemma\] raises the question of whether flow box coordinates can always be chosen so that the contact form can be written as $dz-a(x,y,z)dx$. The next lemma points out that this is the case and gives a simple condition for a contact structure to dominate in such coordinates. Let $U$ be a regular neighborhood in $F$ of the union of $x=\pm 1$ and $z=\pm 1$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of $y=1$ in $F$ which is everywhere transverse to the vertical segments $(x,y) =$ constant, horizontal in $\overline{U}$, and contact on $V\setminus \overline{U}$. Then, after smoothly reparametrizing $F$ as necessary, we may assume that $\eta$ is given by a 1-form $$dz - a(x,y,z) dx$$ with 1. $a(x,y,z)=0$ on $V\cap \overline{U}$, and 2. $\dfrac {\partial a}{\partial y}(x,y,z) > 0$ on $V\setminus \overline{U}$. Moreover, the characteristic foliation of $\eta$ along the complement of $\overline{U}$ in $y=1$ strictly dominates the horizontal foliation, when viewed from inside $F$, if and only if $a(x,y,z)>0$ in $V'\setminus \overline{U}$, for some neighbourhood $V'\subset V$ of $y=1$. Since $dz - a(x,y,z) dx$ vanishes on $\partial/\partial y$, it is enough to choose coordinates $x, y$ for leaves so that curves with constant $x$ coordinate are Legendrian. At points where $\eta$ is transverse to the horizontal foliation, there is a unique Legendrian direction. At all other points, any direction is Legendrian. The coordinate $y$ can be constructed by choosing a section of the Legendrian directions.
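For the reader's convenience, here is the short computation behind Lemma \[ETlemma\]; this verification is ours, added only for completeness. For the plane field defined by $\alpha = dz - a(x,y,z)\,dx$ we have $$d\alpha = \frac{\partial a}{\partial y}\, dx\wedge dy + \frac{\partial a}{\partial z}\, dx\wedge dz, \qquad \alpha\wedge d\alpha = \frac{\partial a}{\partial y}\, dx\wedge dy\wedge dz,$$ so, with respect to the orientation given by $dx\wedge dy\wedge dz$, the plane field $\ker\alpha$ is a positive confoliation exactly when $\partial a/\partial y\ge 0$ and is contact exactly where $\partial a/\partial y> 0$. This is the monotonicity in $y$ exploited in Corollary \[approximate\].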
![Each figure shows a $z=0$ slice capturing the flow box setup of one of Corollaries \[extend3\]–\[extend8\]. Plus signs are positioned on the side from which the contact structure dominates the horizontal foliation. Dashes, for instance along $U$, show where the confoliation slope is 0. The smooth foliation acts as a transport mechanism for contact structures in the direction shown by the arrows.[]{data-label="extend"}](extend){width="5in"} \[extend3\] Let $U$ be a regular neighborhood in $F$ of the union of $x=\pm 1$ and $z=\pm 1$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of $y=1$ in $F$ by a 1-form $$dz - a(x,y,z) dx$$ with 1. $a(x,y,z)=0$ on $V\cap \overline{U}$, and 2. $a(x,y,z)>0$ and $\dfrac {\partial a}{\partial y}(x,y,z) > 0$ on $V\setminus \overline{U}$. Then $\eta$ can be extended to a $C^k$-confoliation $\hat\eta$ on $F$ that agrees with the horizontal foliation of $F$ in $\overline{U}$, is contact on the complement of $\overline{U}$, and strictly dominates, when viewed from outside $F$, the horizontal foliation on the complement in $y=0$ of $\overline{U}$. It is enough to extend $a(x,y,z)$ along each interval $\{x_0\} \times [0,1] \times \{z_0\}$ to a $C^k$ function $\hat{a}(x,y,z)$ such that 1. $\hat{a}(x,y,z)=0$ in $\overline{U}$, and 2. $\hat{a}(x,y,z)>0$ and $\dfrac {\partial \hat{a}}{\partial y}(x,y,z) > 0$ outside $\overline{U}$. \[extend4\] Let $U$ be a regular neighborhood in $F$ of the union of $x=\pm 1$ and $z=\pm 1$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of $y=1$ in $F$ by a 1-form $$dz - a(x,y,z) dx$$ with 1. $a(x,y,z)=0$ on $V\cap \overline{U}$, and 2. $a(x,y,z)>0$ and $\dfrac {\partial a}{\partial y}(x,y,z) > 0$ on $V\setminus \overline{U}$. Then $\eta$ can be extended to a $C^k$-confoliation $\hat\eta$ on $F$ that agrees with the horizontal foliation of $F$ in $\overline{U}$, is contact on the complement of $\overline{U}$ in the interior of $F$, and smoothly agrees with the horizontal foliation at $y=0$. Proceed as in the proof of Corollary \[extend3\] except insist that $$\hat{a}(x,0,z)\equiv 0.$$ \[extend5\] Let $U$ be a regular neighborhood in $F$ of the union of $x=\pm 1$ and $z=\pm 1$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of the union of $y=0$ and $y=1$ in $F$. Suppose that when viewed from inside $F$, $\eta$ dominates the horizontal foliation along the vertical faces given by $y=0$ and $y=1$, with strict domination in the complement of $\overline{U}$. Then $\eta$ can be extended to a $C^k$-confoliation $\hat\eta$ on $F$ that agrees with the horizontal foliation of $F$ in $\overline{U}$, and is contact on the complement of $\overline{U}$. Decompose $F$ as a union of two flow boxes diffeomorphic to $F$ by cutting open along the plane $y=1/2$. Apply Corollary \[extend4\] to each of the resulting flow boxes. The point of “smoothly agrees with” in the next corollary is that flow boxes are brick-like objects that, when sensibly glued together, should define a smooth confoliation. Thus we require smooth convergence of the confoliation to horizontal at $y=0$. \[extend6\] Let $U$ be a regular neighborhood in $F$ of the union of $x=\pm 1$ and $z=\pm 1$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of $y=1$ in $F$ by a 1-form $$dz - a(x,y,z) dx$$ with 1. $a(x,y,z)=0$ on $V\cap \overline{U}$, and 2. $a(x,y,z)>0$ and $\dfrac {\partial a}{\partial y}(x,y,z) > 0$ on $V\setminus \overline{U}$. 
Then $\eta$ can be extended to a $C^k$-confoliation $\hat\eta$ on $H$ that agrees with the horizontal foliation of $H$ in $\overline{U}$, is contact on the complement of $\overline{U}$ in the interior of $H$, smoothly agrees with the horizontal foliation at $y=0$, and dominates the horizontal foliation in the complement of $\overline{U}$ along the lines $$y = x -1/2 \mbox{ and } y = -1/2-x.$$ This follows from Corollary \[extend4\]. \[extend7\] Let $U$ be a regular neighborhood in $G$ of the union of $z=\pm 1$ and $\Delta^{(0)}\times [-1,1]$. Let $S$ denote the complement in $\partial_v G$ of the 2-cell given by $y=1$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of $S$ in $G$ such that 1. $\eta$ is contact on $V\setminus \overline{U}$, 2. $\eta$ agrees with the horizontal foliation on $V\cap\overline{U}$, and 3. $\eta$ strictly dominates the horizontal foliation along $S\setminus \overline{U}$, when viewed from inside $G$. Then $\eta$ can be extended to a $C^k$-confoliation $\hat\eta$ on $G$ that agrees with the horizontal foliation of $G$ in $\overline{U}$, is contact on the complement of $\overline{U}$ in the interior of $G$, and, when viewed from outside $G$, dominates the horizontal foliation in the complement of $\overline{U}$ along the line $y=1$. Let $\alpha$ be a smooth arc properly embedded in $\Delta$ which connects the vertices $(3/2,1)$ and $(0,-1/2)$ and is not tangent to a side of the triangle $\Delta$ at its endpoints. Let $R=\alpha\times [-1,1]$, a vertical rectangle in $G$. Decompose $G$ along $R$ into two flow boxes as $G=G'\cup F'$, where $G'$ is diffeomorphic to $G$ and $F'$ is diffeomorphic to $F$. First apply Corollary \[extend6\] to $G'$ and then apply Corollary \[extend3\] to $F'$. \[extend8\] Let $U$ be a regular neighborhood in $G$ of the union of $z=\pm 1$ and $\Delta^{(0)}\times [-1,1]$. Let $\eta$ be a $C^k$-confoliation with $k \ge 1$ defined in a neighborhood $V$ of $\partial_v G$ in $G$ such that 1. $\eta$ is contact on $V\setminus \overline{U}$, 2. $\eta$ agrees with the horizontal foliation on $V\cap\overline{U}$, and 3. $\eta$ strictly dominates the horizontal foliation along $\partial_v G \setminus \overline{U}$, when viewed from inside $G$. Then $\eta$ can be extended to a $C^k$-confoliation $\hat\eta$ on $G$ that agrees with the horizontal foliation of $G$ in $\overline{U}$ and is contact on the complement of $\overline{U}$ in the interior of $G$. This time we cut $G$ open along two disjoint vertical rectangles and consider the resulting union of flow boxes: $$G=G'\cup F'\cup F^{''},$$ where $G'$ is diffeomorphic to $G$ and each of $F'$ and $F^{''}$ is diffeomorphic to $F$. First apply Corollary \[extend6\] to $G'$; then apply Corollary \[extend3\] to each of $F'$ and $F^{''}$. We now consider a case where the initial confoliation is defined on the entire vertical boundary of a solid cylinder. Let $\mathcal L$ be a smooth 1-dimensional foliation on the cylinder $S^1 \times I$ such that the boundary components of $S^1\times I$ are leaves of $\mathcal L$. Let $h:I\to I$ be the holonomy map of $\mathcal L$. \[cylinderextend\] If $h(z)<z$ for $z\in(0,1)$, there is a confoliation on $D^2 \times I$ that is contact on $D^2 \times (0,1)$, tangent along $D^2 \times \partial I$, has characteristic foliation $\mathcal L$ along $\partial D^2\times I$, and is everywhere transverse to $\partial/\partial z$. This will follow from Lemmas \[extension1\]–\[extension3\].
\[extension1\] If $h(z)<z$ for $z\in(0,1)$, then there exists a smooth 1-dimensional foliation $\mathcal K$ of $S^1 \times I$ with the same holonomy map $h:I\to I$ such that for every $(\theta, z)\in S^1 \times (0,1)$, the slope, $s(\theta,z)$, of $\mathcal K$ at $(\theta,z)$ is negative. As a first approximation, let $\mathcal K_1$ be the foliation of $[0,2\pi] \times I$ given by connecting each point $(0,z)$ to $(2\pi,h(z))$ by a straight line. Next create $S^1 \times I$ by identifying $(0,z)$ and $(2\pi,z)$ for $z\in I$. Let $\mathcal K_2$ denote the image of $\mathcal K_1$ in $S^1\times I$. This has the desired properties, except that $\mathcal K_2$ is not smooth along $\{0\} \times I$. Carefully rounding these corners (see, for example, Lemma 4.7 of [@Milnor]) yields the desired smooth foliation. \[extension2\] There is a diffeomorphism, $F$, of $D^2 \times I$ that is the identity map on $D^2 \times \partial I$ and takes $\mathcal L$ to $\mathcal K$. Let $\theta = 0$ be a base point for $S^1$ so that the holonomy map for each foliation is $h:\{0\} \times I \to \{0\} \times I$. Let $f$ be the diffeomorphism of $S^1 \times I$ such that $f$ restricts to the identity map on $\{0\} \times I$, preserves the $S^1$ coordinate, and maps $\mathcal L$ to $\mathcal K$. Define $F:D^2 \times I\to D^2 \times I$ by $F(r,\theta,z)=(r, t(r)f(\theta,z) + (1-t(r))(\theta,z))$ where $t$ is a diffeomorphism of the interval smoothly damped at the endpoints. \[extension3\] There is a confoliation $\xi$ on $D^2 \times I$ that is contact on $D^2 \times (0,1)$, tangent along $D^2 \times \partial I$, has characteristic foliation $\mathcal K$ along $\partial D^2\times I$, and is everywhere transverse to $\partial/\partial z$. Using cylindrical coordinates, define $\alpha = dz - r^2s(\theta,z)\,d\theta$. Then $d\alpha=-2rs(\theta,z)\,dr\wedge d\theta-r^2s_z(\theta,z)\,dz\wedge d\theta$, from which it follows that $$\alpha \wedge d\alpha=-2s(\theta,z)\,r\,dr\wedge d\theta\wedge dz.$$ Then $\xi=\text{ker}(\alpha)$ has the desired properties. The inductive construction of $\xi$. {#theconstructionsection} ==================================== In this section, we show how a $C^0$-foliation can be used to propagate a contact structure across $M$. Before describing this procedure, we highlight the role of smoothness in the approach used by Eliashberg and Thurston in performing this propagation. First, recall Lemma \[ETlemma\]. Roughly speaking, a clever choice of foliation coordinates permits a confoliation along a Legendrian curve to be described by a monotone function. By taking advantage of a beginning contact zone, such a function can be approximated by a strictly monotone function, thereby creating a larger contact zone. This argument can be repeated on overlapping regions covering the manifold. The issue is whether strict monotonicity attained on a given region can be preserved under subsequent approximations. This is precisely where smoothness of the foliation becomes important, guaranteeing that derivatives are globally defined and continuous, and thus allowing one to preserve strict monotonicity under subsequent approximations. We circumvent the issue of monotonicity with carefully chosen, minimally overlapping, flow boxes and a more discrete propagation technique, which we now describe. Recall Definition \[dominate\] and the slope convention chosen in Definition \[flowboxdefn\]. Let $F$ be a flow box and let $R$ be a rectangle in $\partial_v F$.
Let $\chi_{\xi_1}$ and $ \chi_{\xi_2}$ denote the characteristic foliations induced on $R$ by two 2-plane fields $\xi_1$ and $\xi_2$ defined in a neighborhood of $R$ and positively transverse to $\Phi$. We write $\chi_{\xi_1}<_p\chi_{\xi_2}$ if the unit vector tangent to $\chi_{\xi_1}$ at $p$ has slope less than the unit vector tangent to $\chi_{\xi_2}$ at $p$. Similarly, we write $\chi_{\xi_1}=_p\chi_{\xi_2}$ if the unit vector tangent to $\chi_{\xi_1}$ at $p$ has slope equal to the unit vector tangent to $\chi_{\xi_2}$ at $p$. We write $\chi_{\xi_1}<\chi_{\xi_2}$ if $\chi_{\xi_1}<_p\chi_{\xi_2}$ for all $p\in \text{int}(R)$. Similarly, we write $\chi_{\xi_1}\le\chi_{\xi_2}$ if $\chi_{\xi_1}\le_p\chi_{\xi_2}$ for all $p\in \text{int}(R)$. Let $M$ be a closed oriented 3-manifold with smooth flow $\Phi$. Suppose that $M$ decomposes as a union $$M=V\cup W,$$ where $V$ and $W$ are smooth 3-manifolds, possibly with corners, and $\partial V=\partial W$. We say that this decomposition is [*compatible with the flow $\Phi$*]{} if $\partial V$ (and hence $\partial W$) decomposes as a union of compact subsurfaces $\partial_v V\cup \partial_h V$, where $\partial_v V$ is a union of flow segments of $\Phi$ and, $\partial_h V$ is transverse to $\Phi$. In the presence of a foliation transverse to the flow, the notation $\partial_h V$ will be used for the portion of $\partial V$ tangent to the foliation. \[compatible1\] Let $M$ be a closed oriented 3-manifold with smooth flow $\Phi$. Suppose that $M$ can be expressed as a union $$M=V\cup W,$$ where $V$ and $W$ are smooth 3-manifolds, possibly with corners, such that $\partial V=\partial W$. Suppose also that this decomposition is compatible with $\Phi$, that $V$ admits a smooth contact structure $\xi_V$, and that $W$ admits a $C^0$-foliation $\mathcal F_W$. We say that $(V,\xi_V)$ is [*$\Phi$-compatible*]{} with $(W,\mathcal F_W)$, and that $M$ admits a positive $(\xi_V,\mathcal F_W,\Phi)$ decomposition, if the following are satisfied: 1. $\xi_V$ and $\mathcal F_W$ are (positively) transverse to $\Phi$ on their domains of definition, 2. $\xi_V$ is tangent to $\partial_h V$, and 3. $\chi_{\xi_V}<\chi_{T\mathcal F_W}$ on the interior of $\partial_v V$, when viewed from outside $V$. The main result of the section is Theorem \[main1\]. The starting point is a transitive flow box decomposition $M= V\cup F_1 \cup \dots \cup F_n$ for $\mathcal F$. Set $V_0 = V$ and $V_i = V\cup F_1\cup...\cup F_i$ for each $i, 1\le i\le n$. For each $i, 0\le i\le n$, set $W_i = M\setminus \text{int}(V_i)$. Thus, for $0\le i \le n$, $\mathcal F$ is compatible with $(V_i,W_i,\Phi)$. When $i=0$, $\partial_h V_i = \emptyset$, and when $i=n$, $V_n=M$ and hence $\partial V_n=\emptyset$. \[smoooth\] After possibly blowing up finitely many leaves of $\mathcal F$, we may assume that 1. $\mathcal F$ is smooth in a neighborhood of the horizontal faces $\partial_h F_i$, and 2. $\mathcal F$ is smooth in a neighborhood of the vertical edges of each $F_i$. We will define $\xi$ inductively and in a piecewise fashion over each $F_i$. To guarantee that the resulting confoliation is everywhere smooth, we add a [*smoothly foliated collar*]{} about the horizontal boundary and the vertical 1-simplices of each $F_i$ as follows. Let $L_1, L_2, \dots, L_m$ be the collection of leaves of $\mathcal F$ that contain the horizontal boundaries of all $F_i$. Modify the original foliation $\mathcal F$ by thickening each of the leaves $L_j$. 
Thus each $L_j$ is replaced with a thickened leaf $L_j \times [-1,1]$. The thickening should be performed so that for each $i$, if a component of $\partial_h F_i$ was originally contained in $L_j$, then it is now contained in $L_j \times \{0\}$. As noted in Lemma \[blowup\], we may assume that the restriction of the foliation to the interior of each thickening, $L_j\times (-1,1)$, is a smooth foliation. Note that $M= V\cup F_1 \cup \dots \cup F_n$ remains a transitive flow box decomposition of $M$ with respect to this new foliation. Moreover, $\mathcal F \cap F_i$ is smooth in a neighbourhood of $\partial_h F_i$ for each $i$. The vertical edges of the $F_i$ are smooth transverse arcs; thus $\mathcal F$ can be smoothed on $D^2 \times I$ product neighborhoods of these edges. Now we fix a preferred regular neighborhood of the union of the horizontal 2-cells and the vertical 1-cells of the flow box decomposition. We do this as follows. For each $i, 1\le i \le n$, choose a regular neighborhood of $\partial_h F_i$ which is contained in the union of the thickenings $L_j\times (-1,1)$, and choose a regular neighborhood of the vertical 1-cells of $F_i$ on which $\mathcal F$ is smooth. Choose these neighborhoods so that the union $U$ of these neighborhoods is a regular neighborhood of the union of all horizontal 2-cells and all vertical 1-cells of the flow box decomposition. We refer to both this preferred neighborhood $U$ and the restriction of $\mathcal F$ to $U$ as the [*smoothly foliated collar*]{}. Next, we modify Definition \[compatible1\] slightly to account for the smoothly foliated collar. \[compatible\] Let $0\le i<n$ and let $\xi_i$ be a smooth confoliation on $V_i$ such that $\xi_i$ is tangent to $\partial_h V_i$ and everywhere transverse to $\partial_v V_i$. (Note that by smoothness, we may consider $\xi_i$ to be defined on an open neighborhood of $V_i$.) We say the smooth confoliation $\xi_i$ [*dominates*]{} $\mathcal F$ if $\chi_{\xi_i} \ge \chi_{T\mathcal F}$ on $\partial_v V_i$ when viewed from outside $V_i$, with equality permitted only on the closure, $\overline{U}$, of the smoothly foliated collar. Use [*strictly dominates*]{} if the inequality is strict. Let $\xi_i$ be a smooth confoliation defined on $V_i$. We say that $(V_i,\xi_i)$ is [*smoothly $\Phi$-compatible*]{} with $(W_i,\mathcal F)$ if the following are satisfied: 1. $\xi_i$ and $\mathcal F$ are (positively) transverse to $\Phi$ on their domains of definition, 2. $\xi_i$ is tangent to $\partial_h V_i$, 3. $\xi_i = T\mathcal F$ on $\overline{U}\cap V_i$, 4. $\xi_i$ is a contact structure on $V_i\setminus U$, and 5. $\xi_i$ dominates $\mathcal F$ (on $\partial_v V_i$). \[inductthis\] Let $M = V \cup F_1 \cup \dots \cup F_n$ be a transitive flow box decomposition of $M$, and let $\xi_i$, $i\ge 0$, be a smooth confoliation defined on $V_i$ that is smoothly $\Phi$-compatible with $(W_i, \mathcal F)$. Then there is a smooth confoliation $\xi_{i+1}$ defined on $V_{i+1}$ that is smoothly $\Phi$-compatible with $(W_{i+1}, \mathcal F)$ and restricts to $\xi_i$ on $V_i$. Since any polygon admits a triangulation, any transitive flow box decomposition can be chosen to consist only of flow boxes diffeomorphic to the flow box $G=\Delta\times [-1,1]$ defined in Section \[basicsection\]. In particular, we may assume that $F_{i+1}$ is diffeomorphic to $G$. Such a diffeomorphism preserves the foliation and flow directions, that is, slopes $0$ and $\infty$; thus we will make slope comparisons and approximations without reference to the change of coordinates.
By hypothesis, $\xi_i$ strictly dominates $\mathcal F$ on $X=\partial_v V_i \setminus U$. By compactness, there exists $\epsilon>0$ such that $$\text{slope}(\xi_i)-\text{slope}(\mathcal F)> 3\epsilon$$ on $X\cap \partial_v F_{i+1}$. By Proposition \[smoothapprox\], we may approximate the restriction of $\mathcal F$ to $F_{i+1}$ by a smooth foliation $\tilde{\mathcal F}$ such that $(\tilde{\mathcal F}, T\tilde{\mathcal F})$ is $C^0$-close to $(\mathcal F,T\mathcal F)$. Choose this approximation so that $\xi_i$ dominates $\tilde{\mathcal F}$ on $\partial_v(F_{i+1}) \cap V_i$, and so that $$|\text{slope}(\tilde{\mathcal F})-\text{slope}(\mathcal F)|<\epsilon$$ on all of $\partial_v F_{i+1}$. It follows that $$\text{slope}(\xi_i)-\text{slope}(\tilde{\mathcal F}) > 2\epsilon$$ on $X\cap \partial_v F_{i+1}$. Choose smooth coordinates $(x,y,z)$ on $F_{i+1}$ so that the leaves of $\tilde{\mathcal F}$ are horizontal, given by $z=$ constant. (Note that although this change of coordinates might change slope values, it doesn’t affect the relative values of slopes.) Next consider the number of 2-cells contained in $\partial_v(F_{i+1}) \cap V_i$. From the definition of a transitive flow box decomposition there is at least one. Depending on whether there are exactly one, two, or three such 2-cells, apply the corresponding Corollary \[extend6\], \[extend7\], or \[extend8\] to smoothly extend $\xi_i$ across $F_{i+1}$ and call the resultant confoliation $\xi_{i+1}$. Smoothness of the glued confoliation is assured by the construction of the confoliations near the boundaries of their flow boxes. In constructing the extensions of Corollaries \[extend5\]–\[extend8\], the starting point is a 1-form on $\partial_v F_{i+1}$ given by $dz-a(x,y,z)dx$ with $a(x,y,z)>0$ on $X \cap \partial_v F_{i+1}$. This is then extended across $F_{i+1}$ while keeping $a(x,y,z)>0$ and also $\frac{\partial a}{\partial y}(x,y,z)>0$. Given any $\delta>0$, these constructions can be performed while keeping the change in $a(x,y,z)$ along Legendrian curves less than $\delta$. In other words, the extension $\xi_{i+1}$ can be chosen so that the change in $\text{slope}(\xi_{i+1})$ along Legendrian curves is less than $\epsilon$. Thus $$\text{slope}(\xi_{i+1})-\text{slope}(\tilde{\mathcal F}) > \epsilon$$ on $\partial_v V_{i+1} \setminus \overline{U}$, where $\overline{U}$ is the closure of the smoothly foliated collar, and consequently, on this set we also have $$\text{slope}(\xi_{i+1})-\text{slope}({\mathcal F}) > 0.$$ \[inductthiscor\] Let $M = V \cup F_1 \cup \dots \cup F_n$ be a transitive flow box decomposition of $M$, and let $\xi_i$, $i\ge 0$, be a smooth confoliation defined on $V_i$ that is smoothly $\Phi$-compatible with $(W_i, \mathcal F)$ and lies within $\epsilon$ of $T\mathcal F$ on the intersection of the domain of $\mathcal F$ with $V_i$. Then there is a smooth confoliation $\xi_{i+1}$ defined on $V_{i+1}$ that is smoothly $\Phi$-compatible with $(W_{i+1}, \mathcal F)$, restricts to $\xi_i$ on $V_i$, and lies within $3\epsilon$ of $T\mathcal F$ on the intersection of the domain of $\mathcal F$ with $V_{i+1}$. It follows from Proposition \[smoothapprox\] that a smooth foliation $\tilde{\mathcal F}$ may be chosen on $F_{i+1}$ so that $T\tilde{\mathcal F}$ is within $\epsilon$ of $T\mathcal F$. Restricting attention to $\partial_v V_i\cap F_{i+1}$, $T\tilde{\mathcal F}$ lies within $2\epsilon$ of $\xi_i$. From the proof of Proposition \[inductthis\], $\xi_{i+1}$ is constructed to be as close or closer to $T\tilde{\mathcal F}$ on $F_{i+1}$ than $\xi_i$ is on $\partial_v V_i\cap F_{i+1}$.
Thus $\xi_{i+1}$ is within $2\epsilon$ of $T\tilde{\mathcal F}$, and hence within $3\epsilon$ of $T\mathcal F$. \[almost\] Let $M = V \cup F_1 \cup \dots \cup F_n$ be a transitive flow box decomposition of $M$, and let $\xi_0$ be a smooth contact structure defined on $V$ that is compatible with $(W, \mathcal F)$. Then there is a smooth contact structure defined on $M$ that agrees with $\xi_0$ in $V$ and is $\Phi$-close to $\mathcal F$ on $W$. Inductively applying Proposition \[inductthis\] produces a transitive smooth confoliation that is $\Phi$-close to $\mathcal F$ on $W$ and agrees with $\xi_0$ in $V$. By Proposition 2.8.1 of [@ET] (see also [@Et2]), this transitive smooth confoliation can be smoothly deformed into a smooth contact structure. Thus there is a smooth contact structure defined on $M$ that agrees with $\xi_0$ in $V$ and is $\Phi$-close to $\mathcal F$ on $W$. Let $M = V \cup F_1 \cup \dots \cup F_n$ be a transitive flow box decomposition of $M$, and let $\xi_0$ be a smooth confoliation defined on $V$ that is $\Phi$-compatible with $(W, \mathcal F)$. Then there is a smooth contact structure $\xi$ defined on $M$ that agrees with $\xi_0$ in $V$ and is $\Phi$-close to $\mathcal F$ on $W$. If $\xi_0$ is within $\epsilon$ of $T\mathcal F$ along $\partial V=\partial W$, then $\xi$ can be chosen to lie within $f(n)\epsilon$ of $T\mathcal F$ in $W$, for some positive function $f$. This follows immediately from Corollary \[inductthiscor\]. \[phitoC0\] Let $M$ be a closed oriented 3-manifold. Suppose that $M$ can be expressed as a union $$M=V\cup W,$$ such that $\partial V=\partial W$, $V$ admits a smooth contact structure $\xi_0$, $W$ admits a $C^0$-foliation $\mathcal F$, and $(V,\xi_0)$ and $(W,\mathcal F)$ are $\Phi$-compatible. Then $\xi_0$ can be modified in an arbitrarily small collar neighborhood of $\partial V$ so that the restriction of $\xi_0$ to $\partial_v V$ lies arbitrarily $C^0$-close to $T\mathcal F$. Fix any $\epsilon>0$. Let $X=(\partial_v V)\setminus \overline{U}$, where $\overline{U}$ denotes the closure of the smoothly foliated collar. Let $X\times [-\delta,0]$ be a collar neighborhood of $X$ in $V$, with $X=X\times \{0\}$. Pick any smooth line field $l$ on $X$ which is tangent to $\mathcal F$ along the smoothly foliated collar, is dominated by the projection of $\xi_0|_{X\times \{ -\delta\}}$ to $X$, dominates the projection of $\mathcal F|_{X\times \{0\}}$ to $X$, and lies within $\epsilon$ of $\mathcal F|_{X\times \{0\}}$. Replace the restriction of $\xi_0$ to $X\times [-\delta,0]$ by a straight line homotopy between $\xi_0$ restricted to $X\times \{-\delta\}$ and $l$, damped to fit smoothly with the original $\xi_0$ as defined on the complement of the collar.  \[main1\] If $M$ admits a positive $(\xi_V,\mathcal F_W,\Phi)$ decomposition, then $M$ admits a smooth positive contact structure which agrees with $\xi_V$ on $V$ and is $\Phi$-close to $\mathcal F_W$ on $W$. Moreover, $M$ also admits a smooth contact structure which agrees with $\xi_V$ on the complement in $V$ of a collar neighborhood of $\partial V$ and is arbitrarily $C^0$-close to $\mathcal F_W$ on $W$. The analogous result holds if $M$ admits a negative $(\xi_{V'},\mathcal F_{W'},\Phi)$ decomposition. If $M$ admits both a positive $(\xi_V,\mathcal F_W,\Phi)$ decomposition and a negative $(\xi_{V'},\mathcal F_{W'},\Phi)$ decomposition, then these contact structures are weakly symplectically fillable. This follows immediately from Proposition \[transitiveflowbox\], Corollary \[almost\], and Lemma \[phitoC0\].
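A remark on the function $f$ appearing above (this bookkeeping is ours and follows simply by iterating Corollary \[inductthiscor\]): if $\xi_0$ lies within $\epsilon$ of $T\mathcal F$ along $\partial V$, then each of the $n$ extension steps multiplies the distance bound by at most $3$, so $$\text{dist}(\xi, T\mathcal F)\le 3^{n}\epsilon \quad \text{on } W,$$ and one admissible choice is $f(n)=3^{n}$.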
Attracting holonomy {#attracting holonomy} =================== This section gives a generalization of the Eliashberg-Thurston result on perturbing foliations in a neighborhood of a curve in a leaf with sometimes attracting holonomy (Proposition 2.5.1, [@ET]) to the larger class of $C^0$-foliations with holonomy containing a [*contracting interval*]{}. Holonomy with a contracting interval will be defined below in terms of the existence of a particular smooth submanifold with corners. Holonomy with a contracting interval is a weaker condition than holonomy with an attracting leaf. Let $P$ be the prism in $\mathbb R^3$ given by $|x|\le 1$, $|y|\le 1$ and $|z|\le y/2 +3/2$. The slanted top and bottom of $P$ is denoted by $\partial_h P$, and the rest of $\partial P$ is denoted by $\partial_v P$. Let $V$ be the solid torus given by identifying each pair of points $(x,-1,z)$ and $(x,1,z)$ where both are in $P$. Then $\partial_h V$ is defined to be the image of $\partial_h P$, and $\partial_v V$ is defined to be the portion of $\partial V$ that is in the image of $\partial_v P$. \[contracting\] The holonomy of a $C^0$-foliation $\mathcal F$ transverse to a flow $\Phi$ in a 3-manifold $M$ has a [*contracting interval*]{} if there is a subset of $M$ diffeomorphic to $V$, called an [*attracting neighborhood*]{}, such that 1. $\partial_h V$ is mapped into leaves of $\mathcal F$ and 2. vertical intervals in $V$ are mapped to flow lines of $\Phi$. \[attracting\] Let $\mathcal F$ be a taut oriented $C^0$-foliation transverse to a flow $\Phi$. If $V$ is a disjoint union of attracting neighborhoods, and $\mathcal F$ is $V$-transitive, then $\mathcal F$ can be $\Phi$-approximated both by a smooth positive contact structure and by a smooth negative contact structure. These contact structures are therefore weakly symplectically fillable and universally tight. In light of Theorem \[main1\], it is enough to construct a smooth confoliation $\xi$ on $V$ such that $(V, \xi)$ is compatible with $(W,\mathcal F)$, in the sense of Definition \[compatible\]. It is sufficient to consider the case that $\xi$ is positive. The construction of the desired $\xi$ is the same on each component of $V$, so we will treat $V$ as if it is connected. Let $W$ denote the closure of the complement of $V$ in $M$. ![Foliated neighborhoods of horizontal faces and vertical edges are shown only in $Q$ though they exist in $P$ and $V=P/\sim$ as well.[]{data-label="QPV"}](QPV){width="5in"} Consider the transformation of the prism $P$ to the cube $$Q=[-1,1] \times [-1,1] \times [-1,1]$$ that fixes $(x,y)$ and linearly scales the $z$-coordinate. View $V$ as the quotient $$V=Q/{\sim}$$ of $Q$ obtained by identifying $$(x,-1,z)\sim (x,1,z/2).$$ Notice that $\partial_h Q$ is mapped into leaves of $\mathcal F$, and vertical intervals in $Q$ are mapped to flow lines of $\Phi$. Moreover, we may assume that the original parametrization of $P$ was chosen so that $\mathcal F$ meets the $y=\pm 1$ sides of $\partial_v Q$ in horizontal lines. Since $\mathcal F$ is $C^0$, the leaves of the foliation meet each of the $x=\pm 1$ sides in a continuous family of smooth graphs. To facilitate smooth gluings, thicken the leaf or leaves of $\mathcal F$ which meet $\partial_h Q$. Fix $0< \epsilon< 1/4$. 
Choose the thickening of the leaves of $\mathcal F$ intersecting $\partial_h V$ to replace the leaves $z=\pm 1$ in $Q$ with a disjoint union, $J$, of $I$-bundles in $Q$, with $J$ containing $[-1,1]\times[-1,1]\times [-1,-1+\epsilon]$ and $[-1,1]\times[-1,1]\times [1-\epsilon,1]$ as components. Assume also that $\mathcal F$ meets an $\epsilon$-neighborhood, $N$, of the quotient of the vertical edges of $Q$ in a smooth horizontal foliation. Let $U_Q = \text{int} (J\cup N)$; $U_Q$ is an $\epsilon$-neighbourhood of the union of $\partial_h Q$ with the 1-cells of $\partial_v Q$. We assume also that the pull-back of $\mathcal F$ to $\overline{U_Q}$ is horizontal. We will abuse notation and let $U_Q$ also refer to the projection of $U_Q$ to $M$. Since $Q$ is a flow box it is amenable to the constructions of Section \[basicsection\]. We will $\Phi$-approximate $\mathcal F$ in $Q$, and thus in $V$, by a $C^{\infty}$-confoliation $\xi_0$ which smoothly respects both the identification $\sim$ and the gluing of $(V,\xi_0)$ and $(W,\mathcal F)$ along $\partial_h V=\partial_h W$. The confoliation $\xi_0$ will be chosen to agree with $\mathcal F$ on $U_Q$ and to be a contact structure on $Q\setminus \overline{U_Q}$ which, when viewed from outside $Q$, is strictly dominated by $\mathcal F$ on the $y=-1$ side of $\partial_v Q$ and strictly dominates $\mathcal F$ on the remaining three sides of $\partial_v Q$. See Figure \[thecube\]. Since the interior of the $y=-1$ side lies in the interior of $V$, the resulting $(V,\xi_0)$ will be compatible with $(W,\mathcal F)$ in the sense of Definition \[compatible\]. ![Some of the choices of flow lines of the vector fields $X_A,...$ and their relationship to $\mathcal F$ are shown. Not enough detail is drawn to show that the holonomy given by flowing from left to right, that is counterclockwise about $\partial_v Q$, is decreasing.[]{data-label="thecube"}](cube){width="5in"} As a first step in constructing $\xi_0$, we define $\xi_0$ along $\partial_v Q$. The vertical boundary $\partial_v Q$ consists of four vertical faces. Let $A$ denote the face $y=-1$, let $B$ denote the face $x=1$, let $C$ denote the face $y=1$, and let $D$ denote the face $x=-1$. We construct $\xi_0$ by first specifying smooth unit vector fields $X_A, X_B, X_C,$ and $X_D$ along the faces $A, B, C, $ and $D$ respectively, and then declaring $\xi_0$ along $\partial_v Q$ to be the 2-plane field which is normal to $\partial_v Q$ and contains the corresponding tangent vector $X_A, X_B, X_C,$ or $ X_D$. We will choose $X_A, X_B, X_C,$ and $X_D$ to be horizontal on $\overline{U_Q}$ and hence in a neighborhood of the vertical 1-simplices of $\partial_v Q$; in particular, the 2-plane field $\xi_0$ will therefore be well-defined on the vertical edges. Begin by defining the vector field $X_B$. Choose $X_B=X_B(1,y,z)$ to be a smooth unit vector field which satisfies the following 1. $X_B$ dominates $\mathcal F$, 2. $X_B$ has positive slope when both $y$ and $z$ lie in $(-1+\epsilon, 1-\epsilon)$, and 3. $X_B =\partial/\partial y$ when $y$ or $z$ lies in $ [-1,-1+\epsilon]\cup [1-\epsilon,1]$. Let $\Psi_B$ denote the flow generated by $X_B$. Abusing notation a bit, we denote by $\Psi_B(y,z)$ the intersection of of the flow line of $\Psi_B$ that starts at $(1,-1,z)$ with $\{(1,y)\}\times[-1,1]$. (We use this notation when referring to all flows in this section.) Let $f_B: [-1,1]\to [-1,1]$ denote the diffeomorphism given by $f_B(z) = \Psi_B(1,z)$. 
Note that since $X_B$ has positive slope whenever both $y$ and $z$ lie in $(-1+\epsilon, 1-\epsilon)$, $f_B(z)>z$ whenever $z$ lies in $(-1+\epsilon, 1-\epsilon)$. Indeed, by choosing the slope of $X_B$ to be large enough, we may guarantee that $f_B(-1/2)\ge 1/2$. Rechoose $X_B$ as necessary so that $X_B$ satisfies (1)–(3) and also:\ (4) $f_B(-1/2)\ge 1/2$.\ Now choose $X_D$ along the side $x=-1$ by setting $X_D(-1,y,z) = X_B(1,-y,z)$. Notice that if $f :[-1,1]\to [-1,1]$ is any orientation-preserving diffeomorphism, then there is a smooth flow $\Psi=\Psi(x,z)$ on $[-1,1]\times [-1,1]$ such that $\Psi(1,z) = f(z)$; simply set $$\Psi(x,z)=\frac{1-t(x)}{2} z + \frac{1+t(x)}{2} f(z)$$ where $t(x)$ is a smooth function of $x$ that is $-1$ for $x\in[-1,-1+\epsilon]$, $1$ for $x\in [1-\epsilon, 1]$, and has positive derivative for all other $x$. Thus, by specifying a diffeomorphism $f:[-1,1]\to [-1,1]$, we specify a family of smooth flows $\Psi(x,z)$ and corresponding smooth unit tangent vector fields $X$. We take advantage of this to define the vector field $X_A$ along the side $y = -1$. Let $f_A: [-1,1]\to [-1,1]$ be the diffeomorphism given by $$f_A^{-1}(z)=u\circ f_B \circ f_B\circ f_B(z),$$ where $u: [-1,1]\to [-1,1]$ is a diffeomorphism which is the identity on $z\in [-1,-1+\epsilon]\cup [1-\epsilon,1]$ and satisfies $u(z)>z$ elsewhere. Let $\Psi_A$ be a smooth flow on $[-1,1]\times [-1,1]$ which satisfies the following: 1. $\Psi_A$ has negative slope whenever $z\in (-1+\epsilon, 1-\epsilon)$, 2. $\Psi_A$ has unit tangent vector field given by $\partial/\partial x$ when $z\in [-1,-1+\epsilon]\cup [1-\epsilon,1]$, and 3. $\Psi_A(1,z) = f_A(z)$. Let $X_A$ be the smooth unit tangent vector field to $\Psi_A$. Recall that $\mathcal F$ is horizontal, and hence is dominated by $X_A$, along $A$. Similarly, we use a diffeomorphism $f_C:[-1,1]\to [-1,1]$ to define a smooth vector field $X_C$ along the side $y=1$. Let $f_C:[-1,1]\to [-1,1]$ be a diffeomorphism which satisfies: 1. $f_C(z)=1/2 f_A(2z)$ when $|z| \le 1/2$, 2. $f_C(z) = z$ when $z\in [-1,-1+\epsilon]\cup [1-\epsilon,1]$, and 3. $f_C(z)<f_B(z)$ when $1/2 < |z| < 1-\epsilon$. Since $f_A(z)=z$ whenever $|z|\ge 1-\epsilon$, $f_C(z)=z$ for $(1-\epsilon)/2 \le |z|\le 1/2$. Therefore, $f_C([-1/2,1/2])=[-1/2,1/2]$. Since $f_B(-1/2)>1/2$, it follows that $f_C(z)\le f_B(z)$ for all $z$, with equality only when $|z|\ge 1-\epsilon$. So $f_B\circ f_C\circ f_B(z)<f_B\circ f_B\circ f_B (z)$, and hence $$f_A\circ f_B\circ f_C\circ f_B(z)< z,$$ for all $z \in (-1+\epsilon, 1-\epsilon).$ So the diffeomorphism $f_A\circ f_B\circ f_C\circ f_B : [-1,1]\to [-1,1]$ moves every point of $(-1+\epsilon,1-\epsilon)$ strictly downward, and hence Proposition \[cylinderextend\] applies. \[smoothisbracketed\] Let $\mathcal F$ be a taut, oriented, $C^2$-foliation of a closed oriented 3-manifold. Suppose $\mathcal F$ is not the product foliation of $S^1\times S^2$ by spheres. Then $\mathcal F$ is bracketed. Let $\Phi$ be a volume preserving flow transverse to $\mathcal F$. As noted in [@ET], either $\mathcal F$ has a lot of nontrivial linear holonomy or it is $C^0$-close, and hence $\Phi$-close, to a foliation $\mathcal F'$ which has a lot of nontrivial linear holonomy. In other words, there exists a disjoint union $V$ of attracting neighborhoods such that one of $\mathcal F$ or $\mathcal F'$ is $V$-transitive. In either case it follows that $\mathcal F$ is bracketed.
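As a toy illustration of the holonomy condition needed to apply Proposition \[cylinderextend\] (the specific formula below is ours, chosen only for concreteness), consider $$h(z)=\frac{z}{2-z},\qquad h(0)=0,\quad h(1)=1,\quad h'(z)=\frac{2}{(2-z)^2}>0,$$ a smooth diffeomorphism of $[0,1]$ with $h(z)<z$ for all $z\in(0,1)$. In the straight-line suspension of Lemma \[extension1\], the leaf through $(0,z)$ then has constant slope $(h(z)-z)/2\pi<0$, and any negative slope function $s(\theta,z)$ obtained this way makes the $1$-form $\alpha=dz-r^2s(\theta,z)\,d\theta$ of Lemma \[extension3\] satisfy $\alpha\wedge d\alpha=-2s(\theta,z)\,r\,dr\wedge d\theta\wedge dz>0$ away from the core circle, so $\ker\alpha$ is contact there.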
$L$-bracketed foliations {#L bracketed} ======================== In this section, we introduce a new method for $\Phi$-approximating a foliation $\mathcal F$ by a pair of transitive confoliations, one positive and one negative. This method applies whenever there exists a link transverse to $\mathcal F$ which satisfies the condition given in Definition \[transtranv\]. We also remark on some consequences yielding 3-manifolds containing weakly symplectically fillable contact structures. Before stating Definition \[transtranv\], we make some preliminary observations. \[flowextend\] Suppose $\mathcal F$ is a taut oriented codimension-1 foliation in $M$. Let $L$ be any link transverse to $\mathcal F$. Then there is a choice of metric on $M$ and volume preserving flow $\Phi$ everywhere transverse to $\mathcal F$ such that $L$ is contained as a union of closed orbits of $\Phi$. Moreover, given a choice of regular neighborhood $N(L) = \cup_i D_i\times S^1$ of $L$, the metric on $M$ and $\Phi$ can be chosen so that $\Phi$ is a trivial product $\{p\}\times S^1$, $p\in D_i$, on this regular neighborhood; in particular, $\Phi$ restricts to a flow on $\partial N(L)$. This follows immediately from the proof of Theorem A1 found in [@Hass]. So any link transverse to $\mathcal F$ can be extended to a volume preserving flow $\Phi$ transverse to $\mathcal F$. Alternatively, we may begin with a volume preserving flow $\Phi$ and let $L$ be a collection of closed orbits of $\Phi$. Without loss of generality, we will restrict attention to the case that this flow $\Phi$ restricts to a flow on $\partial N(L)$ for some choice of regular neighborhood $N(L)$ of $L$. In either case, $\Phi$ determines a preferred, possibly non-compact, curve on each component of $\partial N(L)$: \[torusfoln\] Let $T$ be a framed torus and let $\Phi$ be a flow on $T$. Then either $\Phi$ contains a simple closed curve of some rational slope $m_{\Phi}$ or $\Phi$ is topologically conjugate either to a foliation by lines of some irrational slope $m_{\Phi}$ or to a Denjoy blowup of a foliation by lines of some irrational slope $m_{\Phi}$. In each case, the slope $m_{\Phi}$ is uniquely determined (by $\Phi$). As long as no leaf of $\Phi$ has slope $1/0$, the framing determines a unique realization of $\Phi$ as a suspension of some homeomorphism $f$ of $S^1$ and the Poincaré rotation number of $f$ determines the slope $m_{\Phi}$. See, for example, 4.3.1, 5.1.1 and 5.1.3 of [@HH]. ![Slope convention on a component of $\partial N(L)$ in which slopes are designated as viewed from inside $N(L)$. []{data-label="slopeconvention"}](slopeconvention){width="3in"} Denote this preferred isotopy class of curves, represented by either a simple closed leaf or an immersed $\mathbb R$, by $m_{\Phi}^T$. We are interested in the case that this torus $T$ is a component of $\partial N(L)$. In this case, there is also the isotopy class of the meridian, $\nu^T$ say, and $\nu^T\ne m_{\Phi}^T$. We shall call an isotopy class of a nontrivial curve $C$ in $T$ [*positive*]{} if it has positive slope with respect to $\langle \nu, m_{\Phi}\rangle$ when viewed from inside $N$. Similarly, we shall call an isotopy class of curves [*negative*]{} if it has negative slope with respect to $\langle \nu, m_{\Phi}\rangle$ when viewed from inside $N$. This convention is illustrated in Figure \[slopeconvention\]. 
Note that if $\mathcal F$ is an oriented codimension-1 foliation which intersects a torus $T$ transversely, then $\mathcal F\cap T$ is a flow on $T$, and $m_{\mathcal F}^T$ denotes the preferred isotopy class of this flow. Define a triple $(M,\mathcal F, \Phi)$ to be [*coherent*]{} if the foliation $\mathcal F$ is taut and oriented, the flow $\Phi$ is volume preserving and positively transverse to $\mathcal F$, and the boundary of $M$, if nonempty, is a union of flow lines. Let $L$ be a link in $M$. A foliation $\mathcal F$ is [*$L$-taut*]{} if $\mathcal F\pitchfork L$, and $\mathcal F$ is $L$-transitive, that is, each leaf of $\mathcal F$ has nonempty intersection with $L$. Similarly, if $\partial M\ne \emptyset$, then $\mathcal F$ is [*$\partial M$-taut*]{} if $\mathcal F$ intersects $\partial M$ transversely, with no Reeb annuli, and $\mathcal F$ is $\partial M$-transitive, that is, each leaf of $\mathcal F$ has nonempty intersection with $\partial M$. Recall that a foliation $\mathcal F_0$ is said to [*realize slope*]{} $m$ along a framed torus boundary component $T$ if $\partial \mathcal F_0\cap T$ consists of parallel curves, not necessarily compact, of slope $m$. When $\mathcal F_0$ is oriented, these curves $\partial \mathcal F_0\cap T$ are necessarily consistently oriented. Notice that the condition that a foliation $\mathcal F_0$ be $\partial M_0$-taut is weaker than the condition that $\partial \mathcal F_0$ realizes slope $m_{\mathcal F_0}^T$ for each component $T$ of $\partial M_0$; in other words, nontrivial holonomy is possible for $\mathcal F_0\cap \partial M_0$. \[transtranv\] Suppose $\mathcal F$ is a taut oriented codimension-1 foliation in $M$. Let $L$ be a link in $M$ which is transverse to $\mathcal F$ and let $M_0$ equal $M\setminus \text{int}N(L)$. Let $\mathcal F_0$ denote the restriction of $\mathcal F$ to $M_0$. The foliation $\mathcal F$ is [*$L$-bracketed*]{} if, for some choice of metric on $M_0$, there is a volume preserving flow $\Phi_0$ on $M_0$ such that $(M_0,\mathcal F_0,\Phi_0)$ is coherent and the following property is satisfied: $M_0$ contains a pair of foliations $\mathcal F_{\pm}$ such that 1. $(M_0,\mathcal F_+,\Phi_0)$ and $(M_0,\mathcal F_-,\Phi_0)$ are coherent, 2. $\mathcal F_{\pm}$ are $\partial M_0$-taut, and 3. for each component $T$ of $\partial M_0$, $m_{\mathcal F_-}^T$ is negative and $m_{\mathcal F_+}^T$ is positive with respect to $\langle \nu^T, \Phi|_T\rangle$, where $\nu^T$ is the meridian slope of component $T$ (and hence is the slope of $\mathcal F_0\cap T$). To make the flow explicit, we also say that $M$ contains an [*$(\mathcal F,\Phi)$-transitive*]{} link $L$, where $\Phi$ is the flow $\Phi_0$ blown down to a (volume preserving) flow on $M$. The notion of an $L$-bracketed foliation is a special case of the notion of a bracketed foliation. The decomposition is given by setting $V=V'=N(L)$, $W=W'=M_0$. There is a canonical choice of positive or negative contact structure on $N(L)$ which is given by perturbing the meridional disks, and then the requirement is to find foliations $\mathcal F_W=\mathcal F_+$ and $\mathcal F_{W'}=\mathcal F_-$ on $M_0$. \[transitivemain\] Suppose $\mathcal F$ is a taut oriented codimension-1 foliation in $M$ and that $\mathcal F$ is $L$-bracketed for some link $L$. Then $\mathcal F$ can be $\Phi$-approximated by a pair of smooth contact structures $\xi_{\pm}$, one positive and one negative. These contact structures are necessarily weakly symplectically fillable and universally tight.
Set $V= N(L)$, and let $W$ denote the closure of the complement of $V$. Define a contact structure $\xi_0$ on $N(L)$ so that each component of $L$ is a transverse knot and each component of $N(L)$ is a standard positive contact neighborhood of its core. Choosing the rate of rotation of the contact planes along each meridional disk to be small guarantees that the characteristic foliation of $\xi_0$ along $\partial N(L)$ is close to the meridian. It follows that we may choose $\xi_0$ so that it is strictly dominated by $\mathcal F_+$. Apply Theorem \[main1\] to obtain $\xi_+$. Similarly, each component of $N(L)$ can be modeled using the standard negative radial model, and Theorem \[main1\] can be applied to obtain $\xi_-$. Since $\xi_{\pm}$ are both positively transverse to the volume preserving flow $\Phi$, they are weakly symplectically fillable. Since transitive links are somewhat mysterious, it is natural to ask: Given a foliation $\mathcal F$, does there exist a link $L$ for which $\mathcal F$ is $L$-bracketed? It is not clear how the answer to this might change if the link is required to be connected. Given a foliation $\mathcal F$, does there exist a knot $K$ for which $\mathcal F$ is $K$-bracketed? \[s1timess2\] The product foliation $\mathcal F$ of $S^1 \times S^2$ by spheres is an example of a foliation which is not $L$-bracketed for any link $L$. The existence of such a link would imply, by Theorem \[transitivemain\], that $\mathcal F$ can be approximated by a tight contact structure $\xi$. This would imply that the underlying 2-plane bundles of $\mathcal F$ and $\xi$ are equivalent. The Euler class of the foliation, $e(\mathcal F)$, evaluated on a spherical leaf $S^2$ of $\mathcal F$ equals 2. On the other hand, this $S^2$ is homotopic to a convex surface $S'$, [@Gi1], in a tight contact structure. It follows that $S'$ has a connected dividing set, [@Gi2], and that $e(\xi)$ vanishes on it. A taut foliation $\mathcal F$ is certainly $L$-taut for some link $L$. In fact, it is $L$-taut for some knot $L$. Moreover, as noted above, for some choice of metric there is a volume preserving flow $\Phi$ transverse to $\mathcal F$ and containing $L$ as a closed orbit or union of closed orbits. And often, although not necessarily, foliations on $M_0$ that are $\Phi_0$-close to $\mathcal F_0$ will also be $\partial M_0$-taut. So a key question is the existence of a pair of $\Phi_0$-close foliations $\mathcal F_{\pm}$ in $M_0$ such that $m_{\mathcal F_-}^T< \text{slope}\, \partial \mathcal F_0|_T< m_{\mathcal F_+}^T$ for each component $T$ of $\partial M_0$. Consider the case that $M_0$ is any compact orientable manifold with boundary a nonempty union of $b$ tori. Suppose that $\Phi_0$ is a volume preserving flow which is tangent to $\partial M_0$ and that $B$ is a transversely oriented branched surface transverse to $\Phi_0$. If $B$ fully carries a set of foliations which are $\partial M_0$-taut and realize a nonempty open set $J$ of boundary slopes (if $b=1$) or multi-slopes (if $b\ge 2$), then Dehn-filling $M_0$ along any rational slope or multi-slope in $J$ results in a foliation which is $L$-bracketed, where $L$ is the link formed by the cores of the filling solid tori. Examples of such foliations can be found in the papers [@DL; @DR; @g1; @g2; @g3; @G3; @KRo; @KR2; @Li2; @LR; @R; @R1; @R2]. One can ask whether the foliations constructed by Dehn filling more than one torus can be $K$-bracketed for some knot $K$. In [@G1; @G2; @G3] Gabai constructs foliations in closed manifolds $M$ with $H_2(M)\ne 0$.
These foliations are fully carried by finite depth branched surfaces. One can ask whether such foliations are $L$-bracketed for some link $L$. Finally, we note that the proof of Theorem \[transitivemain\] does not actually require the existence of the foliation $\mathcal F_0$. More precisely, we have the following. \[tranvlink\] Suppose $(M,\xi)$ is a contact 3-manifold. Let $L$ be a transverse link in $(M,\xi)$. Let $M_0$ equal $M\setminus \text{int}N(L)$. The contact structure $\xi$ is [*$L$-bracketed*]{} if, for some choice of metric on $M_0$, there is a volume preserving flow $\Phi_0$ on $M_0$, tangent to $\partial M_0$, such that the following property is satisfied: $M_0$ contains a pair of foliations $\mathcal F_{\pm}$ such that 1. $(M_0,\mathcal F_+,\Phi_0)$ and $(M_0,\mathcal F_-,\Phi_0)$ are coherent, 2. $\mathcal F_{\pm}$ are $\partial M_0$-taut, and 3. for each component $T$ of $\partial M_0$, $m_{\mathcal F_-}^T$ is negative and $m_{\mathcal F_+}^T$ is positive with respect to $\langle \nu^T, \Phi|_T\rangle$, where $\nu^T$ is the meridian slope of component $T$. \[transitivemaincontact\] Suppose $(M,\xi)$ is a contact 3-manifold and that $\xi$ is $L$-bracketed for some transverse link $L$. Then $\xi$ can be $\Phi$-approximated by a pair of smooth contact structures $\xi_{\pm}$, one positive and one negative. These contact structures are necessarily weakly symplectically fillable and universally tight. Open book decompositions {#Open book} ======================== An interesting class of $L$-bracketed foliations is obtained by considering the special case that $L$ is a fibered link in $M$ and $\mathcal F$ is transverse to a flow $\Phi$ obtained by surgery from a volume preserving suspension flow on the fibre bundle which is the complement of $L$. In this case $L$ forms the binding of an open book decomposition $(S,h)$ of $M$ and the contact structure $\xi_{(S,h)}$ compatible with $(S,h)$ is $\Phi$-close to $\mathcal F$. For completeness, we begin with some standard definitions. Since we are relating ideas from the world of codimension-1 foliations and the world of contact structures, we will also provide some translations between the terminologies of these two worlds. The main results of this section appear in Subsection \[OBresults\]. Open book decompositions {#open-book-decompositions} ------------------------ Let $S$ be a compact surface with nonempty boundary. A pair $(S,h)$, where $h:S\to S$ is a homeomorphism that restricts to the identity map on $\partial S$, determines a closed 3-manifold $M=S\times [0,1]/\approx$ where the equivalence relation $\approx$ identifies $(x,1)\approx(h(x),0)$ for all $x\in S$ and $(x,s)\approx(x,t)$ for all $x\in \partial S$ and $s,t\in [0,1]$. The singular fibration with pages $S\times\{t\}$ is called the [*open book determined by the data $(S,h)$,*]{} and we write $M=(S,h)$. Surface bundles over $S^1$ -------------------------- Corresponding to an open book decomposition of $M$ is a description of $M$ as a Dehn surgery along the binding $L=\cup l_i$ by meridional multislope $(\nu_1,\ldots,\nu_b)$. Conversely, corresponding to such a Dehn filling description of $M$, we have a corresponding open book description of $M$. Since many existing constructions of foliations are described from the Dehn surgery perspective, it is useful to consider this correspondence more carefully. Let $M_0$ denote the compact complement of $L$; so $M_0= M\setminus \text{int} N(L)$, where $N(L)$ is a regular neighbourhood of $L$, and $M_0$ is homeomorphic to $S\times [0,1]/h$.
Notice that if $S$ is a disk, then necessarily $h$ is isotopic rel boundary to the identity map. If $S$ is an annulus, then $h$ is isotopic rel boundary to some power of the Dehn twist about the core of $S$. Otherwise, $S$ is hyperbolic. We therefore lose little by restricting attention to the case that $S$ is hyperbolic and will now do so. Recall Thurston’s classification of surface automorphisms. \[Thurston\] [@Th; @CB; @FLP] Let $S$ be an oriented hyperbolic surface with geodesic boundary, and let $h\in \text{Homeo}(S,\partial S)$. Then $h$ is freely isotopic to either \(1) a pseudo-Anosov homeomorphism $\theta$, \(2) a periodic homeomorphism $\theta$, in which case there is a hyperbolic metric for which $S$ has geodesic boundary and such that $\theta$ is an isometry of $S$, or \(3) a reducible homeomorphism $h'$ that fixes, setwise, a maximal collection of disjoint simple closed geodesic curves $\{C_j\}$ in $S$. Recall that a pseudo-Anosov homeomorphism has finitely many prong singularities and is smooth and hyperbolic elsewhere [@FLP]. To avoid overlap in the cases, we refer to a map as reducible only if it is not periodic. Since we will be considering homeomorphisms $h$ in the context of open books $(S,h)$, we will be considering only homeomorphisms $h$ which fix $\partial S$ pointwise. Therefore, given a reducible map, splitting $S$ along $\cup_jC_j$ gives a collection of surfaces $S_1,\dots, S_n \subset S$ with geodesic boundary that are fixed by $h'$. Maximality of $\{C_j\}$ implies that applying Thurston’s classification theorem to each $h'|_{S_i}\in \text{Homeo}(S_i, \partial S_i)$ produces either a pseudo-Anosov or periodic representative. So we may assume that $h'$ is either periodic or pseudo-Anosov away from some small neighborhood of the curves $C_j$. \[Thurstonrep\] Let $S$ be hyperbolic and $h\in \text{Homeo}(S,\partial S)$. If conclusion (1) or conclusion (2) of Theorem \[Thurston\] is satisfied, call $\theta$ *the Thurston representative* of $h$. If instead conclusion (3) holds, let $\theta: (S,\partial S)\to (S,\partial S)$ denote the piecewise continuous function uniquely determined by the following constraints: - $\theta$ restricted to each component of the complement of the union $\cup C_i$ is freely isotopic to the restriction of $h'$ to this component and is either periodic or pseudo-Anosov, and - $\theta$ restricted to each simple closed geodesic $C_i$ is freely isotopic to the restriction of $h'$ to $C_i$ and is a periodic isometry. Again, refer to $\theta$ as *the Thurston representative* of $h$. Now consider again the open book decomposition $M=(S,h)$ and let $\theta$ denote the Thurston representative of $h$. When $\theta$ is periodic or pseudo-Anosov, the link complement $M_0 = M\setminus \text{int} N(L)$ is also homeomorphic to the mapping torus $S\times [0,1]/(x,1)\sim (\theta(x),0)$ of $\theta$, and in the discussions which follow, we will typically view $M_0$ as the mapping torus of $\theta$. When $\theta$ is reducible and so only piecewise continuous, we will typically view $M_0$ as the union along essential tori of the mapping tori of the extension to $\cup S_i$ of the restriction of $\theta$ to the complement of the union $\cup C_i$. Let $\Theta_0$ be the flow obtained by integrating the vector field $\partial/\partial t$, where points of $M_0$ are given by $[(x,t)]$, $x\in S$, $t\in [0,1]$. We will refer to this flow as either the [*suspension flow of $\theta$*]{} or the [*Thurston flow (associated to $h$)*]{}.
Since $\theta$ is area preserving with respect to some metric on the fibre, $\Theta_0$ is volume preserving with respect to some choice of metric on $M_0$. Notice that the suspension flow $\Theta_0$ is pseudo-Anosov (respectively, periodic) when $\theta$ is pseudo-Anosov (respectively, periodic). In particular, when $\theta$ is periodic, all orbits of $\Theta_0$ are closed. When $\theta$ is pseudo-Anosov, there are an even number of alternately attracting and repelling closed orbits of $\Theta_0|_{\partial N(l_i)}$ for each $i$. When $\theta$ is periodic or pseudo-Anosov, this flow is continuous and the orbits are smoothly embedded. When $\theta$ is reducible, this flow is not continuous, but orbits of the flow are smoothly embedded. Since many closed manifolds, together with corresponding open book decompositions, can be realized by Dehn filling $M_0$, it is useful to have canonical framings on the boundary components of $M_0$ which are defined independently from $M$. As in [@R2], we will use the Thurston flow $\Theta_0$ to define these canonical coordinate systems on $\partial M_0$. The Thurston flow framing on surface bundles over $S^1$ ------------------------------------------------------- \[coordinatesgalore\] Let $\partial_iM_0$ denote the $i$-th boundary component of $M_0$. Choose an oriented identification $\partial_i M_0 \sim \mathbb R^2/\mathbb Z^2$ by choosing oriented curves $\lambda_i$ and $\mu_i$, so that $\lambda_i$ has slope $0$ and $\mu_i$ has slope $\infty$, as follows. Let $\lambda_i = \partial (S\times \{0\})$, with orientation induced by the orientation on $S$. Let $\gamma_i$ be a closed orbit of the flow $\Theta_0$ restricted to $\partial_iM_0$. Choose $\mu_i$ to be an oriented simple closed curve which has algebraic intersection number $\langle \lambda_i,\mu_i\rangle = 1$ and which minimizes the geometric intersection number $|\gamma_i\cap\mu_i|$. This choice is unique except in the case that the geometric intersection number $|\gamma_i\cap\lambda_i| = 2$. In this case we choose $\mu_i$ so that $\gamma_i$ has slope $+2$. Call the resulting framing the *Thurston flow framing* on $\partial_iM_0$. Slopes expressed in terms of the flow framing will be said to be given in [*Thurston flow coordinates*]{}. This was originally called the [*natural framing*]{} or [*natural coordinates*]{} by Roberts in [@R2]. In this section, we are beginning with a fixed fibering and hence the associated Thurston flow coordinates are well defined. In general, different choices of fibering can lead to nonisotopic closed orbits $\gamma_i$, and hence to different Thurston flow coordinates. Notice also that in these coordinates, the slope of $\gamma_i$ always satisfies $$1/ (\mbox{slope } \gamma_i) \in (-1/2,1/2].$$ Now let’s consider the relationship between the flow framing and the fractional Dehn twist coefficient defined by Honda, Kazez, and Matić in [@HKMRV1]. First recall the definition of fractional Dehn twist coefficient. [@HKMRV1] Fixing a component $C_i$ of $\partial S$ and restricting the flow $\Theta_0$ to the component of $\partial N(L)$ corresponding to $C_i$, $\Theta_0$ necessarily has periodic orbits. Let $\gamma_i$ be one such, and write $$\gamma_i = p_i\lambda_i+q_i\nu_i$$ where $\lambda_i = C_i$, $\nu_i$ is the meridian (oriented so that $\langle\lambda_i,\nu_i\rangle=1$), and $p_i$ and $q_i$ are relatively prime integers with $q_i>0$.
The [*fractional Dehn twist coefficient*]{} of $h$ with respect to the component $C_i$ of $\partial S$ is given by $$c_i(h)=p_i/q_i\, .$$ In particular, when $\gamma_i=\nu_i$, we have $(p_i,q_i)=(0,1)$ and $c_i(h)=0$. Recall that $M$ is obtained from $M_0$ by $(\nu_1,...,\nu_b)$ Dehn filling along the boundary components of $M_0$. Beginning with the open book decomposition of $M$ and the associated fractional Dehn twist coefficients $c_i=p_i/q_i, 1\le i\le b,$ (as above, expressed in $(\lambda_i,\nu_i)$ coordinates), it is sometimes useful to express the slopes of $\nu_i$ and $\gamma_i$ in terms of the Thurston flow coordinates, $(\lambda_i,\mu_i)$. We now describe how to do this. In flow coordinates, $\lambda_i$ has slope $0$. Since $|\lambda_i\cap\nu_i| = 1$, it follows that, in flow coordinates, $\nu_i$ has slope $1/k_i$ for some integer $k_i$. As noted in [@HKM2], the integer $k_i$ is uniquely determined by the fractional Dehn twist coefficient $c_i(h)$ for each $i, 1\le i \le b$. This relationship can be very simply stated: \[translation1\][**(Coordinate translation I)**]{} Let $c_i = c_i(h)$ and let $n_i$ be the integer determined by the condition $$c_i\in (n_i-1/2,n_i+1/2].$$ In other words, $$n_i=\lceil c_i -1/2 \rceil,$$ the integer nearest to $c_i$, with ties in the case $c_i\in \mathbb Z+1/2$ broken by rounding down. Then $k_i = -n_i$ and so $\nu_i$ has slope $-1/n_i$. Moreover, $\gamma_i$ has slope $1/(c_i-n_i)$ and so 1. if $c_i = n_i$, then $\gamma_i=\mu_i$ has slope $1/0$; 2. if $c_i > n_i$, then $\gamma_i$ has positive slope; and 3. if $c_i<n_i$, then $\gamma_i$ has negative slope. The meridian $\nu_i$ has slope $1/k_i$ and so $\nu_i = \mu_i + k_i\lambda_i$. So $$\gamma_i = p_i\lambda_i+ q_i\nu_i = p_i\lambda_i + q_i(\mu_i + k_i\lambda_i) = q_i\mu_i +(p_i+k_iq_i)\lambda_i$$ has slope ${q_i}/(p_i+k_iq_i) = {1}/(c_i+k_i)$ and $|\gamma_i\cap\mu_i| = |p_i+k_iq_i|$. By definition of flow coordinates, $k_i$ is chosen to minimize $|\gamma_i\cap\mu_i| = |p_i+k_iq_i|$, and hence to minimize $|c_i + k_i|$. There is a unique such minimizing $k_i$ unless $c_i\in \mathbb Z + 1/2$. In this case, $k_i$ is chosen so that $\gamma_i$ has slope $2/1$; namely, so that $c_i = -k_i + 1/2$. So $k_i$ is the unique integer satisfying $c_i\in (-k_i-1/2,-k_i + 1/2]$. Conversely, given the fibered 3-manifold $M_0$ and meridional Dehn filling slopes $\nu_i, 1\le i\le b,$ in terms of the Thurston flow coordinates $(\lambda_i,\mu_i)$, it is often useful to express the slopes $\nu_i$ and $\gamma_i$ in terms of the associated open book coordinates $(\lambda_i,\nu_i)$. We have the following. \[translation2\][**(Coordinate translation II)**]{} Suppose $M_0$ is fibered, with the boundary components $\partial_i M_0$ given the Thurston flow framing $(\lambda_i,\mu_i)$ for each $i$. As above, let $\nu_i$ be a meridional slope and let $\gamma_i$ be a closed orbit of the Thurston flow on $\partial_i M_0$. In terms of the Thurston framing, $\nu_i$ has slope $-1/n_i$ and $\gamma_i$ has slope $r_i/s_i$, for some integers $n_i,r_i$ and $s_i$. Again as above, let $M$ be the manifold obtained by $(\nu_1,...,\nu_b)$ filling $M_0$ and let $(S,h)$ be the open book decomposition of $M$ determined by the fibering of $M_0$.
Then, in terms of the open book framing $(\lambda_i,\nu_i)$ on $\partial_i M_0 = \partial N(l_i)$, $$\nu_i=1/0, \,\,\, \mu_i = 1/n_i \,\,\, \mbox{ and } \gamma_i = r_i/(n_i r_i + s_i).$$ In particular, the fractional Dehn twist coefficient along $S \cap \partial_iM_0$ is given by $$c_i(h) = n_i+s_i/r_i,$$ where, as noted in Definition \[coordinatesgalore\], $s_i/r_i\in (-1/2,1/2]$. To eliminate the ugliness of subscripts, focus on a particular boundary component, $\partial_i M_0$, and drop all reference to $i$. Two right-handed framings are related by a unique transformation in $SL_2(\mathbb Z)$. Notice that $$A= \left[ {\begin{array}{cc} 1 & n \\ 0 & 1 \\ \end{array} } \right]$$ is the element in $SL_2(\mathbb Z)$ which maps the pair $({1 \choose 0},{-n \choose 1})$ to the pair $({1 \choose 0},{0 \choose 1})$. Hence, with the correspondence ${a\choose b} \mapsto b/a$, $A$ describes the translation from $(\lambda,\mu)$ coordinates to $(\lambda,\nu)$ coordinates. The slope computations follow immediately. Dehn filling the Thurston flow ------------------------------ Dehn surgery on Anosov flows is defined by Goodman in [@Go]. (See also [@Fried].) This definition generalizes naturally to the setting of pseudo-Anosov flows and permits us to consider the effect of Dehn filling of Thurston flows. We define Dehn filling of a Thurston flow as follows. Let $Y$ be any closed 3-manifold obtained by Dehn filling $M_0$. For each $i, 1\le i\le b$, let $X_i$ denote the solid torus in $Y$ bounded by $\partial_i M_0$ and let $\kappa_i$ denote the core of $X_i$. As long as the surgery coefficient along $\partial_i M_0$ is not $\gamma_i$, it is possible to blow down $X_i$ to its core and obtain a flow $\Theta$ defined on $Y$. Notice that the cores $\kappa_i$ are closed orbits of $\Theta$. Also, either $\Theta_0$ is periodic in a neighborhood of $\partial_i M_0$ and therefore so is $\Theta$ in a neighborhood of $\kappa_i$, or else $\Theta_0$ is pseudo-Anosov (with possibly a single prong pair along $\kappa_i$) in a neighborhood of $\partial_i M_0$ and therefore so is $\Theta$ in a neighborhood of $\kappa_i$. We shall refer to this flow $\Theta$ as the [*surgered Thurston flow*]{}. Notice that since $\Theta_0$ is volume preserving for some metric, so is $\Theta$. Suppose $M$ has an open book decomposition $(S,h)$ with binding $L$ and corresponding Dehn filling description $M=M_0(\nu_1,\cdots,\nu_b)$. Let $\Theta_0$ denote the Thurston flow on the complement of $L$. Then $\Theta_0$ extends to a flow $\Theta$ on $M$ if and only if all fractional Dehn twist coefficients $c_i$ are nonzero. Notice that $$c_i(h)=0 \iff p_i = 0, q_i = 1 \iff \gamma_i = \nu_i \implies \nu_i = \mu_i = \gamma_i.$$ In particular, if at every component $C_i$ of $\partial S$ the fractional Dehn twist coefficient $c_i(h)\ne 0$, then $\nu_i\ne\gamma_i$ and so it is possible to blow down the flow $\Theta_0$ to the surgered Thurston flow $\Theta$ on $M$. Otherwise it is not. Notice that the binding $L=\cup l_i$ inherits orientations both from the open book structure and from the flow $\Theta$. ![ Curves on a boundary component of $M_0$.[]{data-label="postrans"}](postrans){width="4.2in"} \[bindingorientation\] When $c_i>0$, these orientations agree on $l_i$. When $c_i<0$, these orientations do not agree on $l_i$. The orientation of $\Theta$ restricted to the binding is determined by the sign of the slope of $\gamma_i$ as expressed in $(\lambda_i,\nu_i)$ coordinates. This is illustrated in Figure \[postrans\].
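As an illustrative check of the two coordinate translations above (the numerical values here are chosen purely for illustration), suppose the fractional Dehn twist coefficient along a fixed boundary component is $c = 7/3$, so that in open book coordinates $\gamma = 7\lambda + 3\nu$. Since $c\in(3/2,5/2]$, Proposition \[translation1\] gives $n=2$ and $k=-2$; hence, in Thurston flow coordinates, $\nu$ has slope $-1/n=-1/2$, while $$\gamma = 3\mu + (7-2\cdot 3)\lambda = 3\mu+\lambda$$ has slope $1/(c-n)=3$, so that $1/(\mbox{slope }\gamma)=1/3\in(-1/2,1/2]$ as required. Conversely, starting from the flow framing data $n=2$ and $\gamma=r/s=3/1$, Proposition \[translation2\] returns, in $(\lambda,\nu)$ coordinates, $\mu = 1/2$, $\gamma = r/(nr+s)=3/7$ and $c=n+s/r=7/3$, as expected. Finally, since $c\ne 0$ we have $\nu\ne\gamma$, so the Thurston flow blows down to a surgered Thurston flow on the filled manifold, and since $c>0$ the two orientations inherited by the corresponding binding component agree, by Lemma \[bindingorientation\].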
Contact structures supported by an open book -------------------------------------------- In [@Gi], Giroux defined the notion of contact structure supported by an open book: a positive (respectively, negative) contact structure $\xi$ on $M$ is [*supported by, or compatible with, an open book decomposition $(S,h)$ of $M$*]{} if $\xi$ can be isotoped through contact structures so that there is a contact 1-form $\alpha$ for $\xi$ such that 1. $d\alpha$ is a positive area form on each page $S_t$ of the open book and 2. $\alpha>0$ (respectively, $\alpha<0$) on the binding $L$. ([@Gi]) Two contact structures supported by the same open book are contact isotopic. We may therefore abuse language and refer to [*the*]{} contact structure compatible with the open book decomposition $(S,h)$. Let $(\xi_+)_{(S,h)}$ denote the positive contact structure compatible with the open book decomposition $(S,h)$. Let $(\xi_-)_{(S,h)}$ denote the negative contact structure compatible with the open book decomposition $(S,h)$. \[posnegc\] Suppose all fractional Dehn twist coefficients are nonzero and let $\Theta$ denote the surgered Thurston flow. 1. $(\xi_+)_{(S,h)}$ is positively transverse to $\Theta$ if and only if all fractional Dehn twist coefficients are positive. 2. $(\xi_-)_{(S,h)}$ is positively transverse to $\Theta$ if and only if all fractional Dehn twist coefficients are negative. This follows immediately from Lemma \[bindingorientation\]. Any contact structure $(M,\xi)$ is supported by infinitely many open book decompositions. Honda, Kazez, and Matić proved in [@HKM2] that if there is a compatible open book decomposition with negative fractional Dehn twist coefficient $c_i$ for some $i$, then necessarily $\xi$ is overtwisted. In fact, they show the following. \[Theorem 1.1 of [@HKMRV1]\] A contact structure $(M,\xi)$ is tight if and only if all of its compatible open book decompositions $(S,h)$ have fractional Dehn twist coefficients $c_i \ge 0$ for $1\le i\le |\partial S|$. Let $S'$ be the smallest invariant subsurface of $S$ for the Thurston representative of $h$; thus $S'=S$ if and only if $h$ is not reducible. If $c_i=0$ for some common boundary component of $S$ and $S'$, there are two possibilities for $h|S'$. The first is that it is periodic, and it follows that it is equal to the identity map. The second possibility is that $h|S'$ is pseudo-Anosov, but this immediately implies $\xi$ is overtwisted [@HKMRV1]. Since our primary focus is studying tightness for pseudo-Anosov maps, it is enough to consider open book decompositions for which all fractional Dehn twist coefficients $c_i$ are positive (or, in the case that the contact structure is negative, open book decompositions for which all fractional Dehn twist coefficients $c_i$ are negative). Foliations compatible with an open book decomposition {#OBresults} ----------------------------------------------------- Let $\mathcal F$ be an oriented foliation of $M$. Let $(S,h)$ be an open book decomposition of $M$, with binding $L$. Let $\Phi$ be the surgered Thurston flow associated to $h$. If $\mathcal F$ is everywhere transverse to $\Phi$, then we say that $\mathcal F$ is [*compatible with the open book*]{} $(S,h)$. \[oppfoln\] Let $(S,h)$ be an open book decomposition of $M$, with binding $L$, and surgered Thurston flow $\Phi$. Set $M_0=M\setminus \text{int} N(L)$ and $\Phi_0$ to be the Thurston flow associated to $h$. 1. Suppose all fractional Dehn twist coefficients are positive.
If there exists a foliation $\mathcal F_-$ in $M_0$ which is $L$-taut, is transverse to $\Phi_0$ and satisfies $m_{\mathcal F_-}^T < 0$ with respect to $\langle \nu^T,\Phi|_T \rangle$ for all components $T$ of $\partial M_0$, then $(\xi_+)_{(S,h)}$ is weakly symplectically fillable. 2. Suppose all fractional Dehn twist coefficients are negative. If there exists a foliation $\mathcal F_+$ in $M_0$ which is $L$-taut, is transverse to $\Phi_0$ and satisfies $m_{\mathcal F_+}^T > 0$ with respect to $\langle \nu^T,\Phi|_T \rangle$ for all components $T$ of $\partial M_0$, then $(\xi_-)_{(S,h)}$ is weakly symplectically fillable. Consider case (1). Since $(\xi_+)_{(S,h)}$ is transverse to $\Phi$, it suffices to show that there is a negative contact structure $\xi_-$ which is transverse to $\Phi$. This follows immediately from the assumptions by Theorem \[main1\]. Case (2) follows similarly. Set $\mathcal G_0$ to be the fibration $S\times [0,1]/h$ in the complement of $L$. Note that, on each component $T$ of $\partial N(L)$, $\mathcal G_0\cap T$ has either positive or negative slope with respect to $\langle \nu^T,\Phi|_T \rangle$. For completeness, we note the following: Let $T_i$ be a component of $\partial N(L)$. Then $\mathcal G_0\cap T_i$ has positive (respectively, negative) slope with respect to $\langle \nu^{T_i},\Phi|_{T_i}\rangle$ if the corresponding fractional Dehn twist $c_i$ is positive (respectively, negative). Consider the relative slope values of $\gamma, \lambda$, and $\nu$ on any boundary component $T_i$ of $\partial M_0$. This is captured in Figure \[postrans\]. Notice that $\lambda$ represents the slope of $\partial \mathcal G_0$ and $\gamma$ represents the slope of $\Phi$. Recalling the slope convention, illustrated in Figure \[slopeconvention\], we see that $\mathcal G_0\cap T_i$ has positive (respectively, negative) slope with respect to $\langle \nu^{T_i},\Phi|_{T_i}\rangle$ if the corresponding fractional Dehn twist $c_i$ is positive (respectively, negative). So $\mathcal G_0$ extends to a positive confoliation transverse to $\Phi$ when $c_i>0$ for all $i$ and to a negative confoliation transverse to $\Phi$ when $c_i<0$ for all $i$. In other words, and unsurprisingly, $\mathcal G_0$ as a foliation playing the role of $\mathcal F_{+}$ (respectively, $\mathcal F_-$) gives a second way of establishing the existence of $\xi_+$ (respectively $\xi_-$). Finally, we use Proposition \[translation1\] to rephrase Proposition \[oppfoln\] in terms of Thurston flow coordinates. For simplicity of exposition, we restrict attention to the case that all fractional Dehn twist coefficients are positive. There is a symmetric statement in the case that all fractional Dehn twist coefficients are negative. \[oppfolnTh\] Let $(S,h)$ be an open book decomposition of $M$, with binding $L$. Suppose all fractional Dehn twist coefficients $c_i, 1\le i\le b,$ are positive. For each $i$, let $n_i$ be the integer nearest to $c_i$, with ties in the case $c_i\in \mathbb Z+1/2$ broken by rounding down. Set $M_0=M\setminus \text{int} N(L)$. Let $\Phi_0$ denote the Thurston flow associated to $h$ and let $\Phi$ denote the surgered Thurston flow on $M$. Suppose there exists a foliation $\mathcal F_-$ in $M_0$ which is $L$-taut, is transverse to $\Phi$, and such that, for each component $T_i$ of $\partial M_0$, $c_i$, $n_i$ and $$m_i = m^{T_i}_{\mathcal F_-}$$ satisfy one of the following: 1. $c_i=n_i$ and $m_i\in (-\infty,-\frac{1}{n_i})$, 2.
$c_i>n_i$ and $m_i\in (\frac{1}{c_i-n_i},\infty] \cup [-\infty,-\frac{1}{n_i})$, or 3. $c_i<n_i$ and $m_i\in (\frac{1}{c_i-n_i},-\frac{1}{n_i})$. Thus $m_{\mathcal F_-}^T<0$ with respect to $\langle \nu^T,\Phi|_T \rangle$ for all components $T$ of $\partial M_0$, and consequently, $(\xi_+)_{(S,h)}$ is weakly symplectically fillable. The boundary slope of $\mathcal F_-$ on the $i^{th}$ boundary component lies between $\Phi$ and $\nu_i$ as shown in Figure \[slopeconvention\]. By Proposition \[translation1\] the slope of $\Phi$ is $\frac{1}{c_i-n_i}$ while the slope of $\nu_i$ is $-\frac{1}{n_i}$. The form of the intervals given depends on a case-by-case analysis of whether or not they contain the slope $\infty$. Weak symplectic fillability of $(\xi_+)_{(S,h)}$ follows from Proposition \[oppfoln\]. One can translate the results of [@HKM2; @R2] into the current context as follows. \[existence\] [@R2] When the binding $L$ is connected, $c>0$, and the monodromy $h$ has pseudo-Anosov representative, there are $\partial M_0$-taut foliations in $M_0$ transverse to $\Phi_0$ which realize all slopes in an interval $J$ as follows: 1. $c=n$ and $J=(-\infty,\infty)$, or 2. $c>n$ and $J=(-\infty,1)$, or 3. $c<n$ and $J=(-1,\infty)$. The next corollary follows by intersecting the intervals where foliations exist in Theorem \[existence\] with the intervals where they are required in Proposition \[oppfolnTh\]. There exists $\mathcal F_-$ as described in Proposition \[oppfolnTh\] if one of the following is true: 1. $c=n$ and $(-\infty,-\frac{1}{n})\cap (-\infty,\infty)\ne\emptyset$, or 2. $c>n$ and $((\frac{1}{c-n},\infty] \cup [-\infty,-\frac{1}{n}))\cap (-\infty,1)\ne\emptyset$, or 3. $c<n$ and $ (-\frac{1}{n-c},-\frac{1}{n})\cap (-1,\infty)\ne\emptyset$. The case when the fractional Dehn twist coefficient is greater than or equal to $1$ is of particular interest, since the following holds. If $c \ge 1$, then there exists $\mathcal F_-$ as described in Proposition \[oppfolnTh\]. If $c>0$ then $n\ge 0$, with $n=0$ only when $c\in (0,1/2]$. Thus $c \ge 1$ implies $n\ge 1$, and it follows that the intersections in Cases (1) and (2) are nonempty. In Case (3), $1 \le c < n$, and the intersection is again nonempty. This corollary is exactly what is needed to complete the proof of a theorem of Honda, Kazez, and Matić in [@HKM2] that was one of the original motivations for this work. If $(S,h)$ is an open book decomposition such that $S$ has connected boundary, $h$ is isotopic to a pseudo-Anosov homeomorphism, and the fractional Dehn twist coefficient of $h$ is greater than or equal to $1$, then the contact structure canonically associated to the open book decomposition, $\xi(S,h)$, is weakly symplectically fillable. The proof strategy of [@HKM2] used a single foliation $\mathcal F$ defined on all of $(S,h)$ as constructed by Roberts [@R1; @R2], with boundary slope related to the open book data. Next they wanted to apply the Eliashberg-Thurston theorem to produce a weakly symplectically fillable contact structure $\xi$. Finally they argued that the two contact structures $\xi(S,h)$ and $\xi$ were necessarily equivalent. A somewhat surprising aspect arises in addressing the lack of smoothness of $\mathcal F$: we do not use the same foliation.
To complete their proof with our strategy, we require the existence of two foliations $\mathcal F^+$ and $\mathcal F^-$, both defined on the complement of the binding of $(S,h)$ by the work of Roberts, with boundary slopes on either side of the boundary slope of $\mathcal F$. One foliation is used to produce a positive contact structure, the other a negative contact structure, and both are necessary to conclude that the approximating contact structure is weakly symplectically fillable. The notion of $\Phi$-approximating contact structure that we produce is sufficient to conclude that $\xi(S,h)$ and $\xi$ are equivalent using the argument of [@HKM2]. When $M=(S,h)$ has binding which is not connected, our results can be applied to the Kalelkar-Roberts constructions of $\partial M_0$-taut foliations in $M_0$ transverse to $\Phi_0$. \[Theorem 1.1, [@KRo]\] There are $\partial M_0$-taut oriented foliations in $M_0$ transverse to $\Phi_0$ and realizing a neighborhood of rational boundary multislopes about the boundary multislope of the fibration. Suppose $M$ has open book decomposition $(S,h)$ and fractional Dehn twist coefficients $c_i, 1\le i\le b$. There are constants $A_i= A_i(M_0), 1\le i\le b$, dependent on $M_0$ such that if $c_i>A_i$, then $\xi_{(S,h)}$ is weakly symplectically fillable. Work of Baldwin and Etnyre [@BE] implies that any such constants $A_i$ must depend on $M_0$, at least in the case that the page $S$ has genus one. There exist open books whose fractional Dehn twist coefficients are arbitrarily large, but whose compatible contact structures are not $C^0$ close to smooth orientable taut foliations. There exist open books whose fractional Dehn twist coefficients are arbitrarily large, but whose compatible contact structures are not $\Phi$-close to taut oriented bracketed $C^0$-foliations, for any volume preserving flow $\Phi$. Some symplectic topology {#symplectic} ======================== This section contains an overview of the relationship between foliations, volume preserving flows, symplectic topology, and contact topology that is summarized in Theorem \[weaklysymplectic\]. Let $\mathcal F$ be a transversely oriented, taut $C^0$-foliation in $M$. Fix a metric on $M$, and let $\Phi$ be a volume preserving flow transverse to $\mathcal F$. The starting point for the interconnections we will describe is a carefully chosen 2-form. (See Section 3.2 of [@ET].) Let $\xi$ be a co-oriented $C^k$ 2-plane field on a smooth 3-manifold $M$ with $k\ge 0$. A smooth closed 2-form $\omega$ on $M$ is said to [*dominate*]{} $\xi$ if $\omega|\xi$ does not vanish (i.e., if $p\in M$ and $X_p,Y_p$ is a basis for $\xi_p$, then $\omega_p(X_p,Y_p)\ne 0$). A smooth closed 2-form $\omega$ on $M$ is said to [*positively dominate*]{} $\xi$ if $\omega|\xi$ is positive (i.e., for all $p\in M$, if $X_p,Y_p$ is a positively oriented basis for $\xi_p$, then $\omega_p(X_p,Y_p)> 0$). To produce such a dominating closed 2-form, let $\Omega$ be the volume form on $M$ preserved by the smooth flow $\Phi$, and let $X$ be the vector field which generates $\Phi$. Define $$\omega = X \lrcorner\Omega.$$ Recall that $\Phi$ is volume preserving if and only if $\mathcal L_X\Omega = 0$, where $\mathcal L_X$ denotes the Lie derivative with respect to $X$. (See, for example, Proposition 18.16 of [@Lee].)
By Cartan’s Formula (see, for example, Proposition 18.13 of [@Lee]), $$\mathcal L_X\Omega = X\lrcorner (d\Omega) + d(X\lrcorner \Omega) = d(X\lrcorner \Omega) = d\omega.$$ It follows that $\omega$ is closed. From its definition, $\omega$ is killed by the flow direction $X$, thus a co-oriented 2-plane field $\xi$ is positively dominated by $\omega$ if and only if it is everywhere positively transverse to $\Phi$. A closed 2-form dominating $T\mathcal F$ can be produced directly from a taut foliation [@Sullivan; @Hass] thereby eliminating the need to choose a metric, a volume form, and a flow preserving the volume form. We have chosen to emphasize the volume preserving flow since it clarifies the local nature of our foliations in flow boxes. Specialize now to the case of Theorem \[weaklysymplectic\] in which $\xi$ is a contact structure positively transverse to $\Phi$. Choose a 1-form $\alpha$ such that ker$\,\alpha=\xi$ and $\alpha \wedge \omega>0$. Define a 2-form $\tilde{\omega}$ on $M \times [-1,1]$ using the projection map $p$ and the formula $$\label{sympform} \tilde{\omega} = p^{\star}(\omega) + \epsilon d(t\alpha).$$ Direct computation shows that if $\epsilon$ is positive and small enough, $(M\times[-1,1],\tilde{\omega})$ is a symplectic manifold with boundary, that is, $\tilde{\omega}$ is a non-degenerate 2-form. The important role of the positive and negative contact structures, $\xi_+$ and $\xi_-$, of Theorem \[weaklysymplectic\] will be described after the next definition. A boundary component $Y$ of a symplectic manifold $(W,\tilde{\omega})$ is called *[weakly convex]{} if $Y$ admits a positive contact structure dominated by $\tilde{\omega}|_Y$.* This is precisely the structure that the boundary components of $(M\times[-1,1],\tilde{\omega})$ have. The restriction of $\tilde{\omega}$ positively dominates $\xi_+$ on $M\times\{1\}$. Because of boundary orientations, $\xi_-$ defines a positive contact structure on $M\times\{-1\}$ that is positively dominated by the restriction of $\tilde{\omega}$. Moreover, $\tilde{\omega}$ restricts to $\omega$ on $M\times\{0\}$, thus $M \times[-1,0]$ also has weakly convex boundary. Since both boundary components of either $(M\times[-1,1], \tilde{\omega})$ or the restriction of $\tilde{\omega}$ to $M\times [-1,0]$ are weakly convex, they give examples of weak symplectic fillings. A [*weak symplectic filling*]{} of a contact manifold $(M,\xi)$ is a symplectic manifold $(W,\tilde{\omega})$ with $\partial W = M$ (as oriented manifolds) such that $\tilde{\omega}|_{\xi}> 0$. A contact manifold $(M,\xi)$ which admits a weak symplectic filling is called *[weakly symplectically fillable]{}.* A [*strong symplectic filling*]{} of a contact manifold $(M,\xi)$ is a symplectic manifold $(W,\tilde{\omega})$ with $\partial W = M$ (as oriented manifolds) where $\xi = \text{ker} \, \alpha$ for a 1-form $\alpha$ satisfying $d\alpha = \tilde{\omega}|_M$. A contact manifold $(M,\xi)$ which admits a strong symplectic filling is called *[strongly symplectically fillable]{}.* In general, strong symplectic fillability is a stronger condition than weak symplectic fillability [@Eli2]. However, by Lemma 1.1 of [@OO], when $M$ is a rational homology sphere, $(M,\xi)$ is weakly symplectically fillable if and only if it is strongly symplectically fillable. A contact manifold is said to be weakly (or strongly) [*semi-fillable*]{} if it is one boundary component of a weak (or strong) symplectic filling. 
A weakly (or strongly) semi-fillable contact manifold is weakly (or strongly) fillable [@Eli; @Et]. The following fundamental theorem gives an example of the importance of weak symplectic fillability in contact topology. \[[@Gr],[@Eli1],[@ET]\] \[EGr\] Weakly symplectically fillable contact structures are tight. Weakly symplectically fillable contact structures that are $\Phi$-close to taut foliations are universally tight. S. Altschuler *A geometric heat oriented flow for one-forms on three-dimensional manifolds*, Illinois J. Math. **39** (1995), 98–118. J. Baldwn and J. Etnyre, *Admissible transverse surgery does not preserve tightness*, `arXiv:1023.2993v3`. M. Brittenham, R. Naimi and R.  Roberts, *Graph manifolds and taut foliations*, J. Diff. Geom. **45** (1997), 446–470. A. Candel and L. Conlon, *Foliations I*, A.M.S. Graduate Studies in Mathematics **23**, 2000. A. Casson, S. Bleiler, *Automorphisms of surfaces after Nielsen and Thurston*, London Mathematical Society Student Texts, [**9**]{}. Cambridge University Press, Cambridge, 1988. iv+105 pp. O. Dasbach and T. Li, *Property P for knots admitting certain Gabai disks*, Topology and its Applications **142** (2004), 113–129. C.   Delman and R. Roberts, *Alternating knots satisfy Strong Property P*. Comment. Math. Helv. **74** (1999), no. 3, 376–397. Y. Eliashberg, *Filling by holomorphic discs and its applications*, Lecture Notes LMS **151** (1992), 45–67. Y. Eliashberg, *Unique holomorphically fillable contact structure on the 3-torus*, Internat. Math. Res. Notices **2** (1996), 77–82. Y. Eliashberg, *A few remarks about symplectic filling*, Geom. Top. **8** (2004), 277–293. Y. Eliashberg and W. Thurston, *Confoliations*, University Lecture Series **13**, Amer. Math. Soc., Providence, 1998. J. Etnyre, *On symplectic fillings*, Alg. Geom. Top. **4** (2004), 73–80. J. Etnyre, *Contact geometry in low dimensional topology*, Low Dimensional Topology, IAS/Park City Mathematics Series **15** (2006), 231–264. A. Fathi, F. Laudenbach and V. Poénaru, *Travaux de Thurston sur les surfaces. Séminaire Orsay*, Astérisque **66-67**, Société Mathématique de France, Paris, 1979. D. Fried, *Transitive Anosov flows and pseudo-Anosov maps*, Topology **22(3)**, 299–303. D. Gabai, *Foliations and the topology of 3-manifolds*, J. Differential Geometry. **18** (1983), 445-503. D. Gabai, *Foliations and Genera of Links*, Topology **23** (1984), 381–394. D. Gabai, *Genera of the Alternating Links*, Duke Math. J. **53** (1986), 677–681. D. Gabai, *Detecting Fibred Links in S3*, Commentarii Math. Helvetici **61** (1986), 519–555. D. Gabai, *Foliations and the topology of 3-manifolds II* J. Differential Geometry. **26** (1987), 461–478. D. Gabai, *Foliations and the topology of 3-manifolds III.* J. Differential Geometry. **26** (1987), 479-536. D. Gabai, *Taut foliations of 3-manifolds and suspensions of $S^1$*, Ann. Inst. Fourier, Grenoble, **42 (1-2)** (1992), 193–208. H. Geiges, *An introduction to contact topology*, Cambridge Studies in Advanced Mathematics **109**, 2008. E. Giroux, *Convexité en topologie de contact*, Comment. Math. Helv. [**66**]{} (1991), 637–677. E. Giroux, *Structures de contact sur les variétés fibrées en cercles audessus d’une surface*, Comment. Math. Helv. [**76**]{} (2001), no. 2, 218–262. E. Giroux, *Géométrie de contact: de la dimension trois vers les dimensions supérieures*, Proc. International Congress of Math. Vol II (Beijing 2002), (2002), 405–414. S. 
Goodman, *Dehn surgery on Anosov flows*, Geometric Dynamics, Lecture Notes in Mathematics **1007**, Springer 1983, 300–307. M. Gromov, *Pseudo-holomorphic curves in symplectic manifolds*, Invent. Math. **82** (1985), 307–347. J. Hass, *Minimal surfaces in foliated manifolds*, Comment. Math. Helv. **61** (1986), 1–32. G. Hector and U. Hirsch, *Introduction to the Geometry of Foliations, Part A*, 1981. K. Honda, W. Kazez and G. Matić, *Right-veering diffeomorphisms of compact surfaces with boundary I*, Invent. Math. [**169**]{},(2007), 427–449. K. Honda, W. Kazez and G. Matić, *Right-veering diffeomorphisms of compact surfaces with boundary II*, Geom. Topol. [**12**]{} (2008), 2057–2094. T. Kalelkar and R. Roberts, *Taut foliations in surface bundles with multiple boundary components*, to appear in the Pac. J. Math. (arXiv:1211.3637) W.  Kazez and R. Roberts, in preparation. P. B. Kronheimer and T. Mrowka, *Monopoles and three-manifolds*, Cambridge University Press, 2007. P. Kronheimer and T. Mrowka, *Witten’s conjecture and Property P*, Geom. Top. **8** (2004), 295–310. P. Kronheimer, T. Mrowka, P. Ozsváth, and Z. Szabó, *Monopoles and lens space surgeries*, Annals of Math. **165** (2007), 457–546. J. M. Lee, *Introduction to smooth manifolds*, Graduate Texts in Mathematics **218** (2003). T. Li, [*Laminar branched surfaces in 3-manifolds*]{}, Geom. Topol. [**6**]{} (2002), 153–194 (electronic). T.  Li, *Boundary train tracks of laminar branched surfaces*. Proc. of Symposia in Pure Math., **71**, AMS (2003), 269–285. T. Li and R. Roberts, [*Taut foliations in knot complements*]{}, to appear in Pac. J. Math., `ArXiv:1211.3066`. J. Milnor, [*Lectures on the h-cobordism theorem*]{}, Princeton University Press, 1965. S. P. Novikov, [*Topology of foliations*]{}, Trans. Moscow Math. Society **14** (1963), 268–305. H. Ohta and K. Ono, *Simple singularities and topology of symplectically filling 4-manifold*, Comment. Math. Helv. **74** (1999), 575–590. P. Ozsváth and Z. Szabó, [*Holomorphic disks and genus bounds*]{}, Geometry and Topology **8** (2004), 311–334. P. Ozsváth and Z. Szabó, [*On knot Floer homology and lens space surgeries*]{}, Top. **44** (2005), 1281–1300. B. Ozbagci and A.  Stipsicz, [*Surgery on contact 3-manifolds and Stein surfaces*]{}, Bolyai Society Mathematical Studies **13**, 2004. R.  Roberts, *Constructing taut foliations*, Comment. Math. Helv. **70** (1995), 516–545. R.  Roberts, *Taut foliations in punctured surface bundles, I*. Proc. London Math. Soc. (3) **82** (2001), no. 3, 747–768. R.  Roberts, *Taut foliations in punctured surface bundles, II*. Proc. London Math. Soc. (3) **83** (2001), no. 2, 443–471. R. Sacksteder, *Foliations and pseudogroups*, Amer. J. Math. **87** (1965), 79–102. D. Sullivan, *Cycles for the dynamical study of foliated manifolds and complex manifolds*, Inventiones. Math. **36** (1976), 225–255. W. Thurston *On the geometry and dynamics of diffeomorphisms of surfaces*, Bull. Amer. Math.  Soc. [**19**]{}, (1988), 417–431. [^1]: This work was partially supported by a grant from the Simons Foundation (\#244855 to William Kazez)
--- abstract: 'For random graphs distributed according to a stochastic block model, we consider the inferential task of partitioning vertices into blocks using spectral techniques. Spectral partitioning using either the normalized Laplacian or the adjacency matrix has been shown to be consistent as the number of vertices tends to infinity. Importantly, both procedures require that the number of blocks and the rank of the communication probability matrix are known, even as the rest of the parameters may be unknown. In this article, we prove that the (suitably modified) adjacency-spectral partitioning procedure, requiring only an upper bound on the rank of the communication probability matrix, is consistent. Indeed, this result demonstrates a robustness to model mis-specification; an overestimate of the rank may impose a moderate performance penalty, but the procedure is still consistent. Furthermore, we extend this procedure to the setting where adjacencies may have multiple modalities and we allow for either directed or undirected graphs.' address: author: - | Donniell E. Fishkind, Daniel L. Sussman, Minh Tang, Joshua T. Vogelstein\ and Carey E. Priebe\ Department of Applied Mathematics and Statistics, Johns Hopkins University title: 'Consistent adjacency-spectral partitioning for the stochastic block model when the model parameters are unknown' --- Background and overview ======================= Our setting is the [*stochastic block model*]{} [@HLL; @WW], a random graph model in which a set of $n$ vertices is randomly partitioned into $K$ [*blocks*]{} and then, conditioned on the partition, the existence of an edge between each pair of vertices is an independent Bernoulli trial whose parameter is determined by the block membership of the pair. (The model details are specified in Section \[c\].) The realized partition of the vertices is not observed, nor are the Bernoulli trial parameters known. However, the realized vertex adjacencies (edges) are observed, and the main inferential task is to estimate the partition of the vertices, using the realized adjacencies as a guide. Such an estimate will be called consistent if and when, in considering a sequence of realizations for $n=1,2,3,\ldots$ with common model parameters, it happens almost surely that the fraction of misassigned vertices converges to zero as $n \rightarrow \infty$. Rohe et al. [@RCY] proved the consistency of a block estimator that is based on spectral partitioning applied to the normalized Laplacian, and Sussman et al. [@STFP] extended this to prove the consistency of a block estimator that is based on spectral partitioning applied to the adjacency matrix. Importantly, both of these procedures assume that $K$ and the rank of $M$ are known (where $M \in [0,1]^{K \times K}$ is the matrix consisting of the Bernoulli parameters for all pairs of blocks), even as the rest of the parameters may be unknown. In this article, we prove that the (suitably modified) adjacency-spectral partitioning procedure, requiring only an upper bound for rank$M$, gives consistent block estimation. We demonstrate a robustness to mis-specification of rank$M$; in particular, if a practitioner overestimates the rank of $M$ in carrying out adjacency spectral partitioning to estimate the blocks, then the consistency of the procedure is not lost. Indeed, this is a model selection result, and we provide estimators for $K$ and prove their consistency. Our analysis and results are valid for both directed and undirected graphs. We also allow for more than one modality of adjacency.
For instance, the stochastic block model can model a social network in which the vertices are people, and the blocks are different communities within the network such that probabilities of communication between individual people are community dependent, and there is available information about several different modes of communication between the people; e.g. who phoned whom on cell phones, who phoned whom on land lines, who sent email to whom, who sent snail mail to whom, with a separate adjacency matrix for each modality of communication. Indeed, if there are different matrices $M$ for each mode of communication, even if there is dependence in the communications between two people across different modalities, our analysis and results will hold, provided that every pair of blocks is “probabilistically discernible” within at least one mode of communication. (This will be made more precise in Section \[c\].) Latent space models (e.g. Hoff et al. [@HRH]) and, specifically, random dot product models (e.g. Young and Scheinerman [@YS]) give rise to the stochastic block model. Indeed, the techniques that we use in this article involve generating latent vectors for a random dot product model structure which we then use in our analysis. Nonetheless, our results can be used without awareness of such random-dot-product-graph underlying structure, and we do not concern ourselves here with estimating latent vectors for the blocks. (In any event, latent vectors are not uniquely determinable here.) Consistent block estimation in stochastic block models has received much attention. Fortunato [@Fortunato] and Fjallstrom [@Fjallstrom] provide reviews of partitioning techniques for graphs in general. Consistent partitioning of stochastic block models for two blocks was accomplished by Snijders and Nowicki [@SN] in 1997 and for equal-sized blocks by Condon and Karp [@CK] in 2001. For the more general case, Bickel and Chen [@BC] in 2009 demonstrated a stronger version of consistency via maximizing Newman-Girvan modularity [@NG] and other modularities. For a growing number of blocks, Choi et al. [@CWA] in 2010 proved consistency of likelihood based methods. In 2012, Bickel et al. [@BCL] provided a method to consistently estimate the stochastic block model parameters using subgraph counts and degree distributions. This work and the work of Bickel and Chen [@BC] both consider the case of very sparse graphs. Rohe et al. [@RCY] in 2011 used spectral partitioning on the normalized Laplacian to consistently estimate a growing number of blocks and they allow the minimum expected degree to be at least $\Theta(n/\sqrt{\log n})$. Sussman et al. [@STFP] extended this to prove consistency of spectral partitioning directly on the adjacency matrix for directed and undirected graphs. Finally, Rohe et al. [@Rohe2] proved consistency of bi-clustering on a directed version of the Laplacian for directed graphs. Unlike modularity and likelihood based methods, these spectral partitioning methods are computationally fast and easy to implement. Our work extends these spectral partitioning results to the situation when the number of blocks and the rank of the communication probability matrix are unknown. We present the situation for fixed parameters, and in Section \[sec:disc\] we discuss possible extensions. The adjacency matrix has been previously used for block estimation in stochastic block models by McSherry [@MS], who proposed a randomized algorithm when the number of blocks as well as the block sizes are known.
Coja-Oghlan [@Coja] further investigate the methods proposed in McSherry and extend the work to sparser graphs. This method relies on bounds in the operator norm which have also been investigated by Oliveira [@Oliveira] and Chung et al. [@Chung]. In 2012, Chaudhuri et al. [@Chaudhuri] used an algorithm similar to the one in McSherry [@MS] to prove consistency for the degree corrected planted partition model, a slight restriction of the degree corrected stochastic block model proposed in [@Karrer]. Notably, Chaudhuri et al. [@Chaudhuri] do not assume the number of blocks is known and provide an alternative method to estimate the number of blocks. This represents another important line of work for model selection in the stochastic block model. The organization of the remainder of this article is as follows. In Section \[jjj\] we describe the stochastic block model, then we describe the inferential task and the adjacency-spectral partitioning procedure for the task—when very little is known about the parameters of the stochastic block model. In Section \[kkk\] ancillary results and bounds are proven, followed in Section \[f\] by a proof of the consistency of our adjacency-spectral partitioning. However, through Section \[f\], there is an extra assumption that the number of blocks $K$ is known. In Section \[e\] we provide a consistent estimator for $K$, and in Section \[hhh\] we prove the consistency of an extended adjacency-spectral procedure that does not assume that $K$ is known. Indeed, at that point, the only aspect of the model parameters which is still assumed to be known is just an upper bound for the rank of the communication probability matrix $M$. Bickel et al. [@BCL] mention the work of Rohe et al. [@RCY] as an important step, and then opine that “unfortunately this does not deal with the problem \[of\] how to pick a block model which is a good approximation to the nonparametric model." Taking these words to heart, our focus in this article is on showing a robustness in the consistency of spectral partitioning in the stochastic block model when using the adjacency matrix. Our focus is on removing the need to know a priori the parameters, and to still attain consistency in partitioning. This robustness opens the door to explore principled use of spectral techniques even for settings where the stochastic block model assumptions do not strictly hold, and we anticipate more future progress in consistency results for spectral partitioning in nonparametric models. We conclude the article with additional discussion of consistent estimation of $K$ (Section \[nnn\]), illustrative simulations (Section \[ppp\]), and a brief discussion (Section \[sec:disc\]). The model, the adjacency-spectral partitioning procedure, and its consistency \[jjj\] ====================================================================================== The stochastic block model \[c\] -------------------------------- The random graph setting in which we work is the [*stochastic block model*]{}, which has parameters $K,\rho,M$ where positive integer $K$ is the number of blocks, the [*block probability vector*]{} $\rho \in (0,1]^K$ satisfies $\sum_{k=1}^K \rho_k = 1$, and the [*communication probability matrix*]{} $M \in [0,1]^{K \times K}$ satisfies the model identifiability requirement that, for all $p,q \in \{ 1,2,\ldots,K \}$ distinct, either it holds that $M_{p,\cdot} \ne M_{q,\cdot}$ (i.e. the $p$th and $q$th rows of $M$ are not equal) or $M_{\cdot,p} \ne M_{\cdot,q}$ (i.e. 
the $p$th and $q$th columns of $M$ are not equal). The model is defined (and the parameters have roles) as follows: There are $n$ vertices, labeled $1,2,\ldots,n$, and they are each randomly assigned to blocks labeled $1,2,\ldots,K$ by a random [*block membership function*]{} $\tau :\{ 1,2,\ldots,n\} \rightarrow \{ 1,2,\ldots,K \}$ such that for each vertex $i$ and block $k$, independently of the other vertices, the probability that $\tau(i)=k$ is $\rho_k$. Then there is a random adjacency matrix $A \in \{0,1\}^{n \times n}$ where, for all pairs of vertices $i,j$ that are distinct, $A_{i,j}$ is $1$ or $0$ according as there is an $i,j$ edge or not. Conditioned on $\tau$, the probability of there being an $i,j$ edge is $M_{\tau(i),\tau(j)}$, independently of the other pairs of vertices. Our analysis and results will cover both the [*undirected setting*]{} in which edges are unordered pairs (in particular, $A$ and $M$ are symmetric) and also the [*directed setting*]{} in which edges are ordered pairs (in particular, $A$ and $M$ are not necessarily symmetric). In both settings the diagonals of $A$ are all $0$’s (i.e. there are no “loops” in the graph). We assume that the parameters of the stochastic block model are not known, except for one underlying assumption; namely, that a positive integer $R$ is known that satisfies rank$M \leq R$. (Of course, $R$ may be taken to be rank$M$ or $K$ if either of these happens to be known.) However, for now through Section \[f\], we also assume that $K$ is known; in Section \[e\] we will provide a consistent estimator for $K$ if $K$ is not known, and then in Section \[hhh\] we utilize this consistent estimator for $K$ to extend all of the previous procedures and results to the scenario where $K$ is also not known (and then the only remaining assumption is our one underlying assumption that a positive integer $R$ is known such that rank$M \leq R$). Although the realized adjacency matrix $A$ is observed, the block membership function $\tau$ is not observed and, indeed, the inferential task here is to estimate $\tau$. In Section \[b\], adjacency-spectral partitioning is used to obtain a [*block [**assignment**]{} function*]{} $\hat{\tau}:\{1,2,\ldots,n\} \rightarrow \{1,2,\ldots,K\}$ that serves as an estimator for $\tau$, up to permutation of the block labels $1,2,\ldots,K$ on the $K$ blocks. Then Theorem \[a\] in Section \[d\] asserts that almost always the number of misassignments $\min_{\textup{bijections }\pi: \{ 1,2,\ldots,K \} \rightarrow \{ 1,2,\ldots,K \} } | \{ j=1,2,\ldots,n: \tau(j) \ne \pi(\hat{\tau}(j)) \} |$ is negligible. A more complicated scenario is where there are multiple “modalities of communication” for the vertices. Specifically, instead of one communication probability matrix, there are several communication probability matrices $M^{(1)},M^{(2)}, \ldots, M^{(S)} \in [0,1]^{K \times K}$ which are all parameters of the model, and there are corresponding random adjacency matrices $A^{(1)},A^{(2)},\ldots,A^{(S)} \in \{ 0,1 \}^{n \times n}$ such that for each [*modality*]{} $s=1,2,\ldots,S$ and for each pair of vertices $i,j$ that are distinct, $A^{(s)}_{i,j}$ is $1$ with probability $M^{(s)}_{\tau(i),\tau(j)}$, independently of the other pairs of vertices but possibly with dependence across the modalities.
As above, for model identifiability purposes we assume that, for each $p,q \in \{1,2,\ldots,K \}$ distinct, there exists an $s \in \{1,2,\ldots,S\}$ such that $M^{(s)}_{p,\cdot} \ne M^{(s)}_{q,\cdot}$ or $M^{(s)}_{\cdot,p} \ne M^{(s)}_{\cdot,q}$. Also, it is assumed that we know positive integers $R^{(1)},R^{(2)},\ldots,R^{(S)}$ which are upper bounds on $\mathrm{rank}M^{(1)},\mathrm{rank}M^{(2)}, \dotsc,\mathrm{rank}M^{(S)}$ respectively. We will also describe next in Section \[b\] how the adjacency-spectral partitioning procedure of that section can be modified for this more complicated scenario so that Theorem \[a\] will still hold for it. The adjacency-spectral partitioning procedure \[b\] --------------------------------------------------- The adjacency-spectral partitioning procedure that we work with is given as follows: First, take the realized adjacency matrix $A$, and compute a singular value decomposition $A=[U | U_r ] (\Sigma \oplus \Sigma_r) [V|V_r]^T$ where $U,V \in {\mathbb{R}}^{n \times R}$, $U_r,V_r \in {\mathbb{R}}^{n \times (n-R)}$, $\Sigma\in{\mathbb{R}}^{R \times R}$, and $\Sigma_r~\in~{\mathbb{R}}^{(n-R) \times (n-R)}$ are such that $[U|U_r]$ and $[V|V_r]$ are each real-orthogonal matrices, and $\Sigma \oplus \Sigma_r$ is a diagonal matrix with its diagonals non-increasingly ordered $\sigma_1 \geq \sigma_2 \geq \sigma_3 \ldots \geq \sigma_n$. Let $\sqrt{\Sigma} \in {\mathbb{R}}^{R \times R}$ denote the diagonal matrix whose diagonals are the nonnegative square roots of the respective diagonals of $\Sigma$, and then compute $X:=U \sqrt{\Sigma}$ and $Y:=V \sqrt{\Sigma}$. Then, cluster the rows of $X$ or $Y$ or $[X|Y]$ into at most $K$ clusters using the minimum least squares criterion, as follows: If it is known that the rows of $M$ are pairwise not equal, then compute ${\mathcal C} \in {\mathbb{R}}^{n \times R}$ which minimizes $\| C-X \|_F$ over all matrices $C \in {\mathbb{R}}^{n \times R}$ such that there are at most $K$ distinct-valued rows in $C$, otherwise, if it is known that the columns of $M$ are pairwise not equal, then compute ${\mathcal C} \in {\mathbb{R}}^{n \times R}$ which minimizes $\| C-Y \|_F$ over all matrices $C \in {\mathbb{R}}^{n \times R}$ such that there are at most $K$ distinct-valued rows in $C$, otherwise compute ${\mathcal C} \in {\mathbb{R}}^{n \times 2R}$ which minimizes $\| C-[X|Y] \|_F$ over all matrices $C \in {\mathbb{R}}^{n \times 2R}$ such that there are at most $K$ distinct-valued rows in $C$. (Although our analysis will assume the use of this minimum least squares criterion, note that popular clustering algorithms such as $K$-means will also (empirically) produce good results for our inferential task of block assignment.) The clusters obtained are estimates for the true blocks; i.e. define the block assignment function $\hat{\tau}: \{ 1,2,\ldots,n \} \rightarrow \{ 1,2,\ldots,K \} $ such that the inverse images $\{ \hat{\tau}^{-1}(i):i=1,2,\ldots K \}$ partition the rows of ${\mathcal C}$ (by index) so that rows in each part are equal-valued. This concludes the procedure. 
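For readers who wish to experiment with the procedure just described, the following is a minimal computational sketch of the single-modality version. It is not part of the analysis: the function and variable names are ours, NumPy and SciPy are assumed available, the exact minimum least squares clustering is replaced by a standard $K$-means heuristic (which, as remarked above, also produces good results empirically), and the parameter values in the usage portion are arbitrary illustrative choices.

```python
import numpy as np
from scipy.sparse.linalg import svds
from scipy.cluster.vq import kmeans2

def adjacency_spectral_partition(A, R, K, rows_distinct=False, cols_distinct=False):
    """Sketch of the adjacency-spectral partitioning procedure of Section [b].

    A : n-by-n 0-1 adjacency matrix (directed or undirected, zero diagonal)
    R : known upper bound on rank(M)
    K : number of blocks (assumed known here, as in Sections [c]-[f])
    """
    # Singular value decomposition of A, keeping the top R singular triples.
    U, s, Vt = svds(A.astype(float), k=R)
    order = np.argsort(s)[::-1]        # svds does not sort; order non-increasingly
    U, s, V = U[:, order], s[order], Vt[order, :].T

    # X = U * sqrt(Sigma) and Y = V * sqrt(Sigma).
    X = U * np.sqrt(s)
    Y = V * np.sqrt(s)

    # Cluster the rows of X, Y, or [X | Y] into at most K clusters;
    # K-means stands in for the minimum least squares criterion.
    if rows_distinct:                  # rows of M known to be pairwise distinct
        Z = X
    elif cols_distinct:                # columns of M known to be pairwise distinct
        Z = Y
    else:
        Z = np.hstack([X, Y])
    _, tau_hat = kmeans2(Z, K, minit='++')
    return tau_hat                     # block assignments, defined up to relabelling

# Illustrative usage on a synthetic directed realization (arbitrary parameter values).
rng = np.random.default_rng(0)
rho = np.array([0.3, 0.7])
M = np.array([[0.5, 0.2],
              [0.2, 0.4]])
n = 500
tau = rng.choice(len(rho), size=n, p=rho)            # block membership function
P = M[tau][:, tau]                                   # P[i, j] = M[tau(i), tau(j)]
A = (rng.random((n, n)) < P).astype(int)
np.fill_diagonal(A, 0)
tau_hat = adjacency_spectral_partition(A, R=3, K=2)  # R = 3 overestimates rank(M) = 2
```

In the undirected setting one would symmetrize the realization above; the multi-modality variant described next simply concatenates the scaled singular vector matrices of the several adjacency matrices before clustering.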
In the more complicated scenario of multiple modalities of communication, carry out the above procedure in the same way, mutatis mutandis: For each modality $s$, compute the singular value decomposition $A^{(s)}=[U^{(s)} | U^{(s)}_r ] (\Sigma^{(s)} \oplus \Sigma_r^{(s)}) [V^{(s)}|V^{(s)}_r]^T$ for $U^{(s)},V^{(s)} \in {\mathbb{R}}^{n \times R^{(s)}}$, $U_r^{(s)},V_r^{(s)} \in {\mathbb{R}}^{n \times (n-R^{(s)})}$, $\Sigma \in {\mathbb{R}}^{R^{(s)} \times R^{(s)}}$, and $\Sigma_r \in {\mathbb{R}}^{(n-R^{(s)}) \times (n-R^{(s)})}$ such that $[U^{(s)}|U_r^{(s)}]$ and $[V^{(s)}|V_r^{(s)}]$ are each real-orthogonal matrices and $\Sigma^{(s)} \oplus \Sigma_r^{(s)}$ is a diagonal matrix with its diagonals non-increasingly ordered, then define $X^{(s)}:=U^{(s)} \sqrt{\Sigma^{(s)}}$ and $Y^{(s)}:=V^{(s)} \sqrt{\Sigma^{(s)}}$ and then, according as the rows of all $M^{(s)}$ are known to be distinct-valued, the columns of $M^{(s)}$ are known to be distinct-valued, or neither, compute ${\mathcal C}$ which minimizes $\| C -[X^{(1)}|X^{(2)}| \cdots |X^{(S)}] \|_F$ or $\| C -[Y^{(1)}|Y^{(2)}| \cdots |Y^{(S)}] \|_F$ or $\| C -[X^{(1)}|X^{(2)}| \cdots |X^{(S)}|Y^{(1)}|Y^{(2)}| \cdots |Y^{(S)} ] \|_F$ such that there are at most $K$ distinct-valued rows in $C$, and then define $\hat{\tau}$ as the partition of the vertices into $K$ blocks according to equal-valued corresponding rows in ${\mathcal C}$. Consistency of the adjacency-spectral partitioning of Section \[b\] \[d\] ------------------------------------------------------------------------- We consider a sequence of realizations of the stochastic block model given in Section \[c\] for successive values $n=1,2,3,\ldots$ with all stochastic block model parameters being fixed. In this article, an event will be said to hold [*almost always*]{} if almost surely the event occurs for all but a finite number of $n$. The following consistency result asserts that the number of misassignments in the adjacency-spectral procedure of Section \[b\] is negligible; it will be proven in Section \[f\]. \[a\] With the adjacency-spectral partitioning procedure of Section \[b\], for any fixed $\epsilon>\frac{3}{4}$, the number of misassignments $\min_{\textup{bijections }\pi: \{ 1,2,\ldots,K \} \rightarrow \{ 1,2,\ldots,K \} } | \{ j=1,2,\ldots,n: \tau(j) \ne \pi(\hat{\tau}(j)) \} |$ is almost always less than $n^\epsilon$. Theorem \[a\] holds for all of the scenarios we described in Section \[c\]; whether the edges are directed or undirected, whether there is one modality of communication or multiple modalities. It also doesn’t matter if for each successive $n$ the partition function and adjacencies are re-realized for all vertices or if instead they are carried over from previous $n$’s realization with just one new vertex randomly assigned to a block and just this vertex’s adjacencies to the previous vertices being newly realized. (Note that if the partition function and adjacencies are re-realized for all vertices for successive $n$ then when we invoke the Strong Law of Large Numbers we will be using the version of the Law in [@HMT].) In Sussman et al. [@STFP], it was shown that if $R=$ rank$M$ then the number of misassignments of the adjacency spectral procedure in Section \[b\] is almost always less than a constant times $\log n$ (where the constant is a function of the model parameters). 
Indeed, both $\log n$ and $n^\epsilon$, when divided by the number of vertices $n$, converge to zero, and in that sense we can now say that whether rank$M$ is known exactly or merely overestimated, the number of misassignments of adjacency-spectral partitioning is negligible. This is a useful robustness result. Ancillary results \[kkk\] ========================= Latent vectors and constants from the model parameters \[g\] ------------------------------------------------------------- In this section we identify relevant constants $\alpha$, $\beta$, and $\gamma$ which depend on the specific values of the stochastic block model parameters; these constants will be used in our analysis. We also consider a particular decomposition of a model parameter (the communication probability matrix $M$) into [*latent vectors*]{} which we may then usefully associate with the respective blocks. We first emphasize that knowing the values of these constants $\alpha$, $\beta$, and $\gamma$ which we are about to identify and knowing the values of the latent vectors which we are about to define are not at all needed to actually [**perform**]{} the adjacency-spectral clustering procedure of Section \[b\], nor is any such knowledge needed in order to invoke and [**use**]{} the consistency result Theorem \[a\]. These constants and latent vectors will be used here in developing the analysis and then proving Theorem \[a\]. The stochastic block model parameters are $K$, $\rho$, $M$; the constants $\alpha$, $\beta$, $\gamma$ are defined as follows: Recall that $\rho_k>0$ for all $k$; choose constant $\alpha >0$ such that $\alpha < \rho_k$ for all $k$. Next, choose matrices $\mu,\nu \in {\mathbb{R}}^{K \times \textup{Rank}M}$ such that $M = \mu \nu^T$; indeed, such matrices $\mu$ and $\nu$ (exist and) can be easily computed using a singular value decomposition of $M$. It is trivial to see that if any two rows of $M$ are not equal-valued then those two corresponding rows of $\mu$ must be not equal-valued, and if any two columns of $M$ are not equal-valued then those two corresponding rows of $\nu$ are not equal-valued. Choose constant $\beta >0$ to be such that, for all pairs of nonequal-valued rows $\mu_{k,\cdot}$, $\mu_{k',\cdot}$ of $\mu$ it holds that $\|\mu_{k,\cdot}-\mu_{k',\cdot}\|_2>\beta$, and for all pairs of nonequal-valued rows $\nu_{k,\cdot}$, $\nu_{k',\cdot}$ of $\nu$ it holds that $\|\nu_{k,\cdot}-\nu_{k',\cdot}\|_2>\beta$. Lastly, since $\mu$ and $\nu$ have full column rank, choose constant $\gamma>0$ such that the eigenvalues of $\mu^T\mu$ and $\nu^T\nu$ are all greater than $\gamma$. The rows of $\mu$ and $\nu$ are respectively called [*left latent vectors*]{} and [*right latent vectors*]{}, and are associated with the vertices as follows. The matrices ${\mathcal X} \in {\mathbb{R}}^{n \times \textup{rank}M}$ and ${\mathcal Y} \in {\mathbb{R}}^{n \times \textup{rank}M}$ are defined such that for all $i=1,2,\ldots,n$, ${\mathcal X}_{i,\cdot}:=\mu_{\tau(i),\cdot}$ and ${\mathcal Y}_{i,\cdot}:=\nu_{\tau(i),\cdot}$. The significance of the latent vectors is that for any pair of distinct vertices $i$ and $j$ the probability of an $i,j$ edge is the inner product of the left latent vector associated with $i$ (which is ${\mathcal X}_{i,\cdot}$) with the right latent vector associated with $j$ (which is ${\mathcal Y}_{j,\cdot}$). Of course, these latent vectors are not observed; indeed, $M$ is not known and $\tau$ is not observed. 
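The factorization $M=\mu\nu^T$ and the constants just defined can be computed directly from the model parameters, for instance as in the sketch below. This is purely illustrative: the specific choices of $\alpha$, $\beta$, and $\gamma$ (half the smallest $\rho_k$, and slight shrinkings of the relevant minima) are arbitrary, and any values satisfying the stated inequalities would do.

```python
import numpy as np

def latent_factors_and_constants(rho, M, tol=1e-12):
    """Return mu, nu with M = mu nu^T, plus admissible constants alpha, beta, gamma."""
    rho, M = np.asarray(rho, float), np.asarray(M, float)
    u, s, vt = np.linalg.svd(M)
    r = int(np.sum(s > tol))                       # rank M
    mu = u[:, :r] * np.sqrt(s[:r])                 # rows are left latent vectors
    nu = vt[:r, :].T * np.sqrt(s[:r])              # rows are right latent vectors
    alpha = 0.5 * rho.min()                        # any 0 < alpha < min_k rho_k works
    # beta: just below the smallest gap between nonequal-valued rows of mu and of nu
    gaps = [np.linalg.norm(Z[i] - Z[j])
            for Z in (mu, nu)
            for i in range(Z.shape[0]) for j in range(i + 1, Z.shape[0])
            if np.linalg.norm(Z[i] - Z[j]) > tol]
    beta = 0.9 * min(gaps) if gaps else None
    # gamma: just below the smallest eigenvalue of mu^T mu and of nu^T nu
    gamma = 0.9 * min(np.linalg.eigvalsh(mu.T @ mu).min(),
                      np.linalg.eigvalsh(nu.T @ nu).min())
    return mu, nu, alpha, beta, gamma
```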
Finally, let ${\mathcal X}{\mathcal Y}^T={\mathcal U} \Lambda {\mathcal V}^T$ be a singular value decomposition, i.e. ${\mathcal U},{\mathcal V} \in {\mathbb{R}}^{n \times \textup{rank}M}$ each have orthonormal columns and $\Lambda \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$ is a diagonal matrix with diagonals ordered in nonincreasing order $\varsigma_1 \geq \varsigma_2 \geq \varsigma_3 \geq \cdots \geq \varsigma_{\textup{rank}M}$. It is useful to observe that ${\mathcal X}({\mathcal Y}^T{\mathcal V}\Lambda^{-1})={\mathcal U}$ and $(\Lambda^{-1}{\mathcal U}^T{\mathcal X}){\mathcal Y}^T={\mathcal V}^T$ imply that rows of ${\mathcal X}$ which are equal-valued correspond to rows of ${\mathcal U}$ that are equal-valued, and rows of ${\mathcal Y}$ which are equal-valued correspond to rows of ${\mathcal V}$ that are equal-valued. In the more complicated scenario of more than one communication modality these definitions are made in the same way, mutatis mutandis: For all modalities $s$, choose $\mu^{(s)},\nu^{(s)} \in {\mathbb{R}}^{K \times \textup{Rank}M^{(s)}}$ such that $M^{(s)}=\mu^{(s)}\nu^{(s)^T}$, then choose $\beta >0$ such that for every modality $s$ and all pairs of nonequal-valued rows $\mu^{(s)}_{k,\cdot}$, $\mu^{(s)}_{k',\cdot}$ of $\mu^{(s)}$ it holds that $\|\mu^{(s)}_{k,\cdot}-\mu^{(s)}_{k',\cdot}\|_2>\beta$, and for all pairs of nonequal-valued rows $\nu^{(s)}_{k,\cdot}$, $\nu^{(s)}_{k',\cdot}$ of $\nu^{(s)}$ it holds that $\|\nu^{(s)}_{k,\cdot}-\nu^{(s)}_{k',\cdot}\|_2>\beta$. Choose constant $\gamma>0$ such that all eigenvalues of $\mu^{(s)^T}\mu^{(s)}$ and $\nu^{(s)^T}\nu^{(s)}$ for all modalities $s$ are greater than $\gamma$. Then, for each modality $s$, define the rows of ${\mathcal X}^{(s)} \in {\mathbb{R}}^{n \times \textup{Rank}M^{(s)}}$ and ${\mathcal Y}^{(s)} \in {\mathbb{R}}^{n \times \textup{Rank}M^{(s)}}$ to be the rows from $\mu^{(s)}$ and $\nu^{(s)}$, respectively, corresponding to the blocks of the respective vertices, and then define ${\mathcal U}^{(s)}$, ${\mathcal V}^{(s)}$, and $\Lambda^{(s)}$ (with ordered diagonals $\varsigma_1^{(s)}, \varsigma_2^{(s)}, \ldots \varsigma^{(s)}_{\textup{rank}M^{(s)}}$) to form singular value decompositions ${{\mathcal X}}^{(s)} {{\mathcal Y}}^{(s)^T}={\mathcal U}^{(s)} \Lambda^{(s)} {\mathcal V}^{(s)^T}$. Bounds ------ In this section we prove a number of bounds involving $A$, ${{\mathcal X}}{{\mathcal Y}}^T$, their singular values and matrices constructed from components of their singular value decompositions. These bounds will then be used in Section \[f\] to prove Theorem \[a\], which asserts the consistency of the adjacency-spectral partitioning procedure of Section \[b\]. The results in this section are stated and proved for both the directed setting and the undirected setting of Section \[c\]. However, we directly treat only the setting with one modality of communication; if there are multiple modalities of communication then all of the statements and proofs in this section apply to each modality separately. Some of the results in this section can be found in similar or different form in [@STFP]; we include all necessary results for completeness, and in order to incorporate many substantive changes needed for treatment of this article’s focus. 
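The bounds proved in this section can be sanity-checked numerically on a modest example. The following sketch (an illustration only, with arbitrarily chosen parameters and ad hoc variable names) builds ${\mathcal X}{\mathcal Y}^T$ from the true memberships and compares the quantities bounded in Lemma \[aa\] and Corollary \[cc\] below against the stated rates.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
rho = np.array([0.3, 0.3, 0.4])
M = np.array([[0.205, 0.045, 0.150],
              [0.045, 0.205, 0.150],
              [0.150, 0.150, 0.180]])            # rank M = 2

tau = rng.choice(3, size=n, p=rho)
P = M[np.ix_(tau, tau)]                          # this is the matrix XY^T
A = np.triu((rng.random((n, n)) < P), 1).astype(float)
A = A + A.T                                      # one undirected realization, zero diagonal

# Lemma [aa]:  || AA^T - (XY^T)(XY^T)^T ||_F  versus  sqrt(3) n^{3/2} sqrt(log n)
print(np.linalg.norm(A @ A.T - P @ P.T) <= np.sqrt(3) * n**1.5 * np.sqrt(np.log(n)))

# Corollary [cc]:  sigma_{rank M + 1}(A)  versus  3^{1/4} n^{3/4} (log n)^{1/4}
sigma = np.linalg.svd(A, compute_uv=False)
print(sigma[2] <= 3**0.25 * n**0.75 * np.log(n)**0.25)
```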
It almost always holds that $\| AA^T -{{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T\|_F \leq \sqrt{3}n^{3/2}\sqrt{\log n}$ and it almost always holds that $\| A^TA -({{\mathcal X}}{{\mathcal Y}}^T)^T{{\mathcal X}}{{\mathcal Y}}^T\|_F \leq \sqrt{3}n^{3/2}\sqrt{\log n}$. \[aa\] [**Proof:**]{} Let ${{\mathcal X}}_{i,\cdot}$ and ${{\mathcal Y}}_{i,\cdot}$ denote the $i$th rows of ${{\mathcal X}}$ and ${{\mathcal Y}}$, respectively. For all $i \ne j$, $$\begin{aligned} \label{ii} [AA^T]_{ij}-[{{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T]_{ij}=\sum_{l \ne i,j}(A_{il}A_{jl}-{{\mathcal X}}_{i,\cdot} {{\mathcal Y}}_{l,\cdot}^T {{\mathcal X}}_{j,\cdot} {{\mathcal Y}}_{l,\cdot}^T)-{{\mathcal X}}_{i,\cdot} {{\mathcal Y}}_{i,\cdot}^T {{\mathcal X}}_{j,\cdot} {{\mathcal Y}}_{i,\cdot}^T -{{\mathcal X}}_{i,\cdot} {{\mathcal Y}}_{j,\cdot}^T {{\mathcal X}}_{j,\cdot} {{\mathcal Y}}_{j,\cdot}^T\end{aligned}$$ Hoeffding’s inequality states that if $\Upsilon$ is the sum of $m$ independent random variables that take values in the interval $[0,1]$, and if $c>0$ then ${\mathbb{P}}\left [ (\Upsilon-\textup{E}[\Upsilon])^2\geq c \right ] \leq 2e^{-\frac{2c}{m}}$. Thus, for all $i,j$ such that $i \ne j$, if we condition on ${{\mathcal X}}$ and ${{\mathcal Y}}$, we have for $l\ne i,j$ that the $m:=n-2$ random variables $A_{il}A_{jl}$ have distribution Bernoulli$({{\mathcal X}}_{i,\cdot} {{\mathcal Y}}_{l,\cdot}^T {{\mathcal X}}_{j,\cdot} {{\mathcal Y}}_{l,\cdot}^T)$ and are independent. Thus, taking $c=2(n-2)\log n$ in Equation (\[ii\]), we obtain that $$\begin{aligned} \label{jj} {\mathbb{P}}\left [ ([AA^T]_{ij}- [{{\mathcal X}}{{\mathcal Y}}^T( {{\mathcal X}}{{\mathcal Y}}^T)^T]_{ij})^2 \geq 2(n-2)\log n + 4n-4 \right ] \leq \frac{2}{n^{4}}.\end{aligned}$$ Integrating Equation (\[jj\]) over ${{\mathcal X}}$ and ${{\mathcal Y}}$ yields that Equation (\[jj\]) is true unconditionally. By probability subadditivity, summing over $i,j$ such that $i \neq j$ in Equation (\[jj\]), we obtain that $$\begin{aligned} \label{kk} {\mathbb{P}}\left [ \sum_{i,j:i \neq j} ([AA^T]_{ij}-[{{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T]_{ij})^2 \geq 2n(n-1)(n-2)\log n + 4n(n-1)^2 \right ] \leq \frac{2n(n-1)}{n^4}.\end{aligned}$$ By the Borel-Cantelli Lemma (which states that if a sequence of events have probabilities with bounded sum then almost always the events do not occur) we obtain from Equation (\[kk\]) that almost always $$\begin{aligned} \sum_{i,j:i \neq j} ([AA^T]_{ij}-[{{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T]_{ij})^2 \leq \frac{5}{2}n^3 \log n\end{aligned}$$ and thus almost always $\| AA^T-{{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T \|^2_F \leq 3n^3 \log n$ because each of the diagonals of $AA^T-{{\mathcal X}}{{\mathcal Y}}^T( {{\mathcal X}}{{\mathcal Y}}^T)^T$ are bounded in absolute value by $n$. The very same argument holds mutatis mutandis for $\|A^TA-({{\mathcal X}}{{\mathcal Y}}^T)^T {{\mathcal X}}{{\mathcal Y}}^T\|^2_F$. $\qed$\ The next lemma, Lemma \[bb\], provides bounds on the singular values $\varsigma_1,\varsigma_2,\varsigma_3,\ldots$ of matrix ${{\mathcal X}}{{\mathcal Y}}^T$ and then, in Corollary \[cc\], we obtain bounds on the singular values $\sigma_1,\sigma_2,\sigma_3,\ldots$ of matrix $A$. Recall that the rank of ${{\mathcal X}}{{\mathcal Y}}^T$ is (almost always) $\textup{rank}M$, while $A$ may in fact have rank $n$. 
It almost always holds that $\alpha \gamma n \leq \varsigma_{\textup{rank}M}$, and it always holds that $\varsigma_1\leq n$. \[bb\] [**Proof:**]{} Because ${{\mathcal X}}{{\mathcal Y}}^T$ is in $[0,1]^{n \times n}$, the nonnegative matrix ${{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T$ has all of its entries bounded by $n$, thus all of its row sums bounded by $n^2$, and thus its spectral radius $\varsigma_1^2$ is bounded by $n^2$, i.e. we have $\varsigma_1\leq n$ as desired.\ Next, for all $k=1,2,\ldots,K$, let random variable $n_k$ denote the number of vertices in block $k$. The nonzero eigenvalues of $({{\mathcal X}}{{\mathcal Y}}^T)({{\mathcal X}}{{\mathcal Y}}^T)^T={{\mathcal X}}{{\mathcal Y}}^T {{\mathcal Y}}{{\mathcal X}}^T$ are the same as the nonzero eigenvalues of ${{\mathcal Y}}^T {{\mathcal Y}}{{\mathcal X}}^T {{\mathcal X}}$. By the definition of $\alpha$ and the Law of Large Numbers, almost always $n_k > \alpha n$ for each $k$, thus we express ${{\mathcal X}}^T {{\mathcal X}}=\sum_{k=1}^K n_k \mu_{k,\cdot}^T \mu_{k,\cdot} = \alpha n \mu^T \mu + \sum_{k=1}^K(n_k-\alpha n)\mu_{k,\cdot}^T \mu_{k,\cdot}$ as the sum of two positive semidefinite matrices and obtain that the minimum eigenvalue of ${{\mathcal X}}^T{{\mathcal X}}$ is at least $\alpha \gamma n$. Similarly the minimum eigenvalue of ${{\mathcal Y}}^T {{\mathcal Y}}$ is at least $\alpha \gamma n$. The minimum eigenvalue of a product of positive semidefinite matrices is at least the product of their minimum eigenvalues [@ZZ], thus the minimum eigenvalue of ${{\mathcal Y}}^T {{\mathcal Y}}{{\mathcal X}}^T {{\mathcal X}}$ (which is equal to $\varsigma^2_{\textup{rank}M}$) is at least $\alpha \gamma n \cdot \alpha \gamma n$, as desired. $\qed$ It almost always holds that $\alpha \gamma n \leq \sigma_{\textup{rank}M}$, it always holds that $\sigma_1 \leq n$, and it almost always holds that $\sigma_{\textup{rank}M+1} \leq 3^{1/4} n^{3/4} \log^{1/4}n$. \[cc\] [**Proof:**]{} By Lemma \[aa\] and Weyl’s Lemma (e.g., see [@HJ]), we obtain that for all $m$ it almost always holds that $| \sigma^2_m-\varsigma^2_m| \leq \| AA^T -{{\mathcal X}}{{\mathcal Y}}^T({{\mathcal X}}{{\mathcal Y}}^T)^T\|_F \leq \sqrt{3}n^{3/2}\sqrt{\log n}$. For all $m > \textup{rank}M$, the $m$th singular value of ${{\mathcal X}}{{\mathcal Y}}^T$ is zero, thus almost always $\sigma_{\textup{rank}M+1}\leq 3^{1/4} n^{3/4} \log^{1/4}n$. Lemma \[bb\] can in fact be strengthened to show that there is a $\delta >0$ such that almost always $(\alpha \gamma +\delta) n \leq \varsigma_{\textup{rank}M}$, hence $(\alpha \gamma +\delta)^2 n^2 \leq \varsigma_{\textup{rank}M}^2$; since $\sqrt{3}n^{3/2}\sqrt{\log n}$ is eventually smaller than $((\alpha \gamma +\delta)^2-(\alpha \gamma)^2) n^2$, we thus have almost always that $(\alpha \gamma)^2 n^2 \leq \sigma_{\textup{rank}M}^2$, as desired. Showing that $\sigma_1 \leq n$ is done the same way that $\varsigma_1\leq n$ was shown in Lemma \[bb\]. $\qed$\ It is worth noting that a consequence of Corollary \[cc\] is that, for any chosen real number $\omega$ such that $\frac{3}{4}<\omega <1$, the random variable which counts the number of $\sigma_1,\sigma_2, \ldots,\sigma_n$ which are greater than $n^\omega$ is a consistent estimator for $\textup{rank}M$ (is almost always equal to $\textup{rank}M$). Our goal in this article is to show a robustness result, that “overestimating" $\textup{rank}M$ with $R$ in the adjacency-spectral partitioning procedure does not ruin the consistency of the procedure.\ Recall from Section \[b\] the singular value decomposition $A=[U | U_r ] (\Sigma \oplus \Sigma_r) [V|V_r]^T$. 
At this point it will be useful to further partition $U=[U_\ell|U_c]$, $V=[V_\ell|V_c]$, and $\Sigma=\Sigma_\ell \oplus \Sigma_c$ where $U_\ell,V_\ell \in {\mathbb{R}}^{n \times \textup{rank}M}$, $U_c,V_c \in {\mathbb{R}}^{n \times (R-\textup{rank}M)}$, $\Sigma_\ell \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$, and $\Sigma_c \in {\mathbb{R}}^{(R-\textup{rank}M) \times (R-\textup{rank}M)}$. (The subscripts $\ell,c,r$ are mnemonics for “left", “center", and “right", respectively.) Also define the matrices $X_\ell := U_\ell \sqrt{\Sigma_\ell}$, $Y_\ell := V_\ell \sqrt{\Sigma_\ell}$, $X_c := U_c \sqrt{\Sigma_c}$, $Y_c := V_c \sqrt{\Sigma_c}$, $X_r := U_r \sqrt{\Sigma_r}$, and $Y_r := V_r \sqrt{\Sigma_r}$. Referring back to the definition of $X$ and $Y$ in Section \[b\], note that $X=[X_\ell | X_c]$ and $Y=[Y_\ell | Y_c]$. From the definition of $\beta$ in Section \[g\] it follows that for any $i$ and $j$ such that ${{\mathcal X}}_{i,\cdot} \neq {{\mathcal X}}_{j,\cdot}$ (or ${{\mathcal Y}}_{i,\cdot} \neq {{\mathcal Y}}_{j,\cdot}$) it holds that $\| {{\mathcal X}}_{i,\cdot} - {{\mathcal X}}_{j,\cdot} \| \geq \beta$ (respectively, $\| {{\mathcal Y}}_{i,\cdot} - {{\mathcal Y}}_{j,\cdot} \| \geq \beta$ ). The next result shows how this separation extends to the rows of the singular vectors of ${{\mathcal X}}{{\mathcal Y}}^T$. Almost always the following are true:\ For all $i,j$ such that $\|{{\mathcal X}}_{i,\cdot}-{{\mathcal X}}_{j,\cdot}\|_2 \geq \beta$, it holds that $\| {{\mathcal U}}_{i, \cdot} - {{\mathcal U}}_{j, \cdot } \|_2 \geq \beta \sqrt{\frac{\alpha \gamma}{n}}$.\ For all $i,j$ such that $\|{{\mathcal Y}}_{i,\cdot}-{{\mathcal Y}}_{j,\cdot}\|_2 \geq \beta$, it holds that $\| {{\mathcal V}}_{i, \cdot} - {{\mathcal V}}_{j, \cdot } \|_2 \geq \beta \sqrt{\frac{\alpha \gamma}{n}}$.\ For all $i,j$ such that $\|{{\mathcal X}}_{i,\cdot}-{{\mathcal X}}_{j,\cdot}\|_2 \geq \beta$, it holds that $\| {{\mathcal U}}_{i, \cdot}Q \sqrt{\Sigma_\ell} - {{\mathcal U}}_{j, \cdot }Q \sqrt{\Sigma_\ell} \|_2 \geq \alpha \beta \gamma $ for any orthogonal matrix $Q \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$.\ For all $i,j$ such that $\|{{\mathcal Y}}_{i,\cdot}-{{\mathcal Y}}_{j,\cdot}\|_2 \geq \beta$, it holds that $\| {{\mathcal V}}_{i, \cdot}Q \sqrt{\Sigma_\ell} - {{\mathcal V}}_{j, \cdot }Q \sqrt{\Sigma_\ell} \|_2 \geq \alpha \beta \gamma$ for any orthogonal matrix $Q \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$. \[ff\] [**Proof:**]{} Recall the singular value decomposition ${\mathcal X}{\mathcal Y}^T={\mathcal U} \Lambda {\mathcal V}^T$ from Section \[g\] (where ${\mathcal U},{\mathcal V} \in {\mathbb{R}}^{n \times \textup{rank}M}$ each have orthonormal columns and $\Lambda \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$ is diagonal). Let ${{\mathcal Y}}^T {{\mathcal Y}}=W \Delta^2 W^T$ be a spectral decomposition; that is, $W \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$ is orthogonal and $\Delta \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$ is a diagonal matrix with positive diagonal entries. 
Note that $$\begin{aligned} \label{ll} ({{\mathcal X}}W \Delta)({{\mathcal X}}W \Delta)^T= {{\mathcal X}}W \Delta^2 W^T {{\mathcal X}}^T = {{\mathcal X}}{{\mathcal Y}}^T {{\mathcal Y}}{{\mathcal X}}^T={{\mathcal U}}\Lambda {{\mathcal V}}^T {{\mathcal V}}\Lambda {{\mathcal U}}^T=({{\mathcal U}}\Lambda)({{\mathcal U}}\Lambda)^T.\end{aligned}$$ For any $i,j$ distinct, let $e \in {\mathbb{R}}^n$ denote the vector with all zeros except for the value $1$ in the $i$th coordinate and the value $-1$ in the $j$th coordinate. By Equation (\[ll\]), we thus have that $$\begin{aligned} \label{mm} \|( {{\mathcal X}}W \Delta)_{i,\cdot}-({{\mathcal X}}W \Delta)_{j,\cdot}\|_2^2= e^T( {{\mathcal X}}W \Delta)({{\mathcal X}}W \Delta)^Te = e^T ({{\mathcal U}}\Lambda )( {{\mathcal U}}\Lambda)^T e = \| ( {{\mathcal U}}\Lambda )_{i,\cdot} -( {{\mathcal U}}\Lambda )_{j,\cdot}\|_2^2.\end{aligned}$$ From Lemma \[bb\] and its proof, we have that the diagonals of $\Delta$ are almost always at least $\sqrt{\alpha \gamma n}$ and that the diagonals of $\Lambda$ are at most $n$. Using this and Equation (\[mm\]), we get that if $i,j$ are such that $\|{{\mathcal X}}_{i,\cdot}-{{\mathcal X}}_{j,\cdot}\|_2 \geq \beta$ then it holds that $$\begin{aligned} \beta \leq \| {{\mathcal X}}_{i,\cdot}- {{\mathcal X}}_{j,\cdot} \|_2= \| ({{\mathcal X}}W)_{i,\cdot}-({{\mathcal X}}W)_{j,\cdot}\|_2 \leq \frac{1}{\sqrt{\alpha \gamma n}} \| ({{\mathcal X}}W\Delta)_{i,\cdot} - ({{\mathcal X}}W\Delta)_{j,\cdot} \|_2 \\ = \frac{1}{\sqrt{\alpha \gamma n}} \|({{\mathcal U}}\Lambda )_{i,\cdot}-({{\mathcal U}}\Lambda)_{j,\cdot} \|_2 \leq \frac{1}{\sqrt{\alpha \gamma n}} n \| {{\mathcal U}}_{i,\cdot}-{{\mathcal U}}_{j,\cdot} \|_2.\end{aligned}$$ Thus $\| {{\mathcal U}}_{i, \cdot} - {{\mathcal U}}_{j, \cdot } \|_2 \geq \beta \sqrt{\frac{\alpha \gamma}{n}}$, as desired. Now, if $Q \in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$ is any orthogonal matrix then, by Corollary \[cc\], $$\begin{aligned} \| {{\mathcal U}}_{i,\cdot}- {{\mathcal U}}_{j,\cdot} \|_2 = \| {{\mathcal U}}_{i,\cdot}Q- {{\mathcal U}}_{j,\cdot} Q\|_2 \leq \frac{1}{\sqrt{\alpha \gamma n}} \| {{\mathcal U}}_{i, \cdot}Q \sqrt{\Sigma_\ell} - {{\mathcal U}}_{j, \cdot }Q \sqrt{\Sigma_\ell} \|_2\end{aligned}$$ which, together with $\| {{\mathcal U}}_{i, \cdot} - {{\mathcal U}}_{j, \cdot } \|_2 \geq \beta \sqrt{\frac{\alpha \gamma}{n}}$, implies $\| {{\mathcal U}}_{i, \cdot}Q \sqrt{\Sigma_\ell} - {{\mathcal U}}_{j, \cdot }Q \sqrt{\Sigma_\ell} \|_2 \geq \alpha \beta \gamma $, as desired. The same argument applies mutatis mutandis for $\|{{\mathcal Y}}_{i,\cdot}-{{\mathcal Y}}_{j,\cdot} \|_2 \geq \beta$. $\qed$\ In the following, the sum of vector subspaces will refer to the subspace consisting of all sums of vectors from the summand subspaces; equivalently, it will be the smallest subspace containing all of the summand subspaces. The following theorem is due to Davis and Kahan [@DK] in the form presented in [@RCY]. Let $H,H' \in {\mathbb{R}}^{n \times n}$ be symmetric, suppose ${\mathcal S} \subset {\mathbb{R}}$ is an interval, and suppose for some positive integer $d$ that ${\mathcal W}, {\mathcal W}' \in {{\mathbb{R}}^{n \times d}}$ are such that the columns of ${\mathcal W}$ form an orthonormal basis for the sum of the eigenspaces of $H$ associated with the eigenvalues of $H$ in ${\mathcal S}$, and the columns of ${\mathcal W}'$ form an orthonormal basis for the sum of the eigenspaces of $H'$ associated with the eigenvalues of $H'$ in ${\mathcal S}$. 
Let $\delta$ be the minimum distance between any eigenvalue of $H$ in ${\mathcal S}$ and any eigenvalue of $H$ not in ${\mathcal S}$. Then there exists an orthogonal matrix ${\mathcal Q} \in {\mathbb{R}}^{d \times d}$ such that $\|{\mathcal W}{\mathcal Q} -{\mathcal W}' \|_F \leq \frac{\sqrt{2}}{\delta}\|H-H'\|_F$. \[dd\] There almost always exist real orthogonal matrices $Q_{{\mathcal U}},Q_{{\mathcal V}}\in {\mathbb{R}}^{\textup{rank}M \times \textup{rank}M}$ which satisfy $\|{{\mathcal U}}Q_{{\mathcal U}}- U_\ell \|_F \leq \frac{\sqrt{6}}{\alpha^2 \gamma^2} \cdot \sqrt{ \frac{\log n }{n}}$ and $\|{{\mathcal V}}Q_{{\mathcal V}}- V_\ell \|_F \leq \frac{\sqrt{6}}{\alpha^2 \gamma^2} \cdot \sqrt{ \frac{\log n }{n}}$. Furthermore, it holds that $\| \tilde{{{\mathcal X}}}_\ell - X_\ell \|_F \leq \frac{\sqrt{6}}{\alpha^2 \gamma^2} \cdot \sqrt{\log n}$ and $\| \tilde{{{\mathcal Y}}}_\ell - Y_\ell \|_F \leq \frac{\sqrt{6}}{\alpha^2 \gamma^2} \cdot \sqrt{ \log n }$, where we define $\tilde{{{\mathcal X}}}_\ell :={{\mathcal U}}Q_{{\mathcal U}}\sqrt{\Sigma_\ell}$ and $\tilde{{{\mathcal Y}}}_\ell:={{\mathcal V}}Q_{{\mathcal V}}\sqrt{\Sigma_\ell}$. \[ee\] [**Proof:**]{} Take ${\mathcal S}$ in Theorem \[dd\] to be the interval $(\frac{1}{2}\alpha^2 \gamma^2 n^2, \infty)$. By Lemma \[bb\] and Corollary \[cc\], we have almost always that precisely the greatest $\textup{rank}M$ eigenvalues of each of $H:={{\mathcal X}}{{\mathcal Y}}^T ({{\mathcal X}}{{\mathcal Y}}^T)^T$ and $H':=AA^T$ are in ${\mathcal S}$. By Lemma \[bb\], almost always $\delta \geq \alpha^2 \gamma^2 n^2$ (for the $\delta$ in Theorem \[dd\]) so, by Lemma \[aa\], almost always $\frac{\sqrt{2}}{\delta}\|H-H'\|_F \leq \frac{\sqrt{2}}{\alpha^2 \gamma^2 n^2}\sqrt{3}n^{3/2}\sqrt{\log n}$. With this, the first statements of Corollary \[ee\] follow from the Davis and Kahan Theorem (Theorem \[dd\]). The last statements of Corollary \[ee\] follow from postmultiplying ${{\mathcal U}}Q_{{\mathcal U}}-U_\ell$ with $\sqrt{\Sigma_\ell}$ and then using Corollary \[cc\] and the definition of $X_\ell$. $\qed$\ Now, choose ${{\mathcal U}}_c \in {\mathbb{R}}^{n \times (R-\textup{rank}M)}$ and ${{\mathcal U}}_r \in {\mathbb{R}}^{n \times (n-R)}$ such that $[{{\mathcal U}}| {{\mathcal U}}_c | {{\mathcal U}}_r] \in {\mathbb{R}}^{n \times n}$ is an orthogonal matrix. In particular, note that the columns of ${{\mathcal U}}_c$ together with the columns of ${{\mathcal U}}_r$ form an orthonormal basis for the eigenspace associated with eigenvalue $0$ in the matrix $H:={{\mathcal X}}{{\mathcal Y}}^T ({{\mathcal X}}{{\mathcal Y}}^T)^T$. There almost always exists a real orthogonal matrix $Q \in {\mathbb{R}}^{(n-\textup{rank}M) \times (n- \textup{rank}M)}$ such that $\| \ [{{\mathcal U}}_c | {{\mathcal U}}_r ] \ Q - \ [ U_c | U_r ] \ \|_F \leq \frac{\sqrt{6}}{\alpha^2 \gamma^2} \cdot \sqrt{ \frac{\log n }{n}}$. Define $\tilde{{{\mathcal X}}}_c \in {\mathbb{R}}^{n \times (R-\textup{rank}M)}$ and $\tilde{{{\mathcal X}}}_r \in {\mathbb{R}}^{n \times (n-R)}$ such that $[\tilde{{{\mathcal X}}}_c|\tilde{{{\mathcal X}}}_r]:=[{{\mathcal U}}_c|{{\mathcal U}}_r]Q \sqrt{\Sigma_c \oplus \Sigma_r}$. Then $\| \ [\tilde{{{\mathcal X}}}_c | \tilde{{{\mathcal X}}}_r ] \ - \ [ X_c | X_r ] \ \|_F \leq \frac{3^{1/8}6^{1/2}}{\alpha^2 \gamma^2} \cdot n^{-1/8} \log^{5/8}n$. 
\[gg\] [**Proof:**]{} The first statement of Corollary \[gg\] is proven in the exact manner that we proved Corollary \[ee\], except that ${\mathcal S}$ is instead taken to be the [*complement*]{} of $(\frac{1}{2}\alpha^2 \gamma^2 n^2, \infty)$. The second statement of Corollary \[gg\] follows by postmultiplying $[{{\mathcal U}}_c | {{\mathcal U}}_r ] Q - [ U_c | U_r ]$ with $\sqrt{\Sigma_c \oplus \Sigma_r}$ and then using Corollary \[cc\] and the definitions of $X_c$ and $X_r$. $\qed$ Almost always it holds that $\| \tilde{{{\mathcal X}}}_c \|_F \leq \sqrt{R-\textup{rank}M} \ 3^{1/8} n^{3/8}\log^{1/8}n $. \[hh\] [**Proof:**]{} It is clear (with the matrix $Q$ from Corollary \[gg\]) that $[{{\mathcal U}}_c| {{\mathcal U}}_r ] \ Q$ has orthonormal columns, hence the Frobenius norm of the first $R-\textup{rank}M$ columns is exactly $\sqrt{R-\textup{rank}M}$. The result follows from postmultiplying these columns by $\sqrt{\Sigma_c}$ and using Corollary \[cc\]. $\qed$ Proof of Theorem \[a\], consistency of the adjacency-spectral procedure of Section \[b\] \[f\] ============================================================================================== In this section we prove Theorem \[a\]. Assuming that the number of blocks $K$ is known and that an upper bound $R$ is known for $\textup{rank}M$, Theorem \[a\] states that, for the adjacency-spectral procedure described in Section \[b\], and for any fixed real number $\epsilon > \frac{3}{4}$, the number of misassignments $\min_{\textup{bijections }\pi: \{ 1,2,\ldots,K \} \rightarrow \{ 1,2,\ldots,K \} } | \{ j=1,2,\ldots,n: \tau(j) \ne \pi(\hat{\tau}(j)) \} |$ is almost always less than $n^\epsilon$. We focus first on the scenario where there is a single modality of communication, and we also suppose for now that it is known that the rows of $M$ are pairwise nonequal. First, an observation: Recall from Section \[g\] that, for each vertex, the block that the vertex is a member of via the block [**membership**]{} function $\tau$ is characterized by which of the $K$ distinct-valued rows of ${{\mathcal U}}$ the vertex is associated with in ${{\mathcal U}}$. In Corollary \[ee\], we defined $\tilde{{{\mathcal X}}}_\ell:={{\mathcal U}}Q_{{\mathcal U}}\sqrt{\Sigma_\ell}$. Because $\tilde{{{\mathcal X}}}_\ell$ is ${{\mathcal U}}$ times an invertible matrix (since $\sqrt{\Sigma_\ell}$ is almost always invertible by Corollary \[cc\]), the block that the vertex is truly a member of is thus characterized by which of the $K$ distinct-valued rows of $\tilde{{{\mathcal X}}}_\ell$ the vertex is associated with in $\tilde{{{\mathcal X}}}_\ell$. Also recall that the block which the vertex is assigned to by the block [**assignment**]{} function $\hat{\tau}$ is characterized by which of the at-most-$K$ distinct-valued rows of ${\mathcal C}$ the vertex is associated with in ${\mathcal C}$—where ${\mathcal C} \in {\mathbb{R}}^{n \times R}$ was defined as the matrix which minimized $\| C-X \|_F$ over all matrices $C \in {\mathbb{R}}^{n \times R}$ such that there are at most $K$ distinct-valued rows in $C$. Denote by $0^{n \times (R - \textup{rank}M)}$ the matrix of zeros in ${\mathbb{R}}^{n \times (R - \textup{rank}M)}$. 
We next show the following: $$\begin{aligned} \label{bbb} \mbox{{\it For any fixed} } \xi > \frac{3}{8}, \mbox{ {\it almost always it holds that} } \| {\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}] \|_F \leq n^\xi.\end{aligned}$$ Indeed, by the definition of ${\mathcal C}$, the fact that $[\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]$ has $K$ distinct-valued rows, and the triangle inequality, we have that $$\begin{aligned} \label{aaa} \| {\mathcal C} - X \|_F \leq \| \ [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}] -X \|_F \leq \| [\tilde{{{\mathcal X}}}_\ell | \tilde{{{\mathcal X}}}_c ] -X \|_F + \| \tilde{{{\mathcal X}}}_c \|_F.\end{aligned}$$ Then, by two uses of the triangle inequality and then Equation (\[aaa\]), we have $$\begin{aligned} \| {\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}] \|_F & \leq & \| {\mathcal C} - [\tilde{{{\mathcal X}}}_\ell | \tilde{{{\mathcal X}}}_c ] \|_F + \| \tilde{{{\mathcal X}}}_c \|_F \\ & \leq & \| {\mathcal C}-X \|_F + \| X - [\tilde{{{\mathcal X}}}_\ell | \tilde{{{\mathcal X}}}_c ] \|_F + \| \tilde{{{\mathcal X}}}_c \|_F \\ & \leq & 2 \cdot \| [\tilde{{{\mathcal X}}}_\ell | \tilde{{{\mathcal X}}}_c ] -X \|_F + 2 \cdot \| \tilde{{{\mathcal X}}}_c \|_F\end{aligned}$$ which, by Corollary \[ee\], Corollary \[gg\], and Note \[hh\], is almost always bounded by $$\begin{aligned} 2 \left [ \left ( \frac{\sqrt{6}}{\alpha^2 \gamma^2} \cdot \sqrt{\log n} \right )^2 + \left ( \frac{3^{1/8}6^{1/2}}{\alpha^2 \gamma^2} \cdot n^{-1/8} \log^{5/8}n \right )^2 \right ] ^{1/2} + 2 R^{1/2} \ 3^{1/8} n^{3/8}\log^{1/8}n,\end{aligned}$$ which is almost always bounded by $n^\xi$ for any fixed $\xi > \frac{3}{8}$. Thus Line (\[bbb\]) is shown. Now, it easily follows from Line (\[bbb\]) that $$\begin{aligned} \mbox{ {\it For any fixed} } \epsilon > \frac{3}{4}, \mbox{ {\it the number of rows of} } {\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}] \nonumber \\ \mbox{ {\it with Euclidean norm at least} } \frac{\alpha \beta \gamma}{3} \mbox{ {\it is almost always less than} } n^\epsilon; \label{ccc}\end{aligned}$$ indeed, if this was not true, then $\|{\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]\|_F \geq \sqrt{n^\epsilon \left ( \frac{\alpha \beta \gamma}{3} \right )^2}$ would contradict Line \[bbb\]. Lastly, form balls $B_1,B_2,\ldots,B_K$ of radius $\frac{\alpha \beta \gamma}{3}$ about the $K$ distinct-valued rows of $[\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]$; by Lemma \[ff\], these balls are almost always disjoint. The number of vertices which the block membership function $\tau$ assigns to each block is almost always at least $\alpha n$, thus (by Line (\[ccc\]) and the Pigeonhole Principle) almost always each ball $B_1,B_2,\ldots,B_K$ contains exactly one of the $K$ distinct-valued rows of ${\mathcal C}$. And, for any fixed $\epsilon > \frac{3}{4}$, the number of misassignments from $\hat{\tau}$ is thus almost always less than $n^\epsilon$. Theorem \[a\] is now proven in the scenario where there is a single modality of communication and it is known that the rows of $M$ are pairwise nonequal. 
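In practice, the misassignment count minimized over bijections of the block labels can be computed exactly by solving a small assignment problem; the sketch below (illustrative only, with 0-based labels) uses SciPy's Hungarian-algorithm routine on the confusion matrix of $\tau$ versus $\hat{\tau}$.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def num_misassignments(tau, tau_hat, K):
    """min over bijections pi of |{ j : tau(j) != pi(tau_hat(j)) }|, labels in 0..K-1."""
    confusion = np.zeros((K, K), dtype=int)  # confusion[k, l] = #{j : tau(j)=k, tau_hat(j)=l}
    for t, t_hat in zip(tau, tau_hat):
        confusion[t, t_hat] += 1
    # Choosing the bijection that maximizes the number of agreements is an
    # assignment problem, solved exactly by the Hungarian algorithm.
    row, col = linear_sum_assignment(-confusion)
    return len(tau) - confusion[row, col].sum()
```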
In the general case where there are multiple modalities of communication and/or the rows of $M$ are not known to be pairwise nonequal, then the above proof holds mutatis mutandis (affecting relevant bounds by at most a constant factor); in place of $X$ use $Y$ or $[X|Y]$ or $[X^{(1)}|X^{(2)}| \cdots | X^{(S)}]$ or $[Y^{(1)}|Y^{(2)}| \cdots | Y^{(S)}]$ or $[X^{(1)}|X^{(2)}| \cdots | X^{(S)}|Y^{(1)}|Y^{(2)}| \cdots | Y^{(S)} ]$ and in place of $[\tilde{{{\mathcal X}}}_\ell | \tilde{{{\mathcal X}}}_c ]$ use $[\tilde{{{\mathcal Y}}}_\ell | \tilde{{{\mathcal Y}}}_c ]$ or $[\tilde{{{\mathcal X}}}_\ell | \tilde{{{\mathcal X}}}_c | \tilde{{{\mathcal Y}}}_\ell | \tilde{{{\mathcal Y}}}_c ]$ or $[\tilde{{{\mathcal X}}}^{(1)}_\ell | \tilde{{{\mathcal X}}}^{(1)}_c | \tilde{{{\mathcal X}}}^{(2)}_\ell | \tilde{{{\mathcal X}}}^{(2)}_c | \cdots | \tilde{{{\mathcal X}}}^{(S)}_\ell | \tilde{{{\mathcal X}}}^{(S)}_c ]$, or $[\tilde{{{\mathcal Y}}}^{(1)}_\ell | \tilde{{{\mathcal Y}}}^{(1)}_c | \tilde{{{\mathcal Y}}}^{(2)}_\ell | \tilde{{{\mathcal Y}}}^{(2)}_c | \cdots | \tilde{{{\mathcal Y}}}^{(S)}_\ell | \tilde{{{\mathcal Y}}}^{(S)}_c ]$, or $[\tilde{{{\mathcal X}}}^{(1)}_\ell | \tilde{{{\mathcal X}}}^{(1)}_c | \tilde{{{\mathcal X}}}^{(2)}_\ell | \tilde{{{\mathcal X}}}^{(2)}_c | \cdots | \tilde{{{\mathcal X}}}^{(S)}_\ell | \tilde{{{\mathcal X}}}^{(S)}_c | \tilde{{{\mathcal Y}}}^{(1)}_\ell | \tilde{{{\mathcal Y}}}^{(1)}_c | \tilde{{{\mathcal Y}}}^{(2)}_\ell | \tilde{{{\mathcal Y}}}^{(2)}_c | \cdots | \tilde{{{\mathcal Y}}}^{(S)}_\ell | \tilde{{{\mathcal Y}}}^{(S)}_c ]$, as appropriate, and similar kinds of adjustments. Consistent estimation for the number of blocks $K$ \[e\] ======================================================== In this section we provide a consistent estimator $\hat{K}$ for the number of blocks $K$, if indeed $K$ is not known. (The only assumption used is our basic underlying assumption that an upper bound $R$ is known for $\textup{rank}M$.) To simplify the notation, in this section we assume that there is only one modality of communication and we also assume that it is known that the rows of $M$ are distinct-valued. These simplifying assumptions do not affect the results we obtain, and the analysis can be easily generalized to the general case in the same manner as was done at the end of Section \[f\]. In the adjacency-spectral partitioning procedure from Section \[b\], recall that one of the steps was to compute ${\mathcal C} \in {\mathbb{R}}^{n \times R}$ which minimized $\| C-X \|_F$ over all matrices $C \in {\mathbb{R}}^{n \times R}$ such that there are at most $K$ distinct-valued rows in $C$. Then the block assignment function $\hat{\tau}$ was defined as partitioning the vertices into $K$ blocks according to equal-valued corresponding rows in ${\mathcal C}$. Let us now generalize the procedure of Section \[b\]. Suppose that, for any fixed positive integer $K'$, we instead compute ${\mathcal C} \in {\mathbb{R}}^{n \times R}$ which minimizes $\| C-X \|_F$ over all matrices $C \in {\mathbb{R}}^{n \times R}$ such that there are at most $K'$ distinct-valued rows in $C$. Then the block assignment function $\hat{\tau}$ is defined as partitioning the vertices into $K'$ parts (some possibly empty) according to equal-valued corresponding rows in ${\mathcal C}$. We shall call this adjusted procedure “the adjacency-spectral partitioning procedure from Section \[b\] with $K'$ parts." \[fff\] Let real number $\xi$ such that $\frac{3}{8}<\xi < \frac{1}{2}$ be chosen and fixed. 
For the adjacency-spectral procedure from Section \[b\] with $K'$ parts, if $K'=K$ then almost always $\| {\mathcal C}-X\|_F \leq n^\xi$, and if $K'<K$ then almost always $\| {\mathcal C}-X\|_F > n^\xi$. [**Proof:**]{} Using Equation (\[aaa\]), Corollary \[ee\], Corollary \[gg\], and Note \[hh\] in the manner used to prove Line (\[bbb\]), we obtain that almost always $\| \ [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]-X \|_F \leq n^\xi$, and that if $K'=K$ then almost always $\| {\mathcal C}-X\|_F \leq n^\xi$.\ However, if $K'<K$ then, as we did in Section \[f\], consider balls $B_1,B_2,\ldots,B_K$ of radius $\frac{\alpha \beta \gamma}{3}$ about the $K$ distinct-valued rows of $[\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]$. By Lemma \[ff\], these balls are almost always disjoint and, in fact, their centers are almost always at least $\alpha \beta \gamma$ distance one from the other. By the pigeonhole principle, there is at least one ball that contains none of the $K'$ distinct-valued rows of ${\mathcal C}$. Together with the fact that each block almost always has more than $\alpha n$ vertices, we obtain almost always that $\| {\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}] \|_F \geq \sqrt{\alpha n \left ( \frac{\alpha \beta \gamma}{3} \right )^2} $. Thus, almost always $\| {\mathcal C} - X \| \geq \| {\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}] \|_F - \| \ [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]-X \|_F>n^\xi$. $\qed$\ Let real number $\xi$ such that $\frac{3}{8}<\xi < \frac{1}{2}$ be chosen and fixed. Define the random variable $\hat{K}$ to be the least positive integer $K'$ such that for the adjacency-spectral procedure from Section \[b\] with $K'$ parts it happens that $\| {\mathcal C}-X\|_F \leq n^\xi$. By Theorem \[fff\], we have the following consistency result for $\hat{K}$. \[ggg\] Almost always $\hat{K}=K$. The extended adjacency-spectral partitioning procedure \[hhh\] ============================================================== The adjacency-spectral partitioning procedure of Section \[b\] assumed that an integer $R$ was known such that $R \geq \textup{rank}M$, but it also assumed that the number of blocks $K$ was known. We next extend the adjacency-spectral partitioning procedure of Section \[b\] (we call it “the extended adjacency-spectral partitioning procedure") so that it only has the assumption that an integer $R$ is known such that $R \geq \textup{rank}M$, and it is not assumed that $K$ is known. The procedure is as follows: Let real number $\xi$ such that $\frac{3}{8}<\xi < \frac{1}{2}$ be chosen and fixed. Successively for $K'=1,2,3\ldots$, do the spectral partitioning procedure of Section \[b\] with $K'$ parts until it happens that $\| {\mathcal C} - X \|_F \leq n^\xi$, then return the $\hat{\tau}$ from the last successive iteration (i.e. the iteration where $K'=\hat{K}$). \[iii\] With the extended adjacency-spectral partitioning procedure, for any fixed $\epsilon>\frac{3}{4}$, the number of misassignments $\min_{\textup{bijections }\pi: \{ 1,2,\ldots,K \} \rightarrow \{ 1,2,\ldots,K \} } | \{ j=1,2,\ldots,n: \tau(j) \ne \pi(\hat{\tau}(j)) \} |$ is almost always less than $n^\epsilon$. [**Proof:**]{} Indeed, almost always the last value of $K'$ (which is $\hat{K}$) is equal to $K$ by Theorem \[ggg\], and then almost always the number of misassignments is less than $n^\epsilon$ by Theorem \[a\]. 
$\qed$ Another consistent estimator for $K$ \[nnn\] ============================================ In Section \[e\] we provided the consistent estimator $\hat{K}$ for the number of blocks $K$. It was based on Theorem \[fff\], which contrasted—for the adjacency-spectral procedure from Section \[b\] with $K'$ parts—what would happen when $K'=K$ versus when $K'<K$. In this section we are interested in contrasting—for the adjacency-spectral procedure from Section \[b\] with $K'$ parts—what would happen when $K'=K$ versus when $K'>K$. This yields another consistent estimator for $K$. For the adjacency-spectral procedure from Section \[b\] with $K'$ parts, the at-most $K'$ distinct-valued rows of ${\mathcal C}$ will be called the [*centroids*]{}, the [*centroid separation*]{} will refer to the minimum Euclidean distance between all pairs of distinct centroids, and the [*minimum part size*]{} will refer to the least cardinality of the $K'$ parts as partitioned by $\hat{\tau}$; in particular, if one of the parts is empty then the minimum part size is zero, whereas the centroid separation would still be positive. \[mmm\] For the adjacency-spectral procedure from Section \[b\] with $K'$ parts, if $K'=K$ then almost always the minimum part size is greater than $\alpha n$ and the centroid separation is at least $\frac{\alpha \beta \gamma}{3}$. Let $\zeta>0$ and $\vartheta >0$ be any fixed real numbers. If $K'>K$ then almost always it will [**not**]{} hold that the minimum part size is greater than $\vartheta n$ and the centroid separation is at least $\zeta$. \[ddd\] [**Proof:**]{} As we did in Section \[f\], consider balls $B_1,B_2,\ldots,B_K$ of radius $\frac{\alpha \beta \gamma}{3}$ about the $K$ distinct-valued rows of $[\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]$. By Lemma \[ff\], these balls are almost always disjoint and, in fact, their centers are almost always at least $\alpha \beta \gamma$ distance one from the other. If $K=K'$ then recall from Section \[f\] that almost always each ball contains exactly one centroid. By the $\alpha \beta \gamma$ separation between the balls’ centers, we thus have almost always that the centroid separation is at least $\frac{\alpha \beta \gamma}{3}$. Also, by Theorem \[a\] there is almost always a strictly sublinear number of misassignments, hence almost always the minimum part size is greater than $\alpha n$. Now to the case of $K' > K$. Suppose by way of contradiction that the minimum part size is greater than $\vartheta n$ and the centroid separation is at least $\zeta$. Since there are strictly more centroids than balls $B_1,B_2,\ldots,B_K$, and because of the $\zeta$ separation between the centroids, by the pigeonhole principle there is at least one centroid with distance greater than $\frac{\zeta}{3}$ from each row of $[\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]$ (these rows are the centers of the balls). Since this centroid appears as a row of ${\mathcal C}$ more than $\vartheta n$ times, this would imply that $\|{\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]\|_F \geq \sqrt{\vartheta n \left ( \frac{\zeta}{3} \right )^2}$. 
However we have by the triangle inequality, the definition of ${\mathcal C}$, and the first few lines of the proof of Theorem \[fff\] that almost always $\|{\mathcal C} - [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]\|_F \leq \|{\mathcal C} - X \|_F + \| X- [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]\|_F \leq \|{\mathcal C}_K - X \|_F + \| X- [\tilde{{{\mathcal X}}}_\ell |0^{n \times (R - \textup{rank}M)}]\|_F \leq 2n^\xi < \sqrt{\vartheta n \left ( \frac{\zeta}{3} \right )^2}$ (where $\xi$ such that $\frac{3}{8}<\xi < \frac{1}{2}$ is fixed and ${\mathcal C}_K$ denotes what ${\mathcal C}$ would have been if we instead did the adjacency-spectral procedure from Section \[b\] with $K$ parts instead of $K'$ parts), which gives us the desired contradiction. $\qed$\ With Theorem \[mmm\] we obtain another consistent estimator for $K$. However, we would need to assume that positive real numbers $\zeta$ and $\vartheta$ are known that satisfy $\vartheta \leq \alpha$ and $\zeta \leq \frac{\alpha \beta \gamma}{3}$. Assuming that such $\zeta$ and $\vartheta$ are indeed known, we can define the random variable $\check{K}$ to be the greatest positive integer $K'$ among the values $1,2,3,\ldots \lfloor \frac{1}{\vartheta} \rfloor$ (note that $\frac{1}{\vartheta}$ is an upper bound on $K$) such that for the adjacency-spectral procedure from Section \[b\] with $K'$ parts the minimum part size is greater than $\vartheta n$ and the centroid separation is at least $\zeta$. By Theorem \[mmm\] we immediately obtain the following consistency result for $\check{K}$. Almost always $\check{K}=K$. In order to define $\check{K}$, lower bounds on $\frac{\alpha \beta \gamma}{3}$ and $\alpha$ need to be known, in addition to an upper bound on $\textup{rank}M$. This contrasts with $\hat{K}$, for which we only need to assume that an upper bound on $\textup{rank}M$ is known. (Because $\hat{K}$ requires fewer assumptions, the extended adjacency-spectral partitioning procedure in Section \[hhh\] utilizes $\hat{K}$ and not $\check{K}$.) Nonetheless, it is useful to be aware of how the adjacency-spectral procedure from Section \[b\] with $K'$ parts changes in behavior when $K'$ becomes greater than $K$—besides how it changes in behavior when $K'$ becomes less than $K$. And when lower bounds on $\frac{\alpha \beta \gamma}{3}$ and $\alpha$ are also known then, in practice for a single value of $n$, we can check for $\hat{K}=\check{K}$ in order to have more confidence that their common value is indeed $K$. A simulated example and discussion \[ppp\] ========================================== As an illustration, consider the stochastic block model with parameters $$\label{eq:param1} K=3,\quad \rho= \left [ \begin{array}{c} .3 \\ .3 \\ .4 \end{array} \right ] \quad M=\left [ \begin{array}{ccc} .205 & .045 & .150 \\ .045 & .205 & .150 \\ .150 & .150 & .180 \end{array} \right ],$$ (in particular, there is only one modality of communication) and suppose edges are undirected. Here rank$M=2$. For each of the values $R=1,2,3,10,25$ and for each number of vertices $n=100,200,300,\ldots,1400$, we generated $2500$ Monte Carlo replications of this stochastic block model and to each of these $2500$ realizations we applied the adjacency-spectral partitioning procedure of Section \[b\] using $R$ as the upper bound on rank$M$ (which, in the case of $R=1$, is purposely incorrect for illustration purposes) assuming that we know $K=3$. 
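A rough sketch of this experiment on a much smaller scale (far fewer replicates and values of $n$ than reported, and with $k$-means in place of the exact least squares clustering, as discussed next) can be assembled from the helper functions sketched earlier; everything here is illustrative.

```python
import numpy as np

rho = [0.3, 0.3, 0.4]
M = np.array([[0.205, 0.045, 0.150],
              [0.045, 0.205, 0.150],
              [0.150, 0.150, 0.180]])
K = 3

for R in (1, 2, 3, 10, 25):
    rates = []
    for n in (100, 500, 1000):
        errs = []
        for rep in range(20):                        # far fewer than 2500 replicates
            tau, (A,) = sample_sbm(n, rho, [M], rng=rep)
            A = np.triu(A, 1) + np.triu(A, 1).T      # undirected version of the draw
            tau_hat, _, _ = adjacency_spectral_partition(A, R, K, seed=rep)
            errs.append(num_misassignments(tau, tau_hat, K) / n)
        rates.append(np.mean(errs))
    print(R, rates)
```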
Note that rather than finding the actual minimum of $\|\mathcal{C}-X\|_F$, we use the $K$-means algorithm which approximates this minimum. The five curves in Figure \[qqq\] correspond to $R=1,2,3,10,25$, respectively, and they plot the mean fraction of misassignments (the number of misassigned vertices divided by the total number of vertices $n$, such fractions averaged over the $2500$ Monte Carlo replicates) along the $y$-axis, against the value of $n$ along the $x$-axis. ![image](thefigure){width="100.00000%"} Note that when $R=2$ the performance of the adjacency-spectral partitioning is excellent (in fact, the number of misassignments becomes effectively zero as $n$ gets to $1600$). Indeed, even when $R=10$ and $R=25$ (which are substantially greater than rank$M=2$) the adjacency-spectral partitioning performs very well. However, when $R=1$, which is not an upper bound on rank$M$ (violating our one assumption in this article), the misassignment rate of adjacency-spectral partitioning is almost as bad as chance. Next we will consider the estimator for $K$ proposed in Section \[e\]. Recall that this estimator is defined as $\hat{K}=\min \{K': \| {\mathcal C}_{K'} - X \|_F \leq n^\xi\}=\min \{K': \log_n (\|\mathcal{C}_{K'}-X\|_F)\leq \xi\}$ where $\mathcal{C}_{K'}$ is the $n\times R$ matrix of centroids associated with each vertex, the adjacency spectral clustering procedure in Section \[b\] is done with $K'$ parts, and $\xi\in(3/8,1/2)$ is fixed. We now consider stochastic block model parameters with stronger differences between blocks to illustrate the effectiveness of the estimator. In particular we let $$\label{eq:kestParam} K=3, \quad \rho= \begin{bmatrix} .3 \\ .3 \\ .4 \end{bmatrix} \quad M=\begin{bmatrix} .5 & .1 & .1 \\ .1 & .5 & .1 \\ .1 & .1 & .5 \end{bmatrix}$$ so that $\mathrm{rank}M=3$. For each $n=100,200,400,800,1600,3200,6400$ we generated $50$ Monte Carlo replications of this stochastic block model. To each of these $50$ realizations we performed the adjacency spectral clustering procedure using $R=3$ (Figure \[fig:kest\], left panel) and $R=6$ (Figure \[fig:kest\], right panel) as our upper bound but this time assuming $K$ is not known. We used $K'=2,3,4$ and computed the statistic $\log_n (\|\mathcal{C}_{K'}-X\|_F)$. Figure \[fig:kest\] shows the mean and standard deviation of this test statistic over the $50$ Monte Carlo replicates for each $R$, $K'$ and $n$. The results demonstrate that for $n=6400$, $\hat{K}$ is a good estimate when $R=3=\mathrm{rank}M$, provided we choose $\xi$ close to $3/8$. On the other hand for smaller values of $n$, our estimator will select too few blocks regardless of the choice of $\xi\in (3/8,1/2)$. Interestingly, choosing $\xi$ close to $3/8$, $\hat{K}$ always equals the true number of blocks when we let $R=6=2\mathrm{rank}M$, suggesting that this estimator has interesting behavior as a function of $R$. Note that for larger values of $\xi$, $\hat{K}$ will tend to be smaller, and for smaller values of $\xi$, $\hat{K}$ will tend to be larger. ![Test statistic for estimating $K$ using the parameters in Equation (\[eq:kestParam\]) for $R=3,6$ and $K'=2,3,4$. The unmarked dashed line shows $\xi=3/8$.[]{data-label="fig:kest"}](k_est_fig){width="\textwidth"} Discussion {#sec:disc} ========== Our simulation experiment for estimating $K$ demonstrates that good performance is possible for moderate $n$ under certain parameter selections. 
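For completeness, here is one way the statistic $\log_n\|\mathcal{C}_{K'}-X\|_F$ and the resulting $\hat{K}$ might be computed for a single realization, again reusing the earlier sketches; the $k$-means centers play the role of the rows of $\mathcal{C}_{K'}$, and the threshold $\xi$ is a user choice in $(3/8,1/2)$. The function name and default values are assumptions of this sketch.

```python
import numpy as np
from sklearn.cluster import KMeans

def k_hat(X, K_max, xi=0.4):
    """Least K' whose K'-part clustering cost satisfies log_n ||C_{K'} - X||_F <= xi."""
    n = X.shape[0]
    for K_prime in range(1, K_max + 1):
        km = KMeans(n_clusters=K_prime, n_init=10, random_state=0).fit(X)
        C = km.cluster_centers_[km.labels_]   # row j of C is the centroid assigned to vertex j
        if np.log(np.linalg.norm(C - X, 'fro')) / np.log(n) <= xi:
            return K_prime
    return K_max
```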
This good performance buttresses the theoretical and practical interest in the estimator, which may serve as a stepping stone for the development of other, more effective estimators. Indeed, bounds shown in [@Oliveira] suggest that it may be possible to allow $\xi$ to be as small as $1/4$ using different proof methods. Such operator-norm bounds are an important area for further investigation when considering spectral techniques for inference on random graphs. Note additionally that for our first simulation, we used $k$-means rather than minimizing $\|\mathcal{C}-X\|_F$ since the latter is computationally infeasible. This, together with fast methods to compute the singular value decomposition, indicates that this method can be used even on quite large graphs. For even larger graphs, there are also techniques to approximate the singular value decomposition that should be considered in future work. Further extensions of this work can be made in various directions. Rohe et al. [@RCY] and others allow for the number of blocks to grow. We believe that this method could be extended to this scenario, though careful analysis is necessary to show that the estimator for the number of blocks is still consistent. Another avenue is the problem of missing data, in the form of missing edges; results for this setting follow immediately provided that the edges are missing uniformly at random. This is because the observed graph will still be a stochastic block model with the same block structure. Other forms of missing data are deserving of further study. Sparse graphs are also of interest and this work can likely be extended to the case of moderately sparse graphs, for example with minimum degree $\Theta(n/\sqrt{\log n})$, without significant additional machinery. Another form of missing data arises because we consider graphs with no self-loops: the diagonal entries of the adjacency matrix are all zeros. Marchette et al. [@Marchette] and Scheinerman and Tucker [@Scheinerman] both suggest methods to impute the diagonals, and this has been shown to improve inference in practice. This is related to one final point to mention: Is it better to do spectral partitioning on the adjacency matrix (as we do here in this article) or on the Laplacian (to be used in place of the adjacency matrix in our procedure of this article)? There doesn’t currently seem to be a clear answer; for some choices of stochastic block model parameters it seems empirically that the adjacency matrix gives fewer misassignments than the Laplacian, and for other choices of parameters the Laplacian seems to be better. A determination of exact criteria (on the stochastic block model parameters) for which the adjacency matrix is better than the Laplacian and vice versa deserves attention in future work. But the analysis that we used here to reduce the required knowledge of the model parameters and to show robustness in the procedure will hopefully serve as an impetus to achieve formal results for spectral partitioning in the nonparametric setting for which the block model assumptions don’t hold.\ [**Acknowledgements:**]{} This work (all authors) is partially supported by National Security Science and Engineering Faculty Fellowship (NSSEFF) grant number N00244-069-1-0031, Air Force Office of Scientific Research (AFOSR), and Johns Hopkins University Human Language Technology Center of Excellence (JHU HLT COE). We also thank the editors and the anonymous referees for their valuable comments and critiques that greatly improved this work. [9]{} P.J. Bickel and A. 
Chen, A nonparametric view of network models and Newman-Girvan and other modularities, *Proceedings of the National Academy of Sciences of the United States of America* [**106**]{} (2009). P.J. Bickel, A. Chen, and E. Levina, The method of moments and degree distributions for network models, *The Annals of Statistics* [**39**]{} (2011), pages 2280–2301. D.S. Choi, P.J. Wolfe, and E.M. Airoldi, Stochastic blockmodels with growing number of classes (2010), preprint. K. Chaudhuri, F. Chung, A. Tsiatas, Spectral Clustering of Graphs with General Degrees in the Extended Planted Partition Model, [*Journal of Machine Learning Research: Workshop and Conference Proceedings*]{}, (2012) pages 1–23 F. Chung, L. Lu, V. Vu, The spectra of random graphs with given expected degrees, [*Internet Mathematics*]{} [**1**]{} (3) (2004) 257–275. A. Coja-Oghlan, Graph partitioning via adaptive spectral techniques, [*Combinatorics, Probability and Computing*]{} [**19**]{}(02) (2010) pages 227–284 A. Condon and R.M. Karp, Algorithms for graph partitioning on the planted partition model, *Random Structures and Algorithms* [**18**]{} (2001), pages 116–140. C. Davis and W.M. Kahan, The rotation of eigenvectors by a perturbation III, *SIAM J. Numer. Anal.* [**7**]{} (1970), pages 1–46. P. Fjallstrom, Algorithms for Graph Partitioning: A Survey, [*Computer and Information Science*]{}, [**[3]{}**]{}(10) (1998) S. Fortunato, Community Detection in graphs, [*Physics Reports*]{} [**486**]{} (2010) pages 74-174 P. Hoff, A. Rafferty, and M. Handcock, Latent space approaches to social network analysis. *Journal of the American Statistical Association* [**97**]{} (2002), pages 1090–1098. P.W. Holland, K. Laskey, and S. Lienhardt, Stochastic blockmodels: First steps, *Social Networks* [**5**]{} (1983), pages 109–137. R.A. Horn, C.R. Johnson, *Matrix Analysis*, Cambridge University Press, (1985). T.C. Hu, F. Moricz, and R.L. Taylor, Strong laws of large numbers for arrays of rowwise independent random variables, *Acta Math. Hung* [**54**]{} (1989), pages 153–162. B. Karrer , M. E. J. Newman, Stochastic blockmodels and community structure in networks, [*Physical Review E*]{}, [**83**]{} 1 (2011), D. J. Marchette, C. E. Priebe, and G. Coppersmith, Vertex nomination via attributed random dot product graphs, In [*Proceedings of the 57th ISI World Statistics Congress*]{} (2011) F. McSherry, Spectral partitioning of random graphs, *42nd IEEE Symposium on Foundations of Computer Science* (2001), pages 529–537. M. Newman and M. Girvan, Finding and evaluating community structure in networks, *Physical Review* [**69**]{} (2004), pages 1–15. R. I. Oliveira, Concentration of the adjacency matrix and of the laplacian in random graphs with independent edges, Arxiv preprint ArXiv:0911.0600 (2010). K. Rohe, S. Chatterjee, and B. Yu, Spectral clustering and the high-dimensional stochastic blockmodel, *The Annals of Statistics* [**39**]{} (2011), pages 1878–1915. K. Rohe and B. Yu, Co-clustering for Directed Graphs; the Stochastic Co-Blockmodel and a Spectral Algorithm, Arxiv preprint arXiv:1204.2296 (2012). E. Scheinerman and K. Tucker, Modeling graphs using dot product representations. [*Computational Statistics*]{}, [**25**]{} (2010). T. Snijders and K. Nowicki, Estimation and prediction for stochastic block models for graphs with latent block structure, *Journal of Classification* [**14**]{} (1997), pages 75–100. D. Sussman, M. Tang, D.E. Fishkind, C.E. 
Priebe, A consistent adjacency spectral embedding for stochastic blockmodel graphs, submitted for publication. Available at Y.J. Wang and G.Y. Wong, Stochastic blockmodels for directed graphs, *Journal of the American Statistical Association* [**82**]{} (1987). S. Young and E. Scheinerman, Random dot product models for social networks, *Proceedings of the 5th International Conference on Algorithms and Models for the Web-graph* (2007), pages 138–149. F. Zhang and Q. Zhang, Eigenvalue inequalities for matrix product, *IEEE Transaction on Automatic Control* [**51**]{} (2006), pages 1506–1509.
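As a complement to the simulation remarks in the discussion above, where $k$-means on the spectral embedding replaces the Frobenius-norm minimization, the following is a minimal, self-contained sketch of that embedding-plus-clustering step. It assumes NumPy and scikit-learn; the two-block model parameters, sample size, and embedding dimension are illustrative choices, not the values used in the article's experiments.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Illustrative 2-block stochastic block model (parameters are assumptions).
n = 600
B = np.array([[0.42, 0.21],
              [0.21, 0.42]])
tau = rng.integers(0, 2, size=n)              # true block memberships
A = rng.binomial(1, B[np.ix_(tau, tau)])
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, hollow adjacency matrix

# Adjacency spectral embedding: top-d singular vectors scaled by sqrt singular values.
d = 2
U, s, _ = np.linalg.svd(A)
Xhat = U[:, :d] * np.sqrt(s[:d])

# k-means on the embedded points instead of minimizing ||C - X||_F.
labels = KMeans(n_clusters=d, n_init=10, random_state=0).fit_predict(Xhat)
err = min(np.mean(labels != tau), np.mean(labels != 1 - tau))   # up to label swap
print(f"misassignment rate: {err:.3f}")
```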
--- abstract: 'Deep neural networks have been shown to lack robustness to small input perturbations. The process of generating the perturbations that expose the lack of robustness of neural networks is known as adversarial input generation. This process depends on the goals and capabilities of the adversary. In this paper, we propose a unifying formalization of the adversarial input generation process from a formal methods perspective. We provide a definition of robustness that is general enough to capture different formulations. The expressiveness of our formalization is shown by modeling and comparing a variety of adversarial attack techniques.' author: - | Tommaso Dreossi, Shromona Ghosh, Alberto Sangiovanni-Vincentelli, Sanjit A. Seshia[^1]\ University of California, Berkeley, USA bibliography: - 'biblio.bib' title: A Formalization of Robustness for Deep Neural Networks --- [^1]: The first two authors contributed equally to this work. This work was supported in part by NSF grants 1545126 (VeHICaL), 1646208, 1739816, and 1837132, the DARPA BRASS program under agreement number FA8750-16-C0043, the DARPA Assured Autonomy program, the iCyPhy center, and Berkeley Deep Drive.
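Only the abstract is reproduced here, so the sketch below is an assumption on our part rather than the paper's formalization: it illustrates one common instantiation of local robustness, namely that a classifier $f$ is robust at a point $x$ within an $\ell_\infty$ ball of radius $\varepsilon$ if no perturbation inside the ball changes its prediction. Random sampling, as used here, can only falsify robustness; a "True" result is evidence, not a guarantee.

```python
import numpy as np

def locally_robust(f, x, eps, n_samples=1000, rng=None):
    """Empirically probe l_inf local robustness of classifier f at point x.

    Returns (robust, counterexample). Sampling can only falsify robustness.
    """
    rng = rng or np.random.default_rng(0)
    y = f(x)
    for _ in range(n_samples):
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        if f(x_adv) != y:
            return False, x_adv          # adversarial input found
    return True, None

# Toy usage with a linear "classifier" (purely illustrative).
w, b = np.array([1.0, -2.0]), 0.1
f = lambda x: int(x @ w + b > 0)
print(locally_robust(f, np.array([0.3, 0.1]), eps=0.05)[0])
```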
--- abstract: 'Due to their atomic-scale thickness, the resonances of 2D material membranes show signatures of nonlinearities at amplitudes of only a few nanometers. While the linear dynamics of membranes is well understood, the exact relation between the nonlinear response and the resonator’s material properties has remained elusive. In this work, we propose a method to determine the Young’s modulus of suspended 2D material membranes from their nonlinear dynamic response. The method is demonstrated by interferometric measurements on graphene and MoS$_2$ resonators, which are electrostatically driven into the nonlinear regime at multiple driving forces. It is shown that a set of response curves can be fitted by the solutions of the Duffing equation using only one fit parameter, from which the Young’s modulus is extracted using membrane theory. Our method is fast, contactless, and provides a platform for high-frequency characterization of the mechanical properties of 2D materials.' author: - 'D. Davidovikj$^1$' - 'F. Alijani$^2$' - 'S. J. Cartamil-Bueno$^1$' - 'H. S. J. van der Zant$^1$' - 'M. Amabili$^3$' - 'P. G. Steeneken$^{1,2}$' title: 'Young’s modulus of 2D materials extracted from their nonlinear dynamic response' --- The remarkable mechanical properties of 2D material membranes have sparked interest for potential uses as pressure [@smith13pressure; @dolleman15], gas [@bunch12; @dolleman16] and mass [@sakhaee08mass; @atalaya10] sensors. For such applications it is essential to have accurate methods for determining their mechanical properties. One of the most striking properties of these ultra-thin materials is their high Young’s modulus. In order to measure the Young’s modulus, a number of static deflection techniques have been used, including Atomic Force Microscopy (AFM) [@hone08elastic; @poot08; @castellanos12; @castellanos12am], the pressurized blister test [@koenig11] and the electrostatic deflection method [@wong10; @nicholl15]. The most widely used method is AFM, where by performing a nanoindentation measurement at the center of a suspended membrane, the pre-tension ($n_0$) and Young’s modulus ($E$) are extracted from the force-deflection curve. Whereas AFM has been the method of choice for static studies, laser interferometry has proven to be an accurate tool for the dynamic characterization of suspended 2D materials, with dynamic displacement resolutions better than 20 fm$/\sqrt{\mathrm{Hz}}$ at room temperature [@bunch07; @castellanos13; @davidovikj16]. Since for very thin structures the resonance frequency is directly linked to the pre-tension in the membrane, these measurements have been used to mechanically characterize 2D materials in the linear limit [@bunch07; @castellanos13; @cartamil15; @wang15]. At high vibrational amplitudes nonlinear effects start playing a role, which have lately attracted a lot of interest [@eichler2011; @croy12; @eriksson13; @dealba16; @mathew16]. In particular, Duffing-type nonlinear responses have been regularly observed [@bunch07; @chen09; @chen13; @castellanos13; @davidovikj16]. These geometrical nonlinearities, however, have never been related to the intrinsic material properties of the 2D membranes. Here, we introduce a method for determining the Young’s modulus of 2D materials by fitting their forced nonlinear Duffing response. Using nonlinear membrane theory, we derive an expression that allows us to relate the fit parameters to both the pre-tension and Young’s modulus of the material.
The proposed method offers several advantages: (i) The excitation force is purely electrostatic, requiring no physical contact with the membrane that can influence its shape [@han15; @vella2017]; (ii) The on-resonance dynamic operation significantly reduces the required actuation force, compared to static deflection methods; (iii) The high-frequency resonance measurements allow for fast testing by averaging over millions of deflection cycles per second, using mechanical frequencies in the MHz range; (iv) The membrane motion is so fast that slow viscoelastic deformations due to delamination, slippage, and wall adhesion effects are strongly reduced. To demonstrate the method, we measure and analyze the nonlinear dynamic response of suspended 2D nanodrums. Measurements ============ The samples consist of cavities on top of which exfoliated flakes of 2D materials are transferred using a dry transfer technique [@castellanos14]. One of the measured devices, a few-layer (FL) graphene nanodrum, is shown in the inset of Fig. \[fig:Fig1\](a). The measurements are performed in vacuum at room temperature. Electrostatic force is used to actuate the membrane and a laser interferometer to detect its motion, as described in [@bunch07; @castellanos13; @wang15; @davidovikj16]. A schematic of the measurement setup is shown in Fig. \[fig:Fig1\](a). The details on the sample preparation and measurement setup are described in the Experimental Section below. ![\[fig:Fig1\] (a) Schematic of the measurement setup: a laser interferometer setup is used to read out the motion of the nanodrum. The Si substrate is grounded and, using a bias-tee (BT), a combination of ac- and dc voltage is applied to electrostatically actuate the motion of the drum. This motion modulates the reflected laser intensity and the modulation is read out by a photodiode. Inset: an optical image of a FL graphene nanodrum (scale bar: 2 $\mathrm{\mu m}$). (b) Frequency response curves of the calibrated root-mean-square (RMS) motion amplitude for increasing electrostatic driving force. The onset of nonlinearity is visible above $F_\mathrm{RMS}$ = 15 pN. The color of the curves indicates the corresponding driving force.](Figure1v13.pdf){width="45.00000%"} Fig. \[fig:Fig1\](b) shows a set of calibrated frequency response curves of the fundamental mode of this graphene drum (with thickness $h = 5 $ nm and radius $R = 2.5\,\mathrm{\mu m}$) driven at different ac voltages ($V_\mathrm{ac}$). The dc voltage is kept constant ($V_\mathrm{dc} = 3$ V) throughout the entire measurement with $V_\mathrm{dc}\gg V_\mathrm{ac}$. All measurements are taken using upward frequency sweeps. The RMS force $F_\mathrm{RMS}$ is the root-mean-square of the electrostatic driving force. For high driving amplitudes ($F_\mathrm{RMS} > 15 \mathrm{pN}$), the resonance peak starts to show a nonlinear hardening behavior, which contains information on the cubic spring constant of the membrane. 
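To give a rough sense of the scale of these electrostatic drives, the parallel-plate expression quoted later in the Experimental Section, $F_\mathrm{el}\approx\varepsilon_0\pi R^2 V_\mathrm{dc}V_\mathrm{ac}/g^2$, can be evaluated with the device parameters given in the text ($R=2.5~\mathrm{\mu m}$, cavity depth $g=385$ nm, $V_\mathrm{dc}=3$ V); the ac amplitudes swept in this hedged sketch are illustrative assumptions, not the values used in the experiment.

```python
import numpy as np

eps0 = 8.854e-12          # F/m, vacuum permittivity
R = 2.5e-6                # m, drum radius (device of Fig. 1)
g = 385e-9                # m, cavity depth quoted in the Experimental Section
Vdc = 3.0                 # V, dc bias used in the measurements

def f_rms(Vac):
    """RMS electrostatic drive force in the parallel-plate approximation."""
    return eps0 * np.pi * R**2 * Vdc * (Vac / np.sqrt(2)) / g**2

for Vac in (2e-3, 6e-3, 20e-3):      # illustrative ac drive amplitudes (assumed)
    print(f"Vac = {Vac*1e3:4.1f} mV  ->  F_RMS = {f_rms(Vac)*1e12:5.1f} pN")
```

With these assumed drives, ac voltages of only a few millivolts already correspond to the picunewton-scale forces at which the nonlinearity sets in, which is why the weak capacitive actuation described above suffices.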
Fitting the nonlinear response ============================== We can approximate the nonlinear response of the fundamental resonance mode by the Duffing equation (see Section I of the Supporting Information): $$m_\mathrm{eff}\ddot{x} + c\dot{x} + k_1 x + k_3 x^3 = \xi F_\mathrm{el} cos(\omega t), \label{Duffing}$$ where $x$ is the deflection of the membrane’s center, $c$ is the damping constant, $k_1$ and $k_3$ are the linear and cubic spring constants and $m_\mathrm{eff} = \alpha m$ and $\xi F_\mathrm{el}$ are the mass and the applied electrostatic force corrected by factors ($\alpha$ and $\xi$) that account for the mode-shape of the resonance (for a rigid-body vertical motion of the membrane $\alpha$ and $\xi$ are both 1). As shown in the Supporting Information Section I, for the fundamental mode of a fixed circular membrane $\xi=0.432$ and $\alpha=0.269$. The parameters in the Duffing equation (\[Duffing\]) are related to the resonance frequency $\omega_0$ ($\omega_0 = 2\pi f_0$) and the $Q$-factor by $Q=\omega_0 m_\mathrm{eff}/c$ and $\omega_0^2=k_1/m_\mathrm{eff}$. The fundamental resonance frequency ($f_0 = 14.7$ MHz) is extracted from the linear response curves at low driving powers (Fig. \[fig:Fig1\](b)), and is directly related to the pre-tension ($n_0$) of the membrane: $n_0 = 0.69 \pi^2 f_0^2 R^2\rho h $, where $\rho$ is the mass density of the membrane (in this case $n_0 =$ 0.107 N/m). In order to fit the set of nonlinear response curves, the steady-state solution of the Duffing equation (eq. \[Duffing\]) is converted to a set of algebraic equations using the harmonic balance method (see Section II of the Supporting Information). Using these equations, the entire set of curves can then be fitted by a least-squares optimization algorithm. Since $N$ curves are fitted simultaneously, the expected fitting error is roughly a factor $\sqrt{N}$ lower than that of single curve fit. ![\[fig:Fig2\] Measured traces (blue scatter plot) and the corresponding fits (red curves) showing both the stable (solid line) and the unstable (dashed line) solutions of the Duffing equation. (a)-(d) are frequency response curves of the device from Fig. \[fig:Fig1\] at four different driving forces, denoted in the top left corner of each panel, along with the extracted $Q$-factors. The extracted cubic spring constant is $k_3$ = 1.35 $\cdot 10^{15} \mathrm{N/m^3}$.](Figure2v5.pdf){width="45.00000%"} The $Q$-factor is implicitly related to $k_3$ by a function $Q_\mathrm{i}=Q_\mathrm{i}(k_3,A_\mathrm{max,i},F_\mathrm{el,i})$, where $A_\mathrm{max,i}$ are the peak amplitudes and $F_\mathrm{el,i}$ are the driving force amplitudes for each of the measured curves [@lifshitz08; @amabili16] (see Section II of the Supporting Information). The amplitudes $A_\mathrm{max,i}$ are found from the experimental data and the whole dataset is fitted using a single fit parameter: the cubic spring constant $k_3$. The results of this procedure are presented in Fig. \[fig:Fig2\](a-d), which shows four frequency response curves and their corresponding fits. The solutions of the steady-state amplitude for the Duffing equation (red curves in Fig. \[fig:Fig2\]) are plotted by finding the positive real roots $x^2$ of: $$\begin{aligned} \xi^2 F_\mathrm{el}^2&=& (\omega^2 c^2 + m_\mathrm{eff}^2(\omega^2-\omega_0^2)^2) x^2 \nonumber \\ & &- \frac{3}{2} m_\mathrm{eff}(\omega^2-\omega_0^2) k_3 x^4 + \frac{9}{16} k_3^2 x^6. 
\label{Duffing2}\end{aligned}$$ A good agreement between fits and data is found using the single extracted value $k_3$ = $1.35~\cdot~10^{15}~ \mathrm{N/m^3}$, which demonstrates the correspondence between the measurement and the underlying physics. We note that at higher driving amplitudes, we also observe a reduction in the $Q$-factor (by nearly 10% at the highest measured driving amplitude). This can be a signature of nonlinear damping mechanisms which is in line with previously reported measurements on graphene mechanical resonators  [@eichler2011; @croy12; @singh16]. In the following section, we will lay out the theoretical framework to relate the extracted cubic spring constant $k_3$ to the Young’s modulus of the membrane. Theory ====== The nonlinear mechanics of a membrane can be related to its material parameters via its potential energy. The potential energy of a radially deformed circular membrane with isotropic mechanical properties can be approximated by a function of the form: $$U = \frac{1}{2} C_1(\nu)n_0 x^2 + \frac{1}{4} C_3 (\nu)\frac{E h\pi}{R^2} x^4, \label{potE}$$ where $R$ and $h$ are the membrane’s radius and thickness respectively. Bending rigidity is neglected, which is a good approximation for $h/R > 1/1000$ [@mansfield2005]. $C_1(\nu)$ and $C_3(\nu)$ are dimensionless functions that depend on the deformed shape of the membrane and the Poisson’s ratio $\nu$ of the material. The term in equation (\[potE\]) involving $C_1$ represents the energy required to stretch a membrane under a constant tensile pre-stress, the $C_3$ term signifies that the tension itself starts to increase for large membrane deformations. The out-of-plane modeshape for the fundamental resonance mode of a circular membrane is described by a zero-order Bessel function of the first kind ($J_0(r)$). Numerical calculations of the potential energy (\[potE\]) of this mode give $C_1(\nu)=1.56\pi n_0$ and $C_3(\nu)=1/(1.269-0.967 \nu - 0.269 \nu^2)$ (see Section I of the Supporting Information). Using equation (\[potE\]) the nonlinear force-deflection relation of circular membranes is given by $$\label{forceDefl} F= \frac{dU}{dx} = k_1 x + k_3 x^3 = C_1(\nu)n_0 x + C_3 (\nu)\frac{E h\pi}{R^2}x^3.$$ The functions $C_1$ and $C_3$ have previously been determined for the potential energies of statically deformed membranes by AFM  [@komaragiri05; @castellanos12] and uniform gas pressure  [@hencky15; @boddeti13]. Their functional dependence depends entirely on the shape of the deformation of the membrane. In Table 1 we summarize the functional dependences of $k_1$ and $k_3$ for the 3 types of membrane deformation. By combining eq.  \[forceDefl\] with the obtained functions for $C_1$ and $C_3$ from Table 1 (last row), the Young’s modulus $E$ can be determined from the cubic spring constant $k_3$ by $$E = \frac{(1.27-0.97 \nu - 0.27 \nu^2)R^2}{\pi h}k_3.$$ $k_1$ $k_3$ Def. shape ------------ --------------- ---------------------------------------------------------- ------------------------ AFM $ \pi n_0$ $\frac{1}{(1.05-0.15\nu-0.16\nu^2)^3}\frac{E h}{R^2}$ ![image](AFM.png) $\Delta P$ $4\pi n_0$ $\frac{8\pi}{3(1-\nu)}\frac{E h}{R^2}$ ![image](pressure.png) This work $1.56\pi n_0$ $\frac{\pi}{1.27-0.97 \nu - 0.27 \nu^2}\frac{ E h}{R^2}$ ![image](bessel.png) Table 1. $k_1$ and $k_3$ for AFM nanoindentation (AFM), bulge testing of membranes ($\Delta P$) and the nonlinear dynamics method (this work) for the fundamental resonance mode. 
The corresponding deformation shape, which determines the functional dependence of $k_1$ and $k_3$, is shown on the right. From this equation, with the value of $k_3$ extracted from the fits, a Young’s modulus of $E$ = $594~\pm~45$ GPa is found, which is in accordance with literature values that range from $430 - 1200$ GPa [@castellanos15_review; @isacsson2017_review]. Using this value, the nonlinear dynamic response of the system can be modeled for different driving powers and frequencies. Figure \[fig:Fig4\] shows color plots representing the RMS amplitude of the motion of the membrane center as a function of frequency and driving force. Excellent agreement is found between the experiment (Fig. \[fig:Fig4\](a)) and the model (Fig. \[fig:Fig4\](b)). ![\[fig:Fig4\] Comparison of the RMS motion amplitude ($x_\mathrm{RMS}$) between experiment (a) and model (b) using the identified value for the Young’s modulus ($E$ = 594 GPa) for the device shown in Fig. \[fig:Fig1\]. ](Figure4v4.pdf){width="45.00000%"} In order to confirm the validity of the method, we performed an AFM nanoindentation measurement on the same graphene drum. A force-deflection measurement, taken at the center of the drum, is plotted in Fig. \[fig:afm\] (black dots). The curve is fitted by the AFM force-deflection equation given in Table 1, yielding $E$=591 GPa and $n_0$=0.093 N/m (red curve in Fig. \[fig:afm\]). The blue curve shows the expected force-deflection curve based on the values for the Young’s modulus and pre-tension extracted from the nonlinear dynamic response fits. The two curves are in close agreement. ![\[fig:afm\]AFM force-deflection curve during tip retraction and the corresponding fit (red curve). Inset shows the AFM image of the drum (scale bar is 1 $\mathrm{\mu m}$). The curve is taken at the center of the drum from Fig. \[fig:Fig1\] (marked by the red dot in inset). The blue curve represents the predicted AFM response using the $n_0 = 0.107$ N/m and $E=594$ GPa, obtained from the fit of the nonlinear dynamic response.](Figure5v5.pdf){width="40.00000%"} Finally, to demonstrate the versatility of the method, additional measurements on a MoS$_2$ nanodrum are presented in Fig. \[fig:Fig5\]. The extracted Young’s modulus ($E = 315 \pm 23$ GPa) is also in agreement with literature values ($E_\mathrm{MoS_2} = 140 - 430$ GPa [@castellanos12; @castellanos15_review]). The extracted pre-tension of the drum is $n_0 = 0.22$ N/m. ![\[fig:Fig5\] Measurement (blue dots) and fit (drawn red curve: stable solutions; dashed red curve: unstable solutions) of a 5 nm thick drum with a Young’s modulus of 315 GPa. ](otherMaterials_v4.pdf){width="33.00000%"} Discussion ========== There are several considerations that one needs to be aware of when applying the proposed method. In an optical detection scheme, such as the one presented in this work, the cavity depth has to be optimized so that the photodiode voltage is still linearly related to the motion at high amplitudes and the power of the readout laser has to be kept low to avoid significant effects of optothermal back-action [@barton12]. The proposed mathematical model assumes that the bending energy is much smaller than the membrane energy. This is valid for membranes under tension (thickness-to-radius ratio $h/R<0.001$) [@mansfield2005], as is most often the case with suspended 2D materials [@bunch07; @castellanos13; @cartamil15]. It is noted that the electrostatic force also has a nonlinear spring-softening component due to its displacement amplitude dependence.
However, in the current study, the vibration amplitudes are much smaller than the cavity depth and this contribution can be safely neglected (see Section III of the Supporting Information for derivation). Compared to conventional mechanical characterization methods [@hone08elastic; @poot08; @castellanos12; @castellanos12am; @koenig11; @wong10; @nicholl15], the presented method provides several advantages. Firstly, no physical contact to the flake is required. This prevents effects such as adhesion and condensation of liquids between an AFM tip and the membrane, that can influence the measurements. Moreover, the risk of damaging the membrane is significantly reduced. The on-resonance operation allows the usage of very small actuation forces, since the motion amplitude at resonance is enhanced by the $Q$-factor. Unlike AFM, where the force is concentrated in one point, here the force is more equally distributed across the membrane, resulting in a more uniform stress distribution. Additionally, for resonators with a high quality factor, the modeshape of vibrations is practically independent of the shape or geometry of the actuator. The high-frequency nature of the presented technique is advantageous, since it allows for fast characterization of samples, and might even be extended for fast wafer-scale characterization of devices. Every point of the frequency response curve corresponds to many averages of the full force-deflection curve (positive and negative part) which reduces the error of the measurement and eliminates the need of offset calibration of the zero point of displacement [@lifshitz08]. The close agreement between the AFM and nonlinear dynamics value for the Young’s modulus $E$ indicates that viscoelasticity, and other time dependent effects like slippage and relaxation, are small in graphene. Therefore, the dynamic stiffness is practically coinciding with the static stiffness. For future studies it is of interest to apply the method to study viscoelastic effects in 2D materials, where larger differences between AFM and resonant characterization measurements are expected. Conclusion ========== In conclusion, we provide a contactless method for characterizing the mechanical properties of suspended 2D materials using their nonlinear dynamic response. A set of nonlinear response curves is fitted using only one fit parameter: the cubic spring constant. Mathematical analysis of the membrane mechanics is used to relate the Duffing response of the membrane to its material and geometrical properties. These equations are used to extract the pre-tension and Young’s modulus of both graphene and MoS$_2$, which are in close agreement with nanoindentation experiments. The non-contact, on-resonant, high-frequency nature of the method provides numerous advantages, and makes it a powerful alternative to AFM for characterizing the mechanical properties of 2D materials. We envision applications in metrology tools for fast and non-contact characterization of 2D membranes in commercial sensors and actuators.\ Experimental section ==================== *Sample fabrication.* A chip with cavities is fabricated from a thermally oxidized Si wafer, with a thickness of 285 nm, using standard lithographic and metal deposition techniques. Circular cavities are etched into the oxide by using a 100 nm gold-palladium ($\mathrm{Au_{0.6}Pd_{0.4}}$) hard mask, which also functions as an electrical contact to the 2D flake. The final depth of the cavities is $g=$385 nm and their radii are $R = 2 - 2.5\mathrm{\mu m}$. 
The flakes of graphene and MoS$_2$ are exfoliated from natural crystals.\ *Measurement setup.* The sample is mounted in a vacuum chamber ($2\cdot 10^{-6}$ mbar) to minimize damping by the surrounding gas. Using the silicon wafer as a backgate, the membrane is driven by electrostatic force and its dynamic motion is detected using a laser interferometer (see [@davidovikj16]). The detection is performed at the center of the drum, using a Vector Network Analyzer (VNA). A dc voltage ($V_\mathrm{dc}$) is superimposed on the ac output of the VNA ($V_\mathrm{ac}$) through a bias-tee (BT), such that the small-amplitude driving force at frequency $\omega$ is given by $F_\mathrm{el}(t) =~\varepsilon_0 R^2\pi V_\mathrm{dc}V_\mathrm{ac}\cos{(\omega t)}/d^2$. The measured VNA signal (in V/V) is converted to a root-mean-squared amplitude ($x_\mathrm{RMS}$) of the drum motion, using a calibration measurement of the thermal motion taken with a spectrum analyzer [@bunch07; @hauer13; @davidovikj16]. Acknowledgments =============== This work was supported by the Netherlands Organisation for Scientific Research (NWO/OCW), as part of the Frontiers of Nanoscience (NanoFront) program and the European Union Seventh Framework Programme under grant agreement $\mathrm{n{^\circ}~604391}$ Graphene Flagship.\ Supporting Information {#supporting-information .unnumbered} ====================== 1. Equations of motion {#equations-of-motion .unnumbered} ---------------------- The strain energy of the circular membrane can be obtained as [@amabili08] $$U=\int_{0}^{2\pi}\int_{0}^{R} \frac{ Eh}{2 (1-\nu^2)} \Big(\epsilon_{rr}^2+\epsilon_{\theta \theta}^2+2\nu \epsilon_{rr} \epsilon_{\theta \theta}+\frac{1-\nu}{2}\gamma_{r \theta}^2\Big) r dr d\theta ,$$ where $E$ is the Young’s modulus, $\nu$ is the Poisson’s ratio, $h$ is the thickness and $R$ is the radius of the membrane.
Moreover, $\epsilon_{rr}$, $\epsilon_{\theta \theta}$, and $\gamma_{r \theta}$ are the normal and shear strains that are determined as $$\epsilon_{rr}=\frac{\partial u}{\partial r}+\frac{1}{2}\Big(\frac{\partial w}{\partial r}\Big)^2 ,$$ $$\epsilon_{\theta \theta}=\frac{\partial v}{r \partial \theta}+\frac{u}{r}+\frac{1}{2}\Big(\frac{\partial w}{r \partial \theta}\Big)^2,$$ $$\gamma_{r \theta}=\frac{\partial v}{ \partial r}-\frac{v}{r}+\frac{\partial u}{r \partial \theta}+\Big(\frac{\partial w}{\partial r}\Big)\Big(\frac{\partial w}{r \partial \theta}\Big),$$ where $u$, $v$ and $w$ are the radial, tangential and transverse displacements respectively. For a membrane with fixed edges, $u$ and $w$ shall vanish at $r=R$. Moreover, $u$ should be zero at $r=0$ for continuity and symmetry. Assuming only axisymmetric vibrations ($v=0$ and $\partial /\partial \theta = 0$) and fixed edges, the solution is approximated as [@timoshenko59] $$w=x(t) J_{0} \Big(\alpha_{0}\frac{r}{R}\Big) ,$$ $$u= {u_{0} r}+ r (R-r) \sum_{k=1}^{\bar{N}} q_{k}(t) r^{k-1}.$$ Here it should be noted that for axisymmetric vibrations the shear strain $\gamma_{r \theta}$ would become zero. In eqs. (5a,b), $x(t)$ is the generalized coordinate associated with the fundamental axisymmetric mode and $q_{k}(t)$ are the generalized coordinates associated with the radial motion. Moreover, $J_{0}$ is the Bessel function of order zero, and $\alpha_{0}=2.40483$. In addition, $\bar{N}$ is the number of necessary terms in the expansion of radial displacement, and $u_{0}$ is the initial displacement due to pre-tension $n_{0}$ that is obtained from the initial stress $\sigma_0=n_{0}/h$ as follows: $$u_{0}=\frac {\sigma_{0} (1-\nu)}{E}.$$ The kinetic energy of the membrane, neglecting radial (i.e. in-plane) inertia, is given by $$T=\frac{1}{2}\rho h\int_{0}^{2\pi} \int_{0}^{R} \dot{w^2} r dr d\theta,$$ where the overdot indicates differentiation with respect to time $t$. In the presence of a transverse harmonic distributed force of constant direction, the virtual work done is $$W=2\pi\int_{0}^{R} p w r dr=\frac{2} {R^2}\int_{0}^{R} F_\mathrm{el} cos(\omega t) w r dr,$$ where $\omega$ is the excitation frequency and $F_{el}$ gives the force amplitude, positive in transverse direction. Higher-order terms in $w$ are neglected in eq. (8) [@amabili15]. The Lagrange equations of motion are $$\frac {d}{dt} (\frac{\partial T}{\partial \dot{\textbf{q}}})-\frac{\partial T}{\partial \textbf{q}}+\frac{\partial U}{\partial \textbf{q}}=\frac{\partial W}{\partial \textbf{q}} ,$$ and $\textbf{q}=[x(t),q_k(t)]$, $k=1,...,\bar{N}$, is the vector including all the generalized coordinates. Since radial inertia has been neglected, eq. (9) leads to a system of nonlinear equations comprising a single differential equation associated with the generalized coordinate $x(t)$ and $\bar{N}$ algebraic equations in terms of $q_{k}(t)$. By solving the $\bar{N}$ algebraic equations it is possible to determine $q_{k}(t)$ in terms of $x(t)$. This reduces the set of nonlinear equations to a single Duffing oscillator as follows: $$m_\mathrm{eff} \ddot{x}+c \dot{x}+k_{1}x+k_{3}x^3=\xi F_{el} cos(\omega t),$$ where $$m_\mathrm{eff}=0.847\rho hR^2 , \, k_{1}=4.897 n_{0} , \, \xi=0.432,$$ and $c$ is the damping coefficient that has been added to the equation of motion to introduce linear viscous dissipation.
Moreover, $k_{3}$ is the cubic stiffness, which is a function of the Young’s modulus and the Poisson’s ratio; its convergence and accuracy are assessed by using different numbers of terms in the expression of the radial displacement (eq. (5b)). The value of $k_3$ converges for $\bar{N}>3$ and its relation to the Young’s modulus can be determined by fixing the value of the Poisson’s ratio and numerically solving the set of $\bar{N}$ Lagrange equations. $k_3$ can be expressed in the form: $$k_3 = C_3 (\nu) \frac{Eh\pi}{R^2},$$ where $C_3$ is a dimensionless constant which is a function of the Poisson’s ratio. The solutions for $C_3$ as a function of $\nu$ are plotted in Figure S1 for values of the Poisson’s ratio between 0 and 0.35. ![\[fig:FigS1\] Numerical solutions for $C_3$ as a function of $\nu$ and the corresponding fit (red line). ](k3_nu_v2.pdf){width="45.00000%"} The relation between $C_3$ and $\nu$ is best described with a second-order polynomial, namely: $$C_3 = \frac{1}{1.269-0.967 \nu - 0.269 \nu^2}.$$ This functional dependence is similar to the one used for AFM nanoindentation measurements, often referred to as $q(\nu)$ [@hone08elastic]. Next, the following dimensionless parameters are introduced: $$\hat{t}=\omega t ,$$ $$\hat{x}=x/h.$$ By using eqs. (10) and (14) the following dimensionless equation of motion can be obtained: $$r^2 \ddot{\hat{x}}+\frac {1} {Q} r \dot{\hat{x}}+\hat{x}+\eta_{3}\hat{x}^3=\lambda cos(t) ,$$ where $$\omega_{0}=\sqrt{\frac{k_{1}}{m}} , Q=\frac{m\omega_{0}}{c} , \eta_3=\frac{k_{3} h^2}{k_{1}} , \lambda=\frac{\xi F_{el}}{m\omega_{0}^2 h} , r=\frac{\omega}{\omega_{0}} .$$ Eq. (15) is valid for studying nonlinear vibrations of membranes subjected to external harmonic excitation in the frequency neighborhood of the fundamental mode if the fundamental mode of vibration is not involved in an internal resonance with other modes. If this condition holds, then other modes accidentally excited will decay with time to zero due to the presence of damping [@nayfeh08]. In this work, it is assumed that this condition is preserved and therefore the response of the membrane is described by a single Duffing oscillator for performing nonlinear parameter estimation. 2. Nonlinear identification {#nonlinear-identification .unnumbered} --------------------------- In order to obtain the coefficients of the Duffing oscillator, here we utilize the harmonic balance method. This method is a suitable and accurate mathematical technique in which the solution of the nonlinear equation is approximated by a truncated Fourier series. In the case of the dimensionless Duffing oscillator (i.e. eq. (15)), a first-order truncation has been shown to provide accurate results [@nayfeh08]. Hence, $$\hat{x}\approx x_{1}\sin t+x_{2}\cos t.$$ Substituting equation (17) into equation (15) yields: $$x_{1}(1-r^2)-r \frac{1}{Q}x_2+\frac{3} {4}\eta_3 x_1 A^2 =0,$$ $$x_{2}(1-r^2)+r \frac{1}{Q}x_1+\frac{3} {4}\eta_3 x_2 A^2 =\lambda,$$ where $A=\sqrt{x_{1}^2+x_{2}^2}$ is the amplitude of motion. Moreover, $x_{1}=A \sin\phi$ and $x_{2}=A \cos\phi$, $\phi$ being the phase difference between the excitation and the response. From equations (18a) and (18b) the following analytic frequency-amplitude relation can be found: $$A^2 \Big[\Big((1-r^2)+\frac{3}{4}\eta_3 A^2\Big)^2+(\frac{r}{Q})^2\Big]=\lambda^2.$$ The idea of harmonic-balance-based parameter estimation is to follow a reverse path [@amabili16].
In other words, the identification is conducted by assuming that the vibration amplitude $A$ and frequency ratio $r$ are already known for every frequency step from experiments. Therefore, in order to obtain the unknown parameters, the following system should be solved for every $j$th frequency step, $r^{(j)}$: $$\begin{pmatrix} -r^{(j)}x_2&\frac{3}{4}x_1 A^2 \\ r^{(j)}x_1&\frac{3}{4}x_2 A^2 \end{pmatrix} \cdot \begin{bmatrix} \frac{1}{Q} \\ \eta_3\\ \end{bmatrix} = \begin{bmatrix} -x_1(1-(r^{(j)})^2) \\ -x_2(1-(r^{(j)})^2)+\lambda \end{bmatrix} \quad , j=[1:m]$$ System (20) can be compactly written as $\bar{A_{h}} \cdot X=\bar{B_{h}}$. This system is overdetermined since it contains $2\times m$ equations for two unknowns. In order to solve system (20) and to estimate the system parameters, the least-squares technique is used and the squared norm of the error $Er=\Big(\bar{A_{h}} \cdot X-\bar{B_{h}}\Big)^T\cdot\Big(\bar{A_{h}} \cdot X-\bar{B_{h}}\Big)$ should be minimized. Accordingly, here the pseudo-inverse of the matrix $\bar{A_h}$ is calculated and the solution is obtained as follows: $$X=\Big(\bar{A_h}^T \bar{A_h}\Big)^{-1} \bar{A_h}^T\cdot \bar{B_h}.$$ A problem in utilizing the least-squares technique is that the identified peak amplitudes in the frequency-response curves do not correspond to the ones obtained from the experiments. In order to resolve this issue, a correction on the quality factor is made by making use of the following expression (see [@amabili16] for the derivation details): $$Q=\frac{1}{2}\bigg[\sqrt{\frac{1}{2}+\frac{3}{8}\eta_3 A_{max}^2-\sqrt{(\frac{1}{2}+\frac{3}{8}\eta_3 A_{max}^2)^2-\frac{\lambda^2}{4 A_{max}^2}}}\bigg]^{-1} ,$$ in which $A_{max}$ is the experimentally measured peak amplitude for each frequency-amplitude curve. This reduces the nonlinear identification procedure to a single-parameter estimation algorithm for $\eta_3$. 3. Estimation of the electrostatic spring softening {#estimation-of-the-electrostatic-spring-softening .unnumbered} --------------------------------------------------- The electrostatic force acting on the membrane is given by $$F_{el} = \frac{dU_{el}}{dx} = -\frac{1}{2}\frac{dC_g}{dx}V^2,$$ where $U_{el} = -\frac{1}{2}C_g V^2$ is the electrostatic energy, $V = V_{dc} + V_{ac} cos(\omega t)$ is the applied voltage and $C_g$ is the gate capacitance. Assuming $x\ll g$, where $g$ is the gap between the membrane and the backgate, the gate capacitance can be approximated using a parallel plate capacitor model: $$C_g = \varepsilon_0\frac{R^2 \pi}{g-x},$$ where $R$ is the radius of the membrane and $\varepsilon_0$ is the vacuum permittivity. The resulting electrostatic force is given by $$F = \frac{1}{2}\frac{\varepsilon_0 R^2\pi}{(g-x)^2}(V_{dc}+V_{ac}cos(\omega t))^2.$$ If we expand this expression around $x = 0$, we get $$F\approx \frac{1}{2}\varepsilon_0 R^2\pi(V_{dc}+V_{ac}cos(\omega t))^2 \big[\frac{1}{g^2}+\frac{2x}{g^3}+\frac{3x^2}{g^4}+\frac{4x^3}{g^5}\big].$$ The first term ($\frac{1}{g^2}$) is the electrostatic actuation term and the second term is what is usually described as a spring softening term ($\frac{2x}{g^3}$). This term influences only the linear spring constant of the resonator. The term including $x^3$ will have a softening effect on the cubic spring constant: $k_{3,soft} = \frac{1}{2}\varepsilon_0 R^2\pi(V_{dc}+V_{ac}cos(\omega t))^2\frac{4x^3}{g^5} $.
The resulting cubic spring constant $k_{3,tot}$ will be given by $$k_{3,tot} = k_3 - k_{3,soft}.$$ The ratio of the two contributions is $$\frac{k_3}{k_{3,soft}} = \frac{1}{1.269-0.967 \nu - 0.269 \nu^2}\frac{Ehg^5}{2\varepsilon_0 R^4 (V_{dc}+V_{ac}cos(\omega t))^2}.$$ For a Young’s modulus $E = 594$ GPa, a radius of $R = 2.5~\mathrm{\mu m}$, a thickness of $h = 5$ nm and $V_{dc} < 3$ V, provided that $V_{dc}\gg V_{ac}$, this ratio becomes $$\label{ratio} \frac{k_3}{k_{3,soft}} \approx 3000,$$ which means that the electrostatic softening will have a negligible effect on the extracted Young’s modulus (resulting in an error of $<0.1 \%$). It should be noted that the cavity depth has a significant influence on the effect of electrostatic softening of the cubic spring constant. To get reasonable error margins ($< 5\%$), the ratio of eq. (\[ratio\]) should be kept above 20. 4. Nonlinear dynamic response as a function of Young’s modulus {#nonlinear-dynamic-response-as-a-function-of-youngs-modulus .unnumbered} -------------------------------------------------------------- In Fig. S2 we show the frequency response of the strongly-driven graphene drum (black dots) under a constant force ($F_\mathrm{RMS} = 48$ pN). The colored curves are the modeled response under constant force and with a fixed quality factor ($Q = 129$) and resonance frequency ($f_0 = 14.7$ MHz). The different colors correspond to the frequency responses of the model using different values for the Young’s modulus to show how the nonlinear response is influenced by the Young’s modulus. ![\[fig:Fig3\] Sensitivity of the nonlinear response to the Young’s modulus. The measured trace (at $F_\mathrm{RMS} = 48$ pN) is represented by the black dots. The colored lines represent the modeled response using fixed values for the damping and the driving force and varying values for the Young’s modulus. The yellow line represents the modeled response using the identified value for the Young’s modulus (E = 594 GPa).](differentEv1.pdf){width="45.00000%"}
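As a compact numerical illustration of the chain described above (a sketch under assumed inputs, not the authors' analysis code), one can take the quantities quoted in the main text for the graphene drum of Fig. 1 ($n_0$, $k_3$, $f_0$, $Q$, $R$, $h$, $\xi$, and the 48 pN RMS drive), convert $k_3$ to a Young's modulus through $k_3 = C_3(\nu)Eh\pi/R^2$, and evaluate the forward frequency response by finding the positive real roots of the steady-state Duffing relation. The Poisson's ratio $\nu = 0.16$ below is an assumed literature value for graphene, not a number given in the text.

```python
import numpy as np

# Quantities quoted in the text for the graphene drum of Fig. 1.
R, h = 2.5e-6, 5e-9                    # m
n0, k3 = 0.107, 1.35e15                # N/m and N/m^3
f0, Q, xi = 14.7e6, 129.0, 0.432
nu = 0.16                              # Poisson's ratio of graphene (assumed)

# Young's modulus from k3 via the membrane relation of the main text.
E = (1.269 - 0.967*nu - 0.269*nu**2) * R**2 * k3 / (np.pi * h)
print(f"E = {E/1e9:.0f} GPa")          # ~595 GPa, close to the quoted 594 +/- 45 GPa

# Forward response: positive real roots u = x^2 of
# (9/16)k3^2 u^3 - (3/2)m(w^2-w0^2)k3 u^2 + (w^2 c^2 + m^2 (w^2-w0^2)^2) u - (xi*F)^2 = 0.
w0 = 2*np.pi*f0
k1 = 4.897*n0                          # k1 = 4.897 n0 (SI, eq. (11))
m = k1 / w0**2                         # effective mass from w0^2 = k1/m_eff
c = m*w0/Q
F = np.sqrt(2) * 48e-12                # drive amplitude corresponding to F_RMS = 48 pN

for f in np.linspace(14.5e6, 15.1e6, 7):
    w = 2*np.pi*f
    coeffs = [9/16*k3**2,
              -1.5*m*(w**2 - w0**2)*k3,
              w**2*c**2 + m**2*(w**2 - w0**2)**2,
              -(xi*F)**2]
    u = np.roots(coeffs)
    u_real = u[np.isreal(u)].real
    amp = np.sqrt(u_real[u_real > 0].max())   # high-amplitude (upward-sweep) branch
    print(f"f = {f/1e6:5.2f} MHz   x_peak = {amp*1e9:5.2f} nm")
```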
--- abstract: 'We present a new method for producing series for $1/\pi$ and other constants using Legendre’s relation, starting from a generating function that can be factorised into two elliptic $K$’s; in this way we avoid much of the modular theory or creative telescoping. Many of our series involve special values of Legendre polynomials; their relationship to the more traditional Ramanujan series is discussed.' address: 'School of Mathematical and Physical Sciences, The University of Newcastle, Callaghan, NSW 2308, Australia' author: - 'J. G. Wan' title: 'Series for $1/\pi$ Using Legendre’s Relation' --- Introduction ============ Ramanujan-type series for $1/\pi$ have been extensively studied since [@Ra]. Originally taking the form $$\label{rama} \sum_{n=0}^\infty\frac{(\frac12)_n(s)_n(1-s)_n}{n!^3}(a+bn)z_0^n =\frac c\pi,$$ where $s \in \{1/2,1/3,1/4,1/6\}$, many such series were found to be rational – that is, they enjoy the property $a,b,z_0,c^2 \in \mathbb{Q}$. Those rational series are of theoretical as well as practical interest. Recently, with works such as [@CTYZ] and [@WZ], more general series are being studied. A more encompassing series for $1/\pi$ would take the form $$\label{genseries} \sum_{n=0}^\infty U(n) \,p(n)\,z_0^n =\frac{c}{\pi^k},$$ where $U(n)$ is an arithmetic sequence, and $p(n)$ is a polynomial (often linear or quadratic in $n$). For instance, in [@WZ], $U(n)$ can take the form of an Apéry-like sequence times a Legendre polynomial evaluated at a special argument, $P_n(x_0)$. Most current methods for producing such series rely on one of the following approaches: - Hypergeometric series (through the use of Clausen’s formula and singular values of the complete elliptic integral $K$), pioneered by the Borweins [@Bor]; - Modular machinery (taking advantage of algebraic relations of modular forms of the same weight, e.g. modular equations) together with the first approach, see for example [@Chud], and more recently [@CTYZ; @CWZ]; - Experimental mathematics (creative telescoping with the Wilf-Zeilberger algorithm), explored in, say, [@JG2]; - Summation formulas for hypergeometric series at special arguments, Fourier-Legendre series, and other miscellaneous methods, for instance see [@chu] and [@glaisher]. There exists a huge literature apart from the ones we cited above; we cannot even give a brief account of them here since our approach is quite different, so we only direct interested readers to the survey article [@survey]. A notable feature of $1/\pi$ series produced using the methods above has been the severe restriction on the argument of the geometric term ($z_0$ in ), as $z_0$ may need to come from singular values of $K$, or be a special value for a summation formula to work. Here we give a new method for producing series for $1/\pi$, as well as some related constants, using only Legendre’s relation. The method presented here breaks such restrictions so the argument can be any real number for which the underlying series converges. Legendre’s relation =================== Our analysis hinges on Legendre’s relation [@Bor Theorem 1.6], which states $$\label{legendrer} E(x)K'(x)+E'(x)K(x)-K(x)K'(x) = \frac{\pi}{2}.$$ As usual, in hypergeometric notation $$K(x) = \frac{\pi}{2}{_2F_1}\biggl({{\frac12,\frac12}\atop 1};x^2\biggr)$$ is the complete elliptic integral of the first kind, $E(x)$ is the complete elliptic integral of the second kind, and $K'(x)$ denotes $K(x')$, where $x' = \sqrt{1-x^2}$ is the complementary modulus.
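The relation is also easy to check numerically; the following is a small sketch assuming SciPy, whose `ellipk` and `ellipe` take the parameter $m = x^2$ in the modulus convention used here. The sample moduli are arbitrary.

```python
import numpy as np
from scipy.special import ellipk, ellipe

for x in (0.2, 1/np.sqrt(2), 0.95):
    m = x**2                                 # scipy parameter m = (modulus)^2
    K, Kp = ellipk(m), ellipk(1 - m)
    E, Ep = ellipe(m), ellipe(1 - m)
    print(x, E*Kp + Ep*K - K*Kp - np.pi/2)   # ~0 up to rounding error
```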
Equation can be easily proven by differentiating both sides and showing that they agree at one point (say at $x=1/\sqrt{2}$). A more general form of in fact holds [@Bor equation (5.5.6)]: $$\label{legendregen} E_s(x)K_s'(x)+E_s'(x)K_s(x)-K_s(x)K_s'(x) = \frac{\pi}{2}\frac{\cos(\pi s)}{1+2s},$$ where $$K_s(x) = \frac{\pi}{2}\,{_2F_1}\biggl({{\frac12-s,\frac12+s}\atop 1};x^2\biggr), \quad E_s(x) = \frac{\pi}{2}\,{_2F_1}\biggl({{-\frac12-s,\frac12+s}\atop 1};x^2\biggr).$$ Set $s=0$ in and we recover . Note also that in , $s$ is not restricted to the four values as in .\ Suppose we have a factorisation of the following type: $$\label{key} \pi^2 G(z) = K(a(z)) K(b(z)),$$ where $G$ is analytic near the origin and satisfies an ordinary differential equation of degree no less than 4 – for instance, $G$ could be a $_4F_3$. (The condition on the degree of the differential equation for $G$ is imposed because we will solve a system of four equations below, so having three linearly independent derivatives help.) Suppose further that we can find a number $z_0$ such that $a(z_0)^2 = 1-b(z_0)^2$, so that the right hand side of becomes $K(a(z_0))K'(a(z_0))$. We then consider a linear combination of derivatives of equation , namely $$\begin{aligned} & \pi^2\bigl(A_0 G(z_0) + A_1 \frac{\mathrm d}{\mathrm d z}G(z_0) + A_2 \frac{\mathrm d}{\mathrm d z^2}G(z_0) + A_3 \frac{\mathrm d}{\mathrm d z^3}G(z_0)\bigr) \nonumber \\ = & B_0 KK'(z_0) + B_1 EK'(z_0)+ B_2 E'K(z_0) + B_3 EE'(z_0), \label{diff3}\end{aligned}$$ where $A_i$ are constants that may depend on $z_0$, while $B_i$ depend on $A_i$. The equality in holds because derivatives of $E$ and $K$ are again expressible in terms of $E$ and $K$. It remains to solve (if possible) the following system of equations for $A_i$, $$B_0=-1, \ B_1 = 1, \ B_2 = 1, \ B_3 = 0,$$ so that we may apply Legendre’s relation to and obtain, for those choices of $A_i$, $$\label{solved} A_0 G(z_0) + A_1 \frac{\mathrm d}{\mathrm d z}G(z_0) + A_2 \frac{\mathrm d}{\mathrm d z^2}G(z_0) + A_3 \frac{\mathrm d}{\mathrm d z^3}G(z_0) = \frac{1}{2\pi}.$$ A series for $1/\pi$ is thus obtained; when written as a sum, the left hand side typically contains a cubic of the summation variable. We will illustrate such series using different choices of $G$ below. Brafman’s formula ================= An example of a factorisation in the form of comes from Brafman’s formula [@Br1] involving the Legendre polynomials, $$P_n(x)={}_2F_1\biggl({{-n, n+1}\atop 1}; \frac{1-x}2 \biggr).$$ Brafman’s formula has been used to produce (a different type of) series for $1/\pi$ in [@CWZ; @WZ]; it states that $$\label{braf1} \sum_{n=0}^\infty\frac{(s)_n(1-s)_n}{n!^2}P_n(x)z^n ={}_2F_1\biggl({{s,1-s}\atop 1};\alpha\biggr)\, {}_2F_1\biggl({{s,1-s}\atop 1};\beta\biggr),$$ where $\alpha=(1-\rho-z)/2$, $\beta=(1-\rho+z)/2$, and $\rho=(1-2xz+z^2)^{1/2}$. Although equation is of type , solving for $\alpha^2=1-\beta^2$ only results in a trivial identity. Therefore our strategy is to modify the arguments $\alpha$ or $\beta$ via some transformations. The $s=1/2$ case ---------------- Using $s=1/2$ and applying a quadratic transformation of $K$ [@Bor theorem 1.2] to one of the terms in , we obtain $$\label{braf2} \frac{\pi^2}{4} \sum_{n=0}^\infty \frac{(\frac12)_n^2}{n!^2} P_n(x)z^n = \frac{1}{1+\alpha^{1/2}} \, K\biggl(\frac{2\alpha^{1/4}}{1+\alpha^{1/2}}\biggr) K\bigl(\beta^{1/2}\bigr).$$ This fits the type of . 
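A quick numerical sanity check of this factorisation can be done as follows; this is a sketch assuming SciPy, and the sample point $x=2$, $z=0.1$ is chosen arbitrarily so that $0<\alpha,\beta<1$ and the series converges. It uses $(\tfrac12)_n/n! = \binom{2n}{n}/4^n$ and scipy's convention $K(k)=\mathtt{ellipk}(k^2)$.

```python
import numpy as np
from scipy.special import ellipk, eval_legendre, comb

x, z = 2.0, 0.1                         # arbitrary sample point with 0 < alpha, beta < 1
rho = np.sqrt(1 - 2*x*z + z*z)
a, b = (1 - rho - z)/2, (1 - rho + z)/2

# Left-hand side: truncated series with (1/2)_n/n! = binom(2n,n)/4^n.
lhs = np.pi**2/4 * sum((comb(2*n, n)/4.0**n)**2 * eval_legendre(n, x) * z**n
                       for n in range(60))
# Right-hand side: product of elliptic integrals with complementary scalings.
rhs = ellipk((2*a**0.25/(1 + np.sqrt(a)))**2) * ellipk(b) / (1 + np.sqrt(a))
print(lhs - rhs)                        # ~0 up to truncation and rounding
```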
After significant amount of algebra as outlined by the approaches leading to , we have the following: \[thm1leg\] For $k \in (0,1)$, $$\begin{aligned} \nonumber & \sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{-k^4 + 6 k^3 - 2k + 1}{(k^2 + 1) (k^2 + 2 k - 1)}\biggr) \biggl(\frac{(k^2 + 1) (k^2 + 2 k - 1)}{16(k + 1)^2}\biggr)^n \bigl(C_3 n^3 + C_2 n^2+C_1 n + C_0\bigr) \\ & = \frac{2(k+1)^3(k^2+1)}{\pi}, \label{thm1state}\end{aligned}$$ where $$\begin{aligned} C_3 & = 4 (k-1)^2 k^2 (k^2 + 3 k + 4)^2, \\ C_2 & = 12 (k - 1) k (k^6 + 5 k^5 + 10 k^4 + 10 k^3 + 5 k^2 - 3 k + 4), \\ C_1 & = 9 k^8 + 36 k^7 + 37 k^6 + 8 k^5 - 9 k^4 - 56 k^3 + 63 k^2 - 28 k + 4, \\ C_0 & = (k^2 + 2k - 1)^2 (2 k^4 + 3 k^2 - 2 k + 1).\end{aligned}$$ A little algebra shows that if we choose $$x = \frac{1-2k+6k^3-k^4}{(k^2+1)(k^2+2k-1)}, \ z_0 = \frac{(k^2+1)(k^2+2k-1)}{(k+1)^2},$$ then, viewing $\alpha$ and $\beta$ as functions of $z$, we get $\beta(z_0)^{1/2} = k$, and $2\alpha(z_0)^{1/4}/(1+\alpha(z_0)^{1/2}) = \sqrt{1-k^2}$, as desired. With these choices we have $\alpha(z_0) = (1-k)^2/(1+k)^2$; we can also compute and simplify the derivatives $a'(z), a''(z), a'''(z)$ and $b'(z), b''(z), b'''(z)$ at $z=z_0$. Thus, as in , we have an equation of the type $$\pi^2 \biggl[\frac{1+\alpha(z)^{1/2}}{4} \sum_{n=0}^\infty \frac{(\frac12)_n^2}{n!^2} P_n(x)z^n\biggr] = K\biggl(\frac{2\alpha(z)^{1/4}}{1+\alpha(z)^{1/2}}\biggr) K\bigl(\beta(z)^{1/2}\bigr),$$ where at $z=z_0$ the arguments of the two $K$’s are complementary. We take a linear combination (with coefficients $A_i$) of the $z$-derivatives of the above equation, as done in , then substitute in $z=z_0$ and simplify the resulting expression using the precomputed values for $\alpha'(z_0), \beta'(z_0)$ etc. Finally, we solve for $A_i$ so that Legendre’s relation may be applied to obtain a series of the form . The result, after tidying up, is (where we have replaced the Pochhammer symbols by binomial coefficients). We now look at the convergence. From the standard asymptotics for the Legendre polynomials, we have, as $n \to \infty$, $$P_n(x) =O\bigl( \bigl(|x|+\sqrt{x^2-1}\bigr)^n\bigr) \ \mathrm{for} \ |x|>1 \ \mathrm{and} \ P_n(x) =O\bigl( n^{-1/2}\bigr) \ \mathrm{for} \ |x|\le 1.$$ Therefore, for any rational $k \in (0,1)$, the sum in converges geometrically, where the rate is given by $$\frac{1-2k+6k^3-k^4}{(1+k)^2}+4\biggl(\frac{k(1-k)}{1+k}\biggr)^{3/2}.$$ Note that this is a convex function in $k$ with minimum at $(\sqrt2-1, 8(\sqrt2-1)^3)$ and maxima at $(0,1)$ and $(1,1)$. Note that any rational choice of $k \in (0,1)$ leads to a *rational* series in Theorem \[thm1leg\], which is indicative that such series are likely to be fundamentally different from ones that are entirely modular in nature (see e.g. [@CWZ]), whose arguments are much more restricted. For instance, with the choice of $k=1/2$ in Theorem \[thm1leg\], we get $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{11}{5}\biggr)\biggl(\frac{5}{576}\biggr)^n(14-171n-4452n^2+2116n^3) = \frac{2160}{\pi},$$ while with $k=2/3$, we have $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{101}{91}\biggr)\biggl(\frac{91}{3600}\biggr)^n(5537 + 11304 n - 173328n^2 + 53824 n^3) = \frac{87750}{\pi}.$$ Theorem \[thm1leg\] is by no means the unique consequence of with $s=1/2$. For example, we can apply quadratic transformations to both arguments on the right hand side of . 
The result is also a rational series, convergent for $k \in (0,1)$ and genuinely different from Theorem \[thm1leg\], though the general formula is too messy to be exhibited here. We give only one instance (with the choice $k=1/2$) here: $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{19}{13}\biggr)\biggl(\frac{65}{20736}\biggr)^n(97756868 n^3 - 24254580 n^2 - 539415n - 264590)= \frac{6065280}{\pi}.$$ As another example, if we apply to one term in a cubic transformation (corresponding to the rational parametrisation of the cubic modular equation), $$\label{cubicm} K\biggl(\frac{p^{1/2}(2+p)^{3/2}}{(1+2p)^{3/2}}\biggr) =(1+2p)\, K\biggl(\frac{p^{3/2}(2+p)^{1/2}}{(1+2p)^{1/2}}\biggr),$$ then after a lot of work it is possible to obtain a general, rational series convergent for $p \in (0,1)$. At $p=1/2$ for instance, we get the series $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{353}{272}\biggr)\biggl(\frac{17}{2^{11}}\biggr)^n (44100 n^3 - 30420 n^2 - 1559n - 206)= \frac{8704}{\pi}.$$ However, it is important to note that not all transformations lead to series of type .\ We give another general theorem for the $s=1/2$ case here. Recall that one of Euler’s hypergeometric transformations leads to $$\label{eulert} K(x) = \frac{1}{\sqrt{1-x^2}}\, K\biggl(\sqrt{\frac{x^2}{x^2-1}}\biggr).$$ If we apply a quadratic transformation to one argument of and Euler’s transformation to the other, the result is also rational series with at most a quadratic surd on the right hand side. Once again convergence is easy to establish (the rate is $|z_0| = (1+k)(4k^2-3k+1)/(4k)$), and the general solution recorded below is proven in exactly the same way as Theorem \[thm1leg\]. \[thm2leg\] For $k \in \bigl(\frac{\sqrt{41}-5}{8},1\bigr)$, $$\begin{aligned} & \nonumber \sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{1-3k+2k^2-2k^3}{4k^2-3k+1}\biggr) \biggl(\frac{-(1+k)(4k^2-3k+1)}{64k}\biggr)^n \bigl(C_3 n^3 + C_2 n^2+C_1 n + C_0\bigr) \\ & = \frac{8k^{3/2}(4k^2-3k+1)}{\pi},\end{aligned}$$ where $$\begin{aligned} C_3 & = \frac{4(k-1)^2}{k+1}(2k-1)(4k^2+3k+1)^2, \\ C_2 & = 12(k-1)(2k-1)(16k^4+k^2-1), \\ C_1 & = 288k^6-400k^5+102k^4+97k^3-93k^2+47k-9, \\ C_0 & = 2 (32k^6-44k^5+9k^4+16k^3-14k^2+6k-1).\end{aligned}$$ Examples include $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac13\biggr) \biggl(\frac{-1}{36}\biggr)^n (1-3n-84n^2-121n^3) = \frac{18\sqrt3}{\pi},$$ from $k=1/3$, and when $k=1/2$ (chosen so that $C_3$ vanishes), $$\label{weird} \sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac12\biggr) \biggl(\frac{3}{128}\biggr)^n (3+14n) = \frac{8\sqrt2}{\pi}.$$ The formula is particularly interesting, because although it fits the form of the $1/\pi$ series considered in [@CWZ] perfectly, it cannot be explained by the general theory of [@CWZ] (in the notation used there, its $\tau_0$ is $iK(\sqrt3/2)/(2K(1/2))$, which is not a quadratic irrationality).\ Just as in [@CWZ], we can produce ‘companion series’ using Legendre’s relation; one example is $$\begin{aligned} \sum_{n=0}^\infty \binom{2n}{n}^2 \Bigl(\frac{3}{128}\Bigr)^n & \biggl[14n(196n^2+196n-3)P_{n-1}\Bigl(\frac12\Bigr) \\ & -(1372n^3+3024n^2+1631n+375)P_n\Bigl(\frac12\Bigr)\biggr] = \frac{400\sqrt2}{\pi}.\end{aligned}$$ \[rmk1leg\] One might wonder what happens if we set $a(z)=b(z)$ in . In the case of , as the quadratic transformation is effectively the degree 2 modular equation, any series thus produced would be subsumed under the theory in [@CWZ] with the choice $N=2$, and where $\sqrt{\beta}$ could be taken as a singular value. 
If we applied Euler’s transformation followed by a quadratic transformation to one of the terms in , however, we arrive at a class of series not explicitly studied in [@CWZ] (in the notations used there, the relationship is $\alpha = t_4(1/2+\tau_0)$, $\beta = t_4(\tau_0/2)$). A rational example of such a series is $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{2\sqrt2}{3}\biggr)\biggl(\frac{3\sqrt2}{128}\biggr)^n (6n+1) = \frac{2\sqrt{8+6\sqrt2}}{\pi}.$$ It is interesting to note that this series has the same $x$ and $z$ as, but is different from, Theorem \[thm2leg\] with $k=1/\sqrt{2}$. Similarly, with $k=(3-i\sqrt7)/8$ in Theorem \[thm1leg\], we get $$\sum_{n=0}^\infty \binom{2n}{n}^2 P_n\biggl(\frac{i}{3\sqrt7}\biggr)\biggl(\frac{3i\sqrt7}{256}\biggr)^n (900n^3-564n^2-39n-14) = \frac{384}{\pi},$$ which is quite similar to entry (I1) in [@CWZ] (first conjectured by Sun); the only difference being that the polynomial term is $16(30n+7)$ in the latter sum. This phenomenon ultimately stems from the fact that the same (modular) transformations are being used. See also Section \[secbraf3\] for more discussions. The $s=1/4$ case ---------------- Even though equation holds for $s \in (0,1)$, we see in the last two theorems that transformations need to be applied to the right hand side of before Legendre’s relation can be used. Since many such transformations are modular in nature, we are again confined to $s \in \{1/2, 1/3, 1/4,1 /6\}$. We now consider the $s=1/4$ case in . One strategy here is to transform the right hand side of in terms of $K$; the transformation required is $$_2F_1\biggl({{\frac14,\frac34}\atop 1};x^2\biggr) = \frac{1}{\sqrt{1+x}} \, _2F_1\biggl({{\frac12,\frac12}\atop 1}; \frac{2x}{1+x}\biggr).$$ The transformed expression is of type and we solve for $a(z_0)^2 = 1- b(z_0)^2$ in the notation there. Proceeding along the same lines as in the proof of Theorem \[thm1leg\], the following theorem can then be established: For $k \in (0,1)$, \[thm3leg\] $$\begin{aligned} \nonumber & \sum_{n=0}^\infty \frac{(\frac14)_n(\frac34)_n}{n!^2} P_n\biggl(\frac{(1+k)(1-4k+7k^2)}{(1-3k)(1+3k^2)}\biggr) \biggl(\frac{(1+k)(1-3k)(1+3k^2)}{(1+3k)^2}\biggr)^n \\ & \quad \times\bigl(C_3 n^3 + C_2 n^2+C_1 n + C_0\bigr) = \frac{3\sqrt2 (1+3k)^{5/2}(1+3k^2)}{(1+k)\pi}, \label{s14c}\end{aligned}$$ where $$\begin{aligned} C_3 & = \frac{16(k-1)^2k^2}{(1+k)^2}(8+15k+9k^2)^2, \\ C_2 & = 48(k-1)k(8-15k+27k^2+27k^3+81k^4), \\ C_1 & = (4-33k+45k^2)(4-17k+17k^2-3k^3+63k^4), \\ C_0 & = 3(1-3k)^4(1+k+2k^2).\end{aligned}$$ An example of an identity produced by Theorem \[thm3leg\] is $$\sum_{n=0}^\infty \frac{(\frac14)_n(\frac34)_n}{n!^2} P_n\biggl(\frac{9}{7}\biggr)\biggl(\frac{21}{100}\biggr)^n(216-2385n-108432n^2+80656n^3) = \frac{12600\sqrt{5}}{\pi}.$$ Note that we may also choose $k$ for the right hand side of to be rational. We can perform a trick here: if the denominator of the argument in $P_n$ is 0 at some $k_0$, and at the same time the geometric term $z_0$ vanishes, when we may take the limit $k \mapsto k_0$ which gets rid of the Legendre polynomial altogether (note that the leading coefficient of $P_n$ is $\binom{2n}{n}2^{-n}$). In , this occurs when $k_0=1/3$. 
After taking the limit and eliminating the $n^3$ term using a differential equation, we recover the Ramanujan series (of the type ) $$\label{ramaleg1} \sum_{n=0}^\infty\frac{(\frac14)_n(\frac12)_n(\frac34)_n}{n!^3}\biggl(\frac{32}{81}\biggr)^n (1+7n) =\frac{9}{2\pi}.$$ The same trick, applied to the series which following from the cubic transformation mentioned in the $s=1/2$ case, results in $$\label{ramaleg3} \sum_{n=0}^\infty\frac{(\frac12)_n^3}{n!^3}\biggl(\frac{1}{4}\biggr)^n (1+6n) =\frac{4}{\pi},$$ from the choice $p=(\sqrt3-1)/2$; this formula originated from Ramanujan [@Ra] and was first proven by Chowla. The $s=1/3$ case ---------------- This case is slightly trickier. An attempt to transform the right hand side of in terms of $K$, as we did for the $s=1/4$ case, results in exceedingly messy computations. Applying low degree modular equations to one of the $_2F_1$’s (as we did in the $s=1/2$ case, for essentially uses the degree 2 modular equation) does not give convergent series. Instead, we resort to a formula in [@goursat], $$_2F_1\biggl({{\frac13,\frac23}\atop 1};x\biggr) = (1+8x)^{-1/4} \, _2F_1\biggl({{\frac16,\frac56}\atop 1};\frac12-\frac{1-20x-8x^2}{2(1+8x)^{3/2}}\biggr),$$ to transform the right hand side of , then solve for $a(z_0)^2 = 1- b(z_0)^2$ in the notation of , followed by applying the generalized Legendre relation with $s=1/3$. We succeed in obtaining the following theorem, where $\alpha(z_0) = k^3, \, \beta(z_0) = \bigl(\frac{1-k}{1+2k}\big)^3$, and the rate of convergence is $(1+k+k^2)(1-2k+4k^2)^2/(1+2k)^3$. \[thm4leg\] For $k \in (0,1)$, $$\begin{aligned} \nonumber & \sum_{n=0}^\infty \frac{(\frac13)_n(\frac23)_n}{n!^2} P_n\biggl(\frac{1-4k+6k^2-4k^3+10k^4}{(1-2k-2k^2)(1-2k+4k^2)}\biggr)\biggl(\frac{(1+k+k^2)(1-2k-2k^2)(1-2k+4k^2)}{(1+2k)^3}\biggr)^n \\ & \quad \times \bigl(C_3 n^3 + C_2 n^2+C_1 n + C_0\bigr)= \frac{\sqrt{3} \,(1+2k)^4(1-2k+4k^2)}{\pi}, \end{aligned}$$ where $$\begin{aligned} C_3 & = \frac{9(k-1)^2k^2}{1+k+k^2}(3+4k+2k^2)^2 (3+2k+4k^2)^2, \\ C_2 & = 27(k-1)k(9-18k+10k^2+12k^3+60k^4+160k^5+240k^6+192k^7+64k^8), \\ C_1 & = 9-144k+540k^2-584k^3+314k^4-228k^5-1256k^6-1072k^7+768k^8+2560k^9+1280k^{10}, \\ C_0 & = 2(1-2k-2k^2)^2(1-10k+12k^2-24k^3+16k^4+32k^6).\end{aligned}$$ Note that in this case the right hand side contains a surd for rational $k$. When $k \to (\sqrt3-1)/2$, we get the series $$\sum_{n=0}^\infty \frac{(\frac13)_n(\frac12)_n(\frac23)_n}{n!^3}\biggl(\frac{3(7\sqrt3-12)}{2}\biggr)^n \bigl(5-\sqrt3+22n\bigr) = \frac{7+3\sqrt3}{\pi}.$$ The $s=1/6$ case ---------------- It is also possible to produce a general series for this case, though the details are formidable and require hours of computer algebra. The derivation is similar to the $s=1/3$ case, and we use Goursat’s result [@goursat] $$_2F_1\biggl({{\frac16,\frac56}\atop 1};\frac12-\frac12 \sqrt{1-\frac{64(1-t)t^3}{(9-8t)^3}}\biggr) = \Bigl(1-\frac{8t}{9}\Bigr)^{\frac14} \, _2F_1\biggl({{\frac13,\frac23}\atop 1};t\biggr),$$ followed by the generalized Legendre relation for $s=1/6$. The general result for $s=1/6$ is too lengthy to be included here, though in essence its derivation is similar to that of Theorem (but with more liberal use of the chain rule). We will only remark on some of its features below. 
Just to find suitable $x$ (in $P_n$) and $z_0$, we need to solve $$\alpha(z_0) = \frac12-\frac12 \sqrt{1-\frac{64(1-t)t^3}{(9-8t)^3}}, \ \beta(z_0) = \frac12-\frac12 \sqrt{1-\frac{64t(1-t)^3}{(1+8t)^3}},$$ and for both $x$ and $z_0$ to admit rational parametrisations is equivalent to having $1+8t$ and $9-8t$ both as rational squares – that is, we require a parametrised solution for rational points on the curve $u^2+v^2=10$. Having done so, the resulting series converges for $k \in (1/3, 1)$ where $k$ is the aforementioned parameter; the coefficient of $n$ alone is a degree 24 polynomial in $k$. Even for $k=1/2$, large integers are involved: $$\begin{aligned} & \sum_{n=0}^\infty \frac{(\frac16)_n(\frac56)_n}{n!^2} P_n\biggl(\frac{2437}{2365}\biggr)\biggl(\frac{15136}{296595}\biggr)^n\Bigl(710512440561n^3-118714528800n^2 \\ & \quad -19263658756n-2627089880\Bigr) = \frac{1402894350\sqrt{39}}{\pi}.\end{aligned}$$ With the limit $k \to (\sqrt5-1)/2$, however, we recover the Ramanujan series $$\label{ramaleg2} \sum_{n=0}^\infty\frac{(\frac16)_n(\frac12)_n(\frac56)_n}{n!^3}\biggl(\frac{4}{125}\biggr)^n (1+11n) =\frac{5\sqrt{15}}{6\pi}.$$ In the general series, the $1/\pi$ side is actually the square root of a quartic in $k$, and hence rational points on it may be found by the standard process of converting it to a cubic elliptic curve (namely, $y^2=62208 + 3312x - 144x^2 + x^3$). It follows that there are infinitely many rational solutions. The smallest solution for $k$ (in terms of the size of the denominator) which admits a rational right hand side is $k=6029/8693$, and the resulting series involves integers of over 100 digits. We include the series in Appendix \[verybig\] for amusement. Rarefied Legendre polynomials ----------------------------- Factorisations of the type for generating functions of rarefied Legendre polynomials $$\sum_{n=0}^\infty \frac{(\frac12)_n^2}{n!^2} P_{2n}(x)z^{2n}, \ \mathrm{and} \ \sum_{n=0}^\infty \frac{(\frac13)_n(\frac23)_n}{n!^2} P_{3n}(x)z^{3n}$$ are given in [@WZ]. Using standard partial differentiation techniques, we may also use Legendre’s relation to deduce parameter-dependent rational series for them. The algebra is formidable and we do not present the general forms here; only two examples are given to demonstrate their existence: $$\begin{aligned} & \sum_{n=0}^\infty \frac{(\frac12)_n^2}{n!^2}P_{2n}\biggl(\frac{91}{37}\biggr)\biggl(\frac{5}{37}\biggr)^{2n} (3108999168n^3-3255264000n^2-75508700n+24025) \\ & = \frac{896968800}{\pi}, \\ & \sum_{n=0}^\infty \frac{(\frac13)_n(\frac23)_n}{n!^2}P_{3n}\biggl(\frac{19}{3\sqrt{33}}\biggr) \frac{39887347500n^3-6141658302n^2+172862917n-15262470}{(11\sqrt{33})^n} \\ & = \frac{442203651\sqrt{11}}{2\pi}.\end{aligned}$$ Under appropriate limits, the series involving $P_{2n}$ again gives , while the one for $P_{3n}$ recovers equation . Orr-type theorems ================= A result from Bailey or Brafman ------------------------------- There are other formulas, notably ones of Orr-type, which satisfy ; an example was given by Bailey [@bailey1 equation (6.3) or (7.2)]: $$_4F_3\biggl({{s,s,1-s,1-s}\atop{\frac12,1,1}}; \frac{-x^2}{4(1-x)}\biggr) = {_2F_1}\biggl({{s,1-s}\atop 1};x\biggr){_2F_1}\biggl({{s,1-s}\atop 1};\frac{x}{x-1}\biggr). \label{baileys}$$ Formula is also record in [@slater equation (2.5.32)] (this reference contains a rich collection of Orr-type theorems). 
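Before putting Bailey's factorisation to use, it can be spot-checked numerically at an arbitrary point; in the minimal sketch below (mpmath assumed), $s=1/3$ and $x=1/5$ are sample values with no special significance.

```python
# Spot check of the quoted factorisation at a sample point (s = 1/3, x = 1/5).
from mpmath import mp, mpf, hyper

mp.dps = 30
s, x = mpf(1)/3, mpf(1)/5
lhs = hyper([s, s, 1 - s, 1 - s], [mpf(1)/2, 1, 1], -x**2/(4*(1 - x)))
rhs = hyper([s, 1 - s], [1], x) * hyper([s, 1 - s], [1], x/(x - 1))
print(lhs, rhs)    # the two values agree to working precision
```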
Specializing Bailey’s result using $s=1/4$, we have $$\label{baileys1} \frac{\pi^2}{4} \, _4F_3\biggl({{\frac14,\frac14,\frac34,\frac34}\atop{\frac12,1,1}}; \frac{-4x^4(1-x^2)^2}{(1-2x^2)^2}\biggr) = K(x)K\biggl(\frac{x}{\sqrt{2x^2-1}}\biggr).$$ This formula also follows from setting $x=0$, $s=1/2$ in Brafman’s formula . We try different transformations for the right hand side of , in order to find a suitable $z_0$ for which the two arguments are complementary, so the procedures leading up to may be applied. Indeed, after using Euler’s transformation to both terms followed by a quadratic transformation, we obtain the equivalent formulation $$\frac{\pi^2}{4} \sqrt{\frac{(1+z)(1+z')}{2z'}}\, _4F_3\biggl({{\frac14,\frac14,\frac34,\frac34}\atop{\frac12,1,1}}; \frac{z^4}{4(z^2-1)}\biggr) = K\biggl(\sqrt{\frac{2z}{z+1}}\biggr)K\biggl(\sqrt{\frac{z'-1}{z'+1}}\biggr),$$ where at $z_0=(-1)^{1/6}$ the arguments in the $K$’s are complementary (and correspond to argument $1/4$ in the $_4F_3$). Proceeding as we did for our previous results, Legendre’s relation gives $$\label{guic1} \sum_{n=0}^\infty \binom{4n}{2n}^2 \binom{2n}{n} \frac{3+26n+48n^2-96n^3}{2^{12n}} = \frac{2\sqrt{2}}{\pi}.$$ This time we do not have a more general rational series depending on a parameter, since there is only one free variable $x$ in . For other values of $z_0$, algebraic irrationalities are involved, for instance $$\begin{aligned} & \sum_{n=0}^\infty \binom{4n}{2n}^2 \binom{2n}{n} \bigl(-16(62+33\sqrt2)n^3+24(2-\sqrt2)n^2+3(10+\sqrt2)n+3\bigr)\biggl(\frac{2-\sqrt{2}}{2^8}\biggr)^{2n} \\ &= \frac{4\sqrt{2+\sqrt{2}}}{\pi}, \\ & \sum_{n=0}^\infty \binom{4n}{2n}^2 \binom{2n}{n} \bigl(4(151+73\sqrt5)n^3-96(3+\sqrt5)n^2-(25-\sqrt5)n-3\bigr)\biggl(\frac{17\sqrt{5}-38}{2^6}\biggr)^n \\ &= \frac{38+17\sqrt5}{\pi}. \end{aligned}$$ We note that it is routine to obtain results *contiguous* to , i.e. equations where the left hand side is a $_4F_3$ whose parameters differ from the left hand side of by some integers. It is known that the corresponding right hand side relates to that of by a suitable differential operator. Two such contiguous relations give elegant variations of : $$\begin{aligned} \nonumber \sum_{n=0}^\infty \binom{4n}{2n}^2 \binom{2n}{n} \frac{1-48n^2}{(1-4n)^2 \, 2^{12n}}& = \frac{2\sqrt{2}}{\pi},\\ \sum_{n=0}^\infty \binom{4n}{2n}^2 \binom{2n}{n} \frac{3+32n+48n^2}{(1+2n) \, 2^{12n}} & = \frac{8\sqrt{2}}{\pi}, \label{guic2}\end{aligned}$$ where the second sum has been proven in [@JG2 table 2] using creative telescoping. In fact, using the same argument $z_0$ as in , we may invoke instead of its specialisation , and appeal to the generalised Legendre relation. The result, and those contiguous to it, are rather neat and hold for any $s$ that ensures convergence: $$\begin{aligned} \nonumber & \sum_{n=0}^\infty \frac{(s)_n^2 (1-s)_n^2}{(\frac12)_n(1)_n^3} \frac{s(1-s)+2(1-s+s^2)n+3n^2-6n^3}{(1-2s)^2 \,4^n} \\ \nonumber = & \sum_{n=0}^\infty \frac{(s)_n^2 (1-s)_n^2}{(\frac32)_n(1)_n^3} \frac{s(1-s)+2n+3n^2}{4^n} \\ = & \sum_{n=0}^\infty \frac{(s)_n^2 (-s)_n^2}{(\frac12)_n(1)_n^3} \frac{s^2-3n^2}{s \, 4^n} = \frac{\sin(\pi s)}{\pi}. \label{guic3}\end{aligned}$$ These series generalise and . For rational $s$, the rightmost term in is algebraic; e.g. 
for $s=1/6$ we get the rational series $$\sum_{n=0}^\infty \binom{6n}{4n}\binom{6n}{3n}\binom{4n}{2n} \frac{25-108n^2}{(6n-5)^2 \, 2^{8n} 3^{6n}} = \frac{3}{5\pi}.$$ Another result due to Bailey ---------------------------- We can take [@bailey1 equation (6.1)] (or [@slater (2.5.31)]), from which we find $$\label{baileys2} \pi K\Bigl(\frac{1}{\sqrt2}\Bigr) \, _4F_3\biggl({{\frac14,\frac14,\frac14,\frac34}\atop{\frac12,\frac12,1}}; 16x^2 x'^2 (x'^2-x^2)^2\biggr) = \bigl(K(x)+K'(x)\bigr)K\biggl(\sqrt{\frac12-xx'}\biggr).$$ To prepare this identity for Legendre’s relation so as to produce even just one rational series, we need to do more work than we did for . We apply the cubic modular equation to the rightmost term in . Denoting the $_4F_3$ in by $G$, we have $$\begin{aligned} & \pi K\Bigl(\frac{1}{\sqrt2}\Bigr) (1+2p) \, G\biggl(\frac{16p^3(1+p)^3(2-p-p^2)(1+2p-4p^3-2p^4)^2}{(1+2p)^4}\biggr) \\ = & \Bigl(K+K'\Bigr)\biggl(\sqrt{\frac12-\frac{\sqrt{p^3(1+p)^3(2-p-p^2)}}{1+2p}}\biggr) \, K\biggl(\sqrt{\frac{p(2+p)^3}{(1+2p)^3}}\biggr).\end{aligned}$$ At $p = \frac{\sqrt{14}-\sqrt2-2}{4}$ (corresponding to $x^2 = (1-k_7)/2$, where $k_r$ denotes the $r$th singular value of $K$), the arguments in the two $K$’s coincide. We then compute the derivatives up to the 3th order for the above equation. Note that as $G$ satisfies a differential equation of order 4, higher order derivatives are not required; however, since the derivatives also contain the terms $EK, \,E^2$ and $K^2$, we are not a priori guaranteed a solution. After a significant amount of algebra, we amazingly end up with the rational series $$\sum_{n=0}^\infty \frac{\binom{4n}{2n}\bigl(\frac14\bigr)_n^2}{(2n)!} \frac{5+92n+3120n^2-4032n^3}{2^{8n}}= \frac{8}{K(1/\sqrt2)} = \frac{32\sqrt{\pi}}{\Gamma(\frac14)^2}.$$ Some related constants {#secbraf3} ---------------------- Using a different set of parameters ($\alpha=\beta=\gamma/2=1/4$ in [@bailey1 equation (6.3)]), we have $$\label{baileys1a} \frac{\pi^3}{(2x x')^{\frac12} \, \Gamma(\frac34)^4} \ _3F_2\biggl({{\frac14,\frac14,\frac14}\atop{\frac12,\frac34}}; \frac{(1-2x^2)^4}{16x^2(x^2-1)}\biggr) = \bigl(K(x)+K'(x)\bigr)^2.$$ In this case, applying Legendre’s relation straightaway does not give anything non-trivial, but if we apply a quadratic transform to the $K(x)$ term first, then for the two arguments in the $K$’s to equal, we need to solve the equation $\sqrt{1-x^2} = 2\sqrt{x}/(1+x),$ which gives $x=\sqrt2-1$. Subsequently we can use Legendre’s relation to obtain $$\sum_{n=0}^\infty \frac{(\frac14)_n^4}{(4n)!} \bigl(8(457-325\sqrt2)\bigr)^n \bigl(7+20(11+6\sqrt2)n\bigr) = \frac{28 (82+58\sqrt2)^{\frac14}\, \pi^2}{\Gamma(\frac14)^4}.$$ More series of this type are possible at special values of $t$, which are in fact singular values; c.f. the equation solved above is precisely the one to solve for the 2nd singular value (because $k_r$ and $k_r'$ are related by the modular equation of degree $r$ and satisfy $k_r^2+k_r'^2=1$). Therefore, to produce a series from we do not need Legendre’s relation; instead a single differentiation (in the same way Ramanujan series are produced in [@Bor]) suffices. 
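Two of the rational series obtained above (the $s=1/6$ series equal to $3/(5\pi)$, and the series evaluating to $32\sqrt{\pi}/\Gamma(\tfrac14)^2$) can be spot-checked numerically in the same spirit; a minimal sketch, assuming mpmath:

```python
# Spot checks of two rational series from this section (mpmath assumed).
from mpmath import mp, mpf, binomial, rf, factorial, gamma, sqrt, pi

mp.dps = 30

# sum binom(6n,4n) binom(6n,3n) binom(4n,2n) (25 - 108 n^2) / ((6n-5)^2 2^(8n) 3^(6n)) = 3/(5 pi)
s1 = sum(binomial(6*n, 4*n) * binomial(6*n, 3*n) * binomial(4*n, 2*n)
         * (25 - 108*n**2) / (mpf(6*n - 5)**2 * mpf(2)**(8*n) * mpf(3)**(6*n))
         for n in range(60))
print(s1, 3/(5*pi))

# sum binom(4n,2n) (1/4)_n^2 / (2n)! * (5 + 92n + 3120n^2 - 4032n^3) / 2^(8n)
#     = 32 sqrt(pi) / Gamma(1/4)^2
s2 = sum(binomial(4*n, 2*n) * rf(mpf(1)/4, n)**2 / factorial(2*n)
         * (5 + 92*n + 3120*n**2 - 4032*n**3) / mpf(2)**(8*n)
         for n in range(80))
print(s2, 32*sqrt(pi)/gamma(mpf(1)/4)**2)
```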
For example, using $k_3$ we obtain one series corresponding to $1/\pi$ and another to $1/K(k_r)^2$: $$\begin{aligned} \sum_{n=0}^\infty \frac{(\frac14)_n^4}{(4n)!}(-144)^n(1+20n) & = \frac{8\sqrt2\,\pi^2}{\Gamma(\frac14)^4}, \\ \sum_{n=0}^\infty \frac{(\frac14)_n^4}{(4n)!}(-144)^n(5-8n+400n^2) & = \frac{(\frac{2}{\sqrt3}-1)2^{\frac{49}6}\,\pi^5}{\Gamma(\frac13)^6\Gamma(\frac14)^4}.\end{aligned}$$ Concluding remarks ================== In equations , and , we witness the ability of Legendre’s relation to produce Ramanujan series which have linear (as opposed to cubic) polynomials in $n$. Series of the latter type are often connected with singular values (or more precisely, when $iK'(t)/K(t)$ is a quadratic irrationality), as is further supported by Remark \[rmk1leg\] and Section \[secbraf3\]. We take this connection slightly further here. We can bypass the need for Brafman’s formula completely and produce Ramanujan series of type only using Legendre’s relation and modular transforms. For instance, take the following version of Clausen’s formula, $$\label{3f2trans} _3F_2\left({{\frac12,\frac12,\frac12}\atop{1,1}};4x^2(1-x^2)\right) = \frac{4}{\pi^2(1+x)} K(x) K\Bigl(\frac{2\sqrt{x}}{1+x}\Bigr),$$ where we have performed a quadratic transformation to get the right hand side. When $x^2+4x/(1+x)^2=1$, $x=\sqrt2-1$, the 2nd singular value. At this $x$, we take a linear combination of the right hand side of and its first derivative (since we know a Ramanujan series exists and involves no higher order derivatives), then apply Legendre’s relation . The result is the series $$\sum_{n=0}^\infty \frac{(\frac12)_n^3}{n!^3} \bigl(2(\sqrt2-1)\bigr)^{3n} \bigl(1+(4+\sqrt2)n\bigr) = \frac{3+2\sqrt2}{\pi},$$ which also follows from under the limit $k \to \sqrt2-1$. (Applying Legendre’s relation to and its derivatives when $x$ is not a singular value results in the trivial identity $0=0$, perhaps as expected.) Applying the quadratic transform twice (i.e. giving the modular equation of degree 4), followed by transforming the $_3F_2$ in and using Legendre’s relation, we recover Ramanujan’s series $$\sum_{n=0}^\infty\frac{(\frac12)_n^3}{n!^3}\biggl(\frac{-1}{8}\biggr)^n (1+6n) =\frac{2\sqrt2}{\pi}.$$ For our final examples, using the degree 3 modular equation , we have $$_3F_2\left({{\frac12,\frac12,\frac12}\atop{1,1}};\frac{4p^3(1+p)^3(1-p)(2+p)}{(1+2p)^2}\right) = \frac{1}{1+2p} K\biggl(\frac{p^3(2+p)}{1+2p}\biggr)K\biggl(\frac{p(2+p)^3}{(1+2p)^3}\biggr).$$ From this and a similar identity with the $_3F_2$ transformed, we derive the Ramanujan series as well as . This method seems to be a simple alternative to producing the Ramanujan series , since we only need to know the modular equations and Legendre’s relation; there is no need to find, say, singular values of the second kind as is required in the approach in [@Bor]. Computational notes ------------------- While all the results presented here are rigorously proven, we outline a method to discover such results numerically on a computer algebra system. Take the right hand side function in and compute a linear combination of its derivatives with coefficients $A_i$. Replace the elliptic integrals ($K, K', E, E'$) by $X, X^2, X^4, X^8$ respectively (the indices are powers of 2). Evaluate to several thousand decimal places at the appropriate $z_0$ and collect the coefficients in $X$. Solve for $A_i$ so that Legendre’s relation is satisfied (note all the terms such as $KK'$, $E^2$ are separated as different powers of $X$). 
Finally, identify $A_i$ with an integer relations program like PSLQ. Many of our (algebraically proven) identities required several hours of computer time due to the complexity of the calculations and the sheer number of steps which needed human direction. Computational shortcuts, in particular the chain rule, had to be applied manually in order to prevent overflows or out of memory errors. **Acknowledgment:** the author would like to thank Wadim Zudilin for insightful discussions. A rational series corresponding to $s=1/6$ in {#verybig} ============================================== $$\begin{aligned} & \sum_{n=0}^\infty \frac{(\frac16)_n(\frac56)_n}{n!^2} P_n\biggl(\frac{2711618193169694758695252404104061775156278113}{2258716409636704529221049652293204395071099745}\biggr) \times \\ & \biggl(\frac{166184937571425083357425841157708260870933280014464273}{7435780195982339266650249045977973659251599896998500145}\biggr)^n \times \\ & \bigg\{ -20346216828676992290712717150581898023487858358236131416304525612650289286877 09222379612782511175132608694355 - \\ & 926288201652707493107464063135571905251263636429595499527809454558701924211802 5175759095513913027910053062876 n - \\ & 1572343886557363398495068846593570880095595404692503176333162218457845 00043816520011553398701765788958763171200 n^2+ \\ & 188465262305106044960803571194037001709905551334563776406464312165980638288843 6566607878257622986156264868909056n^3 \bigg\} \\ = \ & \frac{334603705874692071432125440218193102622139292751902590492288221670640916771026 08071535705583949762471120516075}{2\pi}.\end{aligned}$$ [9]{} <span style="font-variant:small-caps;">W.N. Bailey</span>, Some theorems concerning products of hypergeometric series, *Proc. London Math. Soc.* **38** (1935), 377-384. <span style="font-variant:small-caps;">N.D. Baruah, B.C. Berndt</span> and <span style="font-variant:small-caps;">H.H. Chan</span>, Ramanujan’s Series for $1/\pi$: A Survey, *Amer. Math. Monthly* **116** (2009), 567–587. <span style="font-variant:small-caps;">J.M. Borwein</span> and <span style="font-variant:small-caps;">P.B. Borwein</span>, *Pi and the AGM: A study in analytic number theory and computational complexity* (Wiley, New York, 1987). <span style="font-variant:small-caps;">F. Brafman</span>, Generating functions of Jacobi and related polynomials, *Proc. Amer. Math. Soc.* **2** (1951), 942–949. <span style="font-variant:small-caps;">H.H. Chan</span>, <span style="font-variant:small-caps;">Y. Tanigawa</span>, <span style="font-variant:small-caps;">Y. Yang</span> and <span style="font-variant:small-caps;">W. Zudilin</span>, New analogues of Clausen’s identities arising from the theory of modular forms, *Adv. in Math.* **228** (2011), 1294–1314. <span style="font-variant:small-caps;">H.H. Chan, J. Wan</span> and <span style="font-variant:small-caps;">W. Zudilin</span>, Legendre polynomials and Ramanujan-type series for $1/\pi$, *Israel J. Math.* (2012), 25 pages, doi: 10.1007/s11856-012-0081-5. <span style="font-variant:small-caps;">W. Chu</span>, Inversion techniques and combinatorial identities: A unified treatment for the $_7F_6$-series identities, *Collect. Math.* **45** (1994), 13–43. <span style="font-variant:small-caps;">D.V. Chudnovsky</span> and <span style="font-variant:small-caps;">G.V. Chudnovsky</span>, Approximations and complex multiplication according to Ramanujan, in: *Ramanujan revisited* (Urbana-Champaign, IL, 1987) (Academic Press, Boston, MA, 1988), 375–472. <span style="font-variant:small-caps;">J.W.L. 
Glaisher</span>, On series for $1/\pi$ and $1/\pi^2$, *Quart. J. Pure Appl. Math.* **37** (1905), 173–198. <span style="font-variant:small-caps;">E. Goursat</span>, Sur l’équation différentielle linéaire, qui admet pour intégrale la série hypergómétrique, *Ann. Sci. École Norm. Sup.* **10** (1881), 3–142. <span style="font-variant:small-caps;">J. Guillera</span>, Generators of some Ramanujan formulas, *Ramanujan J.* **11** (2006), 41–48. <span style="font-variant:small-caps;">S. Ramanujan</span>, Modular equations and approximations to $\pi$, *Quart. J. Math.* (*Oxford*) **45** (1914), 350–372. <span style="font-variant:small-caps;">L.J. Slater</span>, *Generalized Hypergeometric Functions* (Cambridge Univ. Press, 1966). <span style="font-variant:small-caps;">J. Wan</span> and <span style="font-variant:small-caps;">W. Zudilin</span>, Generating functions of Legendre polynomials: a tribute to Fred Brafman, *J. Approximation Theory* **164** (2012), 488–503.
--- abstract: 'Using Coulomb blockaded double quantum dots, we realize the superconducting analog of the celebrated two-impurity Kondo model. Focusing on gate regions with a single spin-1/2 on each dot, we demonstrate gate-tuned changes of the ground state from an interdot singlet to independently screened Yu-Shiba-Rusinov singlets. In contrast to the zero-temperature two-impurity Kondo model, the crossover between these two singlets is heralded by quantum phase boundaries to nearby doublet phases, in which only a single spin is screened. We identify all four ground states via transport measurements.' author: - 'J. C. Estrada Saldaña$^{1}$' - 'A. Vekris$^{1}$' - 'R. Žitko$^{2,3}$' - 'G. Steffensen$^{1}$' - 'P. Krogstrup$^{1,4}$' - 'J. Paaske$^{1}$' - 'K. Grove-Rasmussen$^{1}$' - 'J. Nyg[å]{}rd$^{1}$' title: 'Two-Impurity Yu-Shiba-Rusinov States in Coupled Quantum Dots' --- Magnetism relies on the presence of magnetic moments and their mutual exchange interactions. At low temperatures, local moments in metals may be screened by the Kondo effect and magnetism disrupted. This competition was first proposed by Mott [@Mott1974] as a mechanism for the vanishing of magnetism at low temperatures in the $f$-electron metal CeAl$_3$, and later explored by Doniach [@Doniach1977] within a simple one-dimensional Kondo-lattice model, from which he established a phase diagram delineating the magnetic phase as a function of the ratio between Kondo temperature, $T_{K}$, and inter-impurity exchange. The essence of this competition was subsequently reduced to the two-impurity Kondo model, which exhibits an unstable fixed point separating a ground state (GS) of two local Kondo singlets from an inter-impurity exchange singlet[@jones1987study]. This competition remains a central ingredient in the current understanding of many heavy-fermion materials and their quantum critical properties [@georges1996georges; @hewson1997kondo; @coleman2007heavy; @Lohneysen2007; @Bulla2008; @gull2011continuous]. In a superconductor, the gap around the Fermi surface precludes the Kondo effect, but local magnetic moments may still be screened by forming a local singlet with BCS quasiparticles. As demonstrated by Yu, Shiba, and Rusinov (YSR) [@yu1965; @shiba1968classical; @rusinov1969theory], a local exchange coupling between a superconductor and a magnetic impurity leads to a spin singlet sub-gap bound state, which crosses zero energy and becomes the GS at a coupling strength corresponding to $T_K\approx 0.3\Delta$, where $\Delta$ denotes the superconducting gap [@Satori1992; @Bauer2007]. This quantum phase transition reduces the spin by $\hbar/2$, quenching a spin-1/2 altogether and prohibiting coexistence of ($S=1/2$) local-moment magnetism and superconductivity. In contrast to the normal state Kondo effect, the zero-temperature two-impurity phase diagram depends strongly on the two different local exchange couplings, including not only the two different singlets, but also a doublet GS in which only a single spin is screened. \[t!\] ![(a) Micrograph of the device. Gate 2 (4) is used as the plunger $V_{gL}$ ($V_{gR}$) of the left (right) dot. Gates 1, 3, 5 and a backgate, $V_{bg}$, tune couplings. (b-d) Schematized interaction of dot spins and quasiparticles from the superconducting leads for various tunnelling rates $\Gamma_L, \Gamma_R$. $D$ stands for doublet and $S$ for singlet; the index indicates the number of screened dot spins. 
(e) DQD GS phase diagram versus $\Gamma_L, \Gamma_R$ for two spinful levels, as calculated by NRG for $t_d=0.09$ meV, charging energies $U_L=U_R=2$ meV and $\Delta=0.25$ meV, similar to experimental $\Delta=0.265$ meV, $U_L \approx 1.85$ meV and $U_R \approx 1.62$ meV in the decoupled regime [@saldana2018supercurrent]. Black lines denote the boundaries between singlet and doublet GS. The shading provides the interdot spin-spin correlation $\langle \bm{S_1} \cdot \bm{S_2} \rangle$, which is $\approx -3/4$ for $S_0$ and zero for $D_1, S_2$. The diagram is qualitatively reproduced by the Zero Bandwidth (ZBW) approximation [@NoteSM].[]{data-label="fig1"}](Fig1.png "fig:"){width="1\linewidth"} Here, we report on the experimental realization of this two-impurity YSR model within a Coulomb blockaded serial double quantum dot (DQD) coupled to superconducting source, and drain contacts (cf. Fig. \[fig1\]a). Tuning voltages on the multiple gates, we first load a single spin-1/2 on each dot and then adjust the individual tunnel couplings so as to realize all three different ground states (cf. Fig. \[fig1\]b-d): the unscreened inter-dot exchange singlet ($S_0$), the singly screened doublet ($D_1$), and the fully screened singlet ($S_2$). A weak tunnel coupling, $t_d$, between the two dots gives rise to antiferromagnetic superexchange, which correlates the two local spins as long as they remain unscreened, as captured in Fig. \[fig1\]e. The DQD operates as an overdamped Josephson junction. Using standard lock-in techniques to measure the differential conductance, the critical current, $I_{C}$, may be deduced from a fit of the narrow zero-bias conductance peaks, which are observed in all regimes [@Anchenko1969; @jorgensen2009critical; @saldana2018supercurrent]. Each of the distinct ground states are identified from the measured stability diagrams together with their characteristic $I_{c}$ dependence on gate voltage, both of which we have calculated from a two-orbital Anderson model using the Numerical Renormalization Group (NRG) technique. Finally, we show spectroscopy of the YSR sub-gap states when the GS is the novel $S_2$ state, unreachable in previous experiments on S-DQD-N [@grove2017yu] or weakly-coupled S-DQD-S junctions [@su2017andreev; @saldana2018supercurrent], where N stands for normal metal. We find a complex subgap spectrum, which nonetheless follows the general attributes of a simplified Bardeen-model calculation, and allows assigning the strengths of dot-lead couplings for each of the two shells. \[t!\] ![Colormaps of linear conductance ($G$) versus plunger voltages at slightly different gate settings corresponding to different $\Gamma_L,\Gamma_R$. From (a) to (d), ($V_{g1}$, $V_{g3}$, $V_{g5}$, $V_{bg}$) in V is (-9.2, -9.3, -0.25, 10.4), (-9.2, -9.3, -1.25, 11), (-9.2, -9.2, -2, 10), and (-9.2, -9.3, -0.25, 9.6). In panel (a), the conductance abruptly jumps when the GS changes from the addition of a charge on the left (right) dot predominantly tuned by $V_{gL}$ ($V_{gR}$). However, such jumps are missing for addition of a charge in the left dot in panel (b), in the right dot in panel (c), and in either dot in panel (d). The GS is indicated in some charge sectors to facilitate comparison with Fig. \[fig3\]. The two-impurity GS in the (1,1) sector evolves as (a) $S_0$, (b) $D_1$, (c) $D_1$, (d) $S_2$. The screened impurity is different in panels (b) and (c).[]{data-label="fig2"}](Fig2.png "fig:"){width="1\linewidth"} The device (Fig. 
\[fig1\]a) consists of an Al-covered InAs nanowire [@krogstrup2015epitaxy] deposited on top of narrow hafnium oxide-covered gates, five of which were used to define two quantum dots. Al was etched away to form a Josephson junction comprising the dots. The junction, which was also equipped with a global backgate, was contacted by two Au leads and measured at $T=15$ mK. We label the charge sectors of the DQD by $(N_L,N_R)$, where $N_L$ and $N_R$ are the (integer) charges in the highest level of the left and right dots, respectively. All measurements were done in the same DQD shells to simplify the parameter space. We employ three well-established methods for obtaining the quantum states of the system: 1) gate-tuning of the stability diagram [@grove2017yu; @jellinggaard2016tuning], 2) gate dispersion of the Josephson current [@grove2007kondo; @saldana2018supercurrent; @estrada2018supercurrent; @van2006supercurrent; @Maurand2012; @cleuziou2006carbon; @delagrange2015manipulating; @delagrange2017], and 3) spectroscopy of the subgap states [@jellinggaard2016tuning; @lee2014spin; @saldana2018supercurrent; @deacon2010tunneling; @su2017andreev]. The same device configured to the same shells was studied in Ref. [@saldana2018supercurrent] in the weak-coupling regime. Here we gate the system so as to reach the strong-coupling regime, where the antiferromagnetic ($S_{0}$) GS is replaced by the fully YSR screened ($S_{2}$) GS. \[t!\] ![NRG calculation of GS stability diagrams at fixed $\Gamma_L, \Gamma_R$ (in meV), chosen to simultaneously match the magnitude and gate dependence of experimental $I_c$, the shape of the experimental stability diagrams and the line-shape of the lowest pair of peaks in the spectra in Figs. \[fig6\]a,b. Each lower-left corner symbol indicates a point in Fig. \[fig1\]e. Energy difference between the lowest singlet and doublet $E_S-E_D$ versus $n_L, n_R$ (gate-induced charges controlled in the experiment by $V_{gL}, V_{gR}$). (a) $t_d=0.3$ meV. (b-d) $t_d=0.09$ meV. $t_d$ in the decoupled regime was chosen larger than in the screened regimes to match the experimental $I_c$. For simplicity, we keep $t_d$ constant in panels (b-d).[]{data-label="fig3"}](Fig3.png "fig:"){width="0.8\linewidth"} We begin our exploration of the phase diagram by showing in Fig. \[fig2\] the experimental DQD stability diagram at four $\Gamma_L$, $\Gamma_R$ points indicated by a square, circle, star and cross in Fig. \[fig1\]e. The step-by-step metamorphosis of the stability diagram between these endpoints as a result of gate tuning is included as Supplemental Material [@NoteSM]. We are particularly interested in the (1,1) charge sector, for which two-impurity physics arises. Note that the zero-bias conductance, $G$, plotted in each panel, is dominated by supercurrent. In panel (a), the diagram shows the characteristic honeycomb pattern of dots decoupled from the leads, characterized by abrupt jumps in $G$ denoting GS transitions when the total occupation changes from an odd to an even number [@saldana2018supercurrent]. In panel (b), a slightly different gate configuration, corresponding to an increase in $\Gamma_L$, prompts dramatic changes in the diagram. Boundaries of the $N_L=1$ sectors pointed to by white arrows in panel (a) vanish, and $G$ in sector (1,1) drops. These changes are compatible with a GS change from $S_0$ in panel (a) to $D_1$ in panel (b), as the left of the two impurities is screened. In turn, the odd-occupied DQD state $D_0$ transits to $S_1$. 
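As an aside, the singlet/doublet competition summarised in Fig. \[fig1\]e (computed there with NRG) can be illustrated with the zero-bandwidth (ZBW) approximation mentioned in the caption of Fig. \[fig1\]. The sketch below is an assumption-laden toy calculation, not the modelling used in this work: each superconducting lead is replaced by a single pairing site, the hybridisation amplitudes $v_L, v_R$ merely stand in for $\Gamma_L, \Gamma_R$ (the precise correspondence depends on the assumed bandwidth), and the parameter values are chosen only to be on the scale of those quoted for Fig. \[fig1\]e.

```python
# Minimal zero-bandwidth (ZBW) sketch of the S-DQD-S singlet/doublet competition.
# Toy illustration with assumed parameters: each superconducting lead is reduced
# to a single pairing site, and vL, vR stand in for Gamma_L, Gamma_R.
import numpy as np

N_ORB = 8   # spin-orbitals: dotL(up,dn), dotR(up,dn), scL(up,dn), scR(up,dn)

def fermion_ops(n_orb):
    """Jordan-Wigner annihilation operators on the 2**n_orb Fock space."""
    sz = np.diag([1.0, -1.0])                # (-1)^n on a single orbital
    a  = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilates the occupied state
    ops = []
    for j in range(n_orb):
        factors = [sz]*j + [a] + [np.eye(2)]*(n_orb - j - 1)
        op = factors[0]
        for f in factors[1:]:
            op = np.kron(op, f)
        ops.append(op)
    return ops

c   = fermion_ops(N_ORB)
cd  = [op.conj().T for op in c]
num = [cd[i] @ c[i] for i in range(N_ORB)]
dLu, dLd, dRu, dRd, sLu, sLd, sRu, sRd = range(N_ORB)

def hamiltonian(eps, U, Delta, td, vL, vR):
    H  = eps*(num[dLu] + num[dLd]) + U*(num[dLu] @ num[dLd])     # left dot
    H += eps*(num[dRu] + num[dRd]) + U*(num[dRu] @ num[dRd])     # right dot
    for i, j in [(dLu, dRu), (dLd, dRd)]:                        # interdot hopping
        H += td*(cd[i] @ c[j] + cd[j] @ c[i])
    for up, dn in [(sLu, sLd), (sRu, sRd)]:                      # lead pairing
        H += Delta*(cd[up] @ cd[dn] + c[dn] @ c[up])
    for i, j, v in [(dLu, sLu, vL), (dLd, sLd, vL),
                    (dRu, sRu, vR), (dRd, sRd, vR)]:             # dot-lead tunnelling
        H += v*(cd[i] @ c[j] + cd[j] @ c[i])
    return H

# total fermion parity: +1 (even) for S-type ground states, -1 (odd) for D-type
parity = np.eye(2**N_ORB)
for i in range(N_ORB):
    parity = parity @ (np.eye(2**N_ORB) - 2*num[i])

U, Delta, td = 2.0, 0.25, 0.09           # same energy scale (meV) as Fig. 1e
for vL in np.linspace(0.1, 1.0, 5):      # (1,1) sector: eps = -U/2 on both dots
    row = []
    for vR in np.linspace(0.1, 1.0, 5):
        evals, evecs = np.linalg.eigh(hamiltonian(-U/2, U, Delta, td, vL, vR))
        gs = evecs[:, 0]
        row.append('S' if gs @ parity @ gs > 0 else 'D')
    print(' '.join(row))
```

Scanning $v_L, v_R$ should qualitatively reproduce the structure of Fig. \[fig1\]e: even-parity (singlet) ground states for both weak-weak ($S_0$) and strong-strong ($S_2$) couplings, separated by odd-parity ($D_1$) regions at strongly asymmetric couplings.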
The odd-occupied DQD phase diagram, analog to Fig. \[fig1\]e, is shown as Supplemental Material [@NoteSM]. Such distortions of the honeycomb pattern also appear in a S-DQD-N system [@grove2017yu]. However, due to the presence of the extra S lead in the S-DQD-S system, we can rotate the pattern of panel (b) by $\approx 90^\circ$ by inverting the $\Gamma_L,\Gamma_R$ asymmetry as shown in panel (c), when independent screening of the right spin occurs (compare star and circle in Fig. \[fig1\]e). Exact $90^\circ$ rotation is prevented by gate cross-coupling. \[t!\] ![Differential conductance, $dI/dV_{sd}$, colormaps taken with the gate swept through the solid line in corresponding panels of Fig. \[fig2\], overlaid by fitted $I_c$ (white curves). The symbol at the bottom-right corner in panel (a) indicates the direction of the line-cut on the stability diagram (the same in the four cases). In this direction, $n_R =1$ and $n_L$ is varied. Dashed lines indicate unreliable fitting due to crossings of subgap states [@saldana2018supercurrent].[]{data-label="fig4"}](Fig4.png "fig:"){width="1\linewidth"} \[h!\] ![$|I_c|$ versus $n_L$ calculated by NRG at $n_R=1$ using the parameters of the corresponding panels in Fig. \[fig3\]. The curves support the $I_c$ data in Fig. \[fig4\], with the slight electron-hole asymmetry in $I_c$ versus $V_{gL}$ attributed to unintended gate and barrier cross-coupling, not included in the calculations. $\Gamma_L,\Gamma_R$ are given in meV.[]{data-label="fig5"}](Fig5 "fig:"){width="1\linewidth"} Additional gate tuning, which effectively increases $\Gamma_L$ while keeping strong $\Gamma_R$, results in a complete distortion of the honeycomb pattern, which is seen in panel (d). The remaining boundaries of the doublet domain in panel (c) (white arrows) vanish and the pattern becomes a broad blob of conductance devoid of lines of parity crossings of either dot, indicating that the same GS is enforced in the nine charge sectors initially mapped in panel (a). This can be interpreted as a change of the two-impurity GS from $D_1$ to $S_2$. Other DQD shells retain their boundaries, most probably due to different couplings. Our diagrams are supported by the theory colormaps of Fig. \[fig3\], which show the doublet to singlet energy difference, $E_S-E_D$, versus $n_L,n_R$ at different tunneling rates. Note that in Fig. \[fig3\]d the novel $S_2$ GS is attained even though there is an asymmetry in $\Gamma_L,\Gamma_R$, which results in smaller $|E_S-E_D|$ at $n_L=1$, $n_R=0,2$ as compared to $n_L=0,2$, $n_R=1$. Case (d) is a unique feature of this strongly coupled S-DQD-S system [@Bergeret2006Oct; @Zitko2010Sep; @zitko2015numerical]. The GS transitions uncovered through the changes in the stability diagram are confirmed by the evolution of the gate dependence of $I_c$. Figure \[fig4\] shows $dI/dV_{sd} (V_{sd},V_{g})$ colormaps taken with the gate swept through the two-impurity sector (1,1) following the solid line in the corresponding $G$ colormaps of Fig. \[fig2\]. A narrow bias-symmetric zero-bias peak in these maps is interpreted as supercurrent from an overdamped Josephson junction with thermal fluctuations. We fit the peak with the resistively and capacitively shunted junction (RCSJ) model [@Anchenko1969] to extract $I_c$ [@Jorgensen2007; @Eichler2009; @saldana2018supercurrent; @estrada2018supercurrent; @steinbach2001direct; @NoteSM], which is overlaid as a white curve on each panel. 
In panel (a), $I_c$ exhibits asymmetric peaks which mark $D_0 \to S_0$ GS transitions at parity changes [@saldana2018supercurrent]. The number of possible doublet to singlet excitations is halved with respect to the singlet to doublet case, resulting in this asymmetry. In panel (b), these peaks are washed away and replaced by a smooth dependence as screening of the left spin occurs, compatible with an all-doublet $D_0-D_1-D_0$ GS sequence following a $S_0$ to $D_1$ GS change in the (1,1) sector. In panel (c), independent screening of the right dot (instead of the left one) results in a radically different $I_c$ lineshape, displaying the asymmetric peaks of a $S_1-D_1-S_1$ GS sequence. This happens because the single-impurity sectors crossed by the $I_c$ line-cut are also screened, in contrast to panel (b). In panel (d), simultaneous two-impurity screening results in the all-singlet GS sequence $S_1-S_2-S_1$, and thus in a smooth $I_c$ lineshape. In view of the similar $I_c$ lineshapes in panels (b) and (d), the context provided by the stability diagram in Figs. \[fig2\]b,d is crucial for the distinction of the associated GS. The non-trivial line-shape and magnitude of $I_c$ are consistent with NRG calculations shown in Fig. \[fig5\] [@NoteSM]. \[t!\] ![YSR subgap states bias-spectroscopy versus gate through (a) solid and (b) dashed line in Fig. \[fig2\]d. (c,d) NRG calculations of the respective excitation spectra for the same parameters as Fig. \[fig3\]d. $E_0$ is the GS energy. The five subgap states of the system are colored according to their total spin (one singlet lies outside the gap); only excitations that change spin by $\hbar/2$ are possible. An arrow in (c) points to a doublet anti-crossing. (e,f) $dI/dV_{sd}$ colormaps calculated from spectra in (c,d) by Bardeen’s tunnelling approach. Since both singlet $\to$ doublet excitations in panel (d) are dispersionless, the corresponding $dI/dV_{sd}$ peaks in panel (f) are also dispersionless. In panel (c), in contrast, one excitation is highly dispersive, leading to dispersive $dI/dV_{sd}$ peaks in panel (e).[]{data-label="fig6"}](Fig6 "fig:"){width="1\linewidth"} The identification of the GS is further supported by spectroscopy of YSR subgap states [@NoteSM; @zitko2015numerical]. In particular, in the novel $S_2$ GS these states show a different dependence on $n_L$ and $n_R$, which allows to distinguish the degree of screening of each of the two spins, and helps placing the measurement of Fig. \[fig2\]d in the phase diagram of Fig. \[fig1\]e. The corresponding $dI/dV_{sd}$ maps, shown in Figs. \[fig6\]a,b, were taken following gate trajectories which cross the center of $S_2$ in Fig. \[fig2\]d and are parallel to left- and right-dot parity lines, respectively, so as to change the occupation of only one of the dots. The maps display sub-gap state peaks split in bias voltage [@lee2014spin; @jellinggaard2016tuning; @deacon2010tunneling; @pillet2013tunneling; @lee2017scaling] which are accompanied by negative differential conductance (NDC) [@kim2013transport; @pillet2010andreev; @grove2009superconductivity; @eichler2007even; @lee2012zero]. These indicate that a gapped and peaked density of states in one lead is probing its counterpart in the other lead [@saldana2018supercurrent]. Apparent replicas of the lowest pair of peaks may be related to our hybrid nanowire/superconductor leads [@su2018mirage; @deng2016majorana; @saldana2018supercurrent]. 
A charge-independent singlet GS in both panels is deduced from the absence of peak crossings [@lee2014spin; @grove2017yu] of the pair of states at lowest bias, in agreement with the $I_c$ gate dependence in Fig. \[fig4\]d and the stability diagram of Fig. \[fig2\]d. Intriguingly, whereas the lowest pair of states in Fig. \[fig6\]a is further lowered in bias voltage in the center of $S_2$ ($V_{gL} \approx -1.85$ V), the same pair is nearly gate-independent in panel (b). To explain this observation, we calculate the excitation spectra for the same parameters as the $I_c$ line-cut in Fig. \[fig5\]d (shown in Figs. \[fig6\]c,d). Based on $t_d \ll \Gamma_L,\Gamma_R$, we use the spectra to obtain $dI/dV_{sd}$ maps following Bardeen’s tunnelling approach [@Bardeen1961Jan; @gottlieb2006bardeen] (shown in Figs. \[fig6\]e,f). The outcome can be visualized as the result of the density of states in one dot probing its counterpart in the other through a large barrier. This simple model captures the differences in dispersion seen in the lowest pair of peaks in Figs. \[fig6\]a,b, as it reflects the differences in the spectra due to $\Gamma_L,\Gamma_R$ asymmetry. It also reproduces NDC and the order of magnitude of $dI/dV_{sd}$. However, the model does not take into account multiple Andreev reflection [@nilsson2011supercurrent], the AC Josephson effect, hybrid leads [@su2018mirage; @deng2016majorana; @saldana2018supercurrent], and higher-order processes (the two sides are decoupled), and it is therefore unable to reflect all the details of the measurement. We emphasize that, despite the absence of features in the experimental stability diagram of Fig. \[fig2\]d, consistent with the theory Fig. \[fig3\]d, our gate tuning of the stability diagram and the consistent $I_c$ and subgap state behavior constitute different routes to observe the $S_2$ GS in our device. Further support is provided in the Supplemental Material, where we show more details on the gradual $S_{0}\rightarrow D_{1}\rightarrow S_{2}$ transition [@NoteSM]. In summary, we have demonstrated two-impurity YSR physics in a hybrid nanowire. The Kondo-YSR analogy breaks down at zero temperature, as confirmed by the existence of doublet domains in the phase diagram at $k_BT \ll \Delta$. Our spectroscopy methods maintain the sharpness of the relevant features in all regimes independently of the tunnelling rates, in stark contrast to the case of impurities coupled to metals, which broaden the conductance features at strong hybridization [@jeong2001kondo; @bork2011tunable; @chorley2012tunable; @spinelli2015exploring; @chang2009kondo; @georges1999electronic]. Unlike scanning tunnelling spectroscopy of dimers of magnetic adatoms on superconducting surfaces [@Ji2008Jun; @Ruby2018Apr; @Kamlapure2018Aug], our DQD realization comprises two spin-1/2 states which can be completely screened *on demand* by individual superconducting channels. In addition, the demonstrated gate control of a two-site quantum dot chain in superconducting proximity is a crucial step towards the implementation in our hybrid wires of the YSR analog of Doniach’s Kondo necklace [@Doniach1977] and of the Kitaev chain [@kitaev2001unpaired; @sau2012realizing; @fulga2013adaptive], complementing ongoing research of emergent manifestations of topology [@deng2016majorana]. **Acknowledgements** We thank A. Jellinggaard, M.C. Hels, and J. Ovesen for experimental assistance. 
We acknowledge financial support from the Carlsberg Foundation, the Independent Research Fund Denmark, QuantERA ’SuperTop’ (NN 127900) and the Danish National Research Foundation. P. K. acknowledges support from Microsoft and the ERC starting Grant No. 716655 under the Horizon 2020 program. R. Ž. acknowledges support from the Slovenian Research Agency (ARRS) under Grants No. P1-0044 and J1-7259.
--- abstract: 'The introduction of a strong Rashba spin orbit coupling (SOC) has been predicted to enhance the spin motive force (SMF) \[see Phys. Rev. Lett. [**108**]{}, 217202 (2012)\]. In this work, we predict further enhancement of the SMF by time modulation of the Rashba coupling $\alpha_R$, which induces an additional electric field $E^R_d={\dot \alpha_R} m_e/e\hbar({\hat z}\times {\mathbf m})$. When the modulation frequency is higher than the magnetization precessing frequency, the amplitude of this field is significantly larger than previously predicted results. Correspondingly, the spin torque on the magnetization is also effectively enhanced. We also suggest a biasing scheme to achieve rectification of SMF, [*i.e.*]{}, by application of a square wave voltage at the resonant frequency. Finally, we numerically estimate the resulting spin torque field arising from a Gaussian pulse time modulation of $\alpha_R$.' author: - Cong Son Ho - 'Mansoor B. A. Jalil' - Seng Ghee Tan title: 'Gate-control of spin-motive force and spin-torque in Rashba SOC systems' --- Introduction ============ One of the most important subjects of current study in spintronics is the manipulation of spin and magnetization. A spin current can modify the dynamics of magnetization through spin-transfer torque (STT); [@PhysRevB.54.9353; @Kiselev03; @Lee04] conversely the dynamics of magnetization can in turn modify the spin dynamics and may generate a spin current through spin motive force (SMF)[@PhysRevB.33.1572; @0022-3719-20-7-003; @PhysRevLett.98.246601; @PhysRevLett.102.086601] and spin pumping. [@PhysRevLett.88.117601; @PhysRevB.76.184434; @PhysRevB.77.014409] Despite the mutual connection between STT and SMF, the latter has been shown to be inefficient in generating the spin current due to its weak magnitude,[@PhysRevLett.102.067201; @Ohe09; @PhysRevLett.107.236602] hence the on-going research on the means of obtaining a large SMF. Recently, Kim [*et al.*]{}[@Kim12] predicted that in systems with large Rashba spin-orbit coupling,[@Mihai10; @PhysRevB.77.214429; @Pi10; @Ishizaka11; @Bahramy12] the induced SMF is more than an order of magnitude larger than the conventional SMF. In such systems, the Rashba-induced SMF (RSMF) can generate large spin current, which in turn can significantly enhance the STT. However, the requirement of the strong Rashba coupling appears to be a limitation of the prediction in the sense that it is hard to increase the RSOC strength in order to enhance the RSMF. In systems with typical values of the Rashba coupling [@PhysRevLett.91.056602], the conventional SMF and Rashba-induced SMF are almost comparable. Moreover, in contrast to the conventional SMF which can be made time-independent (dc), the Rashba-induced SMF is not rectified, [*i.e.*]{}, it oscillates with time (ac). Therefore, enhancement and modulation of the SMF over a broader range of Rashba SOC strength is still essential. In this study, we propose a method that can significantly enhance the Rashba SOC spin-motive force, in which a strong Rashba coupling is not prerequisite. The idea is based on the fact that the Rashba coupling can be modulated in time[@Datta90; @Nit97; @Grundler00; @Caviglia10; @PhysRevB.89.220409] by applying some AC gate voltage ($\omega_R$). In such systems, the dynamics of both spin and magnetization are controlled by either the Rashba amplitude and/or Rashba modulation frequency. 
While the former is a material-dependent parameter limited to its reported values, the latter can be easily modified by external means. We showed that in the presence Rashba coupling that is sinusoidally modulated, the resulting RSMF is enhanced with increasing modulation frequency. If $\omega_R>\omega_0$, with $\omega_0$ being the angular frequency of the magnetization dynamics, the RSMF can exceed that induced by a constant Rashba SOC. [@Kim12] Therefore, the generated spin current and the corresponding STT can be enhanced by the time modulation of the Rashba coupling. Moreover, the RSMF can also be rectified by applying a square-wave gate voltage, [*i.e.*]{}, the SMF is unidirectional, rendering it more useful for practical spintronics. Spin motive force ================= The conduction electron in a magnetization texture with Rashba SOC can be described by the following Hamiltonian: $$\label{eq1} H=\frac{{\mathbf p}^2}{2m_e}-J_{\textrm ex}\left({\hat \sigma}\cdot{\mathbf m}\right)+H_R$$ where $\hat\sigma$ is the vector of Pauli matrix, $m_e$ is the effective mass of electron, $J_{\textrm ex}$ is the exchange coupling, $\mathbf m=(\sin\theta \cos\phi,\sin\theta \sin\phi,\cos\theta)$ is the unit vector of the local magnetization, and $H_R=(\alpha_R/\hbar) {\hat \sigma}\cdot({\mathbf p}\times{\hat z})$ is the Rashba Hamiltonian, with $\alpha_R$ being the Rashba coupling. To model the effect of the precessing magnetization and the Rashba SOC on the electron dynamics, we adopt the conventional method,[@0022-3719-20-7-003; @Kim12] [*i.e.*]{}, by introducing an unitary transformation $U^\dagger=e^{i \theta \sigma_y/2} e^{i \phi \sigma_z/2}$. With this transformation, the above Hamiltonian becomes $$\label{eq2} H'=\frac{({\mathbf p}+e{\mathbf A}' )^2}{2m_e }-J_{\textrm ex} \sigma_z-eA_0',$$ in which ${\mathbf A'}$ is the vector potential and it is given by: $$\label{eq3} {\mathbf A'}=-\frac{i\hbar}{e} U^\dagger\nabla U+U^\dagger {\mathbf A}^R U,$$ with ${\mathbf A^R}=-(\alpha_R m_e)/e\hbar ({\hat \sigma}\times\hat z )$ is the non-Abelian gauge field associated with Rashba SOC, which is explicitly time-dependent, and $$\label{eq4} A_0'=i\hbar/e U^\dagger \partial_t U.$$ These gauge fields will induce an effective electric field as $\mathbf E=-\partial_t{\mathbf A'}-\nabla A_0'$. We assume that the exchange coupling is very strong so that the spin tends to align along $\mathbf m$ in the lab frame (along ${\mathbf m}'=(0,0,1)$ in the rotated frame). With this, the effective electric field reads as: $$\label{eq5} {\mathbf E}^{\uparrow(\downarrow)}=\pm\left({{\mathbf E}^m+\mathbf E}^{R,0}+{\mathbf E}^{R,1}\right),$$ where the $\pm$ is corresponding to the majority ($\uparrow$) and minority ($\downarrow$) electrons, respectively. In the above, the first term $$\label{eq6} {E}_i^{m}={\frac{\hbar}{2e}} (\partial_t{\mathbf m}\times\partial_i{\mathbf m})\cdot{\mathbf m},$$ is the conventional electric field induced by the variation of the magnetization pattern in space and time [@PhysRevLett.102.086601; @PhysRevLett.98.246601] in the absence of SOC, and $$\label{eq7} {E}_i^{R,0}= \alpha_R \frac{m_e}{e\hbar} (\partial_t {\mathbf m}\times{\hat z})_i,$$ which is scaled with the value of the Rashba coupling,[@Kim12] and the last term $$\label{eq8} {E}_i^{R,1}={\dot\alpha_R} \frac{m_e}{e\hbar} ({\mathbf m}\times{\hat z})_i,$$ is an extra term being dependent on the time variation of the Rashba coupling. 
In previous works,[@Ho12; @Ho13] we showed that in a time-dependent Rashba system, there is an effective field induced by the time-dependent gauge field, i.e., ${\mathcal E}={\dot\alpha_R} \frac{m_e}{e\hbar} ({\hat\sigma}\times{\hat z})$. For strong exchange coupling, we have $\hat\sigma=\pm {\mathbf m}$, which recovers Eq. . Remarkably, we can obtain Eq.  without the need for the unitary transformation, implying that this electric field can be generated in any magnetization pattern, which can even be static or uniform,[@PhysRevB.88.014430] and the driving force in this case is the gate-modulated Rashba SOC. Therefore, we can electrically control the generation of the total SMF via an applied gate voltage. Gate-controlled spin-motive force ================================= Consider a magnetization profile ${\mathbf m}[\theta({\mathbf{r}}),\phi(t)]$, where $\phi(t)=\omega_0 t$, and the spatial-dependence is implied by $\theta$ so that ${\vec{\nabla}}\theta\propto 1/L$, with $L$ being the characteristic length of the magnetic structure,[@Kim12] e.g., the domain wall width. This magnetization configuration can be made by assuming that the domain wall (DW) precesses periodically between the Bloch-like DW and the Néel-like DW (see Fig.\[Fig4\_2\]). From now on, we only use this magnetization profile for simplicity. In this case, the above effective electric fields are explicitly given as $$\begin{aligned} {\mathbf E}^{m}&=&-{\frac{\hbar}{2e}} \omega_0 \sin\theta {\vec{\nabla}}\theta,\label{eq8a}\\ {\mathbf E}^{R,0}&=&- \frac{m_e}{e\hbar}\alpha_R \omega_0 \sin{\theta} {\mathbf n},\label{eq8b}\\ {\mathbf E}^{R,1}&=&- \frac{m_e}{e\hbar}{\dot \alpha_R} \sin{\theta} ({\mathbf n}\times{\hat{z}}),\label{eq8c}\end{aligned}$$ ![The oscillation of the domain wall (DW): the DW is assumed to precess periodically between Bloch-like DW and Néel-like DW at frequency $\omega_0$.[]{data-label="Fig4_2"}](DW4){width="40.00000%"} where ${\mathbf {n}}=(\cos \omega_0 t,\sin\omega_0 t,0)$. It is obvious that the effective field in Eq.  is independent of time. Meanwhile, for a constant Rashba coupling, the field in Eq.  oscillates with time. However, since these electric field components depend on both the magnetization profile and the Rashba dynamics, we may rectify and enhance the spin motive force by introducing a suitable AC gate modulation of the Rashba coupling. In the following, we will show that by appropriately modulating the Rashba coupling, the electric field induced by RSOC can be significantly enhanced and rectified. Enhancement of SMF ------------------ Consider a sinusoidally modulated Rashba coupling $\alpha_R(t)=\alpha_0 \cos(\omega_R t)$. In this case, the orders of magnitude of the above electric field components are evaluated to be $E^m=\frac{\hbar\omega_0}{eL}\sin\theta, E^{R,0}= \alpha_0 \frac{m_e}{e\hbar} \omega_0 \sin\theta, E^{R,1}= \alpha_0 \frac{m_e}{e\hbar} \omega_R \sin\theta$. The ratio $E^{R,0}/E^m=\alpha_0 m_eL/\hbar^2$ shows that one can enhance the SMF in the presence of a strong Rashba coupling $\alpha_0>\hbar^2/m_eL$.[@Kim12] On the other hand, the ratio $E^{R,1}/E^{R,0}=\omega_R/\omega_0$ suggests that the additional SMF component due to the time-modulation of the Rashba coupling can have a comparable or larger magnitude if the Rashba modulation $\omega_R\ge\omega_0$. 
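These ratios can be made concrete with a few lines of arithmetic. The sketch below is only an order-of-magnitude check; it uses the free-electron mass and the representative parameter values quoted for GaMnAs in the next paragraph (both assumptions of this illustration rather than inputs to the derivation).

```python
# Order-of-magnitude check of the three SMF scales (rough sketch; free-electron
# mass and representative parameter values assumed, matching the GaMnAs numbers
# quoted in the next paragraph).
import math

hbar   = 1.0546e-34          # J s
e      = 1.6022e-19          # C
m_e    = 9.109e-31           # kg (free-electron mass)
alpha0 = 1e-11 * e           # Rashba amplitude: 1e-11 eV m in SI units
L      = 20e-9               # characteristic magnetic length (m)
omega0 = 2*math.pi*100e6     # magnetization precession frequency (rad/s)
omegaR = 2*math.pi*1e9       # Rashba modulation frequency (rad/s)

V_m  = hbar*omega0/e                    # ~ E^m * L
V_R0 = alpha0*m_e*omega0*L/(e*hbar)     # ~ E^{R,0} * L
V_R1 = alpha0*m_e*omegaR*L/(e*hbar)     # ~ E^{R,1} * L
print(V_m, V_R0, V_R1)                  # ~0.4 uV, ~1 uV, ~10 uV
print(V_R0/V_m)                         # = alpha0*m_e*L/hbar^2
print(V_R1/V_R0)                        # = omegaR/omega0 = 10
```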
Thus, with time-modulation of the Rashba coupling, i) one obtains an additional component to enhance the SMF; ii) this enhancement is no longer just restricted by the requirement of strong Rashba coupling; and iii) the frequency of modulation $\omega_R$ provides another external parameter to control the size of the SMF. For example, in systems with Rashba coupling such as GaMnAs having $\alpha_R=10^{-11}~{\mathrm{eV m}}$,[@PhysRevLett.91.056602; @PhysRevB.78.212405] and assuming $L=20$ nm, $\omega_0=2\pi\times 100~\mathrm{MHz}$, the conventional SMF signal comes up to $V(E^m)\approx 0.4~\mu\mathrm V$, meanwhile the RSOC SMF signal is $V(E^{R,0})\approx 1~\mu\mathrm V$, which just has the same order of magnitude as the former. However, upon modulating the Rashba coupling at a frequency of $\omega_R=2\pi\times $ GHz,[@Mal03; @PhysRevB.78.245312] the new SMF signal can be further increased up to $V(E^{R,1})\approx 10~\mu\mathrm V$. We shall now discuss the rectification of the RSMF signal which would render it more useful for practical spintronic applications. Rectification of SMF -------------------- ![The rectification of the Rashba spin-orbit SMF by applying a square wave gate voltage at resonant frequency $\omega_R=\omega_0$. Here, $V_0=\frac{m_eL}{e\hbar}\alpha_0\omega_0$ is the amplitude of the SMF, where $L$ is the domain wall width. \[Fig4\_1\]](Rec){width="40.00000%"} In the previous section, we showed that the introduction of Rashba SOC can induce a large SMF. However, the Rashba-induced SMF generally oscillates with time, and so averages to zero over time. Here, we will show that, by applying a square-wave gate voltage, one can rectify one of the Rashba-induced SMF components. Let us suppose the Rashba coupling is modulated as $\alpha_R(t)=\alpha_0 f(t)$, with $f(t)=\mathrm{sgn}(\sin\omega_R t)$ representing a square wave function \[see Fig.\[Fig4\_1\]\]. In this case, $E^{R,1}=0$ since $\dot\alpha_R=0$. On the other hand, by choosing $\omega_R=\omega_0$, the sign of $\alpha$ vary in step with the magnetization orientation, yielding $E^{R,0}$ with a constant sign. In this case, $E^{R,0}$ in Eq. becomes $$\label{eq9a} {E}_{y}^{R,0}=\mp \frac{m_e}{e\hbar}\alpha_0 |\sin\omega_R t|.$$ The above electric field and the corresponding SMF is unidirectional, although its value changes with time (pulsating) \[see Fig. \[Fig4\_1\]\]. Similarly, we can rectify the SMF component in the $x$-direction by applying a $\pi/2$ phase shift to the gate voltage, [*i.e.*]{}, $\alpha_R(t)\rightarrow\alpha_R(t+\pi/2\omega_R)$. This rectification effect is similar to that in spin-torque diode effect,[@ST.Diode05; @ST.Diode13] where a dc voltage is generated when the frequency of the applied alternating current is resonant with the spin oscillations. Spin current-induced spin torque ================================ We have derived the effective electric fields generated by the dynamics of either the magnetization or the electron spin. These electric fields can drive a charge current and a spin current. 
In turn, the spin current induces a torque on the magnetization through the spin-transfer mechanism [@PhysRevLett.102.086601], while the charge current contributes a field-like torque.[@Tan07; @Tan11; @PhysRevB.78.212405; @PhysRevB.79.094422; @PhysRevB.77.214429; @PhysRevB.80.094424] Generally, the spin current $J_s$ and charge current $J_e$ induced by the SMF are given by $$\begin{aligned} J_{s,i}=\frac{g\mu_B}{2e}\left(G^\uparrow E^\uparrow_i-G^\downarrow E^\downarrow_i\right)=\frac{g\mu_B G_0}{2e} E^\uparrow_i,\label{eq10}\\ J_{e,i}=\left(G^\uparrow E^\uparrow_i+G^\downarrow E^\downarrow_i\right)={P G_0} E^\uparrow_i,\label{eq10b}\end{aligned}$$ where $G^{\uparrow(\downarrow)}$ is the longitudinal electrical conductivity of majority (minority) electrons, $G_0=G^\uparrow +G^\downarrow $ is the total charge conductivity, and $P=(G^{\uparrow}-G^{\downarrow})/G_0$ is the spin polarization. Notice that the spin polarization $P$ appears in the charge current instead of the spin current. This can be explained by the fact that electrons with opposite spins (parallel or anti-parallel to $\mathbf m$) are driven by opposite electric fields \[see Eq.\], thus the spin current is fully polarized regardless of the value of $P$. Explicitly, the spin currents driven by the spin motive force components given in Eqs.- are $$\begin{aligned} J^{m}_{s,i}=\frac{g\hbar{\mu }_B G_0}{4e^2}\left[\left({\partial }_t{\mathbf m}\times {\partial }_i{\mathbf m}\right)\cdot{\mathbf m} \right],\label{eq10a}\\ J^{R,0}_{s,i}=\frac{g{\mu }_B G_0 m_e}{2e^2\hbar }\alpha_R{\left({\partial }_t{\mathbf m}\times\hat{z}\right)}_i,\label{eq10b}\\ J^{R,1}_{s,i}=\frac{g{\mu }_B G_0 m_e}{2e^2\hbar }{{{\partial }_t\alpha }_R\left({\mathbf m}\times\hat{z}\right)}_i.\label{eq10c}\end{aligned}$$ The corresponding charge currents can be obtained by using the relation $J_e= P \frac{2e}{g\mu_B} J_s$. In the absence of spin relaxation, the spin current $J_s$ induces a torque on the magnetization expressed as[@PhysRevLett.102.086601] $$\label{eq11} {\mathbf T}({\mathbf J}_s)=\frac{g\mu_B}{2M_s}\partial_t{\mathbf n}_s+\frac1{M_s}\sum_i\partial_i (J_{s,i}{\mathbf m}),$$ where ${\mathbf n}_s$ is the spin density of the conduction electrons and $M_s$ is the saturation magnetization. In the above, the total spin current can include externally supplied sources and SMF-induced sources. In our case, to examine the feedback effect by SMF, we assume that there is no externally supplied spin current, [*i.e.*]{}, $\mathbf J_s$ is induced internally by $\mathbf m$. Recall that in the adiabatic limit the spin is aligned along the magnetization, [*i.e.*]{}, ${\hat{\sigma}}={\mathbf m}$; hence, the spin density ${\mathbf n}_s$ is also parallel to ${\mathbf m}$. Therefore, the torque due to the first term in Eq. , $\partial_t{\mathbf n}_s\propto ({\mathbf m}\times {\mathbf n}_s)$, can be neglected. The divergence of the spin current in Eq.  can be decomposed as $\partial_i (J_{s,i}{\mathbf m})\propto(\partial_i J_{s,i}) {\mathbf m}+ J_{s,i} \partial_i{\mathbf m}$, where the first term gives rise to a parallel torque, which is not of interest here as it does not contribute to magnetization switching. Therefore, the torque on the magnetization in Eq. 
is simply given as $$\label{eq11a1} {\mathbf T}^{ad}({\mathbf J}_s)=\frac1{M_s}({\mathbf J}_{s}\cdot\nabla){\mathbf m}.$$ Similarly, the charge current induces a field-like spin torque in the presence of Rashba SOC, which does not involve the spin transfer mechanism and just depends on the intrinsic band structure.[@PhysRevB.78.212405; @PhysRevB.79.094422; @PhysRevB.77.214429; @PhysRevB.80.094424; @Tan07; @Tan11] Explicitly, the field-like torque is calculated as $$\label{eq11a2} {\mathbf T}^{field}=\frac{e^2}{M_s\mu_B^2m_e}\alpha_R {\mathbf m}\times({\hat z}\times {\mathbf J}_s).$$ In the presence of the spin torque, the LLG equation is modified accordingly to become: $$\label{eq11b} \frac{\partial {\mathbf m}}{\partial t}=-\gamma {\mathbf m}\times{\mathbf H}_{eff}+\gamma_G {\mathbf m}\times\frac{\partial {\mathbf m}}{\partial t}+ {\mathbf T}({\mathbf J}_s),$$ where the second term indicates the damping torque that includes all contributions other than the SMF contribution, with $\gamma_G$ being the intrinsic Gilbert damping constant. Spin torque in a static Rashba SOC ---------------------------------- As a pedagogical example, we first consider the spin torque due to the spin current generated by the conventional spin motive force given by Eq. . Substituting the spin current expression of into Eqs.  or , the spin torque reads as $$\label{eq12b} {\mathbf T}\left({\mathbf J}_s^m\right)=-\eta\sum_i{\left[\left({\partial }_t{\mathbf m}\times {\partial }_i{\mathbf m}\right)\cdot{\mathbf m}\rm \right]\partial_i{\mathbf m}},$$ with $\eta=\frac{\hbar g{\mu }_B{G_0}}{4e^2M_s}$. To express the above torque in the conventional form, [*i.e.*]{}, ${\mathbf T}\left({{\mathbf J}}_s\right)\propto {\mathbf m}\times{\mathbf H}$, we can use the identity ${\partial }_i{\mathbf m}{\mathbf =}{\mathbf -}{\mathbf m}{\mathbf \times }\left({\mathbf m}\times {\partial }_i{\mathbf m}\right)$. With this, Eq.  becomes $$\begin{aligned} {\mathbf T}\left({{\mathbf J}}^m_s\right)={\mathbf m}\times D\partial_t{\mathbf m}\label{eq12d},\end{aligned}$$ in which $D$ is the damping tensor given by $$\label{eq12e} D_{uv}=\eta\sum_i X_{iu}X_{iv},$$ with $X_{iu}=\left({\mathbf m}\times {\partial }_i{\mathbf m}\right)_u$. Eqs. and are consistent with the results of Ref. \[\] which considered the SMF in the absence of the spin-orbit coupling. Similarly, in the presence of the Rashba SOC, the STT is modified to be[@Kim12] $$\begin{aligned} {\mathbf T}\left({{\mathbf J}}^m_s+{{\mathbf J}}^{R,0}_s\right)={\mathbf m}\times {\tilde D}\partial_t{\mathbf m}\label{eq13a},\end{aligned}$$ $$\label{eq13} {\tilde D}_{uv}=\eta\sum_i (X_{iu}-A_R\epsilon_{3iu})(X_{iv}-A_R\epsilon_{3iv}),$$ where $A_R=2\alpha_Rm_e/\hbar^2$. Spin torque in a time-dependent Rashba SOC ------------------------------------------ As the Rashba coupling becomes time-dependent, there is an additional SMF-induced spin current given in Eq. . By substituting Eq.  into Eq.  and Eq. , the total STT due to the dynamics of Rashba coupling is directly calculated as $$\label{eq14} {{\mathbf T}}\left({{\mathbf J}}^{R,1}_s\right)=-{\mathbf m}\times \eta\partial_t A_R\sum_{iv}\left (X_{iu}-A_R \epsilon_{3iu}\right)\epsilon_{3iv}m_v.$$ To examine the nature of the torque in the presence of time-dependent Rashba SOC, we consider a uniform and static magnetization pattern. In this case, the torque on the magnetization only comes from the spin current ${\mathbf J}_s^{R1}$. Thus, the total torque is ${\mathbf{T}}=-\eta A_R\partial_t A_R ({\mathbf{m}}\times\hat z)m_z$. 
If we define a gate-controlled effective magnetic field via ${\mathbf T}=-\gamma\mu_0 ({\mathbf m}\times {\mathbf H}_g)$, we have $$\label{eq15} {\mathbf H}_g=\frac{\eta}{\gamma\mu_0} A_R\partial_t A_R m_z {\hat z}.$$ This field is directed along the $z$-direction and is generally time-dependent due to the varying $\alpha_R$. To see the effect of this field on the magnetization, we assume that initially the magnetization is in the $+z$-direction, [*i.e.*]{}, $m_z=+1$. If the Rashba SOC is modulated by a Gaussian pulse centered at $t=t_0$, such that $\alpha_R=\alpha_0 {\exp{(-\frac{(t-t_0)^2}{2\tau_R^2})}}$, where $\tau_R$ is the pulse width, the field in Eq.  becomes $H_g=-H_0 e^{-(t - t_0)^2/\tau_R^2} (t - t_0)/\tau_R$, where $H_0=\frac{4\eta m_e^2\alpha_0^2}{\hbar^4\gamma\mu_0\tau_R}$. In Fig. \[FigHg\], the magnetic field is illustrated during the action of the pulse. For $t<t_0$, the magnetic field is parallel to the magnetization ($+\hat z$), thus yielding no effect. However, for $t>t_0$, the field reverses to the anti-parallel direction ($-\hat z$), and it can switch the magnetization (note that in practice there is a slight misalignment of $\mathbf m$ from the $z$-direction, which provides the initial torque for switching). The anti-parallel field reaches its maximum value ${\mathrm {max}}|H_g|=H_0/\sqrt{2e}$ at $\Delta t=t-t_0=\tau_R/\sqrt{2}$. As an example, for a system with $\eta=0.2 ~{\mathrm {nm^2}}$, $\alpha_0=10^{-10}~ {\mathrm {eV\,m}}$, and $\tau_R=0.1~ {\mathrm {ns}}$, the switching field is estimated to be $H_g=5.6\times 10^4~{\mathrm {A/m}}$. For comparison, the switching fields in Fe, Ni, and Co are $4.5\times 10^4~ {\mathrm {A/m}}$, $1.85\times 10^4 ~{\mathrm {A/m}}$, and $59.1\times 10^4~ {\mathrm {A/m}}$, respectively.[@Tan11] Therefore, the modulation of the Rashba SOC by gate voltage pulses constitutes an all-electrical method for magnetization switching, a topic which has recently attracted considerable research interest.[@PhysRevB.78.180401; @Vol.pulse11; @Vol.pulse14] ![Effective magnetic field induced by a Gaussian gate voltage with the pulse width $\tau_R$. At time $t<t_0$, the field is parallel to the magnetization, yielding no effect. At time $t>t_0$, the field is anti-parallel to the magnetization, which may result in a switching.[]{data-label="FigHg"}](Hg){width="45.00000%"} Summary ======= In summary, we proposed the enhancement and rectification of the spin-motive force in magnetic systems with Rashba SOC by applying an AC gate voltage to modulate the Rashba coupling strength. The amplitude of the SMF increases as the frequency of the sinusoidal gate voltage increases, and would exceed the conventional (static) RSOC-induced SMF if the modulation frequency is tuned to be larger than the precession frequency of the magnetization. Moreover, the AC spin-motive force can be rectified by applying a square-wave gate voltage at the resonant frequency. We also calculated the spin current induced by the SMF and the associated spin torque. We showed that the modulation of the Rashba coupling by gate voltages can generate a spin-torque on a uniform and static magnetization, which can be utilized as an all-electrical method for magnetization switching. We thank the National Research Foundation of Singapore under the Competitive Research Program “Non-Volatile Magnetic Logic And Memory Integrated Circuit Devices” NRF-CRP9-2011-01 and “Next Generation Spin Torque Memories” NRF-CRP12-2013-01 for financial support.
--- abstract: 'A system exhibiting multiple simultaneously broken symmetries offers the opportunity to influence physical phenomena such as tunneling currents by means of external control parameters. In this paper, we consider the broken $SU(2)$ (internal spin) symmetry of ferromagnetic systems coexisting with *i)* the broken $U(1)$ symmetry of superconductors and *ii)* the broken spatial inversion symmetry induced by a Rashba term in a spin-orbit coupling Hamiltonian. In order to study the effect of these broken symmetries, we consider tunneling currents that arise in two different systems: tunneling junctions consisting of non-unitary spin-triplet ferromagnetic superconductors and junctions consisting of ferromagnets with spin-orbit coupling. In the former case, we consider different pairing symmetries in a model where ferromagnetism and superconductivity coexist uniformly. An interplay between the relative magnetization orientation on each side of the junction and the superconducting phase difference is found, similarly to that found in earlier studies on spin-singlet superconductivity coexisting with spiral magnetism. This interplay gives rise to persistent spin- and charge-currents in the absence of an electrostatic voltage that can be controlled by adjusting the relative magnetization orientation on each side of the junction. In the second system, we study transport of spin in a system consisting of two ferromagnets with spin-orbit coupling separated by an insulating tunneling junction. A persistent spin-current across the junction is found, which can be controlled in a well-defined manner by external magnetic and electric fields. The behavior of this spin-current for important geometries and limits is studied.' author: - 'J. Linder' - 'M. S. Gr[ø]{}nsleth' - 'A. Sudb[ø]{}' date: Received title: Tunneling currents in ferromagnetic systems with multiple broken symmetries --- Introduction ============ Due to the increasing interest in the field of spintronics in recent years [@zutic2004], the idea of utilizing the spin degree of freedom in electronic devices has triggered an extensive response in many scientific communities. The spin-Hall effect is arguably the research area which has received the most focus in this context, with substantial effort being put into theoretical considerations [@dyakonov1971] as well as experimental observations [@wunderlich2005]. Since a main goal in spintronics is to make use of the spin degree of freedom rather than the electrical charge, investigations of mechanisms that offer ways of controlling spin-currents are of great interest. The study of systems with multiple broken symmetries is highly relevant in this context, since such systems promise rich physics with the opportunity to learn whether tunneling currents can be influenced by means of external control parameters such as electric and/or magnetic fields. Here, we will focus on two specific systems: ferromagnetism coexisting with superconductivity, which we shall refer to as ferromagnetic superconductors (FMSC), and systems where ferromagnetism and spin-orbit coupling are present (FMSO). In terms of broken symmetries, we will then study the broken $SU(2)$ (internal spin) symmetry of ferromagnetic systems coexisting with the broken $U(1)$ symmetry of superconductors and also consider ferromagnets with broken inversion (spatial) symmetry induced by a Rashba term in a spin-orbit coupling Hamiltonian.
The coexistence of ferromagnetism (FM) and superconductivity (SC) has a short history in experimental physics [@saxena2000; @aoki2001; @tallon1999], although a theoretical proposition of this phenomenon was offered as early as 1957 by Ginzburg [@ginzburg1957]. Spin-singlet superconductivity originating from BCS theory seems to be ruled out as a plausible pairing mechanism for a ferromagnetic superconductor [@shen2003], at least with regard to uniform coexistence of the FM and SC order parameters $\zeta$ and $\Delta$, respectively. It could be achieved for a superconductor in a so-called Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state [@fflo]. However, it seems likely that the coexistence of FM and SC calls for [@walker2002; @machida2001] $p$-wave spin-triplet Cooper pairs which have a non-zero magnetic moment. This type of pairing has been observed in superfluid $^3$He, and is perfectly compatible with FM order. Spin-triplet superconductivity has moreover been experimentally verified [@ishida1998; @nelson2004] in Sr$_2$RuO$_4$, and the study of such a pairing in a FMSC could unveil interesting effects with respect to quantum transport. The concept of simultaneously broken $U(1)$ and $SU(2)$ symmetries is of great interest from a fundamental physics point of view, and could be suggestive of a range of novel applications. This topic has been the subject of theoretical research in [*e.g.*]{} Refs. . In this paper, we follow up Ref.  with a more comprehensive study of the tunneling currents between two $p$-wave FMSC separated by an insulating junction; RuSr$_2$GdCu$_2$O$_8$, UGe$_2$, and URhGe have been proposed as candidates for such unconventional superconductors [@tallon1999; @saxena2000; @aoki2001]. In our model, we assume uniform coexistence of the FM and SC order parameters and that superconductivity arises from the same electrons that are responsible for the magnetism. As argued in Ref. , this can be understood most naturally as a spin-triplet rather than spin-singlet pairing phenomenon. Furthermore, it seems that SC in the metallic compounds mentioned above always coexists with the FM order and is enhanced by it [@shopova2005]; the experiments conducted on the compounds UGe$_2$ and URhGe do not give any evidence for the existence of a standard normal-to-superconducting phase transition in a zero external magnetic field, but instead indicate a phase corresponding to a mixed state of FM and SC. We provide detailed calculations for single-particle and Josephson (two-particle) tunneling between two non-unitary equal-spin pairing (ESP) FMSC. We examine both the charge- and spin-sector in detail within linear response theory using the Kubo formula. We find that the supercurrent of spin and charge may be controlled by adjusting the misorientation of the exchange fields on both sides of the junction. Such an effect was first discovered by Kulic and Kulic [@kulic2005], who derived an expression for the Josephson current over a junction separating two BCS superconductors with spiral magnetic order. It was found that the supercurrent could be controlled by adjusting the relative orientation of the exchange field on both sides of the junction, a finding that quite remarkably suggested a way of tuning a supercurrent in a well-defined manner from [*e.g.* ]{}a 0- to $\pi$-junction.
Later investigations made by Eremin, Nogueira, and Tarento [@eremin2006] considered a system similar to that of Kulic and Kulic [@kulic2005], namely two Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconductors [@fflo] coexisting with helimagnetic order. Recently, the same opportunity was found to exist in a FMSC/I/FMSC junction as shown by Gr[ø]{}nsleth [*et al.* ]{}[@gronsleth]. In a system where both ferromagnetism and spin-orbit coupling are present, it is clear that both properties crucially influence the behavior of the spins in that system. For instance, the presence of spin-orbit coupling is highly important when considering ferromagnetic semiconductors [@dietl2002; @matsukura2002]. Such materials have been proposed as devices for obtaining controllable spin injection and manipulating single electron spins by means of external electrical fields, making them a central topic of semiconductor spintronics [@engel2006]. In ferromagnetic metals, spin-orbit coupling is ordinarily significantly smaller than in semiconductors due to the band structure. However, the presence of a spin-orbit coupling in ferromagnets could lead to new effects in terms of quantum transport. Studies of tunneling between ferromagnets have uncovered interesting physical effects [@nogueira2004b; @lee2003; @slonczewski1989]. Nogueira [*et al.* ]{} predicted [@nogueira2004b] that a dissipationless spin-current should be established across the junction of two Heisenberg ferromagnets, and that the spin-current was maximal in the special case of tunneling between planar ferromagnets. Also, there have been investigations of the impact of spin-orbit coupling on tunneling currents in various contexts, [*e.g.* ]{}for noncentrosymmetric superconductors [@yokoyama2005], and two-dimensional electron gases coupled to ferromagnets [@wang2005]. Broken time reversal- and inversion-symmetry are interesting properties of a system with regard to quantum transport of spin and charge, and the exploitation of such asymmetries has given rise to several devices in recent years. For instance, the broken $SU(2)$ symmetry exhibited by ferromagnets has a broad range of possible applications. This has led to spin-current-induced magnetization switching [@kiselev2003], and suggestions have been made for more exotic devices such as spin-torque transistors [@bauer2003] and spin-batteries [@brataas2002]. It has also led to investigations into such phenomena as the spin-Hall effect in paramagnetic metals [@hirsch1999], spin-pumping from ferromagnets into metals, enhanced damping of spins when spins are pumped from one ferromagnet to another through a metallic sample [@tserkovnyak2002], and the mentioned spin Josephson effects in ferromagnet/ferromagnet tunneling junctions [@nogueira2004b]. Here, we study the spin-current that arises over a tunneling junction separating two ferromagnetic metals with substantial spin-orbit coupling. It is found that the total current consists of three terms: one due to a twist in magnetization across the junction (in agreement with the result of Ref. ), one term originating from the spin-orbit interactions in the system, and finally an interesting mixed term that stems from an interplay between the ferromagnetism and spin-orbit coupling. After deriving the expression for the spin-current between Heisenberg ferromagnets with substantial spin-orbit coupling, we consider important tunneling geometries and physical limits of our generally valid results.
Finally, we make suggestions concerning the detection of the predicted spin-current. Our results indicate how spin transport between systems exhibiting both magnetism and spin-orbit coupling can be controlled by external fields, and should therefore be of considerable interest in terms of spintronics. This paper is organized as follows. In Sec. \[sec:fmsc\], we consider transport between spin-triplet ferromagnetic superconductors, while a study of transport between ferromagnets with spin-orbit coupling is given in Sec. \[sec:fmsoc\]. A discussion of our results is provided in Sec. \[sec:discuss\], with emphasis on how the novel effects predicted in this paper could be tested in an experimental setup. Finally, we give concluding remarks in Sec. \[sec:summary\]. Ferromagnetic superconductors {#sec:fmsc} ============================= Coexistence of ferromagnetism and superconductivity {#sec:uniform} --------------------------------------------------- An important issue to address concerning FMSC is whether the SC and FM order parameters coexist uniformly or if they are phase-separated. One possibility [@tewari2004] is that a spontaneously formed vortex lattice due to the internal magnetization $\mathbf{m}$ is realized in a spin-triplet FMSC, while there have also been studies of Meissner (uniform) SC phases in spin-triplet FMSC [@shopova2005]. As argued in Ref. , a key variable determining whether or not a vortex lattice appears is the strength of the internal magnetization $\mathbf{m}$. Ref.  suggested that vortices arise if $4\pi\mathbf{m}>\mathbf{H}_{c1}$, where $\mathbf{H}_{c1}$ is the lower critical field. When considering a weak FM state coexisting with SC, a scenario which seems to be the case for URhGe, the domain structure in the absence of an external field is thus vortex-free. Current experimental data concerning URhGe are not strong enough to unambiguously settle this question, while evidence for uniform coexistence of FM and SC has been reported [@kotegawa2005] in UGe$_2$. Furthermore, a bulk Meissner state in the FMSC RuSr$_2$GdCu$_2$O$_8$ has been reported in Ref. , hence suggesting the existence of uniform FM and SC as a bulk effect. In our study, we shall consequently take the order parameters as coexisting homogeneously and use their bulk values, as justified by the arguments above. However, we emphasize that one in general should take into account the possible suppression of the SC order parameter in the vicinity of the tunneling interface due to the formation of midgap surface states [@hu1994] which occur for certain orientations of the SC gap. The pair-breaking effect of these states in unconventional superconductors has been studied in e.g. [@ambegaokar1974; @buchholtz1981; @tanuma2001], and we discuss this in more detail in Sec. \[sec:discuss\]. A sizeable formation of such states would suppress the Josephson current, although it is nonvanishing in the general case. Also, we use bulk uniform magnetic order parameters, as in [@nogueira2004b]. The latter is justified on the grounds that a ferromagnet with a planar order parameter is mathematically isomorphic to an $s$-wave superconductor, where the use of bulk values for the order parameter right up to the interface is a good approximation due to the lack of midgap surface states. It is generally believed that the same electrons that are responsible for itinerant FM also participate in the formation of Cooper pairs below the SC critical temperature [@aoki2001].
As a consequence, uniform coexistence of spin-singlet SC and FM can be discarded since $s$-wave Cooper pairs carry a total spin of zero, although spatially modulated order parameters could allow for magnetic $s$-wave superconductors [@kulic2005; @eremin2006]. However, spin-triplet Cooper pairs are in principle perfectly compatible with FM order since they can carry a net magnetic moment. To see this, consider the $\mathbf{d}_{{\mathbf{k}}}$-vector formalism[@datta1990] which is convenient when dealing with spin-triplet superconductors, regardless of whether they are magnetic or not. For a complete and rigorous treatment of the $\mathbf{d}_{{\mathbf{k}}}$-vector order parameter, see [*e.g.* ]{}Ref. . The spin dependence of triplet pairing can be represented by a 2$\times$2 matrix $$\begin{aligned} \hat{\Delta}_{{\mathbf{k}}}&= \begin{pmatrix} \Delta_{{{\mathbf{k}}}\uparrow\uparrow} & \Delta_{{{\mathbf{k}}}\uparrow\downarrow}\\ \Delta_{{{\mathbf{k}}}\downarrow\uparrow} & \Delta_{{{\mathbf{k}}}\downarrow\downarrow} \end{pmatrix} \\ \nonumber &= \begin{pmatrix} -d_x({{\mathbf{k}}}) +\i d_y({{\mathbf{k}}}) & d_z({{\mathbf{k}}})\\ d_z({{\mathbf{k}}}) & d_x({{\mathbf{k}}}) +\i d_y({{\mathbf{k}}}) \end{pmatrix} = \i\mathbf{d}_{{\mathbf{k}}}\cdot{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}\hat{\sigma}_y,\end{aligned}$$ where $\Delta_{{{\mathbf{k}}}\alpha\beta}$ represent the SC gap parameters for different triplet pairings, ${\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}= (\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z)$ where $\hat{\sigma}_i$ are the Pauli matrices, and $\mathbf{d}_{{\mathbf{k}}}= (d_x({{\mathbf{k}}}),d_y({{\mathbf{k}}}),d_z({{\mathbf{k}}}))$ is given by $$\mathbf{d}_{{\mathbf{k}}}= \Big( \frac{\Delta_{{{\mathbf{k}}}\downarrow\downarrow}-\Delta_{{{\mathbf{k}}}\uparrow\uparrow}}{2}, -\i\frac{(\Delta_{{{\mathbf{k}}}\downarrow\downarrow}+\Delta_{{{\mathbf{k}}}\uparrow\uparrow})}{2}, \Delta_{{{\mathbf{k}}}\uparrow\downarrow}\Big).$$ Note that $\mathbf{d}_{{\mathbf{k}}}$ transforms like a vector under spin rotations and that $\Delta_{{{\mathbf{k}}}\uparrow\downarrow} = \Delta_{{{\mathbf{k}}}\downarrow\uparrow}$ for triplet pairing since it is of no significance *which* electron in the Cooper pair has spin up or down. This is because the spin part of the two-particle wavefunction is symmetric under exchange of particles, as opposed to spin-singlet SC, where the gap changes sign when the spin indices are exchanged. Spin-triplet SC states are classified as unitary if $\i\mathbf{d}_{{\mathbf{k}}}\times\mathbf{d}_{{\mathbf{k}}}^*=0$ and non-unitary if this equality does not hold. Since the average spin of a $\mathbf{d}_{{\mathbf{k}}}$-state is given by [@leggett1975] $$\label{eq:Cooperpairspin} \langle\mathbf{S}_{{\mathbf{k}}}\rangle = \i\mathbf{d}_{{\mathbf{k}}}\times\mathbf{d}^*_{{\mathbf{k}}},$$ it is clear that we must have a non-unitary $\mathbf{d}_{{\mathbf{k}}}$ in a model where FM and SC coexist uniformly. Indeed, there is strong reason to believe that the correct pairing symmetries in the discovered FMSC constitute non-unitary states [@hardy2005; @samokhin2002; @machida2001]. As a consequence, one can rule out for instance a state where only $\Delta_{{{\mathbf{k}}}\uparrow\downarrow}\neq0$ since it would imply $\langle\mathbf{S}_{{\mathbf{k}}}\rangle=0$ according to Eq. (\[eq:Cooperpairspin\]).
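As a small illustration of Eq. (\[eq:Cooperpairspin\]), the sketch below (not from the original work; the gap values are arbitrary) builds $\mathbf{d}_{{\mathbf{k}}}$ from a chosen set of gap components and shows that a state with only opposite-spin pairing carries zero Cooper-pair spin, whereas an equal-spin (A1-like) state carries a finite spin $\tfrac{1}{2}|\Delta_{{{\mathbf{k}}}\uparrow\uparrow}|^2$ along $\hat{\mathbf{z}}$.

```python
# Illustrative check (arbitrary gap values) that <S_k> = i d_k x d_k^*
# distinguishes unitary from non-unitary triplet states.
import numpy as np

def d_vector(D_uu, D_ud, D_dd):
    """d_k built from the gap components (Delta_uu, Delta_ud, Delta_dd)."""
    return np.array([(D_dd - D_uu) / 2, -1j * (D_dd + D_uu) / 2, D_ud])

def pair_spin(d):
    """Average Cooper-pair spin <S_k> = i d x d^* (real-valued)."""
    return np.real(1j * np.cross(d, np.conj(d)))

# Only opposite-spin pairing: unitary state, zero Cooper-pair spin.
print(pair_spin(d_vector(0.0, 1.0, 0.0)))   # -> approximately [0, 0, 0]

# Equal-spin (A1-like) state: non-unitary, spin (1/2)|Delta_uu|^2 along z.
print(pair_spin(d_vector(0.8, 0.0, 0.0)))   # -> approximately [0, 0, 0.32]
```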
In the most general case where all SC gaps are included, $\Delta_{\uparrow\downarrow}$ would be suppressed in the presence of a Zeeman-splitting between the $\uparrow, \downarrow$ conduction bands [@aoki2001]; see Fig. \[fig:split\]. However, such a splitting between energy-bands need not be present and one could in theory then consider a $\mathbf{d}$-vector where $$\label{eq:condxy} |\Delta_{{{\mathbf{k}}}\uparrow\uparrow}|=|\Delta_{{{\mathbf{k}}}\downarrow\downarrow}|\neq0,\;\Delta_{{{\mathbf{k}}}\uparrow\downarrow}\neq 0$$ such that $\langle\mathbf{S}_{{\mathbf{k}}}\rangle$ lies in the local $xy$-plane. This scenario would be equivalent to an A2-phase, as is seen by performing a spin rotation of the gap parameters onto a quantization axis lying in the $xy$-plane. Denoting up- and down-spins with respect to the new quantization axis by $+$ and $-$, respectively, the transformation yields $$\begin{aligned} \label{eq:rotmat} \begin{pmatrix} \Delta_{{{\mathbf{k}}}\uparrow\uparrow}\\ \Delta_{{{\mathbf{k}}}\uparrow\downarrow}\\ \Delta_{{{\mathbf{k}}}\downarrow\downarrow} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} 1 & 2{\mathrm{e}^{\i\phi}} & {\mathrm{e}^{2\i\phi}} \\ -{\mathrm{e}^{-\i\phi}} & 0 & {\mathrm{e}^{\i\phi}} \\ {\mathrm{e}^{-2\i\phi}} & -2{\mathrm{e}^{-\i\phi}} & 1 \end{pmatrix} \begin{pmatrix} \widetilde{\Delta}_{{{\mathbf{k}}}++}\\ \widetilde{\Delta}_{{{\mathbf{k}}}+-}\\ \widetilde{\Delta}_{{{\mathbf{k}}}--} \end{pmatrix}, \end{aligned}$$ where $\phi$ is the azimuthal angle as shown in Fig. \[fig:gap\]. When introducing the conditions in Eq. (\[eq:condxy\]), it is readily seen that $\widetilde{\Delta}_{{{\mathbf{k}}}+-}=0$ while $|\widetilde{\Delta}_{{{\mathbf{k}}}++}|\neq|\widetilde{\Delta}_{{{\mathbf{k}}}--}|\neq0$, thus corresponding to an A2-phase. *Consequently, the entire span of physically possible pairing symmetries in a FMSC can be reduced to the equivalence of an A1- or A2-phase in $^3$He by a change of spin-basis.* The definitions of A-, A1-, and A2-phases in $^3$He are as follows: an A-phase corresponds to a pairing symmetry such that $|\Delta_{{{\mathbf{k}}}\uparrow\uparrow}|=|\Delta_{{{\mathbf{k}}}\downarrow\downarrow}|\neq 0$, an A1-phase has only one gap $\Delta_{{{\mathbf{k}}}\sigma\sigma}\neq 0$ while $\Delta_{{{\mathbf{k}}},-\sigma,-\sigma}=0$, and an A2-phase satisfies $|\Delta_{{{\mathbf{k}}}\uparrow\uparrow}|\neq|\Delta_{{{\mathbf{k}}}\downarrow\downarrow}|\neq 0$. In this case, $\Delta_{{{\mathbf{k}}}\alpha\beta}$ represents the superfluid gap for the fermionic $^3$He-atoms, and $\Delta_{{{\mathbf{k}}}\uparrow\downarrow}=0$ for all $A_i$-phases. The resulting spin of the Cooper pair is then in general given by $$\label{eq:spinA2} \langle\mathbf{S}_{{\mathbf{k}}}\rangle = (1/2)[|\Delta_{{{\mathbf{k}}}\uparrow\uparrow}|^2 - |\Delta_{{{\mathbf{k}}}\downarrow\downarrow}|^2]\hat{\mathbf{z}}.$$ In the following, we shall accordingly consider tunneling between non-unitary ESP FMSC in an A1- or A2-phase. Moreover, we consider thin film FMSC, ensuring that no accumulation of charge at the surface will take place due to an orbital effect. Our system can be thought of as having arisen by first cooling a sample below the Curie temperature $T_\text{M}$ such that FM order is introduced. Upon further cooling below the critical temperature $T_\text{c}$, the same electrons that give rise to FM condense into Cooper pairs with a net magnetic moment parallel to the original direction of magnetization. Our model is shown in Fig. \[fig:junction\].
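Before moving on, the change of spin basis above can be checked explicitly. The following sketch (not part of the original work; the angle and gap values are arbitrary) applies the rotation matrix of Eq. (\[eq:rotmat\]) to an A2-phase defined in the rotated $(+,-)$ basis and verifies that the resulting gaps in the original basis obey the conditions of Eq. (\[eq:condxy\]).

```python
# Quick numerical check (illustrative values only): the spin rotation of
# Eq. (eq:rotmat) maps an A2-phase in the (+,-) basis onto gaps obeying
# Eq. (eq:condxy), i.e. |Delta_uu| = |Delta_dd| != 0 and Delta_ud != 0.
import numpy as np

phi = 0.7                                    # arbitrary azimuthal angle
R = 0.5 * np.array([[1, 2 * np.exp(1j * phi), np.exp(2j * phi)],
                    [-np.exp(-1j * phi), 0, np.exp(1j * phi)],
                    [np.exp(-2j * phi), -2 * np.exp(-1j * phi), 1]])

# A2-phase in the rotated basis: two unequal equal-spin gaps, no (+-) component.
rotated_gaps = np.array([0.9 * np.exp(0.3j), 0.0, 0.4 * np.exp(-1.1j)])
D_uu, D_ud, D_dd = R @ rotated_gaps

assert np.isclose(abs(D_uu), abs(D_dd)) and abs(D_ud) > 0
```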
The Hamiltonian {#sec:hamiltonian} --------------- The system consists of two FMSC separated by an insulating layer such that the total Hamiltonian can be written as [@cohen1962] $H = H_\text{L} + H_\text{R} + H_\text{T},$ where L and R represent the individual FMSC on each side of the tunneling junction, and $H_\text{T}$ describes tunneling of particles through the insulating layer separating the two pieces of bulk material. Using mean-field theory, one finds that the individual FMSC are described by a Hamiltonian similar to the one used in Ref. , $$\begin{aligned} \label{eq:1} H_\text{FMSC} &= H_0 + \sum_{{{\mathbf{k}}}}\psi_{{{\mathbf{k}}}}^\dag \hat{\cal{A}}_{{{\mathbf{k}}}}\psi_{{{\mathbf{k}}}}, \notag\\ H_0 &= JN\eta(0)\mathbf{m}^2 + \frac{1}{2}\sum_{{{\mathbf{k}}}\sigma}\varepsilon_{{{\mathbf{k}}}\sigma} + \sum_{{{\mathbf{k}}}\alpha\beta} \Delta_{{{\mathbf{k}}}\alpha\beta}^\dag b_{{{\mathbf{k}}}\alpha\beta}.\end{aligned}$$ Here, ${{\mathbf{k}}}$ is the electron momentum and we have introduced $$\varepsilon_{{{\mathbf{k}}}\sigma} = \varepsilon_{{{\mathbf{k}}}} - \sigma\zeta_z,\; \sigma=\uparrow,\downarrow=\pm 1.$$ Furthermore, $J$ is a spin coupling constant, $\eta({{\mathbf{k}}})$ is a geometrical structure factor which for ${{\mathbf{k}}}=0$ reduces to the number of nearest lattice neighbors $\eta(0)$, $\mathbf{m}=\{m_x,m_y,m_z\}$ is the magnetization vector, while $\Delta_{{{\mathbf{k}}}\alpha\beta}$ is the superconducting order parameter and $b_{{{\mathbf{k}}}\alpha\beta} = \langle c_{-{{\mathbf{k}}}\beta}c_{{{\mathbf{k}}}\alpha}\rangle$ denotes the two-particle operator expectation value. The ferromagnetic order parameters are given by $$\zeta = 2J\eta(0)(m_x-\i m_y),\; \zeta_z = 2J\eta(0)m_z.$$ The interesting physics of the FMSC/FMSC junction lies in the matrix $\hat{{\cal{A}}}_{{{\mathbf{k}}}}$ to be given below. Above, we used a basis $$\psi_{{{\mathbf{k}}}} = (c_{{{\mathbf{k}}}\uparrow}~c_{{{\mathbf{k}}}\downarrow}~c_{-{{\mathbf{k}}}\uparrow}^\dag ~c_{\mathbf{-k}\downarrow}^\dag)^{\text{T}},$$ where $c_{{{\mathbf{k}}}\sigma}$ ($c_{{{\mathbf{k}}}\sigma}^\dag$) are annihilation (creation) fermion operators. Note that we have not incorporated any spin-orbit coupling of the type $(\mathbf{E}\times\mathbf{p})\cdot{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}$ in the Hamiltonian described in Eq. (\[eq:1\]) such that spatial inversion symmetry is not broken, [*i.e.* ]{}we consider centrosymmetric FMSC. Consider now the matrix $$\label{eq:A} \hat{{\cal{A}}}_{{\mathbf{k}}}= -\frac{1}{2} \begin{pmatrix} -\varepsilon_{{{\mathbf{k}}}\uparrow} & \zeta & \Delta_{{{\mathbf{k}}}\uparrow\uparrow} & \Delta_{{{\mathbf{k}}}\uparrow\downarrow} \\ \zeta^\dag & -\varepsilon_{{{\mathbf{k}}}\downarrow} & \Delta_{{{\mathbf{k}}}\downarrow\uparrow} & \Delta_{{{\mathbf{k}}}\downarrow\downarrow} \\ \Delta^\dag_{{{\mathbf{k}}}\uparrow\uparrow} & \Delta^\dag_{{{\mathbf{k}}}\downarrow\uparrow} & \varepsilon_{{{\mathbf{k}}}\uparrow} & -\zeta^\dag \\ \Delta^\dag_{{{\mathbf{k}}}\uparrow\downarrow} & \Delta^\dag_{{{\mathbf{k}}}\downarrow\downarrow} & -\zeta & \varepsilon_{{{\mathbf{k}}}\downarrow} \end{pmatrix}, $$ which is valid for a FMSC with arbitrary magnetization. As explained in the previous sections, we will study in detail tunneling between non-unitary ESP FMSC, [*i.e.* ]{}$\Delta_{{{\mathbf{k}}}\uparrow\downarrow}=\Delta_{{{\mathbf{k}}}\downarrow\uparrow}=0$, $\zeta= 0$ in Eq. (\[eq:A\]). We take the quantization axis on each side of the junction to coincide with the magnetization direction.
One then needs to include the Wigner $d$-function [@wigner1931] denoted by $\hat{{\cal{D}}}^{(j)}_{\sigma'\sigma}(\vartheta)$ with $j=1/2$ to account for the fact that a $\uparrow$ spin on one side of the junction is not the same as a $\uparrow$ spin on the other side of the junction, since the magnetization vectors can point in different directions. The angle $\vartheta$ is consequently defined by $$\mathbf{m}_\text{R}\cdot\mathbf{m}_\text{L} = m_\text{R}m_\text{L}\cos(\vartheta),\; m_i = |\mathbf{m}_i|.$$ Specifically, we have that $${{\hat{\cal{D}}}_{}^{(1/2)}(\vartheta)} = \begin{pmatrix} \cos(\vartheta/2) & -\sin(\vartheta/2) \\ \sin(\vartheta/2) & \cos(\vartheta/2) \end{pmatrix}$$ such that a spin-rotated fermion operator is given by $$\widetilde{d}_{{{\mathbf{p}}}\sigma} = \sum_{\sigma'}{{\hat{\cal{D}}}_{\sigma'\sigma}^{(1/2)}(\vartheta)}d_{{{\mathbf{p}}}\sigma'}.$$ The tunneling Hamiltonian then reads $$\begin{aligned} \label{eq:tunnelham} H_\mathrm{T} &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\sigma'}{{\hat{\cal{D}}}_{\sigma'\sigma}^{(1/2)}(\vartheta)} \Big( T_{{{\mathbf{k}}}{{\mathbf{p}}}}^{\vphantom{\dagger}}{c_{{{\mathbf{k}}}\sigma}^{{\dagger}}}{d_{{{\mathbf{p}}}\sigma'}^{\vphantom{\dagger}}} + T_{{{\mathbf{k}}}{{\mathbf{p}}}}^{\vphantom{\dagger}\ast}{d_{{{\mathbf{p}}}\sigma'}^{{\dagger}}}{c_{{{\mathbf{k}}}\sigma}^{\vphantom{\dagger}}}\Big),\end{aligned}$$ where we neglect the possibility of spin-flips in the tunneling process. Note that we distinguish between fermion operators on the right and left side of the junction corresponding to $c_{{{\mathbf{k}}}\sigma}$ and $d_{{{\mathbf{p}}}\sigma}$, respectively. Demanding that $H_\text{T}$ is invariant under time reversal ${\cal{K}}$, one finds that the condition ${\cal{K}}^{-1}H_\text{T}{\cal{K}} = H_\text{T}$ with $$\begin{aligned} {\cal{K}}^{-1}H_\text{T}{\cal{K}} =& \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\sigma'}\sigma\sigma' {{\hat{\cal{D}}}_{\sigma'\sigma}^{(1/2)}(\vartheta)}\times\\ &\Big( T_{{{\mathbf{k}}}{{\mathbf{p}}}}^*c_{-{{\mathbf{k}}},-\sigma}^\dag d_{-{{\mathbf{p}}},-\sigma'} + T_{{{\mathbf{k}}}{{\mathbf{p}}}}d_{-{{\mathbf{p}}},-\sigma'}^\dag c_{-{{\mathbf{k}}},-\sigma} \Big)\notag\end{aligned}$$ dictates that $T_{{{\mathbf{k}}}{{\mathbf{p}}}} = T_{-{{\mathbf{k}}},-{{\mathbf{p}}}}^*$. Furthermore, we write the superconducting order parameters as $ \Delta_{{{\mathbf{k}}}\sigma\sigma} = |\Delta_{{{\mathbf{k}}}\sigma\sigma}|{\mathrm{e}^{\i(\theta_{{\mathbf{k}}}+ \theta^\text{R}_{\sigma\sigma})}}$, where R (L) denotes the bulk superconducting phase on the right (left) side of the junction while $\theta_{{\mathbf{k}}}$ is a general (complex) internal phase factor originating from the specific form of the gap in ${{\mathbf{k}}}$-space that ensures odd symmetry under inversion of momentum, [*i.e.* ]{}$\theta_{{\mathbf{k}}}= \theta_{-{{\mathbf{k}}}} + \pi$. For our system, Eq. 
(\[eq:1\]) takes the form $$\begin{aligned} H_\text{FMSC} = H_0 + H_A,\; H_A = \sum_{{{\mathbf{k}}}\sigma}\phi_{{{\mathbf{k}}}\sigma}^\dag \hat{A}_{{{\mathbf{k}}}\sigma} \phi_{{{\mathbf{k}}}\sigma},\end{aligned}$$ where we have block-diagonalized $\hat{{\cal{A}}}_{{\mathbf{k}}}$ and chosen a convenient basis $\phi_{{{\mathbf{k}}}\sigma}^\dag = (c_{{{\mathbf{k}}}\sigma}^\dag, c_{-{{\mathbf{k}}}\sigma})$, with the definition $$\hat{A}_{{{\mathbf{k}}}\sigma} = -\frac{1}{2} \begin{pmatrix} -\varepsilon_{{{\mathbf{k}}}\sigma} & \Delta_{{{\mathbf{k}}}\sigma\sigma}\\ \Delta_{{{\mathbf{k}}}\sigma\sigma}^\dag & \varepsilon_{{{\mathbf{k}}}\sigma} \end{pmatrix}.$$ This Hamiltonian is diagonalized by a $2\times 2$ spin generalized unitary matrix $\hat{U}_{{{\mathbf{k}}}\sigma}$, so that the superconducting sector is expressed in the diagonal basis $$\tilde{\phi}_{{{\mathbf{k}}}\sigma}^\dagger = \phi_{{{\mathbf{k}}}\sigma}^\dagger \hat{U}_{{{\mathbf{k}}}\sigma} \equiv ({\gamma_{{{\mathbf{k}}}\sigma}^{{\dagger}}}, {\gamma_{-{{\mathbf{k}}}\sigma}^{\vphantom{\dagger}}}).$$ Thus, $H_A =\sum_{{{\mathbf{k}}}\sigma}\tilde{\phi}_{{{\mathbf{k}}}\sigma}^\dagger \hat{\tilde{A}}_{{{\mathbf{k}}}\sigma} \tilde{\phi}_{{{\mathbf{k}}}\sigma}$, in which $$\begin{aligned} \label{eq:eigenvalues} \hat{\tilde{A}}_{{{\mathbf{k}}}\sigma} &= \hat{U}_{{{\mathbf{k}}}\sigma}^{-1}\hat{A}_{{{\mathbf{k}}}\sigma}\hat{U}_{{{\mathbf{k}}}\sigma} = \operatorname{diag}(\widetilde{E}_{{{\mathbf{k}}}\sigma}, -\widetilde{E}_{{{\mathbf{k}}}\sigma})/2,\notag\\ \widetilde{E}_{{{\mathbf{k}}}\sigma} &= \sqrt{\varepsilon_{{{\mathbf{k}}}\sigma}^2+{|\Delta_{{{\mathbf{k}}}\sigma\sigma}|}^2}.\end{aligned}$$ The explicit expression for $\hat{U}_{{{\mathbf{k}}}\sigma}$ is $$\begin{aligned} \label{eq:U} \hat{U}_{{{\mathbf{k}}}\sigma} &= N_{{{\mathbf{k}}}\sigma} \begin{pmatrix} 1 & \frac{\Delta_{{{\mathbf{k}}}\sigma\sigma}}{\varepsilon_{{{\mathbf{k}}}\sigma}+\widetilde{E}_{{{\mathbf{k}}}\sigma}} \\ -\frac{\Delta_{{{\mathbf{k}}}\sigma\sigma}^*}{\varepsilon_{{{\mathbf{k}}}\sigma}+\widetilde{E}_{{{\mathbf{k}}}\sigma}} & 1 \end{pmatrix},\notag\\ N_{{{\mathbf{k}}}\sigma} &= \frac{\varepsilon_{{{\mathbf{k}}}\sigma}+\widetilde{E}_{{{\mathbf{k}}}\sigma}}{\sqrt{(\varepsilon_{{{\mathbf{k}}}\sigma}+\widetilde{E}_{{{\mathbf{k}}}\sigma})^2 + |\Delta_{{{\mathbf{k}}}\sigma\sigma}|^2}}.\end{aligned}$$ We now proceed to investigate the tunneling currents that can arise across a junction of two such FMSC. Tunneling formalism {#sec:tunneling} ------------------- Although the treatment in this section is fairly standard, it comes with certain extensions of the standard cases due to the coexistence of two simultaneously broken symmetries. Thus, for completeness, we present it here. In order to find the spin- and charge-current over the junction, we define the generalized number operator [^1] by $N_{\alpha\beta}= \sum_{{\mathbf{k}}}{c_{{{\mathbf{k}}}\alpha}^{{\dagger}}}{c_{{{\mathbf{k}}}\beta}^{\vphantom{\dagger}}}$.
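Before turning to the transport operator, the diagonalization above can be verified numerically. The sketch below (not part of the original text; all parameter values are arbitrary) checks that the matrix of Eq. (\[eq:U\]) is unitary and brings $\hat{A}_{{{\mathbf{k}}}\sigma}$ to the diagonal form of Eq. (\[eq:eigenvalues\]).

```python
# Minimal sanity check (arbitrary values) that U of Eq. (eq:U) is unitary and
# diagonalizes the ESP block A of Eq. (eq:A) with eigenvalues +/- E/2.
import numpy as np

eps, Delta = 0.7, 0.4 * np.exp(0.9j)      # epsilon_{k,sigma}, Delta_{k,sigma,sigma}
E = np.sqrt(eps**2 + abs(Delta)**2)       # quasiparticle energy ~E_{k,sigma}

A = -0.5 * np.array([[-eps, Delta],
                     [np.conj(Delta), eps]])

N = (eps + E) / np.sqrt((eps + E)**2 + abs(Delta)**2)
U = N * np.array([[1, Delta / (eps + E)],
                  [-np.conj(Delta) / (eps + E), 1]])

assert np.allclose(U.conj().T @ U, np.eye(2))                 # U is unitary
assert np.allclose(U.conj().T @ A @ U, np.diag([E, -E]) / 2)  # diag(E, -E)/2
```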
Consider now the transport operator $$\begin{aligned} \label{eq:trans1} \dot{N}_{\alpha\beta} &= {\mathrm{i}}[H_\text{T},N_{\alpha\beta}] \notag\\ &= -{\mathrm{i}}\sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma}[ {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}T_{{{\mathbf{k}}}{{\mathbf{p}}}}{c_{{{\mathbf{k}}}\alpha}^{{\dagger}}}{d_{{{\mathbf{p}}}\sigma}^{\vphantom{\dagger}}}-{{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)}T_{{{\mathbf{k}}}{{\mathbf{p}}}}^\ast{d_{{{\mathbf{p}}}\sigma}^{{\dagger}}}{c_{{{\mathbf{k}}}\beta}^{\vphantom{\dagger}}}].\end{aligned}$$ We now write $H = H' + H_\text{T}$ where $H' = H_\text{L}+H_\text{R}$ and $H_i = K_i + \mu_i N_{i}$, $i$ = L, R, where $\mu_i$ is the chemical potential on side $i$ and $N_{i}$ is the number operator. In the interaction picture, the time-dependence of $\dot{N}_{\alpha\beta}$ is then governed by $$\dot{N}_{\alpha\beta}(t) = {\mathrm{e}^{\i H't}}\dot{N}_{\alpha\beta} {\mathrm{e}^{-\i H't}},$$ while the time-dependence of the fermion operators reads $$c_{{{\mathbf{k}}}\sigma}(t) = {\mathrm{e}^{\i K_\text{R}t}}c_{{{\mathbf{k}}}\sigma}{\mathrm{e}^{-\i K_\text{R}t}}.$$ Effectively, one can write $$K_\text{R} = H_0 + \sum_{{{\mathbf{k}}}\sigma} E_{{{\mathbf{k}}}\sigma}\gamma_{{{\mathbf{k}}}\sigma}^\dag \gamma_{{{\mathbf{k}}}\sigma},$$ where the chemical potential is now included in the quasi-particle excitation energies $E_{{{\mathbf{k}}}\sigma}$ according to $$E_{{{\mathbf{k}}}\sigma} = \sqrt{\xi_{{{\mathbf{k}}}\sigma}^2 + |\Delta_{{{\mathbf{k}}}\sigma\sigma}|^2}$$ with $\xi_{{{\mathbf{k}}}\sigma} = \varepsilon_{{{\mathbf{k}}}\sigma} - \mu_\text{R}$, and correspondingly for the left side. Consequently, we are able to write down $$\begin{aligned} \label{eq:transopip} \begin{split} \dot{N}_{\alpha\beta}(t)=-{\mathrm{i}}\sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma}\Big(& {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}T_{{{\mathbf{k}}}{{\mathbf{p}}}}{c_{{{\mathbf{k}}}\alpha}^{{\dagger}}}(t){d_{{{\mathbf{p}}}\sigma}^{\vphantom{\dagger}}}(t) {\mathrm{e}^{-{\mathrm{i}}teV}} \\-&{{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)}T_{{{\mathbf{k}}}{{\mathbf{p}}}}^\ast{d_{{{\mathbf{p}}}\sigma}^{{\dagger}}}(t){c_{{{\mathbf{k}}}\beta}^{\vphantom{\dagger}}}(t) {\mathrm{e}^{{\mathrm{i}}teV}}\Big),\! \end{split}\end{aligned}$$ where $eV\equiv \mu_\text{L}-\mu_\text{R}$ is the externally applied potential. Within linear response theory, we can identify a general current $$\label{eq:eqgen} \mathbf{I}(t) = \sum_{\alpha\beta} \hat{\boldsymbol{\tau}}_{\alpha\beta}\langle\dot{N}_{\alpha\beta}(t)\rangle,\; \hat{\boldsymbol{\tau}}=(-e\hat{1},{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}),$$ such that the charge-current is $I^\text{C}(t) = I_0(t)$ while the spin-current reads $\mathbf{I}^\text{S}(t) = (I_1(t),I_2(t),I_3(t))$. In Eq. (\[eq:eqgen\]), $\hat{1}$ denotes the 2$\times$2 identity matrix. Explicitly, we have $$\begin{aligned} \label{eq:currents} I^\text{C}(t) &= I^\text{C}_\text{sp}(t) + I^\text{C}_\text{tp}(t) = -e\sum_{\alpha}\langle \dot{N}_{\alpha\alpha}(t) \rangle\notag\\ \mathbf{I}^\mathrm{S}(t) &=\mathbf{I}^\text{S}_\text{sp}(t) + \mathbf{I}^\text{S}_\text{tp}(t) = \sum_{\alpha\beta}{\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}{\langle\dot{N}_{\alpha\beta}(t)\rangle},\end{aligned}$$ where the subscripts sp and tp denote the single-particle and two-particle contribution to the currents, respectively. As recently pointed out by the authors of Ref. , defining a spin-current is not as straightforward as defining a charge-current.
Specifically, the conventional definition of a spin-current given as spin multiplied by velocity suffers from severe flaws in systems where spin is not a conserved quantity. In this paper, we define the spin-current across the junction as $\mathbf{I}^\text{S}(t) = \langle \text{d}\mathbf{S}(t)/\text{d}t \rangle$ where $\mathrm{d}\mathbf{S}/\mathrm{d}t = \i[H_\mathrm{T},\mathbf{S}]$. It is then clear that the concept of a spin-current in this context refers to the rate at which the spin-vector $\mathbf{S}$ on one side of the junction changes *as a result of tunneling across the junction*. The spatial components of $\mathbf{I}^\text{S}$ are defined with respect to the corresponding quantization axis. In this way, we avoid non-physical interpretations of the spin-current in terms of real spin transport as we only calculate the contribution to $\mathrm{d}\mathbf{S}/\mathrm{d}t$ from the tunneling Hamiltonian *instead* of the entire Hamiltonian $H$. Had we chosen the latter approach, one would in general run the risk of obtaining a non-zero spin-current due to [*e.g.* ]{}local spin-flip processes which are obviously not relevant in terms of real spin transport across the junction. However, in our system such spin-flip processes are absent. The tunneling currents are calculated in the linear response regime by using the Kubo formula, $$\label{eq:Kubo} {\langle\dot{N}_{\alpha\beta}(t)\rangle} = -{\mathrm{i}}\int_{-\infty}^{t}{\mathrm{d}}t' {\langle[\dot{N}_{\alpha\beta}(t),H_\mathrm{T}(t')]\rangle},$$ where the right hand side is the statistical expectation value in the unperturbed quantum state, [*i.e.* ]{}when the two subsystems are not coupled. This expression includes both single-particle and two-particle contributions to the current. Details of the calculations are found in Sec. \[app:fs\]. We now consider the cases of an A2- and A1-phase at zero external potential, giving special attention to the charge-current and $\hat{\mathbf{z}}$-component of the spin-current in the Josephson channel. Two-particle currents --------------------- For an A2-phase in the case of zero externally applied voltage ($eV=0$), Eqs. (\[eq:currents\]) and (\[eq:GenMat\]) generate a quasiparticle interference term $I_\text{qi}$, in addition to a term $I_\text{J}$ identified as the Josephson current.
Thus, the total two-particle currents of charge and spin can be written as $I^\text{C(S)}_{\text{tp}(,z)} = I^\text{C(S)}_{\text{qi}(,z)} + I^\text{C(S)}_{\text{J}(,z)}$ where $$\begin{aligned} I^\text{C(S)}_{\text{qi}(,z)} &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} I^\text{C(S)}(\theta^\text{L}_{\sigma\sigma} - \theta^\text{R}_{\alpha\alpha}, \Delta\theta_{{{\mathbf{p}}}{{\mathbf{k}}}}),\notag\\ I^\text{C(S)}_{\text{J}(,z)} &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} I^\text{C(S)}(\Delta\theta_{{{\mathbf{p}}}{{\mathbf{k}}}}, \theta^\text{L}_{\sigma\sigma} - \theta^\text{R}_{\alpha\alpha})\end{aligned}$$ with the definitions $$\begin{aligned} \label{eq:currentsA2} I^\text{C}(\phi_1,\phi_2) = \frac{e}{2}\sum_{\sigma\alpha} &[1+\sigma\alpha\cos(\vartheta)]|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}| |\Delta_{{{\mathbf{p}}}\sigma\sigma}|}{E_{{{\mathbf{k}}}\alpha}E_{{{\mathbf{p}}}\sigma}}\notag\\ &\times\cos(\phi_1)\sin(\phi_2) F_{{{\mathbf{k}}}{{\mathbf{p}}}\alpha\sigma},\notag\\ I^\text{S}(\phi_1,\phi_2) = -\frac{1}{2}\sum_{\sigma\alpha} \alpha &[1+\sigma\alpha\cos(\vartheta)]|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}| |\Delta_{{{\mathbf{p}}}\sigma\sigma}|}{E_{{{\mathbf{k}}}\alpha}E_{{{\mathbf{p}}}\sigma}}\notag\\ &\times\cos(\phi_1)\sin(\phi_2) F_{{{\mathbf{k}}}{{\mathbf{p}}}\alpha\sigma},\end{aligned}$$ where we have introduced $\Delta\theta_{{{\mathbf{p}}}{{\mathbf{k}}}} \equiv \theta_{{\mathbf{p}}}- \theta_{{\mathbf{k}}}$ and $$\begin{aligned} F_{{{\mathbf{k}}}{{\mathbf{p}}}\alpha\sigma} &= \sum_\pm \frac{f(\pm E_{{{\mathbf{k}}}\alpha})-f(E_{{{\mathbf{p}}}\sigma})}{E_{{{\mathbf{k}}}\alpha} \mp E_{{{\mathbf{p}}}\sigma}}.\end{aligned}$$ Above, $f(x)$ is the Fermi distribution. Thus, we have found a two-particle current, for both spin and charge, that can be tuned in a well-defined manner by adjusting the relative orientation $\vartheta$ of the magnetization vectors [^2]. We will discuss the detection of such an effect later in this paper. Note that the ${{\mathbf{k}}}$-dependent symmetry factor $\theta_{{\mathbf{k}}}$ enters the above expressions, thus giving rise to an extra contribution to the two-particle current besides the ordinary Josephson effect. This is due to the fact that we included it in the SC gaps as a factor ${\mathrm{e}^{\i\theta_{{\mathbf{k}}}}}$ which in general is complex. However, this specific form may for certain models, depending on the Fermi surface in question, be reduced to a real function, [*i.e.* ]{}${\mathrm{e}^{\i\theta_{{\mathbf{k}}}}} \to \cos\theta_{{\mathbf{k}}}$, in which case the quasi-particle interference term becomes zero. Hence, in most of the remaining discussion we will focus on the Josephson part of the two-particle current. The A1-phase with only one SC order parameter $\Delta_{{{\mathbf{k}}}\alpha\alpha}$, $\alpha\in\{\uparrow,\downarrow\}$ also corresponds to a non-unitary state $\mathbf{d}_{{\mathbf{k}}}$ according to Eq. (\[eq:Cooperpairspin\]), and is thus compatible with coexistence of FM and SC. In this case, we readily see that Eq. 
(\[eq:currentsA2\]) reduces to $$\begin{aligned} \label{eq:currentsA1} \begin{split} I^\text{C}_\text{tp} &= e\cos^2(\vartheta/2)X_{\alpha} \\ I^\text{S}_{\text{tp},z} &= -\alpha\cos^2(\vartheta/2)X_{\alpha} \end{split} \qquad \alpha\in\{\uparrow,\downarrow\}\end{aligned}$$ where we have defined the quantity $$\begin{aligned} X_{\alpha} &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}}|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2\frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}\Delta_{{{\mathbf{p}}}\alpha\alpha}|}{E_{{{\mathbf{k}}}\alpha}E_{{{\mathbf{p}}}\alpha}}F_{{{\mathbf{k}}}{{\mathbf{p}}}\alpha\alpha}\notag\\ &\times[\sin\Delta\theta_{\alpha\alpha}\cos\Delta\theta_{{{\mathbf{p}}}{{\mathbf{k}}}} + \cos\Delta\theta_{\alpha\alpha}\sin\Delta\theta_{{{\mathbf{p}}}{{\mathbf{k}}}}]\end{aligned}$$ with $\Delta\theta_{\alpha\alpha} \equiv \theta^\text{L}_{\alpha\alpha}-\theta^\text{R}_{\alpha\alpha}$, and $\Delta_{{{\mathbf{k}}}\alpha\alpha}$ is the surviving order parameter. As expected, the spin-current changes sign depending on whether it is the $\Delta_{{{\mathbf{k}}}\uparrow\uparrow}$ or $\Delta_{{{\mathbf{k}}}\downarrow\downarrow}$ order parameter that is present. For collinear magnetization $(\vartheta=0)$, an ordinary Josephson effect occurs with the superconducting phase difference as the driving force. Interestingly, one is able to tune both the spin- and charge-current to zero in the A1-phase when $\mathbf{m}_\text{L} \parallel -\mathbf{m}_\text{R}$ ($\vartheta=\pi$). It follows from Eq. (\[eq:currentsA1\]) that the spin- and charge-current only differ by a constant pre-factor $$I^\text{C}_\text{tp}/I^\text{S}_{\text{tp},z} = -\alpha e,\quad \alpha=\pm1.$$ It is then reasonable to draw the conclusion that we are dealing with a completely *spin-polarized current* such that both $I^\text{C}_\text{tp}$ and $I^\text{S}_{\text{tp},z}$ must vanish simultaneously at $\vartheta=\pi$. Another result that can be extracted from Eqs. (\[eq:currentsA2\]) and (\[eq:currentsA1\]) is a persistent non-zero DC spin-Josephson current even if the magnetizations on each side of the junction are of equal magnitude and collinear ($\vartheta=0$). This is quite different from the spin-Josephson effect recently considered in ferromagnetic metal junctions  [@nogueira2004b]. In that case, a twist in the magnetization across the junction is required to drive the spin-Josephson effect. Note that in the common approximation $T_{{{\mathbf{k}}}{{\mathbf{p}}}} = T$, [*i.e.* ]{}the tunneling probability is independent of the magnitude and direction of the electron momentum, the two-particle current predicted above is identically equal to zero. Of course, such a crude approximation does not correspond to the correct physical picture (see [*e.g.* ]{}Ref. ), and in general one cannot neglect the directional dependence of the tunneling matrix element. This demonstrates that we are dealing with a more subtle effect than what could be unveiled when applying the approximation of a constant tunneling matrix element. An interesting situation arises in the case of zero externally applied voltage *and* identical superconductors on each side of the junction with SC phase differences $\Delta\theta_{\sigma\sigma} = 0$.
In this case, we find that $I^\mathrm{C}_\mathrm{J} =0$ while $$\begin{aligned} \label{eq:finalspincrazy} I^\text{S}_{\text{J},z} = -2\sum_{{{\mathbf{k}}}{{\mathbf{p}}}} &|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \sin^2(\vartheta/2) |\Delta_{{{\mathbf{k}}}\uparrow\uparrow}\Delta_{{{\mathbf{p}}}\downarrow\downarrow}|F_{{{\mathbf{k}}}{{\mathbf{p}}}\uparrow\downarrow}\notag\\ &\times \sin(\theta_{\downarrow}^\text{L}-\theta_{\uparrow}^\text{R})/(E_{{{\mathbf{k}}}\uparrow}E_{{{\mathbf{p}}}\downarrow}).\end{aligned}$$ when $eV=0$, $\Delta\theta_{\sigma\sigma}=0$. Thus, we have found a dissipationless spin-current in the two-particle channel without an externally applied voltage *and* without a SC phase difference. This effect is present as long as $\vartheta$ is not 0 or $\pi$, corresponding to parallel or anti-parallel magnetization on each side of the junction. It is seen from Eq. (\[eq:finalspincrazy\]) that the spin-current is driven by an interband phase difference on each side of the junction. A necessary condition for this effect to occur is that no inter-band Josephson coupling is present, [*i.e.* ]{}electrons in the two energy-bands $E_{{{\mathbf{k}}}\uparrow}$ and $E_{{{\mathbf{k}}}\downarrow}$ do not communicate with each other. To understand why a Josephson coupling would destroy the above effect, consider the free energy density for a $p$-wave FMSC first proposed in Ref. , given by $$\label{eq:joscop} {\cal{F}} = {\cal{F}}' - \lambda_\text{J}\cos(\theta_{\uparrow\uparrow}-\theta_{\downarrow\downarrow})$$ in the presence of a Josephson coupling. In Eq. (\[eq:joscop\]), $\lambda_\text{J}$ determines the strength of the interaction while ${\cal{F}}'$ contains the SC and FM contribution to the free energy density in addition to the coupling terms between the SC and FM order parameters. Consequently, the phase difference $\theta_{\uparrow\uparrow}-\theta_{\downarrow\downarrow}$ is locked to 0 or $\pi$ in order to minimize ${\cal{F}}$, depending on sgn($\lambda_\text{J}$). Considering Eq. (\[eq:finalspincrazy\]), we see that $I^\text{S}_{\text{J},z}=0$ in this case, since the argument of the last sine is zero. Mechanisms that would induce a Josephson coupling include magnetic impurities causing inelastic spin-flip scattering between the energy-bands and spin-orbit coupling. Recently, the authors of Ref.  proposed that $p$-wave SC arising out of a FM metal state could be explained by the Berry curvature field that is present in ferromagnets with spin-orbit coupling. It is clear that in the case where spin-orbit coupling is included in the problem, spin-flip scattering processes occur between the energy bands such that the $\uparrow$ and $\downarrow$ spins can not be considered as two independent species any more. The SC phases will then be locked to each other with a relative phase of $0$ or $\pi$. However, note that in the general case, Eq. (\[eq:currentsA2\]) produces a non-zero charge- and spin-current even if the spin-up and spin-down phases are locked to each other. Single-particle currents ------------------------ In the single-particle channel, we find that the charge and spin-currents read $$\begin{aligned} \label{eq:spcurrent} I^\text{C}_{\text{sp}} &= -e\sum_\alpha \langle \dot{N}_{\alpha\alpha}(t) \rangle_\text{sp} \notag\\ I^\text{S}_{\text{sp},z} &= \sum_{\alpha} \alpha \langle \dot{N}_{\alpha\alpha}(t) \rangle_\text{sp},\end{aligned}$$ as seen from Eq. (\[eq:currents\]). From Eq. 
(\[eq:N1\]), we then extract the proper expectation value, which is found to be $$\begin{aligned} \langle \dot{N}_{\alpha\alpha}(t) \rangle_\text{sp} &= 4\pi\sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} [1+\sigma\alpha\cos(\vartheta)]|T_{{{\mathbf{k}}}{{\mathbf{p}}}\alpha}|^2 N_{{{\mathbf{k}}}\alpha}^2N_{{{\mathbf{p}}}\sigma}^2\notag\\ &\times\Bigg[ [f(E_{{{\mathbf{k}}}\alpha})-f(E_{{{\mathbf{p}}}\sigma})]\Big( \delta(-eV+E_{{{\mathbf{k}}}\alpha}-E_{{{\mathbf{p}}}\sigma}) - \frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}\Delta_{{{\mathbf{p}}}\sigma\sigma}|^2}{(\xi_{{{\mathbf{k}}}\alpha}+E_{{{\mathbf{k}}}\alpha})^2(\xi_{{{\mathbf{p}}}\sigma}+E_{{{\mathbf{p}}}\sigma})^2 }\delta(-eV-E_{{{\mathbf{p}}}\sigma}+E_{{{\mathbf{k}}}\alpha}) \Big)\notag\\ &+ [1-f(E_{{{\mathbf{k}}}\alpha})-f(E_{{{\mathbf{p}}}\sigma})] \Big( \frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}|^2}{(\xi_{{{\mathbf{k}}}\alpha}+E_{{{\mathbf{k}}}\alpha})^2 }\delta(-eV-E_{{{\mathbf{k}}}\alpha}-E_{{{\mathbf{p}}}\sigma}) - \frac{|\Delta_{{{\mathbf{p}}}\sigma\sigma}|^2}{(\xi_{{{\mathbf{p}}}\sigma}+E_{{{\mathbf{p}}}\sigma})^2 }\delta(-eV+E_{{{\mathbf{k}}}\alpha}+E_{{{\mathbf{p}}}\sigma})\Big) \Bigg]. \end{aligned}$$ The currents in Eq. (\[eq:spcurrent\]) are thus seen to require an applied voltage in order to flow in the tunneling junction. Clearly, this is because the Cooper pairs need to be split up in order for a single-particle current to exist, such that both spin- and charge-currents vanish at $eV=0$. In Ref. , the presence of a persistent spin-current in the single-particle channel for FM/FM junctions with a twist in magnetization across the junction was predicted. For consistency, our results must confirm this prediction for the single-particle current in the limit where SC is lost, [*i.e.* ]{}$\Delta_{{{\mathbf{k}}}\sigma\sigma}\to0$. Note that the $\hat{\mathbf{z}}$-direction in Ref.  corresponds to a vector in our local $xy$-plane since the present quantization axis lies parallel with the magnetization direction. Upon calculating the $\mathbf{x}$- and $\mathbf{y}$-components of the single-particle spin-current for our system in the limit where SC is lost, [*i.e.* ]{}$\Delta_{{{\mathbf{k}}}\sigma\sigma}\to 0$, a persistent spin Josephson-like current proportional to $\sin(\vartheta)$ is identified. More precisely, $$\begin{aligned} \label{eq:nogcompare} {\mathbf{I}}^\text{S}_{\text{sp}}(t) = 2\sum_{{{\mathbf{k}}}{{\mathbf{p}}}}\sum_{\alpha\beta\sigma}& {{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)}{{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}{|{T_{{{\mathbf{k}}}{{\mathbf{p}}}}}|}^2\notag\\ &\times {{\mathrm{Im}}}{\big\{{\hat{\boldsymbol{\sigma}}_{\beta\alpha}^{\vphantom{\dagger}}}}\Lambda_{\beta\sigma}^{1,1}(-eV)\big\}\end{aligned}$$ when $\Delta_{{{\mathbf{q}}}\sigma\sigma}=0$ (see Appendix for details). In agreement with Ref. , the component of the spin-current parallel to ${\mathbf{m}}_\text{L}\times{\mathbf{m}}_\text{R}$ is seen to vanish for $\vartheta=\{0,\pi\}$ at $eV=0$. Ferromagnets with spin-orbit coupling {#sec:fmsoc} ===================================== Coexistence of ferromagnetism and spin-orbit coupling ----------------------------------------------------- In a system where time-reversal and spatial inversion symmetry are simultaneously broken, it is clear that spins are heavily affected by these properties. There is currently much focus on ferromagnetic semiconductors where spin-orbit coupling plays a crucial role with regard to transport properties [@dietl2002; @matsukura2002]. 
In fact, there has in recent years been much progress in the semiconductor research community where the spin-Hall effect in particular has received much attention [@engel2006]. With the discovery [@ohno1992] of hole-mediated ferromagnetic order in (In,Mn)As, extensive research on III-V host materials was triggered. Moreover, it is clear that properties such as ferromagnetic transition temperatures in excess of 100 K [@matsukura1998] and long spin-coherence times [@kikkawa1999] in GaAs have strongly contributed to opening up new vistas for information processing and storage technologies in these new magnetic media [@jungwirth2002]. Generally, spin-orbit coupling (SOC) can be roughly divided into two categories – *intrinsic* and *extrinsic*. Intrinsic SOC is found in materials with a non-centrosymmetric crystal symmetry, [*i.e.* ]{}where inversion symmetry is broken, whereas extrinsic SOC is due to asymmetries caused by impurities, local confinements of electrons or externally applied electrical fields. In the present paper, we investigate the tunneling current of spin between two ferromagnetic metals with spin-orbit coupling induced by an external electric field. In this way, we have two externally controllable parameters: the magnetization ${\mathbf{m}}$ and the electrical field ${\mathbf{E}}$. The case of tunneling between two noncentrosymmetric superconductors with significant spin-orbit coupling, but no ferromagnetism, has previously been considered in Ref. . The Hamiltonian {#the-hamiltonian} --------------- Our system consists of two Heisenberg ferromagnets with substantial spin-orbit coupling, separated by a thin insulating barrier which is assumed to be spin-inactive. This is shown in Fig. \[fig:setup\]. We now operate with only one quantization axis, such that a proper tunneling Hamiltonian for this purpose is $$H_\text{T}= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma}(T_{{{\mathbf{k}}}{{\mathbf{p}}}} c_{{{\mathbf{k}}}\sigma}^\dag d_{{{\mathbf{p}}}\sigma} + \text{h.c.}),$$ where $\{c^\dag_{{{\mathbf{k}}}\sigma},c_{{{\mathbf{k}}}\sigma}\}$ and $\{d^\dag_{{{\mathbf{k}}}\sigma},d_{{{\mathbf{k}}}\sigma}\}$ are creation and annihilation operators for an electron with momentum ${{\mathbf{k}}}$ and spin $\sigma$ on the right and left side of the junction, respectively, while $T_{{{\mathbf{k}}}{{\mathbf{p}}}}$ is the spin-independent tunneling matrix element. In ${{\mathbf{k}}}$-space, the Hamiltonian describing the ferromagnetism reads $$H_\text{FM} = \sum_{{{\mathbf{k}}}\sigma}\varepsilon_{{{\mathbf{k}}}} c_{{{\mathbf{k}}}\sigma}^\dag c_{{{\mathbf{k}}}\sigma}- JN\sum_{{{\mathbf{k}}}}\eta({{\mathbf{k}}}) \mathbf{S}_{{\mathbf{k}}}\cdot\mathbf{S}_{-{{\mathbf{k}}}}$$ in which $\varepsilon_{{{\mathbf{k}}}}$ is the kinetic energy of the electrons, $J$ is the ferromagnetic coupling constant, $N$ is the number of particles in the system, while $\mathbf{S}_{{\mathbf{k}}}= (1/2) \sum_{\alpha\beta} c_{{{\mathbf{k}}}\alpha}^\dag{\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}c_{{{\mathbf{k}}}\beta}$ is the spin operator. As we later adopt the mean-field approximation, $\mathbf{m}=(m_x,m_y,m_z)$ will denote the magnetization of the system.
The spin-orbit interactions are accounted for by a Rashba Hamiltonian $$H_\text{S-O} = -\sum_{{{\mathbf{k}}}} \varphi_{{\mathbf{k}}}^\dag [\xi(\nabla V\times{{\mathbf{k}}})\cdot\hat{\boldsymbol{\sigma}}]\varphi_{{\mathbf{k}}},$$ where $\varphi_{{\mathbf{k}}}= [c_{{{\mathbf{k}}}\uparrow}, c_{{{\mathbf{k}}}\downarrow}]^\text{T}$, $\mathbf{E}=-\nabla V$ is the electrical field felt by the electrons and $\hat{\boldsymbol{\sigma}}=(\hat{\sigma}_1,\hat{\sigma}_2,\hat{\sigma}_3)$ in which $\hat{\sigma}_i$ are Pauli matrices, while the parameter $\xi$ is material-dependent. From now on, the notation $\xi(\mathbf{E}\times{{\mathbf{k}}})\equiv\mathbf{B}_{{\mathbf{k}}}= (B_{{{\mathbf{k}}},x},B_{{{\mathbf{k}}},y},B_{{{\mathbf{k}}},z})$ will be used. In general, the electromagnetic potential $V$ consists of two parts $V_\text{int}$ and $V_\text{ext}$ (see [*e.g.* ]{}Ref.  for a detailed discussion of the spin-orbit Hamiltonian). The crystal potential of the material is represented by $V_\text{int}$, and only gives rise to a spin-orbit coupling if inversion symmetry is broken in the crystal structure. Asymmetries such as impurities and local confinements of electrons are included in $V_\text{ext}$, as well as any external electrical field. Note that any lack of crystal inversion symmetry results in a so-called Dresselhaus term in the Hamiltonian, which is present in the absence of any impurities and confinement potentials. In the following, we focus on the spin-orbit coupling resulting from $V_\text{ext}$, thus considering any symmetry-breaking electrical field that arises from charged impurities or which is applied externally. In the case where the crystal structure does not respect inversion symmetry, a Dresselhaus term [@dresselhaus1955] can be easily included in the Hamiltonian by performing the substitution $$(\mathbf{E}\times{{\mathbf{k}}})\cdot\hat{\boldsymbol{\sigma}} \to [(\mathbf{E}\times{{\mathbf{k}}}) + {\cal{D}}({{\mathbf{k}}})]\cdot\hat{\boldsymbol{\sigma}},$$ where ${\cal{D}}({{\mathbf{k}}})$ = $-{\cal{D}}(-{{\mathbf{k}}}).$ We now proceed to calculate the spin-current that is generated across the junction as a result of tunneling. Note that in our model, the magnetization vector and electrical field are allowed to point in arbitrary directions. In this way, the obtained result for the spin-current will be generally valid and special cases, [*e.g.* ]{}thin films, are easily obtained by taking the appropriate limits in the final result. It should be mentioned that the effective magnetic field from the spin-orbit interactions might influence the direction of the magnetization in the ferromagnet. This is, however, not the main focus of our work, and we leave this question open for study. Our emphasis in the present paper concerns the derivation of general results onto which specific restrictions may be applied as they seem appropriate. In the mean-field approximation, the Hamiltonian for the right side of the junction can be written as $H= H_\text{FM}+H_\text{S-O}$, which in a compact form yields $$\label{eq:H1} H_\mathrm{R} =H_0 + \sum_{{{\mathbf{k}}}} \varphi_{{\mathbf{k}}}^\dag \begin{pmatrix} \varepsilon_{{{\mathbf{k}}}\uparrow} & -\zeta_\text{R} + {B_{{{\mathbf{k}}},-}}\\ -\zeta^\dag_\text{R} + B_{{{\mathbf{k}}},+} & \varepsilon_{{{\mathbf{k}}}\downarrow} \end{pmatrix} \varphi_{{\mathbf{k}}},$$ where $\varepsilon_{{{\mathbf{k}}}\sigma}\equiv\varepsilon_{{\mathbf{k}}}- \sigma (\zeta_{z,\text{R}} - B_{{{\mathbf{k}}},z})$ and $H_0$ is an irrelevant constant. 
The FM order parameters are $\zeta_\text{R} = 2J\eta(0)(m_{\text{R},x}-\i m_{\text{R},y})$ and $\zeta_{z,\text{R}} = 2J\eta(0)m_{\text{R},z}$ and $B_{{{\mathbf{k}}},\pm} \equiv B_{{{\mathbf{k}}},x}\pm\i B_{{{\mathbf{k}}},y}$. For convenience, we from now on write $\zeta=|\zeta|{\mathrm{e}^{\i\phi}}$ and $B_{{{\mathbf{k}}},\pm} = |B_{{{\mathbf{k}}},\pm}|{\mathrm{e}^{\mp\i\chi_{{\mathbf{k}}}}}$. The Hamiltonian for the left side of the junction is obtained from Eq. (\[eq:H1\]) simply by making the replacements ${{\mathbf{k}}}\to{{\mathbf{p}}}$ and $\text{R}\to\text{L}$. Tunneling formalism {#tunneling-formalism} ------------------- In order to obtain the expressions for the spin- and charge-tunneling currents, it is necessary to calculate the Green functions. These are given by the matrix $${\hat{\cal{G}}}_{{\mathbf{k}}}(\i\omega_n) = (-\i\omega_n\hat{1}+\hat{{{\cal{A}}_{\mathbf{k}}}})^{-1},$$ where $\hat{{{\cal{A}}_{\mathbf{k}}}}$ is the matrix in Eq. (\[eq:H1\]). Explicitly, we have that $$\hat{{\cal{G}}}_{{\mathbf{k}}}(\i\omega_n)=\begin{pmatrix} G_{{\mathbf{k}}}^{\uparrow\uparrow}(\i\omega_n)& F_{{\mathbf{k}}}^{\downarrow\uparrow}(\i\omega_n)\\ F_{{\mathbf{k}}}^{\uparrow\downarrow}(\i\omega_n) & G_{{\mathbf{k}}}^{\downarrow\downarrow}(\i\omega_n)\\ \end{pmatrix}.$$ Above, $\omega_n = (2n+1)\pi/\beta, n=0,1,2\ldots$ is the fermionic Matsubara frequency and $\beta$ denotes inverse temperature. Introducing $$X_{{{\mathbf{k}}}}(\i\omega_n) = (\varepsilon_{{{\mathbf{k}}}\uparrow}-\i\omega_n)(\varepsilon_{{{\mathbf{k}}}\downarrow}-\i\omega_n) - |\zeta_\mathrm{R}-{B_{{{\mathbf{k}}},-}}|^2,$$ the normal and anomalous Green functions are $$\begin{aligned} \label{eq:green} G_{{\mathbf{k}}}^{\sigma\sigma}(\i\omega_n) &= (\varepsilon_{{{\mathbf{k}}},-\sigma} - \i\omega_n)/X_{{{\mathbf{k}}}}(\i\omega_n),\;\notag\\ F_{{\mathbf{k}}}^{\downarrow\uparrow}(\i \omega_n) &= F_{{\mathbf{k}}}^{\uparrow\downarrow,\dag}(\i\omega_n) = (\zeta_\text{R}-{B_{{{\mathbf{k}}},-}})/X_{{{\mathbf{k}}}}(\i\omega_n).\end{aligned}$$ The expression for $\mathbf{I}^\text{S}(t)$ is established by first considering the generalized number operator $N_{\alpha\beta} = \sum_{{\mathbf{k}}}c_{{{\mathbf{k}}}\alpha}^\dag c_{{{\mathbf{k}}}\beta}$. This operator changes with time due to tunneling according to $\dot N_{\alpha\beta} = \i[H_\text{T},N_{\alpha\beta}]$, which in the interaction picture representation becomes $\dot N_{\alpha\beta}(t) = -\i\sum_{{{\mathbf{k}}}{{\mathbf{p}}}}(T_{{{\mathbf{k}}}{{\mathbf{p}}}} c_{{{\mathbf{k}}}\alpha}^\dag d_{{{\mathbf{p}}}\beta}{\mathrm{e}^{-\i eVt}} - \text{h.c.}).$ The voltage drop across the junction is given by the difference in chemical potential on each side, [*i.e.* ]{}$eV = \mu_\text{R}-\mu_\text{L}$. In the linear response regime, the spin-current across the junction is $$\mathbf{I}^\text{S}(t) = \frac{1}{2}\sum_{\alpha\beta}{\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}\langle \dot N_{\alpha\beta} (t) \rangle,$$ where the expectation value of the time derivative of the transport operator is calculated by means of the Kubo formula Eq. (\[eq:Kubo\]). Details will be given in Sec. \[app:fmsoc\]. Single-particle currents ------------------------ At $eV=0$, it is readily seen from the discussion in Sec. \[app:fmsoc\] that the charge-current vanishes. Consider now the $z$-component of the spin-current in particular, which can be written as $I^\text{S}_z = \Im\text{m}\{\Phi(-eV)\}$.
The Matsubara function $\Phi(-eV)$ is found by performing analytical continuation $\i {\widetilde{\omega}_\nu}\to -eV + \i 0^+$ on $\widetilde{\Phi}(\i{\widetilde{\omega}_\nu})$, where $$\begin{aligned} \label{eq:mat} \widetilde{\Phi}(\i{\widetilde{\omega}_\nu})=\frac{1}{\beta}\sum_{\i \omega_m, {{\mathbf{k}}}{{\mathbf{p}}}} \sum_\sigma \sigma \Big( &G_{{\mathbf{k}}}^{\sigma\sigma}(\i\omega_m) G^{\sigma\sigma}_{{\mathbf{p}}}(\i\omega_m -\i {\widetilde{\omega}_\nu}) \notag\\ + &F^{-\sigma,\sigma}_{{\mathbf{k}}}( \i\omega_m)F^{\sigma,-\sigma}_{{\mathbf{p}}}(\i\omega_m -\i {\widetilde{\omega}_\nu}) \Big).\end{aligned}$$ Here, ${\widetilde{\omega}_\nu}= 2\nu\pi/\beta$, $\nu=0,1,2\ldots$ is the bosonic Matsubara frequency. Inserting the Green functions from Eq. (\[eq:green\]) into Eq. (\[eq:mat\]), one finds that a persistent spin-current is established across the tunneling junction. For zero applied voltage, we obtain \[eq:finalspin\] $$\begin{aligned} I^\text{S}_z &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} \frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2J_{{{\mathbf{k}}}{{\mathbf{p}}}}}{2\gamma_{{\mathbf{k}}}\gamma_{{\mathbf{p}}}}\Big[|\zeta_\text{R}\zeta_\text{L}|\sin\Delta\phi + |{B_{{{\mathbf{k}}},-}}{B_{{{\mathbf{p}}},-}}|\sin\Delta\chi_{{{\mathbf{k}}}{{\mathbf{p}}}}\notag\\ &-|{B_{{{\mathbf{k}}},-}}\zeta_\text{L}|\sin(\chi_{{\mathbf{k}}}-\phi_\text{L}) - |{B_{{{\mathbf{p}}},-}}\zeta_\text{R}|\sin(\phi_\text{R}-\chi_{{\mathbf{p}}})\Big], \\ J_{{{\mathbf{k}}}{{\mathbf{p}}}} &= \sum_{\substack{\alpha=\pm\\\beta=\pm}} \alpha\beta\Bigg[\frac{n(\varepsilon_{{{\mathbf{k}}}}+\alpha\gamma_{{\mathbf{k}}})-n(\varepsilon_{{\mathbf{p}}}+\beta\gamma_{{\mathbf{p}}})}{(\varepsilon_{{\mathbf{k}}}+\alpha\gamma_{{\mathbf{k}}})-(\varepsilon_{{\mathbf{p}}}+\beta\gamma_{{\mathbf{p}}})}\Bigg]. \end{aligned}$$ In Eqs. (\[eq:finalspin\]), $\Delta\chi_{{{\mathbf{k}}}{{\mathbf{p}}}} \equiv \chi_{{\mathbf{k}}}- \chi_{{\mathbf{p}}}$, $\Delta\phi \equiv \phi_\text{R}-\phi_\text{L}$, while $$\label{eq:gamma} \gamma_{{\mathbf{k}}}^2= (\zeta_{z,\text{R}} - B_{{{\mathbf{k}}},z})^2 + |\zeta_\text{R} - {B_{{{\mathbf{k}}},-}}|^2$$ and $n(\varepsilon)$ denotes the Fermi distribution. In the above expressions, we have implicitly associated the right side R with the momentum label ${{\mathbf{k}}}$ and L with ${{\mathbf{p}}}$ for more concise notation, such that [*e.g.* ]{}$B_{{{\mathbf{k}}},z} \equiv B_{{{\mathbf{k}}},z}^\text{R}$. Defining $\zeta_i = 2J\eta(0)m_i$, we see that Eq. (\[eq:gamma\]) can be written as $$\label{eq:gammavec} \gamma_{{\mathbf{k}}}= |\boldsymbol{\zeta}_\text{R} - \mathbf{B}_{{\mathbf{k}}}|.$$ The spin-current described in Eq. (\[eq:finalspin\]) can be controlled by adjusting the relative orientation of the magnetization vectors on each side of the junction, [*i.e.* ]{}$\Delta\phi$, and also responds to a change in direction of the applied electric fields. The presence of an external magnetic field $\mathbf{H}_i$ would control the orientation of the internal magnetization $\mathbf{m}_i$. Alternatively, one may also use exchange biasing to an anti-ferromagnet in order to lock the magnetization direction. Consequently, the spin-current can be manipulated by the external control parameters $\{\mathbf{H}_i,\mathbf{E}_i\}$ in a well-defined manner. This observation is highly suggestive in terms of novel nanotechnological devices. We stress that Eq. (\[eq:finalspin\]) is *non-zero* in the general case, since $\gamma_{{\mathbf{k}}}\neq -\gamma_{-{{\mathbf{k}}}}$ and $\chi_{-{{\mathbf{k}}}} = \chi_{{{\mathbf{k}}}}+\pi$. 
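The double momentum sum in Eq. (\[eq:finalspin\]) is straightforward to evaluate numerically once a dispersion and a set of fields are specified. The following minimal sketch (Python/NumPy) illustrates such an evaluation on a coarse random momentum grid; the parabolic band and all parameter values are arbitrary illustrative assumptions and are not used elsewhere in this work, and $|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2$ is set to unity. The square bracket of Eq. (\[eq:finalspin\]) is evaluated compactly as $\mathrm{Im}[(\zeta_\text{R}-{B_{{{\mathbf{k}}},-}})(\zeta_\text{L}-{B_{{{\mathbf{p}}},-}})^{\ast}]$, which reproduces the four sine terms upon expansion.

```python
# Minimal numerical sketch of the zero-bias spin-current I^S_z of Eq. (finalspin).
# All parameters (band, fields, magnetizations, temperature) are arbitrary
# illustrative assumptions; |T_kp|^2 is set to unity.
import numpy as np

beta = 20.0                         # inverse temperature (assumed units)
Jeta0 = 0.5                         # J*eta(0) (assumed)
xi_L, xi_R = 0.3, 0.3               # spin-orbit strengths (assumed)
E_L = np.array([1.0, 0.0, 0.0])     # electric field, left side (assumed)
E_R = np.array([0.0, 1.0, 0.0])     # electric field, right side (assumed)
m_L = np.array([0.0, 0.6, 0.4])     # magnetization, left side (assumed)
m_R = np.array([0.5, 0.0, 0.4])     # magnetization, right side (assumed)

def fermi(e):
    # numerically stable Fermi function n(e) = 1/(1 + exp(beta*e))
    return 0.5 * (1.0 - np.tanh(0.5 * beta * e))

def side(kpts, m, E, xi):
    """Return eps_k, gamma_k and the complex combination zeta - B_{k,-} for one electrode."""
    eps = 0.5 * np.sum(kpts**2, axis=1) - 1.0    # parabolic band (assumed)
    B = xi * np.cross(E, kpts)                   # B_k = xi (E x k)
    zeta = 2.0 * Jeta0 * m                       # (zeta_x, zeta_y, zeta_z)
    gamma = np.linalg.norm(zeta - B, axis=1)     # Eq. (gammavec)
    w = (zeta[0] - B[:, 0]) - 1j * (zeta[1] - B[:, 1])   # zeta - B_{k,-}
    return eps, gamma, w

rng = np.random.default_rng(1)
k = rng.uniform(-1.5, 1.5, size=(300, 3))        # coarse momentum samples, right side
p = rng.uniform(-1.5, 1.5, size=(300, 3))        # coarse momentum samples, left side
eps_k, gam_k, w_k = side(k, m_R, E_R, xi_R)
eps_p, gam_p, w_p = side(p, m_L, E_L, xi_L)

# Square bracket of Eq. (finalspin) = Im[(zeta_R - B_{k,-}) (zeta_L - B_{p,-})^*]
bracket = np.imag(w_k[:, None] * np.conj(w_p[None, :]))

I_z = 0.0
for a in (+1, -1):
    for b in (+1, -1):
        Ek = eps_k[:, None] + a * gam_k[:, None]
        Ep = eps_p[None, :] + b * gam_p[None, :]
        num, den = fermi(Ek) - fermi(Ep), Ek - Ep
        safe = np.abs(den) > 1e-9
        # J_kp of Eq. (finalspin); the derivative limit -beta*n*(1-n) handles den -> 0
        frac = np.where(safe, num / np.where(safe, den, 1.0),
                        -beta * fermi(Ek) * (1.0 - fermi(Ek)))
        I_z += np.sum(a * b * frac * bracket / (2.0 * gam_k[:, None] * gam_p[None, :]))

print("zero-bias I^S_z (arbitrary units):", I_z)
```

Varying the relative orientation of $\mathbf{m}_\text{L}$ and $\mathbf{m}_\text{R}$, or the directions of the assumed electric fields, in such a sketch directly exposes the tunability discussed above.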
Moreover, Eq. (\[eq:finalspin\]) is valid for any orientation of both $\mathbf{m}$ and $\mathbf{E}$ on each side of the junction, and a number of interesting special cases can now easily be considered simply by applying the appropriate limits to this general expression. Special limits -------------- Consider first the limit where ferromagnetism is absent, such that the tunneling occurs between two bulk materials with spin-orbit coupling. Applying $\mathbf{m}\to 0$ to Eq. (\[eq:finalspin\]), it is readily seen that the spin-current vanishes for any orientation of the electrical fields. Intuitively, one can understand this by considering the band structure of the quasi-particles with energy $E_{{{\mathbf{k}}}\sigma} = \varepsilon_{{\mathbf{k}}}+ \sigma\gamma_{{\mathbf{k}}}$ and the corresponding density of states $N(E_{{{\mathbf{k}}}\sigma})$ when only spin-orbit coupling is present, as shown in Fig. \[fig:energy\]. Since the density of states is equal for $\uparrow$ and $\downarrow$ spins [^3], neither spin species is favored in the tunneling process, resulting in a net spin-current of zero. Formally, the vanishing of the spin-current can be understood by replacing the momentum summation with integration over energy, [*i.e.* ]{}$\sum_{{{\mathbf{k}}}{{\mathbf{p}}}} \to \int\int \mathrm{d} E_\text{R} \mathrm{d} E_\text{L} N_\mathrm{R}(E_\text{R})N_\mathrm{L} (E_\text{L})$. When $\mathbf{m}\to 0$, Eq. (\[eq:finalspin\]) dictates that $$\begin{aligned} \label{eq:integral} I^\mathrm{S}_z \sim \sum_{\substack{\alpha=\pm\\\beta=\pm}} \alpha\beta \int \int& \mathrm{d} E_{\mathrm{R},\alpha}\mathrm{d} E_{\mathrm{L},\beta} N^\alpha_\mathrm{R}(E_{\mathrm{R},\alpha})N^\beta_\mathrm{L}(E_{\mathrm{L},\beta})\notag\\ &\times \Bigg[\frac{n(E_{\mathrm{R},\alpha})-n(E_{\mathrm{L},\beta})}{E_{\mathrm{R},\alpha} - E_{\mathrm{L},\beta}}\Bigg].\end{aligned}$$ Since the densities of states for the $\uparrow$- and $\downarrow$-populations are equal in the individual subsystems, [*i.e.* ]{}$N^\uparrow(E)=N^\downarrow(E)\equiv N(E)$, the integrand of Eq. (\[eq:integral\]) becomes spin-independent such that the summation over $\alpha$ and $\beta$ yields zero. Thus, no spin-current will exist at $eV=0$ over a tunneling barrier separating two systems with spin-orbit coupling alone. In the general case where both ferromagnetism and spin-orbit coupling are present, the $\uparrow$ and $\downarrow$ densities of states at, say, the Fermi level are different, leading to a persistent spin-current across the junction due to the difference between $N^\uparrow(E)$ and $N^\downarrow(E)$. We now consider a special case where the bulk structures indicated in Fig. \[fig:setup\] are reduced to two thin-film ferromagnets in the presence of electrical fields that are perpendicular to each other, say $\mathbf{E}_\text{L} = (E_\text{L},0,0)$ and $\mathbf{E}_\text{R} = (0,E_\text{R},0)$, as shown in Fig. \[fig:specialcase\](a) and (b). In this case, we have chosen an in-plane magnetization for each of the thin-films. Specializing to Fig. \[fig:specialcase\](a), we have $\mathbf{m}_\text{L} = (0,m_{\text{L},y}, m_{\text{L},z})$ and $\mathbf{m}_\text{R} = (m_{\text{R},x},0, m_{\text{R},z})$. Furthermore, assume that the electrons are restricted from moving in the “thin” dimension, [*i.e.* ]{}${{\mathbf{p}}}= (0,p_y,p_z)$ and ${{\mathbf{k}}}= (k_x,0,k_z)$. In this case, Eq.
(\[eq:finalspin\]) reduces to the form $$I^\text{S}_z = I_0\text{sgn}(m_{\text{L},y}) + \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} I_{1,{{\mathbf{k}}}{{\mathbf{p}}}}\text{sgn}(p_z),$$ where the constants above are $$\begin{aligned} \label{eq:I0} I_0 &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} \frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 J_{{{\mathbf{k}}}{{\mathbf{p}}}}(|\zeta_\mathrm{R}\zeta_\mathrm{L}|-E_\mathrm{R}|k_z\zeta_\mathrm{L}|)}{2|\boldsymbol{\zeta}_\mathrm{R}+\mathbf{B}_{{\mathbf{k}}}||\boldsymbol{\zeta}_\mathrm{L} + \mathbf{B}_{{\mathbf{p}}}|},\notag\\ I_{1,{{\mathbf{k}}}{{\mathbf{p}}}} &= \frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2J_{{{\mathbf{k}}}{{\mathbf{p}}}}E_\mathrm{L}(E_\mathrm{R}|k_zp_z| - |p_z\zeta_\mathrm{R}|)}{2|\boldsymbol{\zeta}_\mathrm{R}+\mathbf{B}_{{\mathbf{k}}}||\boldsymbol{\zeta}_\mathrm{L} + \mathbf{B}_{{\mathbf{p}}}|},\end{aligned}$$ with $$\begin{aligned} \boldsymbol{\zeta}_\mathrm{L} &= 2J\eta(0)(0,m_{\mathrm{L},y},m_{\mathrm{L},z}) &\mathbf{B}_{{\mathbf{p}}}= \xi_\mathrm{L}E_\mathrm{L}(0,-p_z,p_y)& \notag\\ \boldsymbol{\zeta}_\mathrm{R} &= 2J\eta(0)(m_{\mathrm{R},x},0,m_{\mathrm{R},z}) &\mathbf{B}_{{\mathbf{k}}}= \xi_\mathrm{R}E_\mathrm{R}(k_y,0,-k_x)& \notag\\\end{aligned}$$ such that $I_{1,{{\mathbf{k}}}{{\mathbf{p}}}} \neq I_{1,-{{\mathbf{k}}},-{{\mathbf{p}}}}$. Likewise, for the setup sketched in Fig. \[fig:specialcase\](b), one obtains $$\begin{aligned} I^\mathrm{S}_z &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}}\frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 J_{{{\mathbf{k}}}{{\mathbf{p}}}}}{2|\zeta_\mathrm{R} + E_\mathrm{R}(k_y-\i k_x)|^2|\zeta_\mathrm{L}+E_\mathrm{L}(p_y-\i p_x)|^2}\notag\\ &\times\Big[|\zeta_\mathrm{R}\zeta_\mathrm{L}|\sin\Delta\phi + E_\mathrm{R}E_\mathrm{L}(k_y^2+k_x^2)(p_y^2+ p_x^2)|\sin\Delta\chi_{{{\mathbf{k}}}{{\mathbf{p}}}}\notag\\ &\hspace{0.4in}-E_\mathrm{R}|\zeta_\mathrm{L}|(k_y^2+k_x^2)\sin(\chi_k-\phi_\mathrm{L}) \notag\\ &\hspace{0.4in}- \mathrm{E_L}|\zeta_\mathrm{R}|(p_y^2+ p_x^2)\sin(\phi_\mathrm{R}-\chi_{{\mathbf{p}}})\Big],\end{aligned}$$ where $\chi_{{{\mathbf{q}}}}$ obeys $$\tan \chi_{{{\mathbf{q}}}} = -\frac{q_x}{q_y},\; {{\mathbf{q}}}= {{\mathbf{k}}},{{\mathbf{p}}}.$$ From these observations, we can draw the following conclusions: whereas the spin-current is zero for the system in Fig. \[fig:specialcase\](a) and (b) if only spin-orbit coupling is considered, it is non-zero when only ferromagnetism is taken into account. However, in the general case where both ferromagnetism and spin-orbit coupling are included, an *additional term* in the spin-current is induced compared to the pure ferromagnetic case. Accordingly, there is an interplay between the magnetic order and the Rashba-interaction that produces a spin-current which is more than just the sum of the individual contributions. Discussion of results {#sec:discuss} ===================== Having presented the general results for tunneling currents between systems with multiple broken symmetries in the preceeding sections, we now focus on detection and experimental issues concerning verification of our predictions. Consider first the system consisting of two ferromagnetic superconductors separated by a thin, insulating barrier. 
It is well known that for tunneling currents flowing between two $s$-wave SC in the presence of a magnetic field that is perpendicular to the tunneling direction, the resulting flux threading the junction leads to a Fraunhofer-like variation in the DC Josephson effect, given by a multiplicative factor $$D_\text{F}(\Phi) = \frac{\sin(\pi\Phi/\Phi_0)}{(\pi\Phi/\Phi_0)}$$ in the critical current. Here, $\Phi_0 = \pi\hbar/e$ is the elementary flux quantum, and $\Phi$ is the total flux threading the junction due to a magnetic field. Consequently, the presence of magnetic flux in the tunneling junction of two $s$-wave SC threatens to nullify the total Josephson current. In the present case of two $p$-wave FMSC, this is not an issue since we have assumed uniform coexistence of the SC and FM order parameters which is plausible for a weak intrinsic magnetization. The effect of an external magnetic field $\mathbf{H}$ would then simply be to rotate the internal magnetization as dictated by the term $-\mathbf{H}\cdot\mathbf{m}$ in the free energy ${\cal{F}}$ (see [*e.g.* ]{}Ref. ). Thus, there is no diffraction pattern present for the tunneling-currents between two non-unitary ESP FMSC, regardless of how the internal magnetization is oriented. Since the motion of the Cooper-pairs is also restricted by the thin-film structure, there is no orbital effect from such a magnetization. Note that the interplay between ferromagnetism and superconductivity is manifest in the charge- as well as spin-currents, the former being readily measurable. Detection of the induced spin-currents would be challenging, although recent studies suggest feasible methods of measuring such quantities [@malshukov2005]. We comment more on this later in this section. First, we address the issue of how boundary effects affect the order parameters. Studies [@ambegaokar1974; @buchholtz1981; @tanuma2001] have shown that interfaces/surfaces may have a pair-breaking effect on unconventional SC order parameters. This is highly relevant in tunneling junction experiments as in the present case. The suppression of the order parameter is caused by the formation of so-called midgap surface states (also known as zero-energy states) [@hu1994] which occurs for certain orientations of the ${{\mathbf{k}}}$-dependent SC gaps that satisfy a resonance condition. Note that this is not the case for conventional $s$-wave superconductors, for which the gap is isotropic. This pair-breaking surface effect was studied specifically for $p$-wave order parameters in Refs. , and it was found that the component of the order parameter that experiences a sign change under the transformation $k_\perp \to -k_\perp$, where $k_\perp$ is the component of momentum perpendicular to the tunneling junction, was suppressed in the vicinity of the junction. By vicinity of the junction, we here mean a distance comparable to the coherence length, typically of order 1–10 nm. Thus, depending on the explicit form of the superconducting gaps in the FMSC, these could be subject to a reduction close to the junction, which in turn would reduce the magnitude of the Josephson effect we predict. Nevertheless, the latter is nonvanishing in the general case. Since the critical Josephson currents depend on the relative magnetization orientation, one is able to tune these currents in a well-defined manner by varying $\vartheta$. This can be done by applying an external magnetic field in the plane of the FMSC.
In the presence of a rotating magnetic moment on either side of the junction, the Josephson currents will thus vary according to Eq. (\[eq:currentsA2\]), which may be cast into the form $I^\mathrm{C}_\mathrm{J} = I_0 + I_m\cos(\vartheta)$. Depending on the relative magnitudes of $I_0$ and $I_m$, the sign of the critical current may change. Note that such a variation of the magnetization vectors must take place in an adiabatic manner so that the systems can be considered to be in, or near, equilibrium at all times. Our predictions can thus be verified by measuring the critical current at $eV=0$ for different angles $\vartheta$ and comparing the results with our theory. Recently, it has been reported that a spin-triplet supercurrent, induced by Josephson tunneling between two $s$-wave superconductors across a ferromagnetic metallic contact, can be controlled by varying the magnetization of the ferromagnetic contact [@keizer2006]. Moreover, concerning the spin-Josephson current we propose, detection of the induced spin-currents is challenging, although recent studies suggest feasible methods of measuring such quantities [@malshukov2005]. Observation of macroscopic spin-currents in superconductors may also be possible via angle-resolved photoemission experiments with circularly polarized photons [@simon2002], or in spin-resolved neutron scattering experiments [@hirsch1999]. We reemphasize that the above ideas should be experimentally realizable by [*e.g.* ]{}utilizing various geometries in order to vary the demagnetization fields. Alternatively, one may use exchange biasing to an anti-ferromagnet. Such techniques of achieving non-collinearity are routinely used in ferromagnet-normal metal structures [@bass1999]. With regard to the predicted DC spin-current for a system consisting of two ferromagnetic metals with spin-orbit coupling, we here suggest how this effect could be probed for in an experimental setup. For instance, the authors of Ref.  propose a spin-mechanical device which exploits nanomechanical torque for detection and control of a spin-current. Similarly, a setup coupling the electron spin to the mechanical motion of a nanomechanical system is proposed in Ref. . The latter method employs the strain-induced spin-orbit interaction of electrons in a narrow-gap semiconductor. In Ref. , it was demonstrated that a steady-state magnetic-moment current, [*i.e.* ]{}spin-current, will induce a static electric field. This fact may be suggestive in terms of detection [@meier2003; @schutz2003], and could be useful to observe the novel effects predicted in this paper. Summary {#sec:summary} ======= In summary, we have considered supercurrents of spin and charge that exist in FMSC/FMSC and FMSO/FMSO tunneling junctions. In the former case, we have found an interplay between the relative magnetization orientation on each side of the junction and the SC phase difference when considering tunneling between two non-unitary ESP FMSC with coexisting and uniform FM and SC order. This interplay is present in the Josephson channel, offering the opportunity to tune dissipationless currents of spin and charge in a well-defined manner by adjusting the relative magnetization orientation on each side of the junction. As a special case, we considered the situation where the SC phase difference is zero, and found that a dissipationless spin-current *without* charge-current would be established across the junction. Suggestions concerning the detection of the effects we predict have been made.
Moreover, we have derived an expression for a dissipationless spin-current that arises in the junction between two Heisenberg ferromagnets with spin-orbit coupling. We have shown that the spin-current is driven by terms originating from both the ferromagnetic phase difference, in agreement with the result of Ref. , and the presence of spin-orbit coupling itself. In addition, it was found that the simultaneous breaking of time-reversal and inversion symmetry fosters an interplay between ferromagnetism and spin-orbit coupling in the spin-current. Availing oneself of external magnetic and electric fields, our expressions show that the spin-current can be tuned in a well-defined manner. These results are of significance in the field of spintronics in terms of quantum transport, and offer insight into how the spin-current behaves for nanostructures exhibiting both ferromagnetism and spin-orbit coupling. Acknowledgments {#acknowledgments .unnumbered} =============== The authors acknowledge J.-M. B[ø]{}rven for his valuable contribution, and thank A. Brataas, K. B[ø]{}rkje, and E. K. Dahl for helpful discussions. This work was supported by the Norwegian Research Council Grants No. 157798/432 and No. 158547/431 (NANOMAT), and Grant No. 167498/V30 (STORFORSK). Details of Matsubara formalism ============================== Ferromagnetic superconductors {#app:fs} ----------------------------- Inserting Eq. (\[eq:transopip\]) into Eq. (\[eq:Kubo\]), one finds that $$\begin{aligned} \label{eq:N1} \langle \dot{N}_{\alpha\beta}(t)\rangle &= \langle \dot{N}_{\alpha\beta}(t)\rangle_\text{sp} + \langle \dot{N}_{\alpha\beta}(t)\rangle_\text{tp} \notag\\ &= -\int^t_{-\infty}{\mathrm{d}}t \Big[ \langle [M_{\alpha\beta}(t),M^\dag(t')]\rangle{\mathrm{e}^{-\i eV(t-t')}} \notag\\ &\;\;\;\;\;\;\;- \langle [M_{\beta\alpha}^\dag(t),M(t')]\rangle{\mathrm{e}^{\i eV(t-t')}} \notag\\ &\;\;\;\;\;\;\;+ \langle [M_{\alpha\beta}(t),M(t')]\rangle{\mathrm{e}^{-\i eV(t+t')}}\notag\\ &\;\;\;\;\;\;\; - \langle [M_{\beta\alpha}^\dag(t),M^\dag(t')]\rangle{\mathrm{e}^{\i eV(t+t')}} \Big]\end{aligned}$$ where the two first terms in Eq. (\[eq:N1\]) contribute to the single-particle current while the two last terms constitute the Josephson current. Above, we defined $$\begin{aligned} \label{eq:M} M_{\alpha\beta}(t) &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}T_{{{\mathbf{k}}}{{\mathbf{p}}}}c_{{{\mathbf{k}}}\alpha}^\dag(t)d_{{{\mathbf{p}}}\sigma}(t) \notag\\ M(t) &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\sigma'} {{\hat{\cal{D}}}_{\sigma'\sigma}^{(1/2)}(\vartheta)} T_{{{\mathbf{k}}}{{\mathbf{p}}}} c_{{{\mathbf{k}}}\sigma}^\dag(t)d_{{{\mathbf{p}}}\sigma'}(t).\end{aligned}$$ By observing that ${\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}= {(\hat{\boldsymbol{\sigma}}_{\beta\alpha})^{\ast}}$, we can combine Eqs. 
(\[eq:Kubo\])-(\[eq:M\]) to yield $$\begin{aligned} \label{eq:N2} {\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}{\langle\dot{N}_{\alpha\beta}(t)\rangle}_\text{sp} &= 2\Im\text{m}\{ {\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}\Phi_{\alpha\beta,\text{sp}}(-eV) \}\notag\\ {\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}{\langle\dot{N}_{\alpha\beta}(t)\rangle}_\text{tp} &= 2\Im\text{m}\{ {\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}\Phi_{\alpha\beta,\text{J}}(eV){\mathrm{e}^{-2\i etV}} \}\end{aligned}$$ where the Matsubara functions are obtained by performing analytical continuation according to $$\begin{aligned} \label{eq:tildemat} \Phi_{\alpha\beta,\text{sp}}(-eV) &= \lim_{\i{\widetilde{\omega}_\nu}\to -eV+\i 0^+} \widetilde{\Phi}_{\alpha\beta,\text{sp}}(\i{\widetilde{\omega}_\nu}) \notag\\ \Phi_{\alpha\beta,\text{J}}(eV) &= \lim_{\i{\widetilde{\omega}_\nu}\to eV+\i 0^+} \widetilde{\Phi}_{\alpha\beta,\text{tp}}(\i{\widetilde{\omega}_\nu}),\end{aligned}$$ In Eq. (\[eq:tildemat\]), ${\widetilde{\omega}_\nu}= 2\pi \nu/\beta,\; \nu=1,2,3\ldots$ is the bosonic Matsubara frequency and $$\begin{aligned} \label{eq:matsubara} \widetilde{\Phi}_{\text{sp},\alpha\beta}(\i{\widetilde{\omega}_\nu}) &= -\int^\beta_0 \text{d}\tau {\mathrm{e}^{\i{\widetilde{\omega}_\nu}\tau}} \sum_{\stackrel{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {_{{{\mathbf{k}}}'{{\mathbf{p}}}'\sigma_1\sigma_2}}}{{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}{{\hat{\cal{D}}}_{\sigma_1\sigma_2}^{(1/2)}(\vartheta)}\times\notag\\ &T_{{{\mathbf{k}}}{{\mathbf{p}}}}T_{{{\mathbf{k}}}'{{\mathbf{p}}}'}^* \langle{\tilde{\mathrm{T}}}\{ c_{{{\mathbf{k}}}\alpha}^\dag(\tau)d_{{{\mathbf{p}}}\sigma}(\tau)d_{{{\mathbf{p}}}'\sigma_1}^\dag(0)c_{{{\mathbf{k}}}'\sigma_2}(0)\} \rangle,\notag\\ \widetilde{\Phi}_{\text{tp},\alpha\beta}(\i{\widetilde{\omega}_\nu}) &= -\int^\beta_0 \text{d}\tau {\mathrm{e}^{\i{\widetilde{\omega}_\nu}\tau}} \sum_{\stackrel{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {_{{{\mathbf{k}}}'{{\mathbf{p}}}'\sigma_1\sigma_2}}}{{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}{{\hat{\cal{D}}}_{\sigma_1\sigma_2}^{(1/2)}(\vartheta)}\times\notag\\ &T_{{{\mathbf{k}}}{{\mathbf{p}}}}T_{{{\mathbf{k}}}'{{\mathbf{p}}}'}\langle{\tilde{\mathrm{T}}}\{ c_{{{\mathbf{k}}}\alpha}^\dag(\tau)d_{{{\mathbf{p}}}\sigma}(\tau)c_{{{\mathbf{k}}}'\sigma_2}^\dag(0)d_{{{\mathbf{p}}}'\sigma_1}(0)\}\rangle.\end{aligned}$$ Here, ${\tilde{\mathrm{T}}}$ denotes the time-ordering operator, and $\beta=1/k_\text{B}T$ is the inverse temperature. Only ${{\mathbf{k}}}'=(-){{\mathbf{k}}}, {{\mathbf{p}}}'=(-){{\mathbf{p}}}$ contributes in the single-particle (two-particle) channel, while the diagonalized basis $\widetilde{\varphi}_{{{\mathbf{k}}}\sigma}$ dictates that only $\sigma_2=\alpha,\; \sigma_1=\sigma$ contributes in the spin summation. Making use of the relation $\widetilde{\phi}_{{{\mathbf{k}}}\sigma}^\dagger = \phi_{{{\mathbf{k}}}\sigma}^\dagger \hat{U}_{{{\mathbf{k}}}\sigma}$, Eq. 
(\[eq:matsubara\]) becomes $$\begin{aligned} \label{eq:mat1} \widetilde{\Phi}_{\text{sp},\alpha\beta}(\i{\widetilde{\omega}_\nu}) =& \int^\beta_0 \text{d}\tau {\mathrm{e}^{\i{\widetilde{\omega}_\nu}\tau}} \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}{{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)} T_{{{\mathbf{k}}}{{\mathbf{p}}}}T_{{{\mathbf{k}}}{{\mathbf{p}}}}^* \notag\\ &\times \langle{\tilde{\mathrm{T}}}\{ \Big[ \hat{U}_{11{{\mathbf{k}}}\alpha}^*\gamma_{{{\mathbf{k}}}\alpha}^\dag(\tau) + \hat{U}_{12{{\mathbf{k}}}\alpha}^*\gamma_{-{{\mathbf{k}}}\alpha}(\tau) \Big]\notag\\ &\times\Big[ \hat{U}_{11{{\mathbf{k}}}\alpha}\gamma_{{{\mathbf{k}}}\alpha}(0) + \hat{U}_{12{{\mathbf{k}}}\alpha}\gamma_{-{{\mathbf{k}}}\alpha}^\dag(0)\Big]\} \rangle\notag\\ &\times \langle {\tilde{\mathrm{T}}}\Big[ \hat{U}_{11{{\mathbf{p}}}\sigma}\gamma_{{{\mathbf{p}}}\sigma}(\tau) + \hat{U}_{12{{\mathbf{p}}}\sigma}\gamma_{-{{\mathbf{p}}}\sigma}^\dag(\tau) \Big]\notag\\ &\times \Big[ \hat{U}_{11{{\mathbf{p}}}\sigma}^*\gamma_{{{\mathbf{k}}}\sigma}^\dag(0) + \hat{U}_{12{{\mathbf{p}}}\sigma}^*\gamma_{-{{\mathbf{k}}}\sigma}(0) \Big]\} \rangle\end{aligned}$$ $$\begin{aligned} \widetilde{\Phi}_{\text{tp},\alpha\beta}(\i{\widetilde{\omega}_\nu}) =& -\int^\beta_0 \text{d}\tau {\mathrm{e}^{\i{\widetilde{\omega}_\nu}\tau}} \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)} {{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)} T_{{{\mathbf{k}}}{{\mathbf{p}}}}T_{-{{\mathbf{k}}},-{{\mathbf{p}}}} \notag\\ &\times\langle {\tilde{\mathrm{T}}}\{\Big[\hat{U}_{11{{\mathbf{k}}}\alpha}^*\gamma_{{{\mathbf{k}}}\sigma}^\dag(\tau) + \hat{U}_{12{{\mathbf{k}}}\alpha}^*\gamma_{-{{\mathbf{k}}}\sigma}(\tau)\Big]\notag\\ &\times\Big[\hat{U}_{21{{\mathbf{k}}}\alpha}\gamma_{{{\mathbf{k}}}\sigma}(0) + \hat{U}_{22{{\mathbf{k}}}\alpha}\gamma_{-{{\mathbf{k}}}\sigma}^\dag(0) \Big]\}\rangle\notag\\ &\times\langle{\tilde{\mathrm{T}}}\Big[ \hat{U}_{21{{\mathbf{p}}}\sigma}^*\gamma_{{{\mathbf{p}}}\sigma}^\dag(0) + \hat{U}_{22{{\mathbf{p}}}\sigma}^*\gamma_{-{{\mathbf{p}}}\sigma}(0) \Big]\notag\\ &\times\Big[ \hat{U}_{11{{\mathbf{p}}}\sigma}\gamma_{{{\mathbf{p}}}\sigma}(\tau) + \hat{U}_{12{{\mathbf{p}}}\sigma}\gamma_{-{{\mathbf{p}}}\sigma}^\dag(\tau)\Big]\}\rangle\end{aligned}$$ Since our diagonalized Hamiltonian has the form of a free-electron gas, [*i.e.* ]{} $$\begin{aligned} H_\text{FMSC} = \widetilde{H}_0 + \sum_{{{\mathbf{k}}}\sigma} E_{{{\mathbf{k}}}\sigma} \gamma_{{{\mathbf{k}}}\sigma}^\dag\gamma_{{{\mathbf{k}}}\sigma}\end{aligned}$$ with $\widetilde{H}_0 = H_0 - (E_{{{\mathbf{k}}}\uparrow}+E_{{{\mathbf{k}}}\downarrow})$, the product of the new fermion operators $\widetilde{\varphi}_{{{\mathbf{k}}}\sigma}$ in Eq. (\[eq:mat1\]) yield unperturbed Green’s functions according to $$\label{eq:Green} G_\alpha({{\mathbf{k}}},\tau-\tau') = \langle {\tilde{\mathrm{T}}}\{c_{{{\mathbf{p}}}\alpha}^\dag(\tau')c_{{{\mathbf{k}}}\alpha}(\tau)\} \rangle$$ We then Fourier-transform Eq. (\[eq:Green\]) into $$G_\alpha({{\mathbf{k}}},\tau) = \frac{1}{\beta} \sum_{{\omega_m}} {\mathrm{e}^{-\i{\omega_m}\tau}}G_\alpha({{\mathbf{p}}},\i{\omega_m}),$$ where ${\omega_m}= (2m+1)\pi/\beta$, $m=1,2,3\ldots$ is a fermionic Matsubara frequency. The frequency summation over $m$ is evaluated by contour integration as in [*e.g.* ]{}Ref.  
to yield the result $$\begin{aligned} \frac{1}{\beta}\sum_m G_\alpha({{\mathbf{k}}},\i{\omega_m})G_\sigma({{\mathbf{p}}},\i{\widetilde{\omega}_\nu}+&\i{\omega_m}) = \frac{f(E_{{{\mathbf{k}}}\alpha})-f(E_{{{\mathbf{p}}}\sigma})}{\i{\widetilde{\omega}_\nu}+ E_{{{\mathbf{k}}}\alpha} - E_{{{\mathbf{p}}}\sigma}}\notag\\ \frac{1}{\beta}\sum_mG_\alpha({{\mathbf{k}}},\i{\omega_m})G_\sigma({{\mathbf{p}}},\i{\widetilde{\omega}_\nu}-&\i{\omega_m})=\frac{f(E_{{{\mathbf{p}}}\sigma})-f(-E_{{{\mathbf{k}}}\alpha})}{\i{\widetilde{\omega}_\nu}- E_{{{\mathbf{k}}}\alpha} - E_{{{\mathbf{p}}}\sigma}},\end{aligned}$$ where $f(E)=1-f(-E)=1/(1+{\mathrm{e}^{\beta E}})$ is the Fermi distribution. It is then a matter of straight-forward calculations to obtain the result $$\begin{aligned} \label{eq:GenMatsp} \Phi_{\text{sp},\alpha\beta}(-eV)& = \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)}{{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)} T_{{{\mathbf{k}}}{{\mathbf{p}}}}T_{{{\mathbf{k}}}{{\mathbf{p}}}}^*N_{{{\mathbf{k}}}\alpha}^2N_{{{\mathbf{p}}}\sigma}^2\notag\\ \Bigg[& \frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}\Delta_{{{\mathbf{p}}}\sigma\sigma}|^2 \Lambda_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{-1,-1}(-eV)}{(\xi_{{{\mathbf{k}}}\alpha}+E_{{{\mathbf{k}}}\alpha})(\xi_{{{\mathbf{p}}}\sigma}+E_{{{\mathbf{p}}}\sigma})} + \Lambda_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{1,1}(-eV) \notag\\ &+ \frac{|\Delta_{{{\mathbf{p}}}\sigma\sigma}|^2\Lambda_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{-1,1}(-eV)}{\xi_{{{\mathbf{p}}}\sigma}+E_{{{\mathbf{p}}}\sigma}} \notag\\ &+\frac{|\Delta_{{{\mathbf{k}}}\alpha\alpha}|^2\Lambda_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{1,-1}(-eV)}{\xi_{{{\mathbf{k}}}\alpha}+E_{{{\mathbf{k}}}\alpha}} \Bigg]\end{aligned}$$ $$\begin{aligned} \label{eq:GenMat} \Phi_{\text{tp},\alpha\beta}(eV) =& -\sum_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma} {{\hat{\cal{D}}}_{\sigma\beta}^{(1/2)}(\vartheta)} {{\hat{\cal{D}}}_{\sigma\alpha}^{(1/2)}(\vartheta)} T_{{{\mathbf{k}}}{{\mathbf{p}}}}T_{-{{\mathbf{k}}},-{{\mathbf{p}}}}\notag\\ &\times\frac{\Delta_{{{\mathbf{k}}}\alpha\alpha}^\ast\Delta_{{{\mathbf{p}}}\sigma\sigma}}{ 4E_{{{\mathbf{k}}}\alpha}E_{{{\mathbf{p}}}\sigma}} \sum_{\substack{{\lambda}=\pm 1\\{\rho}=\pm 1}} \Lambda_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{{\lambda}{\rho}}(eV),\end{aligned}$$ where $\Lambda_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{{\lambda}{\rho}}(eV)$ is obtained by performing analytical continuation ${\mathrm{i}}{\widetilde{\omega}_\nu}\to eV + {\mathrm{i}}0^+$ on $$\widetilde{\Lambda}_{{{\mathbf{k}}}{{\mathbf{p}}}\sigma\alpha}^{{\lambda}{\rho}}({\mathrm{i}}{\widetilde{\omega}_\nu}) =\frac{{\lambda}[f(E_{{{\mathbf{k}}}\alpha}) - f({\lambda}{\rho}E_{{{\mathbf{p}}}\sigma})]}{{\mathrm{i}}{\widetilde{\omega}_\nu}+ {\rho}E_{{{\mathbf{k}}}\alpha}-{\lambda}E_{{{\mathbf{p}}}\sigma}};\quad {\lambda},{\rho}=\pm 1.$$ We also provide the details of the persistent spin-supercurrent for $\Delta_{\sigma\sigma}=0$. Writing the Josephson current Eq. 
(\[eq:currentsA2\]) out explicitly, one has that $I^\text{C}_\text{J} = e I^+$ and $I^\text{S}_\text{J} = -I^-$ where $$\begin{aligned} \label{eq:spincrazy} I^\pm =& \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} |T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \Bigg[ \cos^2(\vartheta/2) \frac{|\Delta_{{{\mathbf{k}}}\uparrow\uparrow}\Delta_{{{\mathbf{p}}}\uparrow\uparrow}|}{E_{{{\mathbf{k}}}\uparrow}E_{{{\mathbf{p}}}\uparrow}}\sin\Delta\theta_{\uparrow\uparrow} F_{{{\mathbf{k}}}{{\mathbf{p}}}\uparrow\uparrow} \notag\\ &+ \sin^2(\vartheta/2) \frac{|\Delta_{{{\mathbf{k}}}\uparrow\uparrow}\Delta_{{{\mathbf{p}}}\downarrow\downarrow}|}{E_{{{\mathbf{k}}}\uparrow}E_{{{\mathbf{p}}}\downarrow}}\sin(\theta^\text{L}_{\downarrow\downarrow} - \theta^\text{R}_{\uparrow\uparrow}) F_{{{\mathbf{k}}}{{\mathbf{p}}}\uparrow\downarrow} \notag\\ &\pm \sin^2(\vartheta/2) \frac{|\Delta_{{{\mathbf{k}}}\downarrow\downarrow}\Delta_{{{\mathbf{p}}}\uparrow\uparrow}|}{E_{{{\mathbf{k}}}\downarrow}E_{{{\mathbf{p}}}\uparrow}} \sin(\theta^\text{L}_{\uparrow\uparrow} - \theta^\text{R}_{\downarrow\downarrow}) F_{{{\mathbf{k}}}{{\mathbf{p}}}\downarrow\uparrow} \notag\\ &\pm \cos^2(\vartheta/2) \frac{|\Delta_{{{\mathbf{k}}}\downarrow\downarrow}\Delta_{{{\mathbf{p}}}\downarrow\downarrow}|}{E_{{{\mathbf{k}}}\downarrow}E_{{{\mathbf{p}}}\downarrow}}\sin\Delta\theta_{\downarrow\downarrow} F_{{{\mathbf{k}}}{{\mathbf{p}}}\downarrow\downarrow}\Bigg].\end{aligned}$$ The first and fourth term above vanish when $\Delta\theta_{\sigma\sigma}=0$. By observing that $F_{{{\mathbf{k}}}{{\mathbf{p}}}\uparrow\downarrow} = F_{{{\mathbf{p}}}{{\mathbf{k}}}\downarrow\uparrow}$, we are then able to re-write Eq. (\[eq:spincrazy\]) as $$\begin{aligned} I^\pm =& \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} |T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \sin^2(\vartheta/2) \frac{|\Delta_{{{\mathbf{k}}}\uparrow}\Delta_{{{\mathbf{p}}}\downarrow}|}{E_{{{\mathbf{k}}}\uparrow}E_{{{\mathbf{p}}}\downarrow}}F_{{{\mathbf{k}}}{{\mathbf{p}}}\uparrow\downarrow}\notag\\ &\times \Big[\sin(\theta_{\downarrow}^\text{L} - \theta_{\uparrow}^\text{R}) \pm \sin(\theta_{\uparrow}^\text{L} - \theta_{\downarrow}^\text{R}) \Big] \notag\\ =& e\sum_{{{\mathbf{k}}}{{\mathbf{p}}}} |T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \sin^2(\vartheta/2) \frac{|\Delta_{{{\mathbf{k}}}\uparrow}\Delta_{{{\mathbf{p}}}\downarrow}|}{E_{{{\mathbf{k}}}\uparrow}E_{{{\mathbf{p}}}\downarrow}}F_{{{\mathbf{k}}}{{\mathbf{p}}}\uparrow\downarrow}\notag\\ &\times \Big[\sin\Big( (\theta_{\downarrow}^\text{L}\mp\theta_{\downarrow}^\text{R} - \theta_{\uparrow}^\text{R}\pm\theta_{\uparrow}^\text{L} )/2 \Big)\notag\\ &\times \cos\Big( (\theta_{\downarrow}^\text{L}\pm\theta_{\downarrow}^\text{R} - \theta_{\uparrow}^\text{R}\mp\theta_{\uparrow}^\text{L} )/2 \Big) \Big].\end{aligned}$$ It is clear that the argument of the sine gives 0 for the upper sign, such that $I^\text{C}_\text{J}=0$. But for the lower sign, the argument of the cosine is equal to 0, such that Eq. (\[eq:finalspincrazy\]) is obtained. 
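As a cross-check of the frequency summations entering these results, the standard fermionic sum quoted above can be verified numerically by brute force. The short sketch below (Python/NumPy) does this for the first of the two summation identities; the energies, the inverse temperature, and the bosonic frequency are arbitrary test values, and the propagators are taken in the standard single-pole form $1/(\i\omega_m - E)$, which is an assumption of the sketch rather than a statement about the full model.

```python
# Numerical check (sketch) of the fermionic frequency summation
#   (1/beta) sum_m G_a(k, i w_m) G_s(p, i w~_v + i w_m)
#       = [f(E_ka) - f(E_ps)] / (i w~_v + E_ka - E_ps),
# with G taken in the standard single-pole form 1/(i w_m - E) (assumption).
import numpy as np

beta, E_k, E_p = 5.0, 0.37, -0.82            # arbitrary test values
wv = 2.0 * np.pi * 3 / beta                  # a bosonic Matsubara frequency
m = np.arange(-200000, 200000)               # truncated fermionic frequency sum
wm = (2 * m + 1) * np.pi / beta

lhs = np.sum(1.0 / (1j * wm - E_k) / (1j * (wm + wv) - E_p)) / beta

f = lambda e: 1.0 / (1.0 + np.exp(beta * e))  # Fermi distribution
rhs = (f(E_k) - f(E_p)) / (1j * wv + E_k - E_p)

print(lhs, rhs)   # agree up to the truncation error of the frequency sum
```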
Ferromagnets with spin-orbit coupling {#app:fmsoc} ------------------------------------- The spin-current across the junction can be written as $$\begin{aligned} {\mathbf{I}^\text{S}}&= \Im\mathrm{m}\{{\mathbf{\Phi}}(-eV)\},\notag\\ {\mathbf{\Phi}}(-eV) &= \lim_{\i{\widetilde{\omega}_\nu}\to -eV+\i 0^+}\widetilde{{\mathbf{\Phi}}}(\i{\widetilde{\omega}_\nu}),\end{aligned}$$ where we have defined the Matsubara function $$\begin{aligned} \label{eq:matt1} \widetilde{{\mathbf{\Phi}}}(\i{\widetilde{\omega}_\nu}) = \sum_{{{\mathbf{k}}}{{\mathbf{p}}}\alpha\beta\sigma} &|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2{\hat{\boldsymbol{\sigma}}_{\alpha\beta}^{\vphantom{\dagger}}}\int^\beta_0\mathrm{d}\tau{\mathrm{e}^{\i{\widetilde{\omega}_\nu}\tau}}\notag\\ \times&\langle{\mathrm{T}}\{c_{{{\mathbf{k}}}\sigma}(0)c_{{{\mathbf{k}}}\alpha}^\dag(\tau)\}\rangle\langle{\mathrm{T}}\{d_{{{\mathbf{p}}}\beta}(\tau)d_{{{\mathbf{p}}}\sigma}^\dag(0)\}\rangle.\end{aligned}$$ In Eq. (\[eq:matt1\]), we defined the time-ordering operator ${\mathrm{T}}$ while $\beta$ in the upper integration limit is inverse temperature and ${\widetilde{\omega}_\nu}= 2n\pi/\beta, n=0,1,2,\ldots$ is a bosonic Matsubara frequency. From the definition of the spin-generalized Green’s function $$G^{\alpha\beta}_{{\mathbf{k}}}(\tau-\tau') = -\langle{\mathrm{T}}\{c_{{{\mathbf{k}}}\alpha}(\tau)c_{{{\mathbf{k}}}\beta}^\dag(\tau')\}\rangle,$$ Eq. (\[eq:matt1\]) can be written out explicitely to yield $$\begin{aligned} \label{eq:mat2} \widetilde{{\mathbf{\Phi}}}(\i{\widetilde{\omega}_\nu}) = \frac{1}{\beta} \sum_{{{\mathbf{k}}}{{\mathbf{p}}},m}|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2 \Bigg[&{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}_{\uparrow\uparrow}\Big(G^{\uparrow\uparrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\uparrow\uparrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu}) + G^{\downarrow\uparrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\uparrow\downarrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu})\Big) +{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}_{\uparrow\downarrow}\Big(G^{\uparrow\uparrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\downarrow\uparrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu}) \notag\\ &+ G^{\downarrow\uparrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\downarrow\downarrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu})\Big) +{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}_{\downarrow\uparrow}\Big(G^{\uparrow\downarrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\uparrow\uparrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu}) + G^{\downarrow\downarrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\uparrow\downarrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu})\Big) \notag\\ &+{\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}_{\downarrow\downarrow}\Big(G^{\uparrow\downarrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\downarrow\uparrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu}) + G^{\downarrow\downarrow}_{{\mathbf{k}}}(\i{\omega_m})G^{\downarrow\downarrow}_{{\mathbf{p}}}(\i{\omega_m}- \i{\widetilde{\omega}_\nu})\Big)\Bigg]. \end{aligned}$$ We made use of the Fourier-transformations $$\begin{aligned} G^{\alpha\beta}_{{\mathbf{k}}}(\i{\omega_m}) &= \int^\beta_0 \mathrm{d}\tau{\mathrm{e}^{\i{\omega_m}}}G^{\alpha\beta}_{{\mathbf{k}}}(\tau),\notag\\ G^{\alpha\beta}_{{\mathbf{k}}}(\tau) &= \frac{1}{\beta} \sum_m {\mathrm{e}^{-\i{\omega_m}\tau}}G^{\alpha\beta}_{{\mathbf{k}}}(\i{\omega_m})\end{aligned}$$ in writing down Eq. (\[eq:mat2\]), where ${\omega_m}= 2(m+1)\pi/\beta, m=0,1,2,\ldots$ is a fermionic Matsubara frequency. 
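The spin-resolved propagators entering Eq. (\[eq:mat2\]) are those of Eq. (\[eq:green\]). As a quick numerical sanity check, they can be reproduced by inverting the 2$\times$2 matrix of Eq. (\[eq:H1\]) directly; a minimal sketch (Python/NumPy, with arbitrary test values that are not model parameters) is given below.

```python
# Consistency check (sketch): the closed-form Green functions of Eq. (green)
# follow from direct inversion of (-i*w_n*1 + A_k), with A_k the 2x2 matrix
# of Eq. (H1). All numbers are arbitrary test values.
import numpy as np

eps_up, eps_dn = 0.7, -0.2              # eps_{k,up}, eps_{k,down} (assumed)
zeta_R = 0.4 * np.exp(0.3j)             # complex FM order parameter (assumed)
B_minus = 0.15 * np.exp(-0.8j)          # B_{k,-} = B_{k,x} - i B_{k,y} (assumed)
B_plus = np.conj(B_minus)               # B_{k,+}, since B_k is a real vector
beta, n = 20.0, 3
wn = (2 * n + 1) * np.pi / beta         # fermionic Matsubara frequency

A_k = np.array([[eps_up,                    -zeta_R + B_minus],
                [-np.conj(zeta_R) + B_plus,  eps_dn          ]])
G = np.linalg.inv(-1j * wn * np.eye(2) + A_k)

X = (eps_up - 1j * wn) * (eps_dn - 1j * wn) - abs(zeta_R - B_minus) ** 2
G_upup = (eps_dn - 1j * wn) / X         # G_k^{sigma sigma} for sigma = up
F_dnup = (zeta_R - B_minus) / X         # F_k^{down,up}

print(np.allclose(G[0, 0], G_upup), np.allclose(G[0, 1], F_dnup))   # True True
```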
Having written down the full expression for the Matsubara function in Eq. (\[eq:mat2\]), one can now easily distinguish between components of the spin-current. For instance, only ${\hat{\boldsymbol{\sigma}}^{\vphantom{\dagger}}}_{\alpha\alpha}$ will contribute to the $\hat{\mathbf{z}}$-component of ${\mathbf{I}^\text{S}}$, and the corresponding terms can be read out from Eq. (\[eq:mat2\]). From the present Green functions in Eq. (\[eq:green\]), it is obvious that three types of frequency summations must be performed, namely $$\begin{aligned} J_{{{\mathbf{k}}}{{\mathbf{p}}},r} = \frac{1}{\beta} &\sum_m\Bigg[ \frac{{\omega_m}^r}{[(\varepsilon_{{{\mathbf{k}}}\uparrow}-\i{\omega_m})(\varepsilon_{{{\mathbf{k}}}\downarrow}-\i{\omega_m})-y_{{\mathbf{k}}}^2]}\notag\\ \times&\frac{1}{[(\varepsilon_{{{\mathbf{p}}}\uparrow}-\i{\omega_m}+\i{\widetilde{\omega}_\nu})(\varepsilon_{{{\mathbf{p}}}\downarrow}-\i{\omega_m}+\i{\widetilde{\omega}_\nu})-y_{{\mathbf{p}}}^2]}\Bigg],\end{aligned}$$ where $r$ is an integer. Performing the summation over $m$ using residue calculus, one finds that $$\begin{aligned} J_{{{\mathbf{k}}}{{\mathbf{p}}},r} &= \sum_{\substack{\alpha=\pm\\\beta=\pm}} \frac{\alpha\beta}{4y_{{\mathbf{k}}}y_{{\mathbf{p}}}}\Bigg[\frac{\psi^r_{{{\mathbf{k}}}\alpha}n(\psi_{{{\mathbf{k}}}\alpha})-(\i{\widetilde{\omega}_\nu}+\psi_{{{\mathbf{p}}}\beta})^rn(\psi_{{{\mathbf{p}}}\beta})}{-\i{\widetilde{\omega}_\nu}+\psi_{{{\mathbf{k}}}\alpha}-\psi_{{{\mathbf{p}}}\beta}}\Bigg]\end{aligned}$$ with the definition $\psi_{{{\mathbf{k}}}\alpha} \equiv \varepsilon_{{{\mathbf{k}}}}+\alpha y_{{\mathbf{k}}}$. Separating the general expression Eq. (\[eq:mat2\]) into its spatial components $\widetilde{{\mathbf{\Phi}}} = (\widetilde{\Phi}_x,\widetilde{\Phi}_y,\widetilde{\Phi}_z)$, the components of the spin-current can be extracted according to $I_i^\text{S} = \Im\mathrm{m}\{\Phi_i(-eV)\}$, $i=x,y,z.$ Note that the charge-current in this model, which vanishes for $eV=0$, is obtained by performing the replacement $\hat{\boldsymbol{\sigma}}_{\alpha\beta} \to \hat{1}_{\alpha\beta}$, where $\hat{1}$ is the 2$\times$2 unit matrix.
We find that $$\begin{aligned} \label{eq:mat4} \widetilde{\Phi}_x(\i{\widetilde{\omega}_\nu}) &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} \frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2}{4\gamma_{{\mathbf{k}}}\gamma_{{\mathbf{p}}}}\Bigg[J_{{{\mathbf{k}}}{{\mathbf{p}}},0}\Big( \varepsilon_{{{\mathbf{k}}}\downarrow}(\zeta_\text{L}-{B_{{{\mathbf{p}}},-}}) + (\varepsilon_{{{\mathbf{p}}}\uparrow}+\i{\widetilde{\omega}_\nu})(\zeta_\text{R}-{B_{{{\mathbf{k}}},-}}) + (\varepsilon_{{{\mathbf{p}}}\downarrow}+\i{\widetilde{\omega}_\nu})(\zeta_\text{R}^\dag - B_{{{\mathbf{k}}},+}) + \varepsilon_{{{\mathbf{k}}}\uparrow}(\zeta_\text{L}^\dag - {B_{{{\mathbf{p}}},-}}) \Big) \notag\\ &\hspace{0.8in} -J_{{{\mathbf{k}}}{{\mathbf{p}}},1}\Big( (\zeta_\text{L}-{B_{{{\mathbf{p}}},-}}) + (\zeta_\text{R}-{B_{{{\mathbf{k}}},-}}) + (\zeta_\text{R}^\dag - B_{{{\mathbf{k}}},+}) + (\zeta_\text{L}^\dag -B_{{{\mathbf{p}}},+}) \Big)\Bigg],\notag\\ \widetilde{\Phi}_y(\i{\widetilde{\omega}_\nu}) &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} \i\frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2}{4\gamma_{{\mathbf{k}}}\gamma_{{\mathbf{p}}}}\Bigg[J_{{{\mathbf{k}}}{{\mathbf{p}}},0}\Big( -\varepsilon_{{{\mathbf{k}}}\downarrow}(\zeta_\text{L}-{B_{{{\mathbf{p}}},-}}) - (\varepsilon_{{{\mathbf{p}}}\uparrow}+\i{\widetilde{\omega}_\nu})(\zeta_\text{R}-{B_{{{\mathbf{k}}},-}}) + (\varepsilon_{{{\mathbf{p}}}\downarrow}+\i{\widetilde{\omega}_\nu})(\zeta_\text{R}^\dag - B_{{{\mathbf{k}}},+}) + \varepsilon_{{{\mathbf{k}}}\uparrow}(\zeta_\text{L}^\dag - {B_{{{\mathbf{p}}},-}}) \Big) \notag\\ &\hspace{0.8in} -J_{{{\mathbf{k}}}{{\mathbf{p}}},1}\Big( -(\zeta_\text{L}-{B_{{{\mathbf{p}}},-}}) - (\zeta_\text{R}-{B_{{{\mathbf{k}}},-}}) + (\zeta_\text{R}^\dag - B_{{{\mathbf{k}}},+}) + (\zeta_\text{L}^\dag -B_{{{\mathbf{p}}},+}) \Big)\Bigg],\notag\\ \widetilde{\Phi}_z(\i{\widetilde{\omega}_\nu}) &= \sum_{{{\mathbf{k}}}{{\mathbf{p}}}} \frac{|T_{{{\mathbf{k}}}{{\mathbf{p}}}}|^2}{4\gamma_{{\mathbf{k}}}\gamma_{{\mathbf{p}}}}\Bigg[ J_{{{\mathbf{k}}}{{\mathbf{p}}},0}\Big(\varepsilon_{{{\mathbf{k}}}\downarrow}(\varepsilon_{{{\mathbf{p}}}\downarrow}+\i{\widetilde{\omega}_\nu}) - \varepsilon_{{{\mathbf{k}}}\uparrow}(\varepsilon_{{{\mathbf{p}}}\uparrow}+\i{\widetilde{\omega}_\nu}) + (\zeta_\text{R}-{B_{{{\mathbf{k}}},-}})(\zeta_\text{L}^\dag-B_{{{\mathbf{p}}},+})-(\zeta_\text{R}^\dag-B_{{{\mathbf{k}}},+})(\zeta_\text{L}-{B_{{{\mathbf{p}}},-}}) \Big) \notag\\ &\hspace{0.8in} +J_{{{\mathbf{k}}}{{\mathbf{p}}},1}\Big( \varepsilon_{{{\mathbf{k}}}\uparrow}-\varepsilon_{{{\mathbf{k}}}\downarrow} + \varepsilon_{{{\mathbf{p}}}\uparrow}-\varepsilon_{{{\mathbf{p}}}\downarrow} \Big) \Bigg]. \end{aligned}$$ [99]{} I. Zutic, J. Fabian, and S. D. Sarma, Rev. Mod. Phys **76**, 323 (2004). M. I. D’yakonov, V. I. Perel, Phys. Lett. A **35**, 459 (1971). J. Wunderlich, B. Kaestner, J. Sinova, and T. Jungwirth, Phys. Rev. Lett. **94**, 047204 (2005). J. Tallon, C. Bernhard, M. Bowden, P. Gilberd, T. Stoto, and D. Pringle, IEEE, Trans. Appl. Supercond. **9**, 1696 (1999). S. S. Saxena, P. Agarwal, K. Ahilan, F. M. Grosche, R. K. W. Haselwimmer, M. J. Steiner, E. Pugh, I. R. Walker, S. R. Julian, and P. Monthoux, Nature **406**, 587 (2000). D. Aoki, A. Huxley, E. Ressouche, D. Braihwaite, J. Flouquet, J.-P. Brison, E. Lhotel, and C. Paulsen, Nature **413**, 613 (2001). V. L. Ginzburg, Sov. Phys. JETP **4**, 153 (1957). R. Shen, Z. M. Zheng, S. Liu, and D. Y. Xing, Phys. Rev. B **67**, 024514 (2003). P. Fulde and R. A. Ferrel, Phys. Rev. **135**, A550 (1964); A. I. Larkin and Y. N. Ovchinnikov, Zh. Eksp. 
Teor. Fiz. **47**, 1136 (1964) \[Sov. Phys. JETP **20**, 762 (1965)\]. M. B. Walker and K. V. Samokhin, Phys. Rev. Lett. **88**, 207001 (2002). K. Machida and T. Ohmi, Phys. Rev. Lett. **86**, 850 (2001). K. Ishida, H. Mukuda, Y. Kitaoka, K. Asayama, Z. Q. Mao, Y. Mori, and Y. Maeno, Nature **396**, 658 (1998). K. D. Nelson, Z. Q. Mao, Y. Maeno, and Y. Liu, Science **306**, 1151 (2004). A. Brataas and Y. Tserkovnyak, Phys. Rev. Lett. **93**, 087201 (2004). Y. Tanaka and S. Kashiwaya, Phys. Rev. B **70**, 012507 (2004). T. Koyama and M. Tachiki, Phys. Rev. B **30**, 6463 (1984). M. Gr[ø]{}nsleth, J. Linder, J.-M. B[ø]{}rven, and A. Sudb[ø]{}, Phys. Rev. Lett. **97**, 147002 (2006). D. V. Shopova and D. I. Uzunov, Phys. Rev. B **72**, 024531 (2005). M. L. Kulic, C. R. Physique **7**, 4 (2006); M. L. Kulic, and I. M. Kulic, Phys. Rev. B **63**, 104503 (2001). I. Eremin, F. S. Nogueira, and R.-J. Tarento, Phys. Rev. B **73**, 054507 (2006). T. Dietl, Semicond. Sci. Technol. **17**, 377 (2002). F. Matsukura, H. Ohno, and T. Dietl, Handbook of Magnetic Materials, vol. 14 (Elsevier, 2002). H.-A. Engel, E. I. Rashba, and B. I. Halperin, cond-mat/0603306. F. S. Nogueira and K.-H. Bennemann, Europhys. Lett. **67**, 620 (2004). Y.-L. Lee and Y.-W. Lee, Phys. Rev. B **68**, 184413 (2003). J. C. Slonczewski, Phys. Rev. B **39**, 6995 (1989). T. Yokoyama, Y. Tanaka, and J. Inoue, Phys. Rev. B **72**, 220504 (2005). J. Wang and K. S. Chan, cond-mat/0512430. S. I. Kiselev, J. C. Sankey, I. N. Krivorotov, N. C. Emley, R. J. Schoelkopf, R. A. Buhrman, and D. C. Ralph, Nature **425**, 380 (2003). G. E. W. Bauer, A. Brataas, Y. Tserkovnyak, and B. J. van Wees, Appl. Phys. Lett. **82**, 3928 (2003). A. Brataas, Y. Tserkovnyak, G. E. W. Bauer, and B. I. Halperin, Phys. Rev. B **66**, 060404 (2002). J. E. Hirsch, Phys. Rev. Lett. **83**, 1834 (1999). Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. Lett. **88**, 117601 (2002). S. Tewari, D. Belitz, T. R. Kirkpatrick, and J. Toner, Phys. Rev. Lett. **93**, 177002 (2004). V. P. Mineev, cond-mat/0507572. V. P. Mineev and K. V. Samokhin, Introduction to Unconventional Superconductivity (Gordon and Breach, New York, 1999). H. Kotegawa, A. Harada, S. Kawasaki, Y. Kawasaki, Y. Kitaoka, Y. Haga, E. Yamamoto, Y. Onuki, K. M. Itoh, and E. E. Haller, J. Phys. Soc. Jpn. **74**, 705 (2005). C. Bernhard, J. L. Tallon, E. Br¨ucher, and R. K. Kremer, Phys. Rev. B **61**, R14960 (2000). C.-R. Hu, Phys. Rev. Lett. **72**, 1526 (1994). V. Ambegaokar, P. G. deGennes, and D. Rainer, Phys. Rev. A **9**, 2676 (1974). L. J. Buchholtz and G. Zwicknagl, Phys. Rev. B **23**, 5788 (1981). Y. Tanuma, Y. Tanaka, and S. Kashiwaya, Phys. Rev. B **64**, 214519 (2001). S. Datta and B. Das, Appl. Phys. Lett. **56**, 665 (1990). A. J. Leggett, Rev. Mod. Phys. **47**, 331 (1975). F. Hardy and A. D. Huxley, Phys. Rev. Lett. **94**, 247006 (2005). K. V. Samokhin and M. B. Walker, Phys. Rev. B **66**, 174501 (2002). M. H. Cohen, L. M. Falicov, and J. C. Phillips, Phys. Rev. Lett. **8**, 316 (1962). H. P. Dahal, J. Jackiewicz, and K. S. Bedell, Phys. Rev. B **72**, 172506 (2005). E. P. Wigner, Gruppentheorie und ihre Anwendung auf die Quantenmechanik der Atomspektren (Frieder. Vieweg, Braunschweig, 1931). J. Shi and Q. Niu, cond-mat/0601531. C. Bruder, A. van Otterlo, and G. T. Zimanyi, Phys. Rev. B **51**, 12904 (1995). H. Ohno, H. Munekata, T. Penney, S. von Molnr, and L. L. Chang, Phys. Rev. Lett. **68**, 2664 (1992). F. Matsukura, H. Ohno, A. Shen, and Y. Sugawara, Phys. Rev. 
B **57**, R2037 (1998). J. Kikkawa and D. Awschalom, Nature **397**, 139 (1999). T. Jungwirth, Q. Niu, and A. H. MacDonald, Phys. Rev. Lett. **88**, 207208 (2002). K. B[ø]{}rkje and A. Sudb[ø]{}, Phys. Rev. B [**74**]{}, 054506 (2006). G. Dresselhaus, Phys. Rev. **100**, 580 (1955). A. G. Mal’shukov, C. S. Tang, C. S. Chu, and K. A. Chao, Phys. Rev. Lett. **95**, 107203 (2005). R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. Gupta, Nature **439**, 825 (2006). M. E. Simon and C.M. Varma, Phys. Rev. Lett. **89**, 247003 (2002). J. Bass and W. P. Pratt Jr., Journal of Magnetism and Magnetic Materials **200**, 274 (1999). P. Mohanty, G. Zolfagharkhani, S. Kettemann, and P. Fulde, Phys. Rev. B **70**, 195301 (2004). Q. Feng Sun, H. Guo, and J. Wang, Phys. Rev. B **69**, (2004). F. Meier and D. Loss, Phys. Rev. Lett. **90**, 167204 (2003). F. Schutz, M. Kollar, and P. Kopietz, Phys. Rev. Lett. **91**, 017205 (2003). G. D. Mahan, Many-Particle Physics (Kluwer Academic/Plenum Publishers, 2002), 3rd ed. [^1]: Note that $N_{\alpha\beta}$ reduces to the number operator when we sum over equal spins, [*i.e.* ]{}$N=\sum_\sigma N_{\sigma\sigma}$. [^2]: For corresponding results in spin-singlet superconductors with helimagnetic order, see Refs. . [^3]: Note that the index $\alpha$ on the quasi-particles does not denote the physical spin of electrons, but is rather to be considered as some unspecified helicity-index. The usage of the word “spin” in this context then refers to this helicity.
--- author: - 'C. Gall , A. C. Andersen ,' - 'J. Hjorth' bibliography: - 'reflist\_ch.bib' date: 'Received January 07, 2011' subtitle: ' II. Rapid dust evolution in quasars at $z$ $\gtrsim$ 6' title: Genesis and evolution of dust in galaxies in the early Universe --- [We intend to assess the most plausible scenarios for generating large amounts of dust in high-$z$ quasars (QSOs) on the basis of observationally derived physical properties of QSOs at $z$ $\gtrsim$ 6. ]{} [We use a chemical evolution model to compute the temporal progression of quantities such as the amount of dust and gas, stellar masses, star formation rates (SFRs) and the metallicity for various combinations of the initial mass function (IMF), the mass of the galaxy, dust production efficiencies, and the degree of dust destruction in the ISM. We investigate the influence of the SFR on the evolution of these quantities, and determine the earliest epochs at which agreement with observations can be achieved. We apply the obtained results to individual QSOs at $z$ $\gtrsim$ 6. ]{} [We find that large quantities of dust can be generated rapidly as early as 30 Myr after the onset of the starburst when the SFR of the starburst is $\gtrsim$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$. The amount of dust and several other physical quantities of individual QSOs at $z$ $\gtrsim$ 6 are satisfactorily reproduced by models at epochs 30, 70, 100, and 170 Myr for galaxies with initial gas masses of 1–3 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$. The best agreement with observations is obtained with top-heavy IMFs. A sizable dust contribution from supernovae (SNe) is however required, while at these epochs dust production by asymptotic giant branch (AGB) stars is negligible. Moderate dust destruction in the ISM can be accommodated. ]{} Introduction ============ Studying QSOs and their host galaxies at high redshift ($z$ $>$ 6) is important to gain deeper insight into the formation and evolution of galaxies, the origin of dust production, and the build up of stellar bulge masses in coevolution with supermassive black holes (SMBHs). While the most distant known QSO, J114816.64+525150.3 [@fan03 herafter J1148+5251], is at $z$ = 6.4, several tens of QSOs have been discovered at $z$ $\sim$ 6 [e.g., @fan04; @fan06; @will07; @jiang10]. Most of the observed QSOs at this redshift, where the epoch of cosmic evolution is $\sim$ 1 Gyr, exhibit extreme physical properties such as very high far-infrared (FIR) luminosities which imply large dust masses [e.g., @omo01; @omo03; @caril01; @bertol02], and SMBHs with masses $>$ 10$^{9}$ ${\mathrm{M}_{\odot}}$ [e.g., @bar03; @will03; @vest04]. Observations of QSOs have shown that dust emission at near-infrared (NIR) wavelengths arise from warm and hot dust ($T$ $\lesssim$ 1000 K) assembled within a few parsec [e.g., @hin06; @jiang06]. The NIR emission is believed to be powered by the active galactic nucleus and related to the QSO activity [e.g., @poll00]. However, two QSOs at $z$ $\sim$ 6 without detectable emission from hot dust have been found [@jiang06; @jiang10]. It has been proposed that these QSOs are at a too early evolutionary stage to have built up significant amounts of hot dust. Alternative scenarios including for example the destruction of the hot dust or dust misalignments from the SMBH have also been discussed [@hao10a; @hao10b; @gued10]. 
The FIR luminosity of $L_{\mathrm{FIR}}$ $\sim$ 10$^{12-13}$ ${\mathrm{L}_{\odot}}$ is attributed to cold dust ($T$ $\sim$ 30–60 K) [e.g., @wan08] which is probably distributed over kilo-parsec scales throughout the host galaxy [@leip10]. The amount of cold dust inferred is about a few times 10$^{8}$ ${\mathrm{M}_{\odot}}$ [e.g., @bertol03; @robs04; @beel06; @mich10b]. The dominant source of the high FIR luminosity is believed to be dust heated by intense star formation in the circumnuclear region [e.g., @caril04; @rie07; @wan08]. Detection of \[C II\] line emission at 158 $\mu$m [@mai05] within a central region with radius $\sim$ 750 pc of the host galaxy of J1148+5251 also implies a high star formation rate surface density of 1000 ${\mathrm{M}_{\odot}}$ yr$^{-1}$ kpc$^{-2}$ [@walt09]. @wan10 derived SFRs between 530–2300 ${\mathrm{M}_{\odot}}$ yr$^{-1}$ from observations of a sample of QSOs at redshift $z$ $>$ 5. Observations of strong metal emission of high-$z$ QSOs [e.g., @bar03; @diet03; @mai03; @bec06] indicate strong star-forming activity in the QSO hosts and solar or supersolar metallicity [e.g., @fan03; @freu03; @juar09]. Theoretical studies of the gas metallicity of QSO hosts also predict supersolar metallicities for $z$ = 5–6 QSOs [e.g., @dimat04]. The high inferred SFRs imply short timescales ($\le 10^{8}$ yr) of the starburst [e.g. @bertol03; @walt04; @dwe07; @rie09], and consequently a young age of the QSOs. An early evolutionary stage of $z$ $>$ 4 QSOs has also been suggested from studies of extinction curves of broad absorption line QSOs [e.g., @galler10] which turned out to be best fitted with extinction curves for SN-like dust [e.g., @mai04; @mai06; @galler10]. This suggests SNe as the preferential source of dust at early epochs [e.g., @dwe98; @mor03; @hiras05; @dwe07; @dwe10], even though the dust productivity of SNe is poorly constrained (for a review see Gall et al. in prep). The dust in high-$z$ QSOs could also have grown in the ISM [e.g., @drai09; @mich10b; @pip10]. Finally, a dominant dust production by asymptotic giant branch stars has been claimed [@val09]. Molecular gas masses of the order of $\sim$ 1–2.5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$ have been inferred from detections of high excitation CO line emission in QSOs at $z$ $>$ 5 within a $\sim$ 2.5 kpc radius region [e.g., @bertol03b; @walt03; @walt04; @wan10]. The dynamical masses inferred from these CO observations are a few times $\sim$ $10^{10-11}$ ${\mathrm{M}_{\odot}}$, which sets an upper limit on stellar bulge masses. These, however, are roughly two orders of magnitude lower than required from the present day black hole-bulge relation [e.g., @mah03]. It therefore has been proposed that the formation of the SMBH occurs prior to the formation of the stellar bulge. QSOs will then have to accrete additional material to build up the required bulge mass [e.g., @walt04; @rie09; @wan10]. For QSOs at $z$ $>$ 6 super-Eddington growth on timescales shorter than $\sim$10$^{8}$ yr seems to be required to form a SMBH $>$ 10$^{9}$ ${\mathrm{M}_{\odot}}$ [e.g., @kaw09]. It has also been predicted that QSOs at $z$ $\sim$ 6 likely have formed in dark matter halos of 10$^{12-13}$ ${\mathrm{M}_{\odot}}$ [e.g., @li07; @kaw09]. In @gall10a [hereafter ] we developed a chemical evolution model to elucidate the conditions required for generating large dust masses in high-$z$ starburst galaxies. 
We showed that galaxies with masses of 1–5 $\times$ 10$^{11}$ ${\mathrm{M}_{\odot}}$ are suitable for enabling the production of large amounts of dust within $\sim$ 400 Myr. In the present paper we apply this model to QSOs at $z$ $\gtrsim$ 6. We perform a more detailed comparison between model results and values inferred from observations of $z$ $\gtrsim$ 6 QSOs to identify the most likely scenario. Furthermore, we consider additional parameters such as the H$_2$ mass and the CO conversion factor for more refined evaluations. In particular, calculations with higher SFRs than in are considered. We aim to determine the earliest epochs at which the model results are in agreement with those from observations. The structure of the paper is as follows: In Sec. \[SEC:MOD\] we briefly review the model developed in . A detailed analysis of the results is presented in Sec. \[SEC:LOSB\] followed by a discussion in Sec. \[SEC:DISC\]. The model {#SEC:MOD} ========= The galactic chemical evolution model from is self-consistent, numerically solved and has been developed to ascertain the temporal progression of dust, gas, metals, and diverse physical properties of starburst galaxies. The incorporated stellar sources are AGB stars in the mass range 3–8 ${\mathrm{M}_{\odot}}$ and SNe. A differentiation between diverse SN subtypes has been implemented. Their roles as sources of dust production, dust destruction or suppliers of gas and heavy elements are taken into account. The lifetime-dependent yield injection by the stellar sources, as well as dust destruction in the ISM due to SN shocks, are also taken into account. Moreover, the formation of a SMBH is considered. Due to the very high SFRs of the starbursts, infall of neutral gas will only affect the system for comparably high infall rates. Thus, gas infall and outflows are neglected. Possible caveats of such an approach are discussed in . The model allows investigations of a broad range of physical properties of galaxies. The prime parameters are summarized in the following. - Three different possible prescriptions for the stellar yields of SNe are implemented, i.e., (i) stellar evolution models by @eld08 (referred to as ‘EIT08M’), (ii) rotating stellar models by @geo09, or (iii) nucleosynthesis models by either @woos95 or @nom06. The stellar yields for AGB stars are taken from @vhoek97. - We differentiate between five different IMFs. These are a @salp55 IMF, a top-heavy, and a mass-heavy IMF, as well as IMFs [@lars98] with characteristic masses of either $m_{\mathrm{ch}}$ = 0.35 ${\mathrm{M}_{\odot}}$ (Larson 1) or $m_{\mathrm{ch}}$ = 10 ${\mathrm{M}_{\odot}}$ (Larson 2). - The SFR at a certain epoch is given by the Kennicutt law [@kenn98] as $\psi(t) = \psi_{\mathrm{ini}} \, (M_{\mathrm{ISM}}(t) / M_{\mathrm{ini}})^k$, where $\psi_{\mathrm{ini}}$ is the initial SFR of the starburst, $M_{\mathrm{ISM}}(t)$ is the gas mass of the ISM at time $t$, $M_{\mathrm{ini}}$ is the initial gas mass of the galaxy, and $k$ = 1.5. - The amount of dust produced by SNe and AGB stars is calculated using the dust formation efficiencies discussed in . For SNe three different dust production efficiency limits are determined, i.e. a ‘maximum’ SN efficiency, a ‘high’ SN efficiency, and a ‘low’ SN efficiency. The ‘maximum’ SN efficiency originates from theoretical SN dust formation models, and corresponds to dust masses of approximately 3–10 $\times$ 10$^{-1}$ ${\mathrm{M}_{\odot}}$ per SN. Similar dust masses have been observed in SN remnants such as Cas A [e.g., @dun09] or Kepler [e.g., @gom09]. 
Dust destruction in reverse shock interaction of about 93 % has been applied to the ‘maximum’ SN efficiency, to obtain the ‘high’ SN efficiency. The resulting amount of dust, for instance, is $\sim$ 2–6 $\times$ 10$^{-2}$ ${\mathrm{M}_{\odot}}$, which is also comparable to some observations of older SN remnants. The ‘low’ SN efficiency is based on SN dust yields (on average about 3 $\times$ 10$^{-3}$ ${\mathrm{M}_{\odot}}$) inferred from observations of SN ejecta. - Dust destruction in the ISM is implemented in terms of the mass of ISM material, $M_{\mathrm{cl}}$, swept up by a single SN shock and cleared of the dust it contains. For calculations in this paper most parameters have the same settings as defined in . We apply the models where the formation of a SMBH has been included. A constant growth rate has been estimated based on the final mass of the SMBH and the considered growth timescale. In this paper the SMBH growth is considered with a shorter growth timescale and calculations are performed with higher initial SFRs. For the SN yields we only consider the case of EIT08M. The parameters which differ from those used in are listed in Table \[TAB:PAR\]. [llll]{} Parameters & Value & Unit & Description\ \ $\psi_{\mathrm{ini}}$ & 3 $\times$ $10^3$, 1 $\times$ $10^4$ & ${\mathrm{M}_{\odot}}$ yr$^{-1}$ & Star formation rate\ $M_{\mathrm{SMBH}}$ & 3 $\times$ $10^{9}$ & ${\mathrm{M}_{\odot}}$ & Mass of the SMBH\ $t_{\mathrm{SMBH}}$ & 1 $\times$ $10^{8}$ & yr & Growth timescale\ & & & for the SMBH\ \ Results {#SEC:LOSB} ======= ![image](15605fig1.eps){width="12cm"} ![image](15605fig2.eps){width="\textwidth"} In this section we present the results of models calculated within short timescales after the starburst. A short enrichment timescale of a few times 10$^7$ yr for an intense starburst with a SFR of $\sim$ 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ has been proposed by, e.g., @bertol03, @walt04, @dwe07, @rie09. Motivated by this suggestion, we are interested in whether the observed large dust masses in excess of $10^{8}$ ${\mathrm{M}_{\odot}}$ can be reached within about 100 Myr. Consequently, we performed calculations with an initial starburst SFR of $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ for galaxies with initial gas masses $M_{\mathrm{ini}}$ = 5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$, $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, $M_{\mathrm{ini}}$ = 3 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, and $M_{\mathrm{ini}}$ = 5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$. For the most massive system with $M_{\mathrm{ini}}$ = 1.3 $\times$ $10^{12}$ ${\mathrm{M}_{\odot}}$ an initial SFR $\psi_{\mathrm{ini}}$ = 1 $\times$ $10^{4}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ is adopted. We included the results for a lower initial SFR of $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ from models computed in for comparison. In we analyzed the evolution of the amount of dust and various physical properties, and found that these are strongly dependent on the mass of the galaxy. Moreover, for a given initial SFR all quantities evolve faster in less massive galaxies. In this paper we perform detailed comparisons between calculated and observed values of the total dust mass, $M_{\mathrm{d}}$, the stellar mass, $M_{\mathrm{\ast}}$, the SFR, $\psi$, and the metallicity, $Z$. We identified the shortest epoch at which some model results are in accordance with observations to be 30 Myr. 
Furthermore, we discuss quantities such as the CO conversion factor, the gas-to-H$_{2}$ mass ratio, and the possible amount of molecular hydrogen. Dust and stellar mass --------------------- In Fig. \[FIG:DUSE30\] we present the results for the mass of dust versus the stellar mass for galaxies with different initial gas masses and initial SFRs at an epoch of 30 Myr. The displayed models are computed for a ‘maximum’ SN efficiency. Dust destruction in the ISM is considered for values of $M_{\mathrm{cl}}$ = 100 ${\mathrm{M}_{\odot}}$ (left panel) and $M_{\mathrm{cl}}$ = 0 (right panel). The dark grey region represents the mass ranges of the stellar mass and dust mass derived from observations of QSOs at $z$ $>$ 6. The lower and upper limits of the stellar mass are estimated by subtracting the molecular gas masses, $M_{\mathrm{H_{2}}}$ from the total dynamical masses, $M_{\mathrm{dyn}}$. Values for $M_{\mathrm{dyn}}$ and $M_{\mathrm{H_{2}}}$ are based on data from @wan10 [and references therein] for three QSOs at $z$ $>$ 6. For an estimation of $M_{\mathrm{dyn}}$ an inclination angle $i = 65\degr$ of the gas disk is taken for QSO J1148+5251 [@walt04], while $i = 40\degr$ similar to @wan10 is applied to the remaining two QSOs. We adopt the lower and upper limits for the dust masses from @beel06 and @mich10b. The light grey region covers the range of derived stellar masses and dust masses from observations of QSOs $>$ 5 [@wan10; @mich10b]. The boundaries for the stellar masses are estimated similar to the QSOs at $z$ $>$ 6 (with $i = 40\degr$ for deriving $M_{\mathrm{dyn}} $). We set the lower dust limit to $10^{8}$ ${\mathrm{M}_{\odot}}$ to account for the uncertainties of derived dust masses from observations. Despite the short time span of 30 Myr, it is evident that most models are within the plausible mass ranges illustrated by the light and dark grey regions. This signifies a rapid build-up of a large amount of dust, provided SNe produced dust with a ‘maximum’ SN efficiency. For galaxies with $M_{\mathrm{ini}}$ = 1–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ all models with an initial SFR of 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ are in agreement with the observed values for the stellar masses for QSOs at $z$ $>$ 6. The requirements for $M_{\mathrm{d}}$ are best accomplished with either a top-heavy, mass-heavy or Larson 1 IMF for both values of $M_{\mathrm{cl}}$. In a galaxy with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ the amount of dust reached with a Larson 2 IMF and $M_{\mathrm{cl}}$ = 100 ${\mathrm{M}_{\odot}}$ also matches with the dark grey region. Models for either a ‘high’ or ‘low’ SN efficiency did not reach $10^{8}$ ${\mathrm{M}_{\odot}}$ of dust. Only in the most massive galaxy ($M_{\mathrm{ini}}$ = 1.3 $\times$ $10^{12}$ ${\mathrm{M}_{\odot}}$) and for top-heavy IMFs with a ‘high’ SN efficiency an amount of dust $>$ $10^{8}$ ${\mathrm{M}_{\odot}}$ is obtained. In Fig. \[FIG:DUSE\] we illustrate the results for dust and stellar masses at an epoch of 100 Myr. We present models for a ‘maximum’ SN efficiency (top row) and a ‘high’ SN efficiency (bottom row), while dust destruction in the ISM is considered for a $M_{\mathrm{cl}}$ = 800 ${\mathrm{M}_{\odot}}$ (left column), $M_{\mathrm{cl}}$ = 100 ${\mathrm{M}_{\odot}}$ (middle column), and $M_{\mathrm{cl}}$ = 0 (right column). We carried out calculations for a ‘low’ SN efficiency, but the obtained dust masses of these models remained below $10^{8}$ ${\mathrm{M}_{\odot}}$. 
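To make the mass bookkeeping behind these grey regions concrete, the following minimal sketch (in Python) applies the inclination correction to the observed $M_{\mathrm{dyn}}\sin^{2}i$ and subtracts the molecular gas mass, as described above; the input numbers are the observationally derived values for J1148+5251 collected later in Table \[TAB:OBS\] and serve only as an illustration.

```python
import math

# Illustrative values for J1148+5251, taken from the observational
# compilation summarized in Table [TAB:OBS] (placeholders, not new data):
M_dyn_sin2i = 4.5e10   # M_dyn * sin^2(i) from the CO line width [M_sun]
M_H2        = 2.4e10   # molecular gas mass [M_sun]
i_deg       = 65.0     # adopted inclination of the gas disk [deg]

# Correct the dynamical mass for inclination, then subtract the gas mass
M_dyn  = M_dyn_sin2i / math.sin(math.radians(i_deg))**2
M_star = M_dyn - M_H2

print(f"M_dyn  ~ {M_dyn:.2e} M_sun")    # ~5.5e10 M_sun
print(f"M_star ~ {M_star:.2e} M_sun")   # ~3.1e10 M_sun
```

Repeating this exercise with the lower and upper ends of the observed ranges of $M_{\mathrm{dyn}}\sin^{2}i$ and $M_{\mathrm{H_{2}}}$ yields the boundaries of the grey regions shown in the figures.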
At these early epochs the stellar mass, $M_{\mathrm{\ast}}$, is higher for models with an initially larger SFR (at fixed IMF and $M_{\mathrm{ini}}$). The stellar mass is also larger for IMFs biased towards low mass stars (at fixed $M_{\mathrm{ini}}$ and $\psi_{\mathrm{ini}}$). It is interesting to note that in the less massive galaxies (0.5–1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$) dust masses obtained for the higher initial SFR ($\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$) are lower than dust masses obtained for the lower SFR ($\psi_{\mathrm{ini}}$ = $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$). Moreover, in these galaxies the amount of dust reached at an epoch of 30 Myr (see Fig. \[FIG:DUSE30\]) and for $M_{\mathrm{cl}}$ = 100–800 ${\mathrm{M}_{\odot}}$ is also higher than that seen at the epoch of 100 Myr for same $M_{\mathrm{cl}}$. We find that the stellar masses for models with an initial SFR $\psi_{\mathrm{ini}}$ = 1–3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ are within the observed region for $z$ $>$ 5 QSOs. For some models with $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$, stellar masses are within the mass range for $z$ $>$ 6 QSOs. This in particular applies to systems with either $M_{\mathrm{ini}}$ = 0.5–1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ (all IMFs) or $M_{\mathrm{ini}}$ = 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ with top heavy IMFs. Stellar masses within the dark grey area are also found with $\psi_{\mathrm{ini}}$ = $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ for galaxies with either $M_{\mathrm{ini}}$ = 3–13 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and top heavy IMFs or for the less massive galaxies in combination with IMFs favoring low mass stars. In the case of $M_{\mathrm{cl}}$ = 800 ${\mathrm{M}_{\odot}}$ and for a ‘maximum’ SN efficiency most models with $M_{\mathrm{ini}}$ = 3–13 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and $\psi_{\mathrm{ini}}$ = $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ fit within the dark grey region. However for the higher initial SFR $M_{\mathrm{d}}$ is within or close to this zone only for galaxies with $M_{\mathrm{ini}}$ = 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and top-heavy IMFs. For $M_{\mathrm{cl}}$ = 100 ${\mathrm{M}_{\odot}}$ and a ‘maximum’ SN efficiency the dust mass obtained in a galaxy with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and for top-heavy IMFs is in agreement with observations, while the dust masses in the more massive galaxies for some IMFs and SFRs are higher than required. In the case of no dust destruction the dust masses reached for some IMFs and SFRs are able to match within the dark grey area also in the least massive galaxy. We find that in case of a ‘high’ SN efficiency and for $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ in galaxies with initial masses 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and top-heavy IMFs high dust masses are possible, even if dust destruction is included (i.e., $M_{\mathrm{cl}}$ = 0–100 ${\mathrm{M}_{\odot}}$). Metallicity and SFR ------------------- We next present the obtained metallicities and SFRs at the time of observation for the models discussed above. Fig. \[FIG:MZSFR\] depicts the metallicity versus SFR at epochs of 30 Myr (left panel) and 100 Myr (right panel). 
With respect to observations of QSOs $>$ (5) 6 we marked the range of derived values as a dark grey shaded zone. The lower and upper limits of the SFR are based on observations by @bertol03 and @wan10. We set the lower limit for the metallicity at the solar value and the upper limit at 5 ${\mathrm{Z}_{\odot}}$. This is based on the inferred solar or supersolar metallicities in high-$z$ QSOs [e.g., @bar03; @diet03; @fan03; @freu03; @mai03; @dimat04; @bec06; @juar09]. We note that there are no strong constraints on the upper limit and therefore the zone above 5 ${\mathrm{Z}_{\odot}}$ is marked as light grey shaded region to account for the uncertainty. We find that at an epoch of 30 Myr high metallicities in the less massive galaxies are already reached. The best result is attained by a system with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$, and IMFs biased towards higher masses. For a galaxy with $M_{\mathrm{ini}}$ = 5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$ all models with either the same $\psi_{\mathrm{ini}}$ or with the lower initial SFR, and top-heavy IMFs are within the dark grey shaded region as well. At an epoch of 100 Myr the metallicity has increased in all models, while the SFR in the less massive galaxies has significantly decreased. The models for $M_{\mathrm{ini}}$ = 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$, and top heavy IMFs constitute the best results. In galaxies with $M_{\mathrm{ini}}$ = 3 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, the same initial SFR, and either a mass-heavy or Larson 1 IMF the obtained values for $Z$ and $\psi(t)$ are also in agreement with the observed values. The metallicities in the low mass galaxies which give the best agreement at 30 Myr are now shifted above the upper limit, while the SFRs remain in the observed range. The models for a galaxy with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, a lower initial SFR of $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$, and top-heavy IMFs at this epoch (100 Myr) reach sufficiently high metallicities, while high enough SFRs are sustained. CO conversion factor and gas-to-H$_2$ mass ratio {#SEC:CONH2M} ------------------------------------------------ To evaluate the calculated models, we additionally consider the relation between the gas-to-H$_2$ mass ratio and the CO conversion factor used to derive the molecular gas mass in a galaxy. Detections of high excitation CO line emission in QSOs at $z$ $>$ (5) 6 indicate the presence of 0.7–2.5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$ of molecular hydrogen [e.g., @bertol03b; @walt03; @walt04; @rie09; @wan10]. This molecular gas mass is derived from the relation $M_{\mathrm{H_{\mathrm{2}}}}$ = $\alpha$ $\times$ $L'_{\mathrm{CO(1- 0)}}$, where $\alpha$ is the conversion factor between the low excitation CO J = 1–0 line luminosity $L'_{\mathrm{CO(1- 0)}}$ and $M_{\mathrm{H_{\mathrm{2}}}}$. For spiral galaxies $\alpha$ is typically $\sim$ 4.6 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ [e.g., @sol91], while for the centre of nearby ultra luminous starburst galaxies a conversion factor of $\alpha$ = 0.8–1 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ is appropriate [e.g., @dow98]. 
The latter value of $\alpha$ is usually used for e.g., high-$z$ QSOs [e.g., @bertol03b; @walt03; @wan10], Ultra Luminous Infrared Galaxies (ULIRGs) [@yan10] or for high-$z$ sub-mm galaxies (SMGs) [@tec04; @gre05]. However $\alpha$ is not well known in the case of very high excitation. In our models we have computed the total (H + He) gas mass $M_{\mathrm{G}}$ which remains in the galaxies at a given epoch. The molecular gas mass, $M_{\mathrm{H_{\mathrm{2}}}}$, constitutes a certain fraction of the total gas mass, $M_{\mathrm{G}}$. Hence we introduce the gas-to-H$_2$ mass ratio as $\eta_{\mathrm{g,H_{\mathrm{2}}}} = M_{\mathrm{G}} / M_{\mathrm{H_{\mathrm{2}}}}$. The CO conversion factor can thereby be expressed as a function of $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ as $$\label{EQ:METALL} \alpha = \frac{ M_{\mathrm{G}} }{\eta_{\mathrm{g,H_{\mathrm{2}}}} \, L'_{\mathrm{CO(1- 0)}} } ,$$ where $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ $\ge$ 1 is kept as a free parameter. In ULIRGs and SMGs a major fraction of the gas is believed to exist in form of molecular hydrogen [e.g., @sanmi96]. For example a value for $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ of $\sim$ 1 has been found for the $z$ = 3 radio galaxy B3 J2330+3927 [@debreu03]. This might also be the case for QSOs and suggests a gas-to-H$_2$ ratio between 1 and 2. In Fig. \[FIG:ALPHA\] we show the results for $\alpha$ as a function of $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ with $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ for models with $M_{\mathrm{ini}}$ $\le$ 5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and with $\psi_{\mathrm{ini}}$ = $10^{4}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ for the most massive galaxy. Calculations are performed for two different epochs; 30 Myr (top panel) and 100 Myr (bottom panel). The IMFs involved are the top-heavy IMF and the Salpeter IMF. We adopt a CO line luminosity $L'_{\mathrm{CO(1- 0)}}$ = 2.7 $\times$ 10$^{10}$ K km s$^{-1}$ pc$^{2}$ which is based on the derived values of J1148+5251 and J0840+5624 [e.g., @bertol03b; @walt03; @wan10]. The difference of $\alpha$ from calculations with a lower $L'_{\mathrm{CO(1- 0)}}$ (i.e., $L'_{\mathrm{CO(1- 0)}}$ = 1.5 $\times$ 10$^{10}$ K km s$^{-1}$ pc$^{2}$) is indicated by the arrow in Fig. \[FIG:ALPHA\]. The grey shaded area signifies a possible range for $\alpha$ and $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ as discussed above. For a fixed value of $\alpha$ the gas-to-H$_2$ ratio increases with increasing initial mass of the galaxy. This is as a consequence of the larger amounts of gas mass remaining in the more massive galaxies at the epochs of interest (see also ). Conversely, for a fixed $\eta_{\mathrm{g,H_{\mathrm{2}}}}$, $\alpha$ increases with increasing $M_{\mathrm{ini}}$. The maximum value of $\alpha$ is obtained for $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ = 1, i.e., $M_{\mathrm{G}} \equiv M_{\mathrm{H_{\mathrm{2}}}}$. We find that at both epochs, the maximum value of $\alpha$ for the less massive galaxies is lower than $\sim$ 4.6 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$. For a given $M_{\mathrm{ini}}$, $\alpha$, and $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ are lower at later epochs. For a lower $L'_{\mathrm{CO(1- 0)}}$, $\alpha$ shifts to higher values for a given $\eta_{\mathrm{g,H_{\mathrm{2}}}}$. 
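As a minimal numerical illustration of Eq. (\[EQ:METALL\]), the short sketch below evaluates $\alpha$ for a few gas-to-H$_2$ ratios; the adopted total gas mass is a placeholder of the order obtained by the models, while $L'_{\mathrm{CO(1- 0)}}$ is the value quoted above.

```python
# Placeholder inputs of the order discussed in the text (not model output):
M_G  = 8.0e10   # total (H + He) gas mass remaining in the galaxy [M_sun]
L_CO = 2.7e10   # adopted CO(1-0) line luminosity [K km/s pc^2]

# Eq. (EQ:METALL): alpha = M_G / (eta_gH2 * L'_CO), with eta_gH2 >= 1
for eta in (1.0, 2.0, 5.0, 10.0):
    alpha = M_G / (eta * L_CO)      # [M_sun (K km/s pc^2)^-1]
    M_H2 = alpha * L_CO             # implied molecular gas mass, = M_G / eta
    print(f"eta = {eta:5.1f} -> alpha = {alpha:4.2f}, M_H2 = {M_H2:.1e} M_sun")
```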
At an epoch of 30 Myr the values for $\alpha$ and $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ are similar for all IMFs and galaxies with $M_{\mathrm{ini}}$ $>$ 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, while the difference becomes larger with decreasing $M_{\mathrm{ini}}$. Feasible values of $\alpha$ and $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ are possible for galaxies with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and the higher value of $L'_{\mathrm{CO(1- 0)}}$. For top-heavy IMFs $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ = 1 results in a maximum $\alpha$ of $\sim$ 2.3 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$, while for $\alpha$ = 0.8 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$, the fraction of molecular hydrogen is about one third of the total gas mass. In the least massive galaxy ($M_{\mathrm{ini}}$ = 5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$) and for a top-heavy IMF $\alpha$ $\approx$ 0.8 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ presupposes that all the gas in this system is in the form of molecular hydrogen. In more massive systems with $M_{\mathrm{ini}}$ = 1–3 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, a value of $\alpha$ $\approx$ 0.8–1 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ presumes that the molecular hydrogen constitutes only a small fraction of about 1/10–1/20 of the total gas mass. At an epoch of 100 Myr a clear separation between the IMFs is noticeable. For a Salpeter IMF the galaxies underwent a stronger gas exhaustion than for a top-heavy IMF, which is more significant for the less massive galaxies. As for the epoch at 30 Myr the system with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and top-heavy IMF is plausible , i.e., for $\alpha$ $\sim$ 0.8 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ the gas-to-H$_2$ ratio $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ = 2. For the galaxies with $M_{\mathrm{ini}}$ = 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ and top-heavy IMF we obtain $\alpha$ = 1.4–1.5 for a corresponding gas-to-H$_2$ ratio $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ = 5–10, resulting in a molecular mass of $M_{\mathrm{H_{\mathrm{2}}}}$ $\sim$ 3.7 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$. Alternatively, a higher value for $\alpha$ up to 4.6 results in a lower $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ = 2–4. It is noteworthy that for the assumed $L'_{\mathrm{CO(1- 0)}}$ = 2.7 $\times$ 10$^{10}$ K km s$^{-1}$ pc$^{2}$, $\alpha$ = 4.6 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ implies $M_{\mathrm{H_{\mathrm{2}}}}$ = 1.2 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$. The likelihood that such a high $M_{\mathrm{H_{\mathrm{2}}}}$ could have been built up within a short timescale of 30–100 Myr however is unclear. 
Discussion {#SEC:DISC} ========== [lccccccl]{} Object & $z$ &$L'_{\mathrm{CO(1- 0)}}$ & SFR &$M_{\mathrm{d}}$ &$M_{\mathrm{H_{\mathrm{2}}}}$ &$M_{\mathrm{dyn}}\sin^{2}i$ &Ref.\ & &10$^{10}$ K km s$^{-1}$ pc$^{2}$ &${\mathrm{M}_{\odot}}$ yr$^{-1}$ &$10^{8}$ ${\mathrm{M}_{\odot}}$ &$10^{10}$ ${\mathrm{M}_{\odot}}$ &$10^{10}$ ${\mathrm{M}_{\odot}}$ &\ \ J1148+5251 & 6.42 & 3.0 $\pm$ 0.3 & 2380 & 5.9 $\pm$ 0.7 & 2.4 / 3.7 & 4.5 & 1,2,3,4\ J1048+4637 & 6.23 & 1.2 $\pm$ 0.2 & 650 & 4.3 $\pm$ 0.6 & 1.0 & 4.5 & 1,2,3\ J2054-0005 & 6.06 & 1.5 $\pm$ 0.3 & 1180 & 3.4 $\pm$ 0.8 & 1.2 & 4.2 & 5,2,3\ J0840+5624 & 5.85 & 3.2 $\pm$ 0.4 & 1460 & 4.7$\pm$ 0.9 & 2.5 & 24.2 & 6,2,3\ [lcccclll]{} Object & SFR & $M_{\mathrm{d}}$ & $M_{\mathrm{\ast}}$ & $Z$ & $\alpha$ & $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ & $M_{\mathrm{H_{\mathrm{2}}}}$\ & ${\mathrm{M}_{\odot}}$ yr$^{-1}$ &$10^{8}$ ${\mathrm{M}_{\odot}}$ &$10^{10}$ ${\mathrm{M}_{\odot}}$ &${\mathrm{Z}_{\odot}}$ & & &$10^{10}$ ${\mathrm{M}_{\odot}}$\ \ J1148+5251(A) &1600 & 3.1–5.1 & 3.5 & 2 & 0.8–2.3 & 3.0–1.0 &2.16–6.2\ J1148+5251(B) &1000 & 2.4–8.9 & 5.4 & 5 & 0.8–1.55 & 2.0–1.0 &2.10–4.1\ J1048+4637(A) &1000 & 2.4–8.9 & 5.4 & 5 & 0.8–2.8 &3.4–1.0 &1.2–4.2\ J1048+4637(B) &610 & 3.5 & 2.8 & 3.4 & 0.8–4.5 & 5.8–1.0 & 1.2–6.7\ J2054-0005 &1150 & 2.7 & 4.7 & 4.4& 0.8–3.2 & 3.0–1.0 &1.2–4.8\ J0840+5624(A) &1500 & 2.1 & 11.0 & 4 & 0.8–4.6 & 7.0–1.2 &2.5–14.7\ J0840+5624(B) &1400 & 4.8 & 20.0 & 5 & 0.8–4.6 & 10–1.8 &2.5–14.7\ \ Individual QSOs at $z$ $\gtrsim$ 6 {#SEC:AIQ} ---------------------------------- We ascertain plausible scenarios by comparing the model results discussed in Sect. \[SEC:LOSB\] with the derived values from observations for specific quantities of individual QSOs listed in Table \[TAB:OBS\]. The calculated values for diverse properties such as $M_{\mathrm{d}}$, $M_{\mathrm{\ast}}$, $M_{\mathrm{H_{\mathrm{2}}}}$, metallicity, and SFR from the models discussed below, which best match the QSOs, are listed in Table \[TAB:CALC\]. The corresponding model parameters, and all models which match the discussed properties within the range defined by observations, are summarized in Table \[TAB:SUMA\]. We find that at an epoch of 30 Myr the models with an initial mass of the galaxy of $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, an initial SFR of $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and either a Larson 2 IMF, a top-heavy or a mass-heavy IMF reproduce the observed quantities of some QSOs at $z$ $>$ 6 in the case of a ‘maximum’ SN efficiency. In particular, the model with a top-heavy IMF is best applicable to the QSO J1148+5251. The amount of dust reached is between 3.1–5.1 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$ for dust destruction in the ISM with $M_{\mathrm{cl}}$ = 100–0 ${\mathrm{M}_{\odot}}$. A stellar mass of $M_{\mathrm{\ast}}$ $\sim$ 3.5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$ is obtained. The metallicity in the system is $\sim$ 2 ${\mathrm{Z}_{\odot}}$ and a SFR of $\sim$ 1600 ${\mathrm{M}_{\odot}}$ yr$^{-1}$ could be sustained. This model is also favored given its values of $\alpha$ and $\eta_{\mathrm{g,H_{\mathrm{2}}}}$. The higher H$_2$ mass of $M_{\mathrm{H_{\mathrm{2}}}}$ = 3.7 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$ derived by @rie09 leads to $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ $<$ 2 and $\alpha$ $\sim$ 1.4 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$. 
However, such a galaxy with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ implies that the dynamical mass is larger than the derived $M_{\mathrm{dyn}}$ of $\sim$ 5.5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$ (for a $i$ = 65$\degr$) by @walt04. While none of the models for $M_{\mathrm{ini}}$ = 5 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$, which was used by @dwe07, can be applied, a lower inclination angle similar to what has been adopted for the other QSOs might be considered. Another possible match with the properties of J1148+5251 is achieved by the same set of values for $M_{\mathrm{ini}}$, $\psi_{\mathrm{ini}}$, SN efficiency and IMF at an epoch of 100 Myr. The calculated stellar mass is within the estimated range from observations and the dust mass is $\sim$ 2.4–8.9 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$, depending on $M_{\mathrm{cl}}$. However, the SFR dropped to $\sim$ 1000 ${\mathrm{M}_{\odot}}$ yr$^{-1}$, while the metallicity increased to $\sim$ 5 ${\mathrm{Z}_{\odot}}$. In view of the lower SFR reached by these models than suggested by observations at epochs either 30 or 100 Myr, a higher initial SFR than the 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ might be conceivable. In Fig. \[FIG:MZSFR\] one notices that a longer evolution with the same (or lower) initial SFR as used here does not lead to a better agreement with observations, since this results in an even lower SFR and higher metallicity. In view of this we find that this scenario at an epoch of 100 Myr is more appropriate for the QSOs J1048+4637 [@fan03] at $z$ = 6.23 and J2054-0005 [@jiang08] at $z$ = 6.06. For the latter QSO a fine tuning of the epoch to 70 Myr results in a better match. At this epoch we obtain a SFR of 1150 ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and a metallicity of $\sim$ 4.4 ${\mathrm{Z}_{\odot}}$. The amount of dust is $M_{\mathrm{d}}$ $\sim$ 2.7 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$ (for $M_{\mathrm{cl}}$ = 100 ${\mathrm{M}_{\odot}}$), while the stellar mass is $M_{\mathrm{\ast}}$ $\sim$ 4.7 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$. The lower derived $L'_{\mathrm{CO(1- 0)}}$ leads to $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ $\sim$ 3–4 in case $\alpha$ = 0.8–1 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ is applied, while for $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ $\sim$ 2 a value for $\alpha$ of $\sim$ 1.6 would be required. For J1048+4637 the model for a lower initial SFR of $\psi_{\mathrm{ini}}$ = $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ might be an option. The SFR is $\sim$ 610 ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and the metallicity is $\sim$ 3.4 ${\mathrm{Z}_{\odot}}$. While the stellar mass remains low, $M_{\mathrm{\ast}}$ $\sim$ 2.8 $\times$ $10^{10}$ ${\mathrm{M}_{\odot}}$, a dust mass of $M_{\mathrm{d}}$ $\sim$ 3.5 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$ is obtained for a ‘maximum’ SN efficiency and moderate dust destruction in the ISM. However, for $\alpha$ = 0.8–1 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ the gas-to-H$_2$ ratio is $\sim$ 5–6, since for the lower initial SFR the system at this epoch is less exhausted. At either the same or a later epoch the more massive galaxies with $M_{\mathrm{ini}}$ = 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, an initial SFR of $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and IMFs biased towards higher stellar masses are applicable to some $z$ $\sim$ 6 QSOs. 
The stellar mass, metallicity, and SFR of these systems are in agreement with observations, with either top-heavy IMFs or a mass-heavy IMF leading to the best results. The amount of dust can be produced by SNe with a ‘high’ SN efficiency and $M_{\mathrm{cl}}$ $\le$ 100 ${\mathrm{M}_{\odot}}$, although the dust masses reached are at the lower limit. At an epoch of 170 Myr the system with $M_{\mathrm{ini}}$ = 3 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ is plausible for the QSO J0840+5624 [@fan06] at $z$ = 5.85, if an inclination angle higher than the assumed 40$\degr$ is assumed. The SFR is $\sim$ 1500 ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and the metallicity is $\sim$ 4 ${\mathrm{Z}_{\odot}}$. The stellar mass is around 1.1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$. The amount of dust obtained with a ‘high’ SN efficiency is 2.1 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$, while with the ‘maximum’ SN efficiency the dust mass exceeds a few times $10^{9}$ ${\mathrm{M}_{\odot}}$ (as already at an epoch of 100 Myr). However, for a $L'_{\mathrm{CO(1- 0)}}$ = 3.2 $\times$ 10$^{10}$ K km s$^{-1}$ pc$^{2}$ as derived for this QSO the gas-to-H$_2$ ratio of $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ $\sim$ 5–7 for $\alpha$ = 0.8–1 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ is higher than for the less massive galaxies. In case of a lower $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ of $\sim$ 2, $\alpha$ $\sim$ 2.7 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ is required. The larger galaxy with $M_{\mathrm{ini}}$ = 5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, $\psi_{\mathrm{ini}}$ = 3 $\times$ $10^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ and top heavy IMF can account for the observed quantities at an epoch of 400 Myr. The amount of dust reached with a ‘high’ SN efficiency is $\sim$ 4.8 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$ and the SFR is $\sim$ 1400 ${\mathrm{M}_{\odot}}$ yr$^{-1}$. The metallicity and stellar mass are in agreement, but the fraction of $M_{\mathrm{H_{\mathrm{2}}}}$ is around 1/10 for $\alpha$ = 0.8 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$, while $\alpha$ $\sim$ 4 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$ is needed for $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ of $\sim$ 2. A higher amount of $M_{\mathrm{H_{\mathrm{2}}}}$ as denoted by the higher value of $\alpha$ in these massive galaxies might be possible. For example, the presence of large amounts of cold and low-excited molecular gas have been suggested by @pap01 for the QSO APM 08279+5255 at $z$ = 3.91. . 
\[TAB:SUMA\] [llcclll]{} Epoch &$M_{\mathrm{ini}}$ &$\psi_{\mathrm{ini}}$ & SN efficiency & $M_{\mathrm{cl}}$ & IMF &z $\gtrsim$ 6 QSOs from our sample\ ${\mathrm{M}_{\odot}}$ & 10$^{3}$ ${\mathrm{M}_{\odot}}$ yr$^{-1}$ & &${\mathrm{M}_{\odot}}$ &\ \ 30 Myr& 5 $\times$ $10^{10}$ &3 &max &100 &top-heavy, Larson 1, 2, mass-heavy&\ &&3 &max &0 &top-heavy, Larson 1, 2&\ &&[**3**]{} &[**max**]{} & [**0**]{} &[**mass-heavy**]{}&\ &&1 &max &0,100 &top-heavy, Larson 2&\ &1 $\times$ $10^{11}$ &[**3** ]{} &[**max**]{} & [**100**]{} &[**Larson 2, mass-heavy**]{} &\ &&[**3** ]{} &[**max**]{} &[**0**]{} &[**mass-heavy**]{}&\ &&[**3** ]{} &[**max**]{} & [**100–0**]{} &[**top-heavy**]{} &J1148+5251(A)\ &&3 &high &0 &Larson 2&\ 70 Myr& 1 $\times$ $10^{11}$ &[**1** ]{} &[**max**]{} & [**100–0**]{} &[**top-heavy**]{} &J2054-0005\ 100 Myr& 5 $\times$ $10^{10}$ &3 &max &100 & Larson 2&\ &&[**3**]{} &[**max**]{} &[**0**]{} &[**top-heavy, Larson 2**]{}&\ &&1 &max &100 & Larson 2&\ &1 $\times$ $10^{11}$ &[**3** ]{} &[**max**]{} &[**100**]{} &[**Larson 2**]{}&\ &&3 &high &0 & Larson 2&\ &&[**3** ]{} &[**max**]{} &[**100–0**]{} &[**top-heavy**]{}&J1148+5251(B), J1048+4637(A)\ &&1 &max &100 &Larson 2&\ &&[**1** ]{} &[**max**]{} &[**100–0**]{} &[**top-heavy**]{}&J1048+4637(B)\ &3 $\times$ $10^{11}$ &3 &max &800 & Larson 1, mass-heavy&\ &&[**3**]{} &[**max**]{} &[**800**]{} &[**top-heavy, Larson 2**]{}&\ &&3 &max &100 &Larson 1, mass-heavy&\ &&3 &high &0 & mass-heavy&\ &&[**3**]{} &[**high**]{} &[**0**]{} &[**top-heavy, Larson 2**]{}&\ &&[**1**]{} &[**max**]{} &[**800**]{} &[**Larson 2**]{}&\ &&1 &high &0 & Larson 2&\ &5 $\times$ $10^{11}$ &3 &max &800 & mass-heavy&\ &&[**3**]{} &[**max**]{} &[**800**]{} &[**top-heavy, Larson 2**]{}&\ &&3 &high &100 &top-heavy, mass-heavy&\ &&[**3**]{} &[**high**]{} &[**100**]{} &[**Larson 2**]{}&\ &&3 &high &0 & mass-heavy&\ &&[**3**]{} &[**high**]{} &[**0**]{} &[**top-heavy, Larson 2**]{}&\ 170 Myr& 3 $\times$ $10^{11}$ &[**1** ]{} &[**high**]{} & [**100–0**]{} &[**top-heavy**]{} &J0840+5624(A)\ 400 Myr& 5 $\times$ $10^{11}$ &[**1** ]{} &[**high**]{} & [**100–0**]{} &[**top-heavy**]{} &J0840+5624(B)\ \ SN efficiency and mass of the galaxy {#SEC:MVH} ------------------------------------ Our calculations show that with increasing $M_{\mathrm{ini}}$ (and fixed $\psi_{\mathrm{ini}}$, IMF) the SN dust production efficiencies can either be lowered or the degree of dust destruction increased in order to reach the required large dust masses. This is best demonstrated by models for the most massive galaxies with $M_{\mathrm{ini}}$ = 3–13 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ in which a ‘high’ SN efficiency is sufficient in case of moderate to no dust destruction. However, the largest system with $M_{\mathrm{ini}}$ = 1.3 $\times$ $10^{12}$ ${\mathrm{M}_{\odot}}$ exceeds the plausible dynamical masses derived from observations of QSOs at $z$ $\gtrsim$ (5) 6 by more than an order of magnitude. Moreover, our computed models show that at least one of the properties of either SFR, $Z$ or $M_{\mathrm{\ast}}$ are not in agreement with observations at any epoch for any assumption of either the initial SFR or the IMF . Additionally the values for $\eta_{\mathrm{g,H_{\mathrm{2}}}}$ remain very high even for $\alpha$ = 4.6 ${\mathrm{M}_{\odot}}$ (K km s$^{-1}$ pc$^{2}$)$^{-1}$. We therefore conclude that such a massive system as advocated by @val09, cannot be applied to QSOs at $z$ $>$ (5) 6. 
Although systems with $M_{\mathrm{ini}}$ = 3–5 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$ are appropriate for some QSOs at $z$ $<$ 6, such massive systems can only be applied to QSOs $>$ 6 when the inclination angle is lower than the assumed average angle. The models which best reproduce the observed properties of QSOs $>$ 6 are for a galaxy with $M_{\mathrm{ini}}$ = 1 $\times$ $10^{11}$ ${\mathrm{M}_{\odot}}$, but necessitate a ‘maximum’ SN efficiency and/or a moderate amount of dust destruction. The overall rapid evolution of dust and some properties in these models indicates that such QSOs could possibly be present at a higher redshift than $z$ $>$ 6.4. An interesting example at a lower redshift of $z$ = 1.135 is the ULIRG SST J1604+4304, which shows properties similar to the considered high-$z$ QSOs. @kawa10 reported a dust mass in this ULIRG of 1–2 $\times$ $10^{8}$ ${\mathrm{M}_{\odot}}$, a metallicity of around 2.5 ${\mathrm{Z}_{\odot}}$ and estimated the age of the stellar population to be 40–200 Myr. The possibility of moderate dust destruction in the ISM was already discussed in . We found that the amount of dust for most models better coincide with observations for $M_{\mathrm{cl}}$ $\le$ 100 ${\mathrm{M}_{\odot}}$, which would be in agreement with the values of $M_{\mathrm{cl}}$ of 50–70 ${\mathrm{M}_{\odot}}$ derived for a multiphase ISM [e.g., @mct89; @dwe07]. The ‘maximum’ SN efficiency might be problematic. There is only little observational evidence that SN can be very efficient [e.g., @wils05; @dou01b; @dun09], and theoretical models predict significant dust destruction in reverse shocks of SNe [e.g., @bia07; @noz07; @noz10]. On the other hand, these models also show that the effectiveness of dust destruction depends on various properties such as the geometry of the shocks, the density of the ejecta and the ISM, the size and shape of the grains, clumping in the SNe ejecta, and different SN types. In addition there is some observational evidence that Type IIn SNe and sources such as luminous blue variables are possibly efficient dust producers [@fox09; @smi09; @gom10]. While dust production and destruction in SNe is yet unresolved, a ‘maximum’ SN efficiency cannot be ruled out (e.g., Gall et al. in prep). Alternatively, either dust formation in the outflowing winds of QSOs or grain growth in the ISM might be an option [e.g., @elv02; @dwe07; @drai09; @mich10b; @pip10; @dwe10] as supplementary or primary dust sources. However it remains to be investigated, if dust grain growth can be as efficient as required under the prevailing conditions of high star formation activity and a short time span. Typical grain growth timescales in molecular clouds are of order $10^{7}$ yr, but depending on the density and metallicity these can possibly be shorter [e.g., @hiras00; @zhuk08; @drai09]. The fact that the starburst is assumed to occur in an initially dust free galaxy implies that heavy elements first need to be ejected into the ISM before grain growth can take place. In forthcoming work we will further develop the model to investigate the impact of different infall and outflow scenarios on the evolution of the amount of dust and various properties of a galaxy. We would like to thank Michal Micha[ł]{}owski, Darach Watson, Thomas Greve, and Sabine König for informative and helpful discussions. We also thank the anonymous referee for useful suggestions which helped improve the paper. The Dark Cosmology Centre is funded by the DNRF.
--- author: - | Chao-Wei Tsai$^1$, Jean L. Turner$^1$, Sara C. Beck$^2$,\ Lucian P. Crosthwaite$^3$, Paul T. P. Ho$^{4}$, and David S. Meier$^{5}$\ $^1$ Department of Physics and Astronomy, UCLA, Los Angeles, CA 90095-1547, U.S.A.\ $^2$ Department of Physics and Astronomy, Tel Aviv University, Ramat Aviv, Israel\ $^3$ Northrop Grumman, San Diego, CA, U.S.A.\ $^4$ Institute of Astronomy and Astrophysics, Academia Sinica, Taipei, Taiwan\ $^5$ National Radio Astronomy Observatory, Socorro, NM, U.S.A.\ [*E-mail(C.-W.T.): [email protected]*]{} title: High Resolution Radio Maps of Four Nearby Spiral Galaxies --- Introduction ============ The formation of open clusters has been well studied in the past two decades (see review by Lada & Lada 2003). In contrast, due to the lack of well-studied young clusters as massive as a typical globular cluster, little is known about the formation of globular clusters (Larson 1992). Although it has been suggested that globular clusters share some basic formation mechanism with open clusters due to the apparent continuity in properties (Larson 1992), the formation of the two types of clusters, whose total masses differ by a few orders of magnitude, might be induced by significantly different physical phenomena. It has been suggested that the young clusters found by the *Hubble Space Telescope* (*HST*) are protoglobular clusters formed by merger events (Holtzman et al. 1992; see the review by Whitmore 2003). However, how star formation is triggered by mergers and what triggers star cluster formation in non-interacting galaxies (Böker et al. 2002) are still unclear. In order to understand the formation of globular clusters, it is essential to trace the youngest protoglobular clusters. These young and massive star clusters, or super star clusters – “SSCs” – contain hundreds to thousands of O stars in their first few million years of life. They are often embedded in their dusty natal cocoons (Wynn-Williams et al. 1972) and visually obscured by their surrounding birth clouds. The high optical extinction applies as well to the [H[ii]{}]{} regions which are large enough, with much of the extinction internal to the nebula itself (Kawara et al. 1989; Ho et al. 1990; Beck et al. 1996). Thus, forming SSCs usually cannot be seen in the optical. However, embedded clusters can be detected through their nebulae, which glow with reprocessed UV light from O stars. The thermal free-free emission from these nebulae can be detected and identified at radio wavelengths. Hence radio continuum observations, which suffer much less extinction, are often useful tracers of young star-forming regions. We report a centimeter continuum study at 2 and 6 cm of the centers of four nearby and well-studied spiral galaxies, IC 342, Maffei II, NGC 2903, and NGC 6946. The centers of these galaxies are infrared-bright and molecular gas rich. The goal of this investigation is to identify young SSC candidates in the star-forming regions in the centers of these galaxies using subarcsecond radio continuum imaging. Subarcsecond imaging with the *VLA* in its extended configuration maximizes sensitivity to bright and compact radio “supernebulae” over low brightness disk synchrotron emission. Observing at shorter wavelengths, $\lambda<$ 2 cm, also minimizes the contribution of synchrotron emission, which falls with frequency. The results presented here are detailed in Tsai et al. 
(2006) [lcccccc]{}\ Galaxy & D & $S_{6cm}^{compact}$/$S_{6cm}^{total;a}$ & $S_{2cm}^{compact}$/$S_{2cm}^{total;a}$ & [H[ii]{}]{}$^{b}$ & SNR$^{b}$ & Indeterminate$^{b}$\ & (Mpc) & (mJy/mJy) & (mJy/mJy) & Thick/Thin & &\ IC 342 & 3.3 & 7.5/82 & 7.5/38 & 5/1 & 6 & 0\ Maffei II & 5.0 & 13.5/107 & 12.7/46 & 4/3 & 3 & 0\ NGC 2903 & 8.9 & 1.7/35 & 3.0/12 & 4/2 & 1 & 0\ NGC 6946 & 5.9 & 6.8/39 & 6.1/23 & 1/3 & 2 & 3\ \ \ \ \ Observations ============ The radio continuum data at 2 cm and 6 cm were acquired at the *NRAO* *Very Large Array*[[^1]]{}. In order to enhance the sensitivity of these images, we have combined our unpublished *VLA* A-configuration data with all available *VLA* archival data. Only archived data sets of A-, B-, or C-configurations with time on source $>$ 10 minutes and phase center within 18 arcsec (1/10 of the *VLA* primary beam at 2 cm) from the centers of our measurements were used. Data calibration was done using *AIPS*, following the standard reduction procedures. The flux measurements were done with matching (*u,v*) coverages at 6 and 2 cm, which include matched shortest baselines of A-configuration at 2 cm and longest baselines of A-configuration at 6 cm. The largest angular scales sampled by the images are $\sim$ $6{''}$. Fluxes and peak fluxes are therefore lower limits to the total flux if extended emission is present. The images were then convolved to the same beamsize, to have matching maximum baseline lengths. Final rms noise levels in blank regions of maps are $\lsim 0.05$ mJy/beam at 6 cm, and $\lsim 0.15$ mJy/beam at 2 cm. We note that the uncertainty of the absolute flux scale is $\lsim 5~\%$. The Radio Continuum Images and the Compact Radio Sources ======================================================== The 6 cm naturally-weighted radio continuum maps of the four galaxies are shown in Figure 1, contoured at the same 4 $\sigma$ levels. We see emission from the central $\sim$ 150 pc except in the case of Maffei II, in which the emission extends over a distance of $\sim$ 350 pc. In each of the galaxies, the 6 cm continuum is strongest at the nucleus. Peak flux densities at 6 cm are 1.2, 2.6, 0.4, and 1.9 mJy/beam for IC 342, Maffei II, NGC 2903, and NGC 6946, respectively. In each galaxy there are strong compact sources embedded in extended emission. The western part of the extended emission in IC 342 contains 5 sources aligned from north-west to south-east, while the eastern part has fewer sources and their emission is also significantly weaker. Ten sources are identified in Maffei II, lying on a twisted, inverse “S” shaped line north-south over 10${''}$ ($\sim$ 240 pc) on the eastern edge of the extended emission. In NGC 2903 and NGC 6946, 6 cm continuum emission is confined to single central sources surrounded by extended emission. The diffuse emission is “patchy” because the large-scale extended emission has been resolved out by the high resolution. Our deep, high resolution radio continuum maps at 6 cm and 2 cm reveal 38 compact radio sources (cross marks in Figure 1). The sources meet at least one of the following criteria: (1) $5~\sigma$ detection at the peak, (2) $5~\sigma$ in integrated emission at one wavelength, or (3) $4~\sigma$ emission detection in both wavebands. The spectral index, defined as $S_{\nu} \propto \nu^{\alpha}$, indicates the nature of a radio source. 
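As an illustration of how such an index is used in practice, the short sketch below computes a two-point index $\alpha_{6-2}$ from a pair of flux densities and applies the classification described next; the flux values and the assumed band frequencies are placeholders rather than measurements from this work.

```python
import math

def spectral_index(S_6cm, S_2cm, nu_6=4.86, nu_2=14.94):
    """Two-point spectral index alpha, defined through S_nu ~ nu**alpha.
    The default band centres (GHz) are assumed values for the VLA 6 cm
    and 2 cm bands, not numbers quoted in this work."""
    return math.log(S_2cm / S_6cm) / math.log(nu_2 / nu_6)

# Placeholder flux densities in mJy (illustrative only); the thresholds
# follow the indicative boundaries adopted in the next paragraph.
for S6, S2 in [(1.0, 1.3), (1.0, 0.9), (1.0, 0.5)]:
    a = spectral_index(S6, S2)
    if a > 0.0:
        kind = "HII-thick (optically thick free-free)"
    elif a >= -0.1:
        kind = "HII-thin (optically thin free-free)"
    else:
        kind = "SNR (synchrotron)"
    print(f"alpha_6-2 = {a:+.2f} -> {kind}")
```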
Based on the values of $\alpha_{6-2}$, we classified sources as [H[ii]{}]{}–thick (optically thick free-free – $\alpha > 0.0$), [H[ii]{}]{}–thin (optically thin free-free – $\alpha \sim -0.1$), and SNR (supernova remnant; synchrotron – $\alpha < -0.1$). For the cases identified as [H[ii]{}]{} regions, the required Lyman continuum rate, $N_{Lyc}$, can be derived from the observed flux density at 2 cm. $N_{O7}$, the number of standard O7 stars needed to generate $N_{Lyc}$ in a radio nebula, can also be derived. We here assume ionization-bounded nebulae; if UV photons escape the [H[ii]{}]{} region, our OB luminosities will be underestimates of the true Lyman continuum rates. Discussion ========== More than half of the 38 identified compact sources are [H[ii]{}]{} regions with flat or positive indices ($\alpha \gsim -0.1$). More than 60% of these [H[ii]{}]{} regions have a spectrum rising from 6 to 2 cm, indicating that they are at least partly optically thick at 6 cm. The radio fluxes of these radio [H[ii]{}]{} regions require that they harbor hundreds of massive stars. For electron temperatures of $\sim 10,000$ K, rising cm-wave spectra imply that the rms electron densities are $\sim 10^{4}~cm^{-3}$. The deconvolved sizes of [H[ii]{}]{} regions correspond to diameters of $\sim$ 6–14 pc at the distances of the galaxies. These [H[ii]{}]{} regions are much smaller than the 30 Doradus [H[ii]{}]{} region ($\sim$ 200 pc in diameter for the region of $EM > 10^{4}~pc~cm^{-6}$; see Mills et al. 1978 and Kennicutt 1984) in the Large Magellanic Cloud (LMC). They are larger but similar in nature to the Galactic compact [H[ii]{}]{} regions in W49 (Mezger et al. 1967; Conti & Blum 2002) and NGC 3603 (de Pree et al. 1999; Sung & Bessell 2004; Mücke et al. 2002). One possibility for the fact that these [H[ii]{}]{} regions are so compact and dense is that they are younger than 30 Doradus or NGC 3603. Nearly two dozen radio sources in the four galaxies of this study were found to have flat or rising spectra. These are presumably thermal nebulae excited by massive stars, for which the 2 cm fluxes give a lower limit to the required $N_{Lyc}$ and the corresponding $L_{H_{\alpha}}$. We cannot detect unresolved (diameter $<$ 15 pc) nebulae with $N_{Lyc} < 4\times 10^{50}~ s^{-1}$ due to our sensitivity limits, corresponding to 40 O stars in IC 342 and 120 stars in NGC 6946. However, we also cannot detect more luminous regions if they are large and resolved (size $\gg$ 15 pc); in this limit we are sensitive only to [H[ii]{}]{} regions with $EM > 10^4 \rm ~cm^{-6}\,pc$. The youngest and therefore smallest [H[ii]{}]{} regions are probably not affected by this latter criterion. The luminosity function (LF) of the 23 thermal sources in 4 spirals shown in Figure 2 suggests a broken power law with a turnover point at $L_{H_{\alpha}} = 2 \times 10^{39}$ erg s$^{-1}$ and a cutoff at the luminous end around $L_{H_{\alpha}} = 10^{40}$ erg s$^{-1}$. The total mass of such a cluster at the cutoff would be $\sim 2 \times 10^{5} \; M_{{\odot}}$ (assuming a Salpeter initial mass function, IMF), consistent with the mass of Galactic globular clusters. A very similar feature appears in the LF of 21 thermal radio sources in M 82, another nearby starburst galaxy, based on the more complete radio study of Allen (1999), with sensitivity limits of $N_{Lyc} < 2\times 10^{50}~ s^{-1}$ (20 O7 stars) and radio nebulae with sizes $\lsim$ 50 pc. The agreement of the LF shapes of M 82 and our sample suggests that the broken power law may be real. 
However, the results, based on a luminosity function of 44 sources, are statistically inconclusive. Larger samples would help to clarify this issue. The steep power-law LF suggests a cutoff at the high-luminosity end for the young clusters associated with radio nebulae in spiral galaxies. The upper end of the luminosity function is around $L_{H_{\alpha}} \sim 10^{40}~ erg~ s^{-1}$. This corresponds to $\sim$ 750 O7 stars. We observe two such clusters, in Maffei II and NGC 2903, and their nebulae are $\lsim5$ pc in extent. The total mass of such a cluster would be $\sim$ 2.2 $\times 10^{5}~ M_{{\odot}}$ (assuming a Salpeter IMF), consistent with the mass of Galactic globular clusters. Summary ======= We have identified 38 compact radio sources in the centers of four nearby spiral galaxies, IC 342, Maffei II, NGC 2903, and NGC 6946, at subarcsecond resolution at 6 and 2 cm with the VLA. Over half of the compact sources appear to be [H[ii]{}]{} regions, based on their radio spectra, of which two-thirds appear to have optically thick free-free emission. The compact [H[ii]{}]{} regions contain ionized gas with densities as high as $n_{i} \sim 10^{4}~ cm^{-3}$, which suggests that these [H[ii]{}]{} regions are relatively young. The largest [H[ii]{}]{} regions we detect require the equivalent of $\sim$ 500–1,000 O7 stars to excite them; since they are dense, compact [H[ii]{}]{} regions, by analogy with Galactic compact [H[ii]{}]{} regions, they are likely to be the very youngest of the massive young clusters, a Myr or less in age. Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge support by U.S. NSF grants AST 0307950 and AST 0071276 for this research. References {#references .unnumbered} ========== Allen, M. L. 1999, Ph.D. Thesis, Univ. Toronto Beck, S. C. et al. 1996, ApJ, 457, 610 Böker, T. et al. 2002, AJ, 123, 1389 Conti, P. S. & Blum, R. D. 2002, ApJ, 564, 827 de Pree, C. G. et al. 1999, AJ, 117, 2902 Ho, P. T. P. et al. 1990, ApJ, 349, 57 Holtzman, J. A. et al. 1992, AJ, 103, 691 Kawara, K. et al. 1989, ApJ, 337, 230 Kennicutt, R. C. 1984, ApJ, 287, 116 Lada, C. J. & Lada, E. A. 2003, ARA&A, 41, 57 Larson, R. B. 1992, in “The Astronomy and Astrophysics Encyclopedia”, ISBN 0-442-26364-3, page 672 Mezger, P. G. et al. 1967, ApJ, 150, 807 Mills, B. Y. et al. 1978, MNRAS, 185, 263 Mücke, A. et al. 2002, ApJ, 571, 366 Sung, H. & Bessell, M. S. 2004, AJ, 127, 1014 Turner, J. L. & Ho, P. T. P. 1983, ApJL, 268, L79 Turner, J. L. & Ho, P. T. P. 1994, ApJ, 421, 122 Tsai, C.-W. et al. 2006, AJ, submitted Whitmore, B. C. 2003, STScI symp. series, 14, 153 Wynn-Williams, C. G. et al. 1972, MNRAS, 160, 1 Wynn-Williams, C. G. & Becklin, E. E. 1985, ApJ, 290, 108 [^1]: The National Radio Astronomy Observatory is a facility of the *National Science Foundation* operated under cooperative agreement by Associated Universities, Inc.
--- author: - Marcel von Maltitz - Dominik Bitzer - Georg Carle title: 'Data Querying and Access Control for Secure Multiparty Computation[^1]' --- [^1]: This work has been supported by the German Federal Ministry of Education and Research, project DecADe, grant 16KIS0538 and the German-French Academy for the Industry of the Future.
--- abstract: 'During the period of reionization the Universe was filled with a cosmological background of ionizing radiation. By that time a significant fraction of the cosmic gas had already been incorporated into collapsed galactic halos with virial temperatures $\la 10^4$K that were unable to cool efficiently. We show that photoionization of this gas by the fresh cosmic UV background boiled the gas out of the gravitational potential wells of its host halos. We calculate the photoionization heating of gas inside spherically symmetric dark matter halos, and assume that gas which is heated above its virial temperature is expelled. In popular Cold Dark Matter models, the Press-Schechter halo abundance implies that $\sim 50$–$90\%$ of the collapsed gas was evaporated at reionization. The gas originated from halos below a threshold circular velocity of $\sim10$–15 km s$^{-1}$. The resulting outflows from the dwarf galaxy population at redshifts $z=5$–10 affected the metallicity, thermal and hydrodynamic state of the surrounding intergalactic medium. Our results suggest that stellar systems with a velocity dispersion $\la 10~{\rm km~s^{-1}}$, such as globular clusters or the dwarf spheroidal galaxies of the Local Group, did not form directly through cosmological collapse at high redshifts.' author: - 'Rennan Barkana[^1]' - 'Abraham Loeb[^2]' title: 'The Photo-Evaporation of Dwarf Galaxies During Reionization' --- Introduction ============ The formation of galaxies is one of the most important, yet unsolved, problems in cosmology. The properties of galactic dark matter halos are shaped by gravity alone, and have been rigorously parameterized in hierarchical Cold Dark Matter (CDM) cosmologies (e.g., Navarro, Frenk, & White 1997). However, the complex processes involving gas dynamics, chemistry and ionization, and cooling and heating, which are responsible for the formation of stars from the baryons inside these halos, have still not been fully explored theoretically. Recent theoretical investigations of early structure formation in CDM models have led to a plausible picture of how the formation of the first cosmic structures leads to reionization of the intergalactic medium (IGM). The bottom-up hierarchy of CDM cosmologies implies that the first gaseous objects to form in the Universe have a low-mass, just above the cosmological Jeans mass of $\sim 10^4 M_\odot$ (see, e.g., Haiman, Thoul, & Loeb 1996, and references therein). The virial temperature of these gas clouds is only a few hundred K, and so their metal-poor primordial gas can cool only due to the formation of molecular hydrogen, ${\rm H_2}$. However, ${\rm H_2}$ molecules are fragile, and were easily photo-dissociated throughout the Universe by trace amounts of starlight (Stecher & Williams 1967; Haiman, Rees, & Loeb 1996) that were well below the level required for complete reionization of the IGM. Following the prompt destruction of their molecular hydrogen, the early low-mass objects maintained virialized gaseous halos that were unable to cool or fragment into stars. Most of the stars responsible for the reionization of the Universe formed in more massive galaxies, with virial temperatures $T_{\rm vir}\ga 10^4$K, where cooling due to atomic transitions was possible. The corresponding mass of these objects at $z\sim 10$ was $\sim10^8~{\rm M_\odot}$, typical of dwarf galaxies. The lack of a Gunn-Peterson trough and the detection of Ly$\alpha$ emission lines from sources out to redshifts $z=5.6$ (Weymann et al. 1998; Dey et al. 
1998; Spinrad et al. 1998; Hu, Cowie, & McMahon 1998) demonstrates that reionization due to the first generation of sources must have occurred at yet higher redshifts; otherwise, the damping wing of Ly$\alpha$ absorption by the neutral IGM would have eliminated the Ly$\alpha$ line in the observed spectrum of these sources (Miralda-Escudé 1998). Popular CDM models predict that most of the intergalactic hydrogen was ionized at a redshift $8\la z\la 15$ (Gnedin & Ostriker 1997; Haiman & Loeb 1998a,c). The end of the reionization phase transition resulted in the emergence of an intense UV background that filled the Universe and heated the IGM to temperatures of $\sim 1$–$2\times 10^4$K (Haiman & Loeb 1998b; Miralda-Escudé, Haehnelt, & Rees 1998). After ionizing the rarefied IGM in the voids and filaments on large scales, the cosmic UV background penetrated the denser regions associated with the virialized gaseous halos of the first generation of objects. Since a major fraction of the collapsed gas had been incorporated by that time into halos with a virial temperature $\la 10^4$K, photoionization heating by the cosmic UV background could have evaporated much of this gas back into the IGM. No such feedback was possible at earlier times, since the formation of internal UV sources was suppressed by the lack of efficient cooling inside most of these objects. The gas reservoir of dwarf galaxies with virial temperatures $\la 10^4$K (or equivalently a 1D velocity dispersion $\la 10~{\rm km~s^{-1}}$) could not be immediately replenished. The suppression of dwarf galaxy formation at $z>2$ has been investigated both analytically (Rees 1986; Efstathiou 1992) and with numerical simulations (Thoul & Weinberg 1996; Quinn, Katz, & Efstathiou 1996; Weinberg, Hernquist, & Katz 1997; Navarro & Steinmetz 1997). The dwarf galaxies which were prevented from forming after reionization could have eventually collected gas at $z=1$–2, when the UV background flux declined sufficiently (Babul & Rees 1992; Kepner, Babul, & Spergel 1997). The reverse process during the much earlier reionization epoch has not been addressed in the literature. (However, note that the photo-evaporation of gaseous halos was considered by Bond, Szalay, & Silk (1988) as a model for Ly$\alpha$ absorbers at lower redshifts $z\sim 4$.) In this paper we focus on the reverse process by which gas that had already settled into virialized halos by the time of reionization was evaporated back into the IGM due to the cosmic UV background which emerged first at that epoch. The basic ingredients of our model are presented in §2. In order to ascertain the importance of a self-shielded gas core, we include a realistic, centrally concentrated dark halo profile and also incorporate radiative transfer. Generally we find that self-shielding has a small effect on the total amount of evaporated gas, since only a minor fraction of the gas halo is contained within the central core. Our numerical results are described in §3. In particular, we show the conditions in the highest mass halo which can be disrupted at reionization. We also use the Press-Schechter (1974) prescription for halo abundance to calculate the fraction of gas in the Universe which undergoes the process of photo-evaporation. Our versatile semi-analytic approach has the advantage of being able to yield the dependence of the results on a wide range of reionization histories and cosmological parameters. 
Clearly, the final state of the gas halo depends on its dynamical evolution during its photo-evaporation. We adopt a rough criterion for the evaporation of gas based on its initial interaction with the ionizing background. The precision of our results could be tested in specific cases by future numerical simulations. In §4 we discuss the potential implications of our results for the state of the IGM and for the early history of low-mass galaxies in the local Universe. Finally, we summarize our main conclusions in §5. A Model for Halos at Reionization ================================= We consider gas situated in a virialized dark matter halo. We adopt the prescription for obtaining the density profiles of dark matter halos at various redshifts from the Appendix of Navarro, Frenk, & White (1997, hereafter NFW), modified to include the variation of the collapse overdensity $\Delta_c$. Thus, a halo of mass $M$ at redshift $z$ is characterized by a virial radius, $$r_{\rm vir}=0.756\,\left(\frac{M}{10^{8}\,h^{-1}M_{\sun}}\right)^{1/3} \left[\frac{\Omega_0}{\Omega(z)}\,\frac{\Delta_c}{200}\right]^{-1/3} \left(\frac{1+z}{10}\right)^{-1} h^{-1}\ {\rm kpc}\ ,$$ or a corresponding circular velocity, $$V_c=\left(\frac{GM}{r_{\rm vir}}\right)^{1/2}=31.6\,\left(\frac{r_{\rm vir}}{h^{-1}\,{\rm kpc}}\right) \left[\frac{\Omega_0}{\Omega(z)}\,\frac{\Delta_c}{200}\right]^{1/2} \left(\frac{1+z}{10}\right)^{3/2}\ {\rm km\ s}^{-1}\ .$$ The density profile of the halo is given by $$\rho(r)=\frac{3H_0^2}{8\pi G}\,(1+z)^3\,\frac{\Omega_0}{\Omega(z)}\, \frac{\delta_c}{cx\,(1+cx)^2}\ ,$$ \[NFW\] where $x=r/r_{\rm vir}$ and $c$ depends on $\delta_c$ for a given mass $M$. We include the dependence of halo profiles on $\Omega_0$ and $\Omega_{\Lambda}$, the current contributions to $\Omega$ from non-relativistic matter and a cosmological constant, respectively (see Appendix A for complete details). Although the NFW profile provides a good approximation to halo profiles, there are indications that halos may actually develop a core (e.g., Burkert 1995; Kravtsov et al. 1998; see, however, Moore et al. 1998). In order to examine the sensitivity of the results to model assumptions, we consider several different gas and dark matter profiles, keeping the total gas fraction in the halo equal to the cosmological baryon fraction. The simplest case we consider is an equal NFW profile for the gas and the dark matter. In order to include a core, instead of the NFW profile of equation (\[NFW\]) we also consider the density profile of the form fit by Burkert (1995) to dwarf galaxies, $$\rho(r)=\frac{3H_0^2}{8\pi G}\,(1+z)^3\,\frac{\Omega_0}{\Omega(z)}\, \frac{\delta_c}{(1+bx)\,\left[1+(bx)^2\right]}$$ \[core\] where $b$ is the inverse core radius, and we set $\delta_c$ by requiring the mean overdensity to equal the appropriate value, $\Delta_c$, in each cosmology (see Appendix A). We also consider two cases where the dark matter follows an NFW profile but the gas is in hydrostatic equilibrium with its density profile determined by its temperature distribution. In one case, we assume the gas is isothermal at the halo virial temperature, given by \[tvir\] $$T_{\rm vir}=\frac{\mu m_p V_c^2}{2 k_B}=36100\,\left(\frac{\mu}{0.6}\right) \left(\frac{r_{\rm vir}}{h^{-1}\,{\rm kpc}}\right)^{2} \left[\frac{\Omega_0}{\Omega(z)}\,\frac{\Delta_c}{200}\right] \left(\frac{1+z}{10}\right)^{3}\ {\rm K}\ ,$$ where $\mu$ is the mean molecular weight as determined by ionization equilibrium, and $m_p$ is the proton mass. The spherical collapse simulations of Haiman, Thoul, & Loeb (1996) find a post-shock gas temperature of roughly twice the value given by equation (\[tvir\]), so we also compare with the result of setting $T=2\ T_{\rm vir}$. In the second case, we let the gas cool for a time equal to the Hubble time at the redshift of interest, $z$. Gas above $10^4$K cools rapidly due to atomic cooling until it reaches a temperature near $10^4$K, where the cooling time rapidly diverges. In this case, hydrostatic equilibrium yields a highly compact gas cloud when the halo virial temperature is greater than $10^4$K. In reality, of course, a fraction of the gas may fragment and form stars in these halos.
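As a purely illustrative aid (not part of the original calculation), the virial scalings above can be evaluated numerically with a few lines of Python. The constants, function names, and the fiducial choice $\mu=0.6$ below are our own assumptions; the routine simply combines the top-hat relations $M=(4\pi/3)\,\Delta_c\,\rho_{\rm crit}(z)\,r_{\rm vir}^3$, $V_c=(GM/r_{\rm vir})^{1/2}$ and $T_{\rm vir}=\mu m_p V_c^2/2k_B$ with the Bryan & Norman (1998) fit for $\Delta_c$ quoted in Appendix A:

```python
import numpy as np

# constants in convenient units (illustrative values)
G_MPC = 4.301e-9          # G [Mpc (km/s)^2 / Msun]
RHO_CRIT0 = 2.775e11      # critical density today [h^2 Msun / Mpc^3]
MP_OVER_KB = 1.211e-8     # m_p / k_B [K / (cm/s)^2]

def omega_z(z, om0, olam):
    """Omega(z) as defined in Appendix A."""
    e2 = om0*(1+z)**3 + olam + (1 - om0 - olam)*(1+z)**2
    return om0*(1+z)**3 / e2

def delta_c(z, om0, olam):
    """Bryan & Norman (1998) virial overdensity (flat-Lambda case)."""
    d = omega_z(z, om0, olam) - 1.0
    return 18*np.pi**2 + 82*d - 39*d**2

def virial_quantities(M, z, om0=0.3, olam=0.7, h=0.7, mu=0.6):
    """Return r_vir [kpc], V_c [km/s], T_vir [K] for a halo of mass M [Msun]."""
    dc = delta_c(z, om0, olam)
    # Delta_c times the critical density at redshift z, in Msun/Mpc^3
    rho_halo = dc * RHO_CRIT0 * h**2 * om0*(1+z)**3 / omega_z(z, om0, olam)
    r_vir = (3.0*M/(4.0*np.pi*rho_halo))**(1.0/3.0)      # Mpc
    v_c = np.sqrt(G_MPC*M/r_vir)                         # km/s
    t_vir = 0.5 * mu * MP_OVER_KB * (v_c*1e5)**2         # K
    return r_vir*1e3, v_c, t_vir

# example: the M = 3e7 Msun halo at z = 8 discussed in Section 3
print(virial_quantities(3e7, 8.0))   # -> roughly (1.1 kpc, 11 km/s, 4e3 K)
```

For the $M=3\times10^7\,M_{\sun}$, $z=8$ halo of Figure 2 this gives a virial temperature of a few thousand K, well below the atomic-cooling threshold of $10^4$K.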
However, this caveat hardly affects our results since only a small fraction of the gas which evaporates is contained in halos with $T_{\rm vir}>10^4$K. Throughout most of our subsequent discussion we consider the simple case of identical NFW profiles for both the dark matter and the gas, unless indicated otherwise. We assume a helium mass fraction of $Y=0.24$, and include it in the calculation of the ionization equilibrium state of the gas as well as its cooling and heating (see, e.g., Katz, Weinberg, & Hernquist 1996). We adopt the various reaction and cooling rates from the literature, including the rates for collisional excitation and dielectronic recombination from Black (1981); the recombination rates from Verner & Ferland (1996), and the recombination cooling rates from Ferland et al. (1992) with a fitting formula by Miralda-Escudé (1998, private communication). Collisional ionization rates are adopted from Voronov (1997), with the corresponding cooling rate for each atomic species given by its ionization rate multiplied by its ionization potential. We also include cooling by Bremsstrahlung emission with a Gaunt factor from Spitzer & Hart (1971), and by Compton scattering off the microwave background (e.g., Shapiro & Kang 1987). In assessing the effect of reionization, we assume for simplicity a sudden turn-on of an external radiation field with a specific intensity per unit frequency, $\nu$, $$I_{\nu,0}=10^{-21}\, I_{21}(z)\, \left(\nu/\nu_L\right)^{-\alpha}\ \ {\rm ergs\ cm}^{-2}\,{\rm s}^{-1}\,{\rm Hz}^{-1}\,{\rm sr}^{-1}\ ,$$ \[Inu\] where $\nu_L$ is the Lyman limit frequency. Our treatment of the response of the cloud to this radiation, as outlined below, is not expected to yield different results with a more gradual increase of the intensity with cosmic time. The external intensity $I_{21}(z)$ is responsible for the reionization of the IGM, and so we normalize it to have a fixed number of ionizing photons per baryon in the Universe. We define the ionizing photon density as $$n_\gamma=\int_{\nu_L}^{\infty}\frac{4\pi I_{\nu,0}}{h\nu\, c}\, \frac{\sigma_{HI}(\nu)}{\sigma_{HI}(\nu_L)}\, d\nu\ ,$$ \[eq:n\_gamma\] where the photoionization efficiency is weighted by the photoionization cross section of $HI$, $\sigma_{HI}(\nu)$, above the Lyman limit. The mean baryon number density is $$n_b=2.25\times10^{-4}\, \left(\frac{\Omega_b h^2}{0.02}\right) \left(\frac{1+z}{10}\right)^{3}\ {\rm cm}^{-3}\ .$$ Throughout the paper we refer to proper densities rather than comoving densities. As our standard case we assume a post-reionization ratio of $n_{\gamma}/n_b=1$, but we also consider the effect of setting $n_{\gamma}/n_b=0.1$. For example, $\alpha=1.8$ and $n_{\gamma}/n_b=1$ yield $I_{21}=1.0$ at $z=3$ and $I_{21}=3.5$ at $z=5$, close to the values required to satisfy the Gunn-Peterson constraint at these redshifts (see, e.g., Efstathiou 1992). Note that $n_{\gamma}/n_b\ga 1$ is required for the initial ionization of the gas in the Universe (although this ratio may decline after reionization). We assume that the above uniform UV background illuminates the outer surface of the gas cloud, located at the virial radius $r_{\rm vir}$, and penetrates from there into the cloud. The radiation photoionizes and heats the gas at each radius to its equilibrium temperature, determined by equating the heating and cooling rates. The latter assumption is justified by the fact that both the recombination time and the heating time are initially shorter than the dynamical time throughout the halo. At the outskirts of the halo the dynamics may start to change before the gas can be heated up to its equilibrium temperature, but this simply means that the gas starts expanding out of the halo during the process of photoheating.
This outflow should not alter the overall fraction of evaporated gas. The process of reionization is expected to be highly non-uniform due to the clustering of the ionizing sources and the clumpiness of the IGM. As time progresses, the HII regions around the ionizing sources overlap, and each halo is exposed to ionizing radiation from an ever increasing number of sources. While the external ionizing radiation may at first be dominated by a small number of sources, it quickly becomes more isotropic as its intensity builds up with time (e.g., Haiman & Loeb 1998a,b; Miralda-Escudé, Haehnelt, & Rees 1998). The evolution of this process depends on the characteristic clustering scale of ionizing sources and their correlation with the inhomogeneities of the IGM. In particular, the process takes more time if the sources are typically embedded in dense regions of the neutral IGM which need to be ionized first before their radiation shines on the rest of the IGM. However, in our analysis we do not need to consider these complications since the total fraction of evaporated gas in bound halos depends primarily on the maximum intensity achieved at the end of the reionization epoch. In computing the effect of the background radiation, we include self-shielding of the gas which is important at the high densities obtained in the core of high redshift halos. For this purpose, we include radiative transfer through the halo gas and photoionization by the resulting anisotropic radiation field in the calculation of the ionization equilibrium. We also include the fact that the ionizing spectrum becomes harder at inner radii, since the outer gas layers preferentially block photons with energies just above the Lyman limit. We neglect self-shielding due to helium atoms. Appendix B summarizes our simplified treatment of the radiative transfer equations. Once the gas is heated throughout the halo, some fraction of it acquires a sufficiently high temperature that it becomes unbound. This gas expands due to the resulting pressure gradient and eventually evaporates back to the IGM. The pressure gradient force (per unit volume) $\nabla (\rho k_B T/\mu m_p)$ competes with the gravitational force of $\rho G M/r^2$. Due to the density gradient, the ratio between the pressure force and the gravitational force is roughly the ratio between the thermal energy $\sim k_B T$ and the gravitational binding energy $\sim \mu m_p G M/r$ (which is $\sim k_B T_{\rm vir}$ at $r_{\rm vir}$) per particle. Thus, if the kinetic energy exceeds the potential energy (or roughly if $T>T_{\rm vir}$), the repulsive pressure gradient force exceeds the attractive gravitational force and expels the gas on a dynamical time (or faster for halos with $T\gg T_{\rm vir}$). We compare the thermal and gravitational energy (both of which are functions of radius) as a benchmark for deciding which gas shells are expelled from each halo. Note that infall of fresh IGM gas into the halo is also suppressed due to its excessive gas pressure, produced by the same photo-ionization heating process. This situation stands in contrast to feedback due to supernovae, which depends on the efficiency of converting the mechanical energy of the supernovae into thermal energy of the halo gas. The ability of supernovae to disrupt their host dwarf galaxies has been explored in a number of theoretical papers (e.g., Larson 1974; Dekel & Silk 1986; Vader 1986, 1987).
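The expulsion criterion just described reduces to a simple shell-by-shell comparison. The following schematic Python fragment is an illustration only (it is not the code used for the calculations in this paper); the temperature and potential profiles must be supplied by the ionization-equilibrium and halo-profile calculations described above, and the function name is our own:

```python
import numpy as np

K_B = 1.381e-16   # Boltzmann constant [erg/K]
M_P = 1.673e-24   # proton mass [g]

def unbound_gas_fraction(T, phi, m_gas, mu=0.6):
    """Mass fraction of halo gas flagged as unbound.

    T     : post-reionization equilibrium temperature of each gas shell [K]
    phi   : gravitational potential at each shell [erg/g] (negative in the halo)
    m_gas : gas mass per shell (arbitrary units)
    mu    : mean molecular weight (dimensionless)
    """
    T, phi, m_gas = map(np.asarray, (T, phi, m_gas))
    te = 1.5 * K_B * T               # thermal energy per particle, (3/2) k_B T
    pe = mu * M_P * np.abs(phi)      # binding energy per particle, mu m_p |phi|
    expelled = te > pe               # shells whose gas is taken to evaporate
    return m_gas[expelled].sum() / m_gas.sum()
```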
However, numerical simulations (Mac-Low & Ferrara 1998) find that supernovae produce a hole in the gas distribution through which they expel the shock-heated gas, leaving most of the cooler gas still bound. In the case of reionization, on the other hand, energy is imparted to the gas directly by the ionizing photons. A halo for which a large fraction of the gas is unbound by reionization is thus prevented from further collapse and star formation. When the gas in each halo is initially ionized, an ionization shock front may be generated (cf. the discussion of Ly$\alpha$ absorbers by Donahue & Shull 1987). The dynamics of such a shock front have been investigated in the context of the interstellar medium by Bertoldi & McKee (1990) and Bertoldi (1989). Their results imply that the dynamics of gas in a halo are not significantly affected by the shock front unless the thermal energy of the ionized gas is greater than its gravitational potential energy. Furthermore, since gas in a halo is heated to the virial temperature even before reionization, the shock is weaker when the gas is ionized than a typical shock in the interstellar medium. Also, as noted above, the ionizing radiation reaching a given halo builds up in intensity over a considerable period of time. Thus, we do not expect the ionization shock associated with the first encounter of ionizing radiation to have a large effect on the eventual fate of gas in the halo. Results ======= We assume the most popular cosmology to date (Garnavich et al. 1998) with $\Omega_0=0.3$ and $\Omega_{\Lambda}=0.7$. We illustrate the effects of cosmological parameters by displaying the results also for $\Omega_0=1$, and for $\Omega_0=0.3$ and $\Omega_{\Lambda}=0$. The models all assume $\Omega_b h^2=0.02$ and a Hubble constant $h=0.5$ if $\Omega_0=1$ and $h=0.7$ otherwise (where $H_0=100\, h\mbox{ km s}^{-1}\mbox{Mpc}^{-1}$). Figure 1 shows the temperature of the gas versus its baryonic overdensity $\Delta_b$ relative to the cosmic average (cf. Efstathiou 1992). The curves are for $z=8$ and assume $\Omega_0=0.3$ and $\Omega_{\Lambda}=0.7$. We include intergalactic radiation with a flux given by equation (\[Inu\]) for $\alpha=1.8$ and $n_{\gamma}/n_b=1$. The dotted curve shows $t_H=t_{cool}$ with no radiation field, where $t_H$ is the age of the Universe, approximately equal to $6.5\times 10^9 h^{-1} (1+z)^{-3/2} \Omega_0^{-1/2}$ years at high redshift. This curve indicates the temperature to which gas has time to cool through atomic transitions before reionization. This temperature is always near $T=10^4$K since below this temperature the gas becomes mostly neutral and the cooling time is very long. It is likely that only atomic cooling is relevant before reionization since molecular hydrogen is easily destroyed by even a weak ionizing background (Haiman, Rees, & Loeb 1996). The solid curve shows the equilibrium temperature for which the heating time $t_{heat}$ due to a UV radiation field equals the cooling time $t_{cool}$. The decrease in the temperature at $\Delta_b<10$ is due to the increased importance of Compton cooling, which is proportional to the gas density rather than its square. At a given density, gas is heated at reionization to the temperature indicated by the solid curve, unless the net cooling or heating time is too long. The dashed curves show the temperature where the net cooling or heating time equals $t_H$. 
By definition, points on the solid curve have an infinite net cooling or heating time, but there is also a substantial regime at low $\Delta_b$ where the net cooling or heating time is greater than $t_H$. However, this regime has only a minor effect on halos, since the mean overdensity inside the virial radius of a halo is of order 200. On the other hand, if gas leaves the halo and expands it quickly enters the regime where it cannot reach thermal equilibrium. Figure 2 presents an example for the structure of a halo with an initial total mass of $M=3\times10^7 M_{\sun}$ at $z=8$. We assume the same cosmological parameters as in Figure 1. The bottom plot shows the baryon overdensity $\Delta_b$ versus $r/r_{\rm vir}$, and reflects our assumption of identical NFW profiles for both the dark matter and the baryons. The middle plot shows the neutral hydrogen fraction versus $r/r_{\rm vir}$, and the top plot shows the ratio of thermal energy per particle (${\rm TE}=\frac{3}{2} k_B T$) to potential energy per particle (${\rm PE}=\mu |\phi(r)|$, where $\phi(r)$ is the gravitational potential) versus $r/r_{\rm vir}$. The dashed curves assume an optically thin halo, while the solid curves include radiative transfer and self-shielding. The self-shielded neutral core is apparent from the solid curves, but since the point where ${\rm TE/PE}=1$ occurs outside this core, the overall unbound fraction does not depend strongly on the radiative transfer in this case. Its value is $67\%$ assuming an optically–thin halo, and $64\%$ when radiative transfer is included and only a fraction of the external photons make their way inside. Even when the opacity at the Lyman limit is large, some ionizing radiation still reaches the central parts of the halo because, (i) the opacity drops quickly above the Lyman limit, and (ii) the heated gas radiates ionizing photons inwards. Figure 3 shows the unbound gas fraction after reionization as a function of the total halo mass. We assume $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$, and $n_{\gamma}/n_b=1$. The three pairs of curves shown consist of a solid line (which includes radiative transfer) and a dashed line (which assumes an optically thin halo). From right to left, the first pair is for $\alpha=1.8$ and $z=8$, the second is for $\alpha=5$ and $z=8$, and the third is for $\alpha=1.8$ and $z=20$. In each case the self-shielded core lowers the unbound fraction when we include radiative transfer (solid vs dashed lines), particularly when the unbound fraction is sufficiently large that it includes part of the core itself. High energy photons above the Lyman limit penetrate deep into the halo and heat the gas efficiently. Therefore, a steepening of the spectral slope from $\alpha=1.8$ to $\alpha=5$ decreases the temperature throughout the halo and lowers the unbound gas fraction. This is only partially compensated for by our UV flux normalization, which increases $I_{21}$ with increasing $\alpha$ so as to get the same density of ionizing photons in equation (\[eq:n\_gamma\]). Increasing the reionization redshift from $z=8$ to $z=20$ increases the binding energy of the gas, because the high redshift halos are denser. Although the corresponding increase of $I_{21}$ with redshift (at a fixed $n_\gamma/n_{b}$) counteracts this change, the fraction of expelled gas is still reduced due to the deeper potential wells of higher redshift halos. From plots similar to those shown in Figure 3, we find the total halo mass at which the unbound gas fraction is $50\%$. 
We henceforth refer to this mass as the $50\%$ mass. Figure 4 plots this mass as a function of the reionization redshift for different spectra and cosmological models. The solid line assumes $\alpha=1.8$ and the dotted line $\alpha=5$, both for $\Omega_0=0.3$ and $\Omega_{\Lambda}=0.7$. The other lines assume $\alpha=1.8$ but different cosmologies. The short-dashed line assumes $\Omega_0=0.3$, $\Omega_{\Lambda}=0$ and the long-dashed line assumes $\Omega_0=1$. All assume $n_{\gamma}/n_b=1$. Gas becomes unbound when its thermal energy equals its potential binding energy. The thermal energy depends on temperature, but the equilibrium temperature does not change much with redshift since we increase the UV flux normalization by the same $(1+z)^3$ factor as the mean baryonic density. With this prescription for the UV flux, the $50\%$ mass occurs at a value of the circular velocity which is roughly constant with redshift. Thus for each curve, the change in mass with redshift is mostly due to the change in the characteristic halo density, which affects the relation between circular velocity and mass. The cosmological parameters have only a modest effect on the $50\%$ mass, and change it by up to $35\%$ at a given redshift. Lowering $\Omega_0$ reduces the characteristic density of a halo of given mass, and so a higher mass is required in order to keep the gas bound. Adding a cosmological constant reduces the density further through $\Delta_c$ \[see equations (\[dc1\]) and (\[dc2\])\]. For the three curves with $\alpha=1.8$, the circular velocity of the $50\%$ mass equals $13~{\rm km~s^{-1}}$ at all redshifts, up to variations of a few percent. The spectral shape of the ionizing flux affects modestly the threshold circular velocity corresponding to the $50\%$ mass, because assuming a steeper spectrum (i.e. with a larger $\alpha$) reduces the gas temperature and thus requires a shallower potential to keep the gas bound. A higher flux normalization has the opposite effect of increasing the threshold circular velocity. The left panel of Figure 5 shows the variation of circular velocity with spectral shape, for two normalizations ($n_{\gamma}/n_b=1$ and $n_{\gamma}/n_b=0.1$ for the solid and dashed curves, respectively). The right panel shows the complementary case of varying the spectral normalization, using two values for the spectral slope ($\alpha=1.8$ and $\alpha=5$ for the solid and dashed curves, respectively). All curves assume an $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$ cosmology. Obviously, $50\%$ is a fairly arbitrary choice for the unbound gas fraction at which halos evaporate. Figure 3 shows that for a given halo, the unbound gas fraction changes from $10\%$ to $90\%$ over a factor of $\sim 60$ in mass, or a factor of $\sim 4$ in velocity dispersion. When $50\%$ of the gas is unbound, however, the rest of the gas is also substantially heated, and we expect the process of collapse and fragmentation to be inhibited. In the extreme case where the gas expands until a steady state is achieved where it is pressure confined by the IGM, less than $10\%$ of the original gas is left inside the virial radius. However, continued infall of dark matter should limit the expansion. Numerical simulations may be used to define more precisely the point at which gas halos are disrupted. Clearly, photo-evaporation affects even halos with masses well above the $50\%$ mass, although these halos do not completely evaporate. 
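A rough way to see why a fixed circular-velocity threshold maps onto the quoted $50\%$ masses is to invert the top-hat relations of §2 for the halo mass at a given $V_c$ and redshift. The short Python sketch below does this; it is an illustrative consistency check under the assumptions $h=0.7$ and the Bryan & Norman (1998) overdensity, not the calculation actually used for Figures 3–5:

```python
import numpy as np

G_MPC = 4.301e-9          # G [Mpc (km/s)^2 / Msun]
H0 = 70.0                 # Hubble constant [km/s/Mpc], i.e. h = 0.7

def mass_at_fixed_vc(vc, z, om0=0.3, olam=0.7):
    """Halo mass [Msun] whose circular velocity at the virial radius is vc [km/s].

    Uses M = Vc^3 / (G H(z) sqrt(Delta_c/2)), which follows from
    Vc^2 = G M / r_vir and M = (4 pi / 3) Delta_c rho_crit(z) r_vir^3.
    """
    e2 = om0*(1+z)**3 + olam + (1 - om0 - olam)*(1+z)**2
    hz = H0*np.sqrt(e2)                           # H(z) [km/s/Mpc]
    d = om0*(1+z)**3/e2 - 1.0
    dc = 18*np.pi**2 + 82*d - 39*d**2             # Bryan & Norman (1998)
    return vc**3 / (G_MPC * hz * np.sqrt(dc/2.0))

# a fixed 13 km/s threshold gives ~1e8 Msun at z=5, ~5e7 Msun at z=8 and
# ~1e7 Msun at z=20, consistent with the 50% masses quoted in the text
for z in (5, 8, 20):
    print(z, mass_at_fixed_vc(13.0, z))
```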
Note that it is also clear from Figure 3 that not including radiative transfer would have only a minor effect on the value of the $50\%$ mass (typically $\sim 5\%$). Given the values of the unbound gas fraction in halos of different masses, we can integrate to find the total gas fraction in the Universe which becomes unbound at reionization. This calculation requires the abundance distribution of halos, which is readily provided by the Press-Schechter mass function for CDM cosmologies (relevant expressions are given, e.g., in NFW). The high-mass cutoff in the integration is given by the lowest mass halo for which the unbound gas fraction is zero, since halos above this mass are not significantly affected by the UV radiation. The low-mass cutoff is given by the lowest mass halo in which gas has assembled by the reionization redshift. We adopt for this low-mass cutoff the linear Jeans mass, which we calculate following Peebles (1993, §6). The gas temperature in the Universe follows the cosmic microwave background temperature down to a redshift $1+z_t \sim 740 (\Omega_b h^2)^{2/5}$, at which the baryonic Jeans mass is $1.9\times 10^5 (\Omega_b h^2)^{-1/2}M_{\sun}$. After this redshift, the gas temperature goes down as $(1+z)^2$, so the baryon Jeans mass acquires a factor of $[(1+z)/(1+z_t)]^{3/2}$. Until now we have considered baryons only, but if we add dark matter then the mean density (or the corresponding gravitational force) is increased by $\Omega_0/\Omega_b$, which decreases the baryonic Jeans mass by $(\Omega_0/\Omega_b)^{-3/2}$. The corresponding total halo mass is $\Omega_0/\Omega_b$ times the baryonic mass. Thus the Jeans cutoff before reionization corresponds to a total halo mass of $$M_J=6.9\times 10^3\, \left(\frac{\Omega_0 h^2}{0.15}\right)^{-1/2} \left(\frac{\Omega_b h^2}{0.02}\right)^{-3/5} \left(\frac{1+z}{10}\right)^{3/2}\ M_{\sun}\ .$$ This value agrees with the numerical spherical collapse calculations of Haiman, Thoul, & Loeb (1996). We thus calculate the total fraction of gas in the Universe which is bound in pre-existing halos, and the fraction of this gas which then becomes unbound at reionization. In Figure 6 we show the fraction of the collapsed gas which evaporates as a function of the reionization redshift. The solid line assumes $\alpha=1.8$, and the dotted line assumes $\alpha=5$, both for $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$. The other lines assume $\alpha=1.8$, the short-dashed line with $\Omega_0=0.3$, $\Omega_{\Lambda}=0$ and the long-dashed line with $\Omega_0=1$. All assume $n_{\gamma}/n_b=1$ and a primordial $n=1$ (scale invariant) power spectrum. In each case we normalized the CDM power spectrum to the present cluster abundance, $\sigma_8=0.5\ \Omega_0^{-0.5}$ (see, e.g., Pen 1998), where $\sigma_8$ is the root-mean-square amplitude of mass fluctuations in spheres of radius $8\ h^{-1}$ Mpc. The fraction of collapsed gas which is unbound is $\sim 0.4$–0.7 at $z=6$ and it increases with redshift. This fraction clearly depends strongly on the halo abundance but is relatively insensitive to the spectral slope $\alpha$ of the ionizing radiation. In hierarchical models, the characteristic mass (and binding energy) of virialized halos is smaller at higher redshifts, and a larger fraction of the collapsed gas therefore escapes once it is photoheated. Among the three cosmological models, the characteristic mass at a given redshift is smallest for $\Omega_0=1$ and largest for $\Omega_0=0.3$, $\Omega_{\Lambda}=0$. In Figure 7 we show the total fraction of gas in the Universe which evaporates at reionization.
The solid line assumes $\alpha=1.8$, and the dotted line assumes $\alpha=5$, both for $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$. The other lines assume $\alpha=1.8$, the short-dashed line with $\Omega_0=0.3$, $\Omega_{\Lambda}=0$ and the long-dashed line with $\Omega_0=1$. All assume $n_{\gamma}/n_b=1$. For the different cosmologies, the total unbound fraction goes up to 20–25$\%$ if reionization occurs as late as $z=6$–$7$; in this case a substantial fraction of the total gas in the Universe undergoes the process of expulsion from halos. However, this fraction typically decreases at higher redshifts. Although a higher fraction of the collapsed gas evaporates at higher $z$ (see Figure 6), a smaller fraction of the gas in the Universe lies in halos in the first place. The latter effect dominates except for the open model up to $z\sim 7$. As is well known, the $\Omega_0=1$ model produces late structure formation, and indeed the collapsed fraction decreases rapidly with redshift in this cosmological model. The low $\Omega_0$ models approach the $\Omega_0=1$ behavior at high $z$, but this occurs faster for the flat model with a cosmological constant than for the open model with the same value of $\Omega_0$. Changing the dark matter and gas profiles as discussed in §2 has a modest effect on the results. For example, with $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$, and $z=8$, and for our standard model where the gas and dark matter follow identical NFW profiles, the total unbound gas fraction is $19.8\%$ and the halo mass which loses $50\%$ of its baryons is $5.25\times 10^7 M_{\sun}$. If we let the mass and the baryons follow the profile of equation (\[core\]) the corresponding results are $20.0\%$ and $5.31 \times 10^7 M_{\sun}$ for $b=10$ in equation (\[core\]) and $20.9\%$ and $6.84 \times 10^7 M_{\sun}$ for $b=5$ (i.e. a larger core). With an NFW mass profile but gas in hydrostatic equilibrium at the virial temperature, the unbound fraction is $19.2\%$, and the $50\%$ mass is $4.33\times10^7 M_{\sun}$. If we let the gas temperature be $T=2\ T_{\rm vir}$, the unbound fraction is $22.0\%$ and the $50\%$ mass is $1.18 \times 10^8 M_{\sun}$. For clouds of gas which condense by cooling for a Hubble time, the unbound fraction is $18.2\%$, and the $50\%$ mass is $3.38\times 10^7 M_{\sun}$. We conclude that centrally concentrated gas clouds are in general more effective at retaining their gas, but the effect on the overall unbound gas fraction in the Universe is modest, even for large variations in the profile. If we return to the NFW profile but adopt $f=0.01$ instead of $f=0.5$ in the NFW prescription for finding the collapse redshift (see Appendix A), we find an unbound fraction of $20.3\%$, and a $50\%$ mass of $6.06\times 10^7 M_{\sun}$. Finally, lowering $\Omega_b$ by a factor of 2 changes the unbound fraction to $19.0\%$ and the $50\%$ mass to $5.44\times 10^7 M_{\sun}$. Our predictions appear to be robust against variations in the model parameters. Implications for the Intergalactic Medium and for Low Redshift Objects ====================================================================== Our calculations show that a substantial fraction of gas in the Universe may lie in virialized halos before reionization, and that most of it evaporates out of the halos when it is photoionized and heated at reionization. The resulting outflows of gas from halos may have interesting implications for the subsequent evolution of structure in the IGM. We discuss some of these implications in this section. 
In the pre-reionization epoch, a fraction of the gas in the dense cores of halos may fragment and form stars. Some star formation is, of course, needed in order to produce the ionizing flux which leads to reionization. These population III stars produce the first metals in the Universe, and they may make a substantial contribution to the enrichment of the IGM. Numerical models by Mac-Low & Ferrara (1998) suggest that feedback from supernovae is very efficient at expelling metals from dwarf galaxies of total mass $3.5\times 10^8\, M_{\sun}$, although it ejects only a small fraction of the interstellar medium in these hosts. Obviously, the metal expulsion efficiency depends on the presence of clumps in the supernova ejecta (Franco et al. 1993) and on the supernova rate – the latter depending on the unknown star formation rate and the initial mass function of stars at high redshifts. Reionization provides an alternative method for expelling metals efficiently out of dwarf galaxies by directly photoheating the gas in their halos, leading to its evaporation along with its metal content.[^3] Gas which falls into halos and is expelled at reionization attains a different entropy than if it had stayed at the mean density of the Universe. Gas which collapses into a halo is at a high overdensity when it is photoheated, and is therefore at a lower entropy than if it were heated to the same temperature at the mean cosmic density. However, the overall change in the entropy density of the IGM is small for two reasons. First, even at $z=6$ only about $25\%$ of the gas in the Universe undergoes evaporation. Second, the gas remains in ionization equilibrium and is photoheated during its initial expansion. For example, if $z=6$, $\Omega_0=0.3$, $\Omega_\Lambda=0.7$, $n_{\gamma}/n_b=1$, and $\alpha=1.8$, then the recombination time becomes longer than the dynamical time only when the gas expands down to an overdensity of 26, at which point its temperature is 22,400 K compared to an initial (non-equilibrium) temperature of 19,900 K for gas at the mean density. The resulting overall reduction in the entropy is the same as would be produced by reducing the temperature of the entire IGM by a factor of 1.6. This factor reduces to 1.4 if we increase $z$ to 8 or increase $\alpha$ to 5. Note that Haehnelt & Steinmetz (1998) showed that differences in temperature by a factor of $3$–$4$ result in possibly observable differences in the Doppler parameter distribution of Ly$\alpha$ absorption lines at redshifts 3–5. When the halos evaporate, recombinations in the gas could produce Ly$\alpha$ lines or radiation from two-photon transitions to the ground state of hydrogen. However, a simple estimate shows that the resulting luminosity is too small for direct detection unless these halos are illuminated by an internal ionizing source. In an externally illuminated $z=6$, $10^8 M_{\sun}$ halo our calculations imply a total of $\sim 1\times 10^{50}$ recombinations per second. Note that the number of recombinations is dominated by the high density core, and if we did not include self-shielding we would obtain an overestimate by a factor of $\sim 15$. If each recombination releases one or two photons with a total energy of $10.2$ eV, then for $\Omega_0=0.3$ and $\Omega_\Lambda=0.7$ the observed flux is $\sim 5\times 10^{-20}$ erg s$^{-1}$ cm$^{-2}$. This flux is well below the sensitivity of the planned Next Generation Space Telescope, even if part of this flux is concentrated in a narrow line. 
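The conversion behind this flux estimate is elementary and can be written down explicitly. The snippet below is illustrative only: it simply divides the recombination luminosity by $4\pi d_L^2$, with the luminosity distance left as an input to be computed separately for the adopted cosmology rather than asserted here:

```python
import math

ERG_PER_EV = 1.602e-12

def observed_line_flux(n_rec_per_s, e_photon_ev, d_lum_cm):
    """Observed energy flux [erg s^-1 cm^-2] from a halo producing
    n_rec_per_s recombinations per second, each releasing photons of
    total energy e_photon_ev, at luminosity distance d_lum_cm."""
    luminosity = n_rec_per_s * e_photon_ev * ERG_PER_EV   # erg/s
    return luminosity / (4.0 * math.pi * d_lum_cm**2)

# e.g. observed_line_flux(1e50, 10.2, d_lum) with d_lum the luminosity
# distance to z = 6 (in cm) in the adopted cosmology
```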
The photoionization heating of the gaseous halos of dwarf galaxies resulted in outflows with a characteristic velocity of $\sim 20$–$30~{\rm km~s^{-1}}$. These outflows must have induced peculiar velocities of a comparable magnitude in the IGM surrounding these galaxies. The effect of the outflows on the velocity field and entropy of the IGM at $z=5$–10 could in principle be searched for in the absorption spectra of high redshift sources, such as quasars. These small-scale fluctuations in velocity and the resulting temperature fluctuations have been seen in recent simulations by Bryan et al. (1998). However, the small halos responsible for these outflows were only barely resolved even in these high resolution simulations of a small volume. The evaporating galaxies could contribute to the high column density end of the Ly$\alpha$ forest (cf. Bond, Szalay, & Silk 1988). For example, shortly after being photoionized, a $z=8$, $5\times 10^7\ M_{\sun}$ halo has a neutral hydrogen column density of $2\times 10^{16}$ cm$^{-2}$ at an impact parameter of $0.5\, r_{\rm vir}=0.66$ kpc, $6\times 10^{17}$ cm$^{-2}$ at $0.25\, r_{\rm vir}$, and $9\times 10^{20}$ cm$^{-2}$ (or $9\times 10^{18}$ cm$^{-2}$ if we do not include self-shielding) at $0.1\, r_{\rm vir}$ (assuming $\Omega_0=0.3$, $\Omega_{\Lambda}=0.7$, $\alpha=1.8$, and $n_{\gamma}/n_b=1$). These column densities will decline as the gas expands out of the host galaxy. Abel & Mo (1998) have suggested that a large fraction of the Lyman limit systems at $z\sim 3$ may correspond to mini-halos that survived reionization. Remnant absorbers due to galactic outflows can be distinguished from large-scale absorbers in the IGM by their compactness. Close lines of sight due to quasar pairs or gravitational lensed quasars (see, e.g., Crotts & Fang 1998; Petry, Impey, & Foltz 1998, and references therein) should probe different HI column densities in galactic outflow absorbers but similar column densities in the larger, more common absorbers. Follow-up observations with high spectroscopic resolution could reveal the velocity fields of these outflows. Although much of the gas in the Universe evaporated at reionization, the underlying dark matter halos continued to evolve through infall and merging, and the heated gas may have accumulated in these halos at lower redshifts. This latter process has been discussed by a number of authors, with an emphasis on the effect of reionization and the resulting heating of gas. Thoul & Weinberg (1996) found a reduction of $\sim50\%$ in the collapsed gas mass due to heating, for a halo of $V_c=50\ {\rm km\ s}^{-1}$ at $z=2$, and a complete suppression of infall below $V_c=30\ {\rm km\ s}^{-1}$. The effect is thus substantial on halos with virial temperatures well above the gas temperature. Their interpretation is that pressure support delays turnaround substantially and slows the subsequent collapse. Indeed, as noted in §2, the ratio of the pressure force to the gravitational force on the gas is roughly equal to the ratio of its thermal energy to its potential energy. For a given enclosed mass, the potential energy of a shell of gas increases as its radius decreases. Before collapse, each gas shell expands with the Hubble flow until its expansion is halted and then reversed. Near turnaround, the gas is weakly bound and the pressure gradient may prevent collapse even for gas below the halo virial temperature. 
On the other hand, gas which is already well within the virial radius is tightly bound, which explains our lower value of $V_c \sim 13\ {\rm km\ s}^{-1}$ for halos which lose half their gas at reionization. Three dimensional numerical simulations (Quinn, Katz, & Efstathiou 1996; Weinberg, Hernquist, & Katz 1997; Navarro & Steinmetz 1997) have also explored the question of whether dwarf galaxies could re-form at $z \ga 2$. The heating by the UV background was found to suppress infall of gas into even larger halos ($V_c \sim 75\ {\rm km\ s}^{-1}$), depending on the redshift and on the ionizing radiation intensity. Navarro & Steinmetz (1997) noted that photoionization reduces the cooling efficiency of gas at low densities, which suppresses further the late infall at redshifts below 2. We note that these various simulations assume an isotropic ionizing radiation field, and do not calculate radiative transfer. Photoevaporation of a gas cloud has been calculated in a two dimensional simulation (Shapiro, Raga, & Mellema 1998), and methods are being developed for incorporating radiative transfer into three dimensional cosmological simulations (e.g., Abel, Norman, & Madau 1999; Razoumov & Scott 1999). Our results have interesting implications for the fate of gas in low-mass halos. Gas evaporates at reionization from halos below $V_c \sim 13\ {\rm km\ s}^ {-1}$, or a velocity dispersion $\sigma \sim 10\ {\rm km\ s}^ {-1}$. A similar value of the velocity dispersion is also required to reach a virial temperature of $10^4$ K, allowing atomic cooling and perhaps star formation before reionization. Thus, halos with $\sigma \ga 10\ {\rm km\ s}^{-1}$ could have formed stars before reionization. They would have kept their gas after reionization, and could have had ongoing star formation subsequently. These halos were the likely sites of population III stars, and could have been the progenitors of dwarf galaxies in the local Universe (cf.Miralda-Escudé & Rees 1998). On the other hand, halos with $\sigma \la 10\ {\rm km\ s}^{-1}$ could not have cooled before reionization. Their warm gas was completely evaporated from them at reionization, and could not have returned to them until very low redshifts, possibly $z\la 1$, so that their stellar population should be relatively young. It is interesting to compare these predictions to the properties of dwarf spheroidal galaxies in the Local Group which have low central velocity dispersions. At first sight this appears to be a difficult task. The dwarf galaxies vary greatly in their properties, with many showing evidence for multiple episodes of star formation as well as some very old stars (see the recent review by Mateo 1998). Another obstacle is the low temporal resolution of age indicators for old stellar populations. For example, if $\Omega_0=0.3$ and $\Omega_{\Lambda}=0.7$ then the age of the Universe is $43\%$ of its present age at $z=1$ and $31\%$ at $z=1.5$. Thus, stars that formed at these redshifts may already be $\sim 10$ Gyr old at present, and are difficult to distinguish from stars that formed at $z > 5$. Nevertheless, one of our robust predictions is that most early halos with $\sigma \la 10\ {\rm km\ s}^{-1}$ could not have formed stars in the standard hierarchical scenario. Globular clusters belong to one class of objects with such a low velocity dispersion. Peebles & Dicke (1968) originally suggested that globular clusters may have formed at high redshifts, before their parent galaxies. 
However, in current cosmological models, most mass fluctuations on globular cluster scales were unable to cool effectively and fragment until $z\sim 10$, and were evaporated subsequently by reionization. We note that Fall & Rees (1985) proposed an alternative formation scenario for globular clusters involving a thermal instability inside galaxies with properties similar to those of the Milky Way. Globular clusters have also been observed to form in galaxy mergers (e.g., Miller et al. 1997). It is still possible that some of the very oldest and most metal poor globular clusters originated from $z\ga 10$, before the UV background had become strong enough to destroy the molecular hydrogen in them. However, primeval globular clusters should have retained their dark halos but observations suggest that globular clusters are not embedded in dark matter halos (Moore 1996; Heggie & Hut 1995). Another related population is the nine dwarf spheroidals in the Local Group with central velocity dispersions $\sigma \la 10\ {\rm km\ s}^{-1}$, including five below $7\ {\rm km\ s}^{-1}$ (e.g., Mateo 1998). In the hierarchical clustering scenario, the dark matter in a present halo was most probably divided at reionization among several progenitors which have since merged. The velocity dispersions of these progenitors were likely even lower than that of the final halo. Thus the dwarf galaxies could not have formed stars at high redshifts, and their formation presents an intriguing puzzle. There are two possible solutions to this puzzle, (i) the ionizing background dropped dramatically at low redshifts, allowing the dwarf galaxies to form at $z\la 1$, or (ii) the measured stellar velocity dispersions of the dwarf galaxies are well below the velocity dispersions of their dark matter halos. Unlike globular clusters, the dwarf spheroidal galaxies are dark matter dominated. The dark halo of a present-day dwarf galaxy may have virialized at high redshifts but accreted its gas at low redshift from the IGM. However, for dark matter halos accumulating primordial gas, Kepner, Babul, & Spergel (1997) found that even if $I_{21}(z)$ declines as $(1+z)^4$ below $z=3$, only halos with $V_c \ga 20\ {\rm km\ s}^{-1}$ can form atomic hydrogen by $z=1$, and $V_c \ga 25\ {\rm km\ s}^{-1}$ is required to form molecular hydrogen. Alternatively, the dwarf dark halos could have accreted cold gas at low redshift from a larger host galaxy rather than from the IGM. As long as the dwarf halos join their host galaxy at a redshift much lower than their formation redshift, they will survive disruption due to their high densities. The subsequent accretion of gas could result from passages of the dwarf halos through the gaseous tidal tail of a merger event or through the disk of the parent galaxy. In this case, retainment of cold, dense, and possibly metal enriched gas against heating by the UV background requires a shallower potential well than accumulating warm gas from the IGM. Simulations of galaxy encounters (Barnes & Hernquist 1992; Elmegreen, Kaufman, & Thomasson 1993) have found that dwarf galaxies could form but with small amounts of dark matter. However, the initial conditions of these simulations assumed parent galaxies with a smooth dark matter distribution rather than clumpy halos with dense sub-halos inside them. Simulations by Klypin et al. (1999) suggest that galaxy halos may have large numbers of dark matter satellites, most of which have no associated stars. 
If true, this implies that the dwarf spheroidal galaxies might be explained even if only a small fraction of dwarf dark halos accreted gas and formed stars. A common origin for the Milky Way’s dwarf satellites (and a number of halo globular clusters), as remnants of larger galaxies accreted by the Milky Way galaxy, has been suggested on independent grounds. These satellites appear to lie along two (e.g., Majewski 1994) or more (Lynden-Bell & Lynden-Bell 1995, Fusi-Pecci et al. 1995) polar great circles. The star formation history of the dwarf galaxies (e.g., Grebel 1998) constrains their merger history, and implies that the fragmentation responsible for their appearance must have occured early in order to be consistent with the variation in stellar populations among the supposed fragments (Unavane, Wyse, & Gilmore 1996; Olszewski 1998). Observations of interacting galaxies (outside the Local Group) also suggest the formation of “tidal dwarf galaxies” (e.g., Duc & Mirabel 1997). Finally, there exists the possibility that the measured velocity dispersion of stars in the dwarf spheroidals underestimates the velocity dispersion of their dark halos. Assuming that the stars are in equilibrium, their velocity dispersion could be lower than that of the halo if the mass profile is shallower than isothermal beyond the stellar core radius. As discussed in §2, halo profiles are thought to vary from being shallow in a central core to being steeper than isothermal at larger distances. The velocity dispersion and mass to light ratio of a dwarf spheroidal could also appear high if it is non-spherical or the stellar orbits are anisotropic. Some dwarf spheroidals may even not be dark matter dominated if they are tidally disrupted (e.g., Kroupa 1997). The observed properties of dwarf spheroidals require a central mass density of order $0.1 M_{\sun}$ pc$^{-3}$ (e.g., Mateo 1998), which is $\sim 7\times 10^5$ times the present critical density. The stars therefore reside either in high-redshift halos or in the very central parts of low redshift halos. Detailed observations of the velocity dispersion profiles of these stars could be used to discriminate between these possibilities. Conclusions =========== We have shown that the photoionizing background radiation which filled the Universe during reionization likely boiled most of the virialized gas out of CDM halos at that time. The evaporation process probably lasted of order a Hubble time due to the gradual increase in the UV background as the HII regions around individual sources overlapped and percolated until the radiation field inside them grew up to its cosmic value – amounting to the full contribution of sources from the entire Hubble volume. The precise reionization history depends on the unknown star formation efficiency and the potential existence of mini-quasars in newly formed halos (Haiman & Loeb 1998a). The total fraction of the cosmic baryons which participate in the evaporation process depends on the reionization redshift, the ionizing intensity, and the cosmological parameters, but is not very sensitive to the precise gas and dark matter profiles of the halos. The central core of halos is typically shielded from the external ionizing radiation by the surrounding gas, but this core typically contains $<20\%$ of the halo gas and has only a weak effect on the global behavior of the gas. 
We have found that halos are disrupted up to a circular velocity $V_c \sim 13\ {\rm km\ s}^{-1}$ for a shallow, quasar-like spectrum, or $V_c \sim 11\ {\rm km\ s}^{-1}$ for a stellar spectrum, assuming the photoionizing sources build up a density of ionizing photons comparable to the mean cosmological density of baryons. At this photoionizing intensity, the value of the circular velocity threshold is nearly independent of redshift. The corresponding halo mass changes, however, from $\sim 10^8 M_{\sun}$ at $z=5$ to $\sim 10^7 M_{\sun}$ at $z=20$, assuming a shallow ionizing spectrum. Based on these findings, we expect that both globular clusters and Local Group dwarf galaxies with velocity dispersions $\la 10~{\rm km~s^{-1}}$ formed at low redshift, most probably inside larger galaxies. The latter possibility has been suggested previously for the Milky Way’s dwarf satellites based on their location along polar great circles. We are grateful to Jordi Miralda-Escudé, Chris McKee, Roger Blandford, Lars Hernquist, and David Spergel for useful discussions. We also thank Renyue Cen and Jordi Miralda-Escudé for assistance with the reaction and cooling rates. RB acknowledges support from Institute Funds. This work was supported in part by the NASA NAG 5-7039 grant (for AL). **APPENDIX A: Halo profile** We follow the prescription of NFW for obtaining the density profiles of dark matter halos, but instead of adopting a constant overdensity of 200 we use the fitting formula of Bryan & Norman (1998) for the virial overdensity: $$\Delta_c=18\pi^2+82\,d-39\,d^2$$ \[dc1\] for a flat Universe with a cosmological constant and $$\Delta_c=18\pi^2+60\,d-32\,d^2$$ \[dc2\] for an open Universe, where $d\equiv \Omega(z)-1$. Given $\Omega_0$ and $\Omega_{\Lambda}$, we define $$\Omega(z)=\frac{\Omega_0\,(1+z)^3}{\Omega_0\,(1+z)^3+\Omega_{\Lambda}+(1-\Omega_0-\Omega_{\Lambda})\,(1+z)^2}\ .$$ In equation (\[NFW\]) $c$ is determined for a given $\delta_c$ by the relation $$\delta_c=\frac{\Delta_c}{3}\,\frac{c^3}{\ln(1+c)-c/(1+c)}\ .$$ The characteristic density is given by $$\delta_c=C(f)\,\Omega(z)\left(\frac{1+z_{coll}}{1+z}\right)^3\ .$$ For a given halo of mass $M$, the collapse redshift $z_{coll}$ is defined as the time at which a mass $M/2$ was first contained in progenitors more massive than some fraction $f$ of $M$. This is computed using the extended Press-Schechter formalism (e.g. Lacey & Cole 1993). NFW find that $f=0.01$ fits their $z=0$ simulation results best. Since we are interested in high redshifts when mergers are very frequent, we adopt the more natural $f=0.5$ but also check the $f=0.01$ case. \[For example, the survival time of a $z=8$, $5\times 10^7\ M_{\sun}$ halo before it merges is $\sim 30$–$40\%$ of the age of the Universe at that redshift (Lacey & Cole 1993).\] In both cases we adopt the normalization of NFW, which is $C(0.5)=2 \times 10^4$ and $C(0.01)= 3 \times 10^3$. **APPENDIX B: Radiative Transfer** We neglect atomic transitions of helium atoms in the radiative transfer calculation. We only consider halos for which $k_B T$ is well below the ionization energy of hydrogen, and so following Tajiri & Umemura (1998) we assume that recombinations to excited levels do not result in further ionizations. On the other hand, recombinations to the ground state result in the emission of ionizing photons all of which are in a narrow frequency band just above the Lyman limit frequency $\nu=\nu_L$. We follow separately these emitted photons and the external incoming radiation. The external photons undergo absorption with an optical depth at the Lyman limit determined by $$\frac{d\tau_{\nu_L}}{dl}=\sigma_{HI}(\nu_L)\, n_{HI}\ .$$ The emitted photons near $\nu_L$ are propagated by the equation of radiative transfer, $$\frac{dI_\nu}{dl}=-\sigma_{HI}(\nu)\, n_{HI}\,I_\nu+\epsilon_\nu\ .$$
Assuming all emitted photons are just above $\nu=\nu_L$, we can set $\sigma_{HI}(\nu)=\sigma_{HI}(\nu_L)$ in this equation and propagate the total number flux of ionizing photons, $$F_1\equiv\int_{\nu_L}^{\infty}\frac{I_\nu}{h\nu}\, d\nu\ .$$ The emissivity term for this quantity is $$\int_{\nu_L}^{\infty}\frac{\epsilon_\nu}{h\nu}\, d\nu= \frac{\omega}{4\pi}\,\alpha_{HI}\, n_{HII}\,n_e\ ,$$ where $\alpha_{HI}$ is the total recombination coefficient to all bound levels of hydrogen and $\omega$ is the fraction of recombinations to the ground state. In terms of Table 5.2 of Spitzer (1978), $\omega=(\phi_1-\phi_2)/\phi_1$. We find that a convenient fitting formula up to $64,000\ $K, accurate to $2\%$, is (with $T$ in K) $$\omega=0.205-0.0266\,\ln(T)+0.0049\,\ln^2(T)\ .$$ When these photons are emitted they carry away the kinetic energy of the absorbed electron. When the photons are re-absorbed at some distance from where they were emitted, they heat the gas with this extra energy. Since $k_B T \ll h \nu_L$ we do not need to compute the exact frequency distribution of these photons. Instead we solve a single radiative transfer equation for the total flux of energy (above the ionization energy of hydrogen) in these photons, $$F_2\equiv\int_{\nu_L}^{\infty}\frac{I_\nu}{h\nu}\,(h\nu-h\nu_L)\, d\nu\ .$$ The emissivity term for radiative transfer of $F_2$ is $$\int_{\nu_L}^{\infty}\frac{\epsilon_\nu}{h\nu}\,(h\nu - h\nu_L)\, d\nu= \frac{2.06\times10^{-11}}{4\pi}\, T^{-1/2}\, k_B T\, \left[\chi_1(\beta)-\chi_2(\beta)\right]\, n_{HII}\,n_e\ \ {\rm ergs\ cm}^{-3}\,{\rm s}^{-1}\,{\rm sr}^{-1}\ ,$$ where $\beta=h \nu_L/k T$, $T$ is in K, and the functions $\chi_1$ and $\chi_2$ are given in Table 6.2 of Spitzer (1978). We find a fitting formula up to $64,000\ $K, accurate to $2\%$ (with $T$ in K): $$\chi_1(T)-\chi_2(T)=\left\{ \begin{array}{ll} 0.78 & \\ -0.172+0.255\,\ln(T)-0.0171\,\ln^2(T) & \end{array} \right.$$ From each point we integrate along all lines of sight to find $\tau_{\nu_L}$, $F_1$ and $F_2$ as a function of angle. Because of spherical symmetry, we do this only at each radius, and the angular dependence only involves $\theta$, the angle relative to the radial direction. We then integrate to find the photoionization rate. For each atomic species, the rate is $$\Gamma_{\gamma i}= \int_0^{4\pi}d\Omega\int_{\nu_i}^{\infty} \frac{I_\nu}{h\nu}\,\sigma_i(\nu)\, d\nu\ \ {\rm s}^{-1}\ ,$$ where $\nu_i$ and $\sigma_i(\nu)$ are the threshold frequency and cross section for photoionization of species $i$, given in Osterbrock \[1989; see Eq. (2.31) for HI, HeI and HeII\]. For the external photons the UV intensity is $I_{\nu,0}e^{-\tau_{\nu}}$, with the boundary intensity $I_{\nu,0}=I_{\nu_L,0} (\nu/\nu_L)^{-\alpha}$ as before, and $\tau_{\nu}$ approximated as $\tau_{\nu_L} (\nu/\nu_L)^{-3}$. Since $\sigma_i(\nu)$ has the simple form of a sum of two power laws, the frequency integral in $\Gamma_{\gamma i}$ can be done analytically, and only the angular integration is computed numerically (cf. the similar but simpler calculation of Tajiri & Umemura 1998). There is an additional contribution to photoionization for HI only, from the emitted photons just above $\nu=\nu_L$, given by $\int_0^{4\pi}d\Omega\ \sigma_i(\nu_L) F_1$. The photoheating rate per unit volume is $n_i \epsilon_i$, where $n_i$ is the number density of species $i$ and $$\epsilon_i= \int_0^{4\pi}d\Omega\int_{\nu_i}^{\infty} \frac{I_\nu}{h\nu}\,\sigma_i(\nu)\,(h\nu-h\nu_L)\, d\nu\ \ {\rm ergs\ s}^{-1}\ .$$ The rate for the external UV radiation is calculated for each atomic species similarly to the calculation of $\Gamma_{\gamma i}$. The emitted photons contribute to $\epsilon_{HI}$ an extra amount of $\int_0^{4\pi}d\Omega\ \sigma_i(\nu_L) F_2$. [bib0957]{} Abel, T., & Mo, H. J. 1998, ApJ, 494, L151 Abel, T., Norman, M. L., & Madau, P. 1999, submitted to ApJL, astro-ph/9812151 Babul, A., & Rees, M. 1992, MNRAS, 253, 31 Barnes, J. E., & Hernquist, L. 1992, Nature, 360, 715 Bertoldi, F. 1989, ApJ, 346, 735 Bertoldi, F., & McKee, C. F. 1990, ApJ, 354, 529 Black, J.
H. 1981, MNRAS, 197, 553 Bond, J. R., Szalay, A. S., & Silk, J. 1988, ApJ, 324, 627 Bryan, G. L., Machacek, M., Anninos, P., & Norman, M. L. 1998, to appear in ApJ, astro-ph/9805340 Bryan, G., & Norman, M. 1998, ApJ, 495, 80 Burkert, A. 1995, ApJ, 447, L25 Crotts, A. P. S., & Fang, Y. 1998, ApJ, 502, 16 Duc, P.-A., & Mirabel, I. F. 1997, Proceedings of IAU Symp. 186 (Kyoto), astro-ph/9711253 Dekel, A., & Silk, J. 1986, ApJ, 303, 39 Dey, A., Spinrad, H., Stern, D., Graham, J. R., & Chaffee, F. H. 1998, ApJ, 498, L93 Donahue, M., & Shull, J. M. 1987, ApJ, 323, L13 Elmegreen, B. G., Kaufman, M., & Thomasson, M. 1993, ApJ, 412, 90 Efstathiou, G. 1992, MNRAS, 256, 43 Fall, S. M., & Rees, M. J. 1985, ApJ, 298, 18 Ferland, G. J., Peterson, B. M., Horne, K., Welsh, W. F., & Nahar, S. N. 1992, ApJ, 387, 95 Franco, J., Ferrara, A., Roczyska, M., Tenorio-Tagle, G., & Cox, D. P. 1993, ApJ, 407, 100 Fusi-Pecci, F., Ballazzini, M., Cacciari, C., & Ferraro, F. R. 1995, AJ, 100, 1664 Garnavich, P. M., et al. 1998, ApJ, 509, 74 Gnedin, N. Y., & Ostriker, J. P. 1997, ApJ, 486, 581 Grebel, E. 1998, invited review to appear in IAU Symp. 192, “The Stellar Content of the Local Group”, astro-ph/9812443 Haiman, Z., & Loeb, A. 1998a, ApJ, 503, 505 —————————–. 1998b, ApJ, in press, astro-ph/9807070 —————————–. 1998c, invited contribution to Proc. of 9th Annual October Astrophysics Conference in Maryland, “After the Dark Ages: When Galaxies Were Young (the Universe at $2 < z < 5$)”, College Park, October 1998, astro-ph/9811395 Haiman, Z., Rees, M., & Loeb, A. 1996, ApJ, 476, 458; erratum, ApJ, 484, 985 Haiman, Z., Thoul, A. A., & Loeb, A. 1996, ApJ, 464, 523 Haehnelt, M. G., & Steinmetz, M. 1998, MNRAS, 298, 21 Heggie, D. C., & Hut, P. 1995, in IAU Symp. 174, Dynamical Evolution of Star Clusters-Confrontation of Theory and Observations, ed. P. Hut & J. Makino (Dordrecht: Kluwer) Hu, E. M., Cowie, L. L., & McMahon, R. G. 1998, ApJ, 502, L99 Katz, N., Weinberg, D. H., & Hernquist, L. 1996, ApJS, 105, 19 Kepner, J. V., Babul, A., & Spergel, D. N. 1997, ApJ, 487, 61 Klypin, A. A., Kravtsov, A. V., Valenzuela, O., & Prada, F. 1999, submitted to ApJ, astro-ph/9901240 Kravtsov, A. V., Klypin, A. A., Bullock, J. S., & Primack, J. R. 1998, ApJ, 502, 48 Kroupa, P. 1997, NewA 2, 139 Larson, R. B. 1974, MNRAS, 271, 676L Lacey, C. G., & Cole, S. M. 1993, MNRAS, 262, 627 Lynden-Bell, D., & Lynden-Bell, R. M. 1995, MNRAS, 275, 429 Majewski, S. R. 1994, ApJ, 431, L17 Mateo, M. 1998, ARAA, 36, 435 Miller, B. W., Whitmore, B. C., Schweizer, F., & Fall, S. M. 1997, AJ, 114, 2381 Miralda-Escudé, J. 1998, ApJ, 501, 15 Miralda-Escude, J., Haehnelt, M., & Rees, M. J. 1998, submitted to ApJ, astro-ph/9812306 Miralda-Escude, J., & Rees, M. J. 1998, ApJ, 497, 21 Moore, B. 1996, ApJ, 461, L13 Moore, B., Governato, F., Quinn, T., Stadel, J., & Lake, G. 1998, ApJ, 499, L5 Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493 (NFW) Navarro, J. F., & Steinmetz, M. 1997, ApJ, 478, 13 Olszewski, E. W. 1998, in Galactic Halos: A UC Santa Cruz Workshop, ed. D. Zaritski (San Francisco: ASP) Peebles, P. J. E. 1993, Principles of Physical Cosmology (Princeton: Princeton Univ. Press) Peebles, P. J. E., & Dicke, R. H. 1968, ApJ, 154, 891 Pen, U.-L. 1998, ApJ, 498, 60 Petry, C. E., Impey, C. D., & Foltz, C. B. 1998, ApJ, 494, 60 Press, W. H., & Schechter, P. 1974, ApJ, 187, 425 Quinn, T., Katz, N., & Efstathiou, G. 1996, MNRAS 278, L49 Razoumov, A., & Scott, D. 1999, submitted to MNRAS, astro-ph/9810425 Rees, M. J. 
1986, MNRAS, 218, 25 Shapiro, P. R., & Kang, H. 1987, ApJ, 318, 32 Shapiro, P. R., Raga, A. C., and Mellema, G. 1998, in Molecular Hydrogen in the Early Universe, eds. E. Corbelli, D. Galli, and F. Palla, Memorie Della Societa Astronomica Italiana, 69, pp. 463- 469, astro-ph/9804117 Songaila, A., & Cowie, L. L. 1996, AJ, 112, 335 Stecher, T. P., & Williams, D. A. 1967, ApJ, 149, L29 Spitzer, L., Jr. 1978, Physical Processes in the Interstellar Medium (New York: Wiley) Spitzer, L., Jr., & Hart, M. H. 1971, ApJ, 166, 483 Tajiri, Y., & Umemura, M. 1998, ApJ, 502, 59 Tytler, D. et al. 1995, in QSO Absorption Lines, ed.  G. Meylan (Berlin: Springer) Thoul, A. A., & Weinberg, D. H. 1996, ApJ, 465, 608 Unavane, M., Wyse, R. F. G.,& Gilmore, G. 1996, MNRAS, 278, 727 Vader, J. P. 1986, ApJ, 305, 669 Vader, J. P. 1987, ApJ, 317, 128 Verner, D. A., & Ferland, G. J. 1992, ApJS, 103, 467 Voronov, G. S. 1997, ADNDT, 65, 1 Weinberg, D. H., Hernquist, L., & Katz, N. 1997, ApJ, 477, 8 [^1]: email: [email protected] [^2]: email: [email protected] [^3]: Note that we have assumed zero metallicity in calculating cooling. Even if some metals had already been mixed into the IGM, the metallicity of newly formed objects was likely too low to affect cooling since even at $z\sim 3$ the typical metallicity of the Lyman alpha forest has been observed to be $<0.01$ solar (Songaila & Cowie 1996; Tytler et al.  1995).
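As a quick numerical cross-check of the two fitting formulae quoted in the radiative-transfer discussion above (for $\omega(T)$ and for $\chi_1(T)-\chi_2(T)$), the short sketch below simply tabulates them over their stated range of validity. It is our illustration, not code from the paper, and the branch point of the piecewise fit is taken where its two branches join (about $1.7\times10^3$ K), which is an assumption here.

```python
import numpy as np

def omega_fit(T):
    """Fraction of hydrogen recombinations going directly to the ground state,
    from the ln(T) fitting formula quoted above (T in K, valid up to 64,000 K)."""
    lnT = np.log(T)
    return 0.205 - 0.0266 * lnT + 0.0049 * lnT ** 2

def chi1_minus_chi2_fit(T):
    """Fit to chi_1(T) - chi_2(T): constant 0.78 at low T, quadratic in ln(T) above
    (T in K, valid up to 64,000 K; branch point of ~1.7e3 K assumed here)."""
    lnT = np.log(T)
    quad = -0.172 + 0.255 * lnT - 0.0171 * lnT ** 2
    return np.where(T <= 1.7e3, 0.78, quad)

for T in (1.0e3, 1.0e4, 6.4e4):
    print(f"T = {T:8.0f} K   omega = {omega_fit(T):.3f}   chi1-chi2 = {float(chi1_minus_chi2_fit(T)):.3f}")
```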
--- abstract: 'We propose an extension of minimal intuitionistic predicate logic, based on delimited control operators, that can derive the predicate-logic version of the Double-negation Shift schema, while preserving the disjunction and existence properties.' address: | Ecole Polytechnique, INRIA, CNRS & Université Paris Diderot\ Address: INRIA PI-R2, 23 avenue d’Italie, CS 81321, 75214 Paris Cedex 13, France\ E-mail: [email protected] author: - 'Danko Ilik[^1]' bibliography: - 'article.bib' title: 'Delimited control operators prove Double-negation Shift' --- delimited control operators, Double-negation Shift, disjunction property, existence property, intermediate logic 03B20, 03B40, 68N18, 03F55, 03F50, 03B55 Introduction {#delcont} ============ The system {#delcont_mqcplus} =========== Relationship to MQC and CQC {#delcont_equicons} =========================== Properties {#delcont_srprogress} ========== Related and future work {#delcont_future} ======================= Acknowledgements {#acknowledgements .unnumbered} ================ I would like to thank my thesis supervisor Hugo Herbelin for commenting on an earlier draft of this paper, as well as for many inspiring discussions. [^1]: Present address: Faculty of Informatics, University “Goce Delčev”, PO Box 201, 2000 Štip, Macedonia; E-mail: [email protected]
--- abstract: 'The kinematic reconstruction of neutral current high $Q^2$ events at HERA is discussed in detail using as an example the recently published events of the H1 and ZEUS collaborations at $Q^2 > $ 15000 GeV$^2$ and M $>$ 180 GeV, which are more numerous than expected from Standard Model predictions. Taking into account the complete information of these events, the mass reconstruction is improved and the difference between the average mass of the samples of the two experiments is reduced from 26$\pm$10 GeV to 17$\pm$7 GeV, but remains different enough to render unlikely an interpretation of the excess observed by the two collaborations as originating from the decay of a single narrow resonance.' --- [DESY 97-136 ]{}\ [July 1997]{}\ [ **Some Properties of the Very High $Q^2$\ Events of HERA** ]{} Ursula Bassler, Gregorio Bernardi\ Laboratoire de Physique Nucléaire et des Hautes Energies\ Université Paris 6-7, 4 Place Jussieu, 75252 Paris, France\ [*e-mail: [email protected]; [email protected]*]{} Introduction ============ At the electron-proton ($ep$) HERA collider, the study of deep inelastic scattering (DIS) is performed in a unique and optimal way up to values of the squared momentum transfer $Q^2$ of $10^{5}$ GeV$^2$. Recently, the two HERA collaborations H1 and ZEUS have reported [@H1VHQ2; @ZEUSVHQ2] an excess of events at [*very high*]{} $Q^2$ (defined in the following as $Q^2>$ 15000 GeV$^2$) compared to Standard Model expectations. This observation has triggered intense activity on “Beyond the Standard Model” theories which might explain the effect [@Doksh]. The most favoured solution involves the production of a new resonance, which after its decay is kinematically similar to a standard DIS event. Indeed, in the naive quark-parton model (QPM), a parton carrying a fraction $x$ of the momentum of the proton scatters elastically against an electron; in the Standard Model, this interaction represents a t-channel scattering occurring via the exchange of a gauge boson such as a photon or a $Z^{\circ}$. Beyond the Standard Model, the naive QPM picture can represent the formation of an s-channel resonance (generically called a “leptoquark”) followed by a two-body decay. While in the first interpretation the Bjorken $x$ variable is one of the two relevant variables to characterize the scattering, in the second case the invariant mass $M$ of the system formed is the physical quantity of interest. This mass is related to $x$ in the naive QPM by $M=\sqrt{xs}$, $s$ being the squared center of mass energy. In “real” interactions, quantum chromodynamics (QCD) effects do not modify this picture much at large $Q^2$, since DIS at large $Q^2$ is characterized by the dominance of one electron + one partonic jet + one remnant jet in the final state, and is thus similar to the configuration in which a leptoquark is produced and decays into 2 bodies. So, when looking at high $Q^2$ DIS in an inclusive way it is possible to distinguish between these two scenarios by comparing the event rates in different $x$ or $M$ intervals: the smooth evolution of the DIS cross-section as a function of $x$ can be contrasted with the appearance of a sharp peak in the invariant mass distribution, characteristic of a leptoquark.
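As a numerical illustration (our example; the beam energies of 27.5 GeV leptons on 820 GeV protons are the nominal HERA values and are not quoted in this paper), $\sqrt{s}\simeq 300$ GeV, so a narrow resonance at $M\simeq 200$ GeV corresponds in the naive QPM to $$x = \frac{M^2}{s} \simeq \frac{(200\ {\rm GeV})^2}{(300\ {\rm GeV})^2} \simeq 0.44\ ,$$ i.e. to scattering off a parton carrying almost half of the proton momentum.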
The HERA effect is complicated by the fact that the invariant mass distributions of the event samples of the two collaborations differ: at a mass greater than 180 GeV, the 7 H1 events appear clustered around a mass of 200 GeV, while the 5 ZEUS events are broadly distributed between 191 and 253 GeV. The subject of this paper is to understand how significant this difference is, and to provide a combined mass spectrum, using the complete available information on these events, in order to interpret the effect. At HERA the kinematic reconstruction does not need to rely on the scattered lepton only, since the most important part of the hadronic system is visible in the almost $4\pi$ detectors H1 and ZEUS. This redundancy allows for an experimental control of the systematic errors and of the radiative corrections, and hence for a more precise determination of the usual DIS kinematic variables $x$, $y$ and $Q^2$, which are defined as $$x = \frac{Q^2}{2\, P\cdot q}\ , \qquad y = \frac{P\cdot q}{P\cdot k}\ , \qquad Q^2= -(k-k')^2 = -q^2 = xys\ ,$$ $P$ and $k$ being the 4-vectors of the incident proton and lepton, and $k'$ that of the scattered lepton. In this report we briefly review in section 2 the methods used at HERA to determine the kinematics of the high $Q^2$ events. In section 3 we characterize the high $Q^2$ region and introduce a new method which optimizes the kinematic reconstruction by determining the measurement errors on an event by event basis. Section 4 is devoted to the detailed study of the very high $Q^2$ HERA events and discusses the mass distributions obtained. Kinematic Reconstruction at HERA ================================ In order to introduce the kinematic methods used at HERA in the high $Q^2$ region, let us start with a few definitions. The initial electron and proton (beam) energies are labeled $E_{\circ}$ and $P_{\circ}$. The energy and polar angle[^1] of the scattered electron (or positron) are $E$ and $\theta$. After identification of the scattered electron, we can reconstruct the following independent hadronic quantities: $\Sigma$, obtained as the sum of the scalar quantities $E_h-p_{z,h}$ associated with each particle belonging to the hadronic final state; $p_{T,h}$, its total transverse momentum; and the hadronic inclusive angle $\gamma$: $$\Sigma = \sum_{h} (E_h-p_{z,h}) \hspace*{1cm} p_{T,h} = \sqrt{(\sum_{h} p_{x,h})^2+(\sum_{h} p_{y,h})^2 } \hspace*{1cm} \tan\frac{\gamma}{2} = \frac{\Sigma}{ p_{T,h}}$$ $E_h,p_{x,h},p_{y,h},p_{z,h}$ are the four-momentum vector components of every energy deposit in the calorimeter originating from any hadronic final state particle. The corresponding quantities for the scattered electron are $${\Sigma_e}=E~(1-\cos{\theta}) \hspace*{2cm} p_{T,e}=E\sin{\theta} \hspace*{1.2cm} \mbox{i.e.} \hspace*{0.7cm} \tan\frac{\theta}{2} = \frac{\Sigma_e}{ p_{T,e}}$$ From these variables, it is possible to construct the four methods which have been used to determine the invariant mass of the very high $Q^2$ events (the formulae are given in the appendix). - The electron only method ($e$) based on $E$ and $\theta$. - The hadrons only method ($h$) based on $\Sigma$ and $p_{T,h}$ [@jb]. - The double angle method (DA) based on $\theta$ and $\gamma$ [@stan]. - The Sigma method (${\Sigma}$) based on $E$, $\theta$ and ${\Sigma}$ [@bb]. The $e$ method is precise at high $y$ but becomes less precise in $x,y$ or $M$ at low $y$.
The $h$ method is the only one available for charged current events, but is less precise than the three others in neutral currents, so it will be ignored in the following. The DA method is precise at high $Q^2$ both at high and low $y$. The ${\Sigma}$ method is similar to the $e$ method at high $y$ and to the DA at low $y$ apart from $Q^2$ which is less precise. However it is insensitive to initial state QED radiation (ISR) in M, $y$ and $Q^2$. See [@bb2] for a detailed discussion of the properties of these methods and [@wolf] for their behaviour in presence of ISR. It should be stressed that the quality of any kinematic reconstruction depends mainly on the precision achieved on the observables, such as E, $\theta$, $\Sigma$ or $\gamma$. It is one of the major tasks of the experiments to obtain precise and unbiased quantities, and several techniques have been developped in the HERA structure function analyses to achieve these goals. We refer to the original publications for a description of these studies [@HZ93-94], which discuss for instance how the electron energy is calibrated, or how the hadronic final state is measured at low $y$, when the hadronic jets are in the vicinity of the beam pipe. Treatment at High $Q^2$ ======================= At [*high*]{} $Q^2$ (defined in the following as $Q^2>$ 2500 GeV$^2$) the kinematic reconstruction is in general more precise than at low $Q^2$ and indeed the differences between the results obtained with the methods seen above are small. However since the number of events drops rapidly with increasing $Q^2$ it is still crucial to optimize the reconstruction by making a full use of all the observables of these events. The improvements on the reconstruction at high $Q^2$ come mainly from the better measurement of the hadronic final state. This is due to the fact that i) the individual hadron energy is on average greater; ii) the losses in the beam pipe and in the material in front of the calorimeter are in proportion smaller than at low $Q^2$, i.e. ${\Sigma}$, $p_{T,h}$ and $\gamma$ are less affected by these losses; iii) the hadronic final state displays more often a (single) collimated jet configuration. This is due both to kinematics (events are in average at higher $x$), and to the smaller gluon radiation and power corrections ($\alpha_S$ is smaller). All these characteristics enable a precise measurement of the hadronic angle, and thus are particularly favourable to the DA method which indeed becomes more accurate with increasing $Q^2$. This method has however the drawback to be very sensitive to ISR and it is thus difficult to apply to a small number of events. For instance, the ISR of a 2.75 GeV photon, if not taken into account produces a shift of 10% on the reconstructed mass and 20% on $Q^2$. In order to overcome this drawback we introduce a new method which, by making use of the precision on $\gamma$, estimates [*from the data*]{} on an event by event basis the shift in the measurement of $E$ and ${\Sigma}$, denoted in the following $\delta X/X \equiv (X_{true}-X_{reconstructed})/X_{reconstructed})$ with $X=E$ or ${\Sigma}$. These shifts allow i) the ${\Sigma}$ method to be corrected; ii) the presence of an undetected initial state photon (down to an energy of about 2 GeV) to be identified with high efficiency, and therefore to correct for it. 
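For reference, the following sketch (our illustrative reimplementation, not experiment code; the beam energy $E_{\circ}=27.5$ GeV is the nominal HERA value and is an assumption here) evaluates $y$ and $Q^2$ in the four methods from the observables defined in the previous section, using the formulae collected in the appendix, together with the invariant mass $M=\sqrt{xs}=\sqrt{Q^2/y}$, which follows from $Q^2=xys$.

```python
import math

E0 = 27.5  # lepton beam energy in GeV (assumed nominal HERA value)

def e_method(E, theta):
    """Electron-only method: scattered lepton energy and polar angle."""
    y = 1.0 - (E / E0) * math.sin(theta / 2.0) ** 2
    Q2 = (E * math.sin(theta)) ** 2 / (1.0 - y)
    return y, Q2

def h_method(Sigma, pt_h):
    """Hadrons-only method: hadronic E - p_z sum and transverse momentum."""
    y = Sigma / (2.0 * E0)
    Q2 = pt_h ** 2 / (1.0 - y)
    return y, Q2

def da_method(theta, gamma):
    """Double-angle method: lepton angle and inclusive hadronic angle only."""
    tg, tt = math.tan(gamma / 2.0), math.tan(theta / 2.0)
    y = tg / (tg + tt)
    Q2 = 4.0 * E0 ** 2 * (1.0 / tt) / (tg + tt)  # cot(theta/2) / (tan(gamma/2) + tan(theta/2))
    return y, Q2

def sigma_method(E, theta, Sigma):
    """Sigma method: replaces 2*E0 by Sigma + Sigma_e in the y definition."""
    Sigma_e = E * (1.0 - math.cos(theta))
    y = Sigma / (Sigma + Sigma_e)
    Q2 = (E * math.sin(theta)) ** 2 / (1.0 - y)
    return y, Q2

def mass(y, Q2):
    """Invariant mass of the lepton-parton system, M = sqrt(x*s) = sqrt(Q2/y)."""
    return math.sqrt(Q2 / y)
```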
The $\omega$ Method ------------------- Let us assume that there is no ISR (the specific treatment of ISR is discussed in the next section, although the two steps are not separated in the procedure), and that $\theta$ and $\gamma$ are precisely measured (implying ${\delta {\Sigma}}/{{\Sigma}}= {\delta p_{T,h}}/{p_{T,h}}$). From energy momentum conservation the two following equations can be derived: $$\begin{aligned} (1-y_e) \ \frac {\delta E}{E} + y_h \ \frac{\delta {\Sigma}}{{\Sigma}} = y_e - y_h \\ -p_{T,e} \ \frac{\delta E}{E} + p_{T,h} \ \frac{\delta {\Sigma}}{{\Sigma}} = p_{T,e} - p_{T,h}\end{aligned}$$ Under these assumptions ${\delta E}/{E}$ and ${\delta {\Sigma}}/{{\Sigma}}$ are determined on an event by event basis. Varying $\theta$ and $\gamma$ within their typical errors[^2] (5 and 40 mrad (r.m.s.) respectively, at high $Q^2$) we can obtain the errors arising from the angular measurements on ${\delta E}/{E}$ and ${\delta {\Sigma}}/{{\Sigma}}$. A more direct way to see the uncertainties arising from this determination is illustrated in fig.1, in which the error on $E$ and ${\Sigma}$ reconstructed using equations 3 and 4 are compared to their “true” value, which is known for the reconstructed simulated events. For a given event, the error on $({\delta E}/{E})_{rec}$, which is defined as $\Delta({\delta E}/{E})\equiv ({\delta E}/{E})_{rec} - ({\delta E}/{E})_{true}$, is of the same order as the r.m.s. (denoted $<{{{\delta E}/{E}}}>$) of the ${{{\delta E}/{E}}}$ distribution obtained from a large sample (fig.1b), showing that its use will not bring an improvement on an event by event basis. On the other side, the relative error on $({{\delta {\Sigma}/{\Sigma}}})_{rec}$ is smaller, about 30% of $<{{\delta {\Sigma}/{\Sigma}}}>$ , and diminishes at high $y$ (compare fig.1e to 1f, and 1b to 1c) implying that correcting ${\Sigma}$ will provide a better kinematic measurement, in particular at high $y$. The $\omega$ kinematic variables are thus derived from the $\Sigma$ ones by including the effect of ${{\delta {\Sigma}/{\Sigma}}}$, i.e. $$y_{\omega}\equiv\frac{{\Sigma}+ \delta {\Sigma}} {{\Sigma}+ \delta {\Sigma}+ {\Sigma}_e} \hspace*{2cm} Q^2_{\omega}\equiv\frac {p_{T,e}^2}{1 -y_{\omega}}$$ The comparison of the $\omega$ and ${\Sigma}$ reconstruction of M, $y$ and $Q^2$ for high $Q^2$ events fully simulated[^3] in the H1 detector is shown in fig.2. The improvement obtained by the recalibration of ${\Sigma}$ is clearly visible, allowing this method to be compared favourably to the original ones. This comparison is shown in fig.3 for high $Q^2$ events (in all the high $Q^2$ section, an $E-p_z$ cut against hard initial state radiation is applied, see next section) both at high $y$ and at low $y$. At high $y$ $(y_e>0.4$), the $e$ and the $\omega$ methods are comparable in $M$ and $y$ and slightly better than the DA one (fig.3a,b). In $Q^2$, the DA displays the narrowest peak both at high and low $y$ (fig.3c,f), but its high sensitivity to ISR induces tails in the distribution which renders its r.m.s. larger than the $\omega$ and $e$ ones. At low $y$ ($y_e<0.25)$ the $e$ method has a relatively poor resolution in M, much worse than the $\omega$ one which is slightly better than the DA one. In conclusion, for the high $Q^2$ events the $\omega$ method is similar or slightly better than the $e$ and DA method. 
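A minimal numerical transcription of this procedure (our sketch, with $E_{\circ}=27.5$ GeV assumed) solves equations 3 and 4 for $\delta E/E$ and $\delta{\Sigma}/{\Sigma}$ on an event by event basis and then forms the $\omega$ variables defined above:

```python
import numpy as np

def omega_method(E, theta, Sigma, pt_h, E0=27.5):
    """Solve eqs. (3)-(4) for dE/E and dSigma/Sigma event by event, then build
    y_omega and Q2_omega from the recalibrated Sigma (illustrative sketch)."""
    Sigma_e = E * (1.0 - np.cos(theta))
    pt_e = E * np.sin(theta)
    y_e = 1.0 - Sigma_e / (2.0 * E0)
    y_h = Sigma / (2.0 * E0)

    # (1 - y_e) * dE/E + y_h  * dS/S = y_e  - y_h
    #   -pt_e  * dE/E + pt_h * dS/S = pt_e - pt_h
    A = np.array([[1.0 - y_e, y_h],
                  [-pt_e, pt_h]])
    b = np.array([y_e - y_h, pt_e - pt_h])
    dE_E, dS_S = np.linalg.solve(A, b)

    Sigma_corr = Sigma * (1.0 + dS_S)          # Sigma + delta Sigma
    y_omega = Sigma_corr / (Sigma_corr + Sigma_e)
    Q2_omega = pt_e ** 2 / (1.0 - y_omega)
    return dE_E, dS_S, y_omega, Q2_omega
```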
Treatment of QED Initial State Radiation ---------------------------------------- The influence of ISR is taken into account in the simulation programs used by the HERA experiments, but on a small number of events it is difficult to control the migration due to an unseen ISR photon when using the $e$ and DA methods. Both H1 and ZEUS use an experimental cut on the total $E-p_z$ of the event (${\Sigma}_{he}$) or equivalently on the normalized $E-p_z$ called hereafter $\sigma_{he}$[^4] and defined as $$\sigma_{he} \equiv \frac{{\Sigma}_{he}}{ 2E_{\circ}} \equiv \frac{{\Sigma}+{\Sigma}_e} { 2E_{\circ}} \equiv \sigma_h + \sigma_e$$ The $E-p_z$ cuts used in H1 and ZEUS correspond to good approximation to $\sigma_{he}~>~0.75$ and prevent an ISR photon from carrying away more than about 25% of the incident electron energy. Such a photon can induce a large shift to the reconstructed mass [@wolf] of $1-\sqrt{(0.75 y_e ) / (y_e-0.25)}$ for $M_e$ (i.e. $-18\%$ at $y_e=0.75$) and of 25% on $M_{DA}$ at any $y$. On $M_{{\Sigma}}$ the shift is zero since the method is independent of colinear ISR for M$,y$ and $Q^2$. A more stringent cut on $\sigma_{he}$ is difficult to implement due to the experimental resolutions on $y_h$ and $y_e$. The $\omega$ method allows ISR to be identified with high efficiency for $\sigma_{he}$ as high as $\sim0.93$. It makes use of the simple fact that depending on the origin of the $\sigma_{he}$ shift (which comes mainly either from ISR or from hadronic miscalibration), the errors obtained with the $\omega$ method have a completely different pattern: for example if the observed $\sigma_{he}$ is 1$-z$, equations 3 and 4 will give for the ISR case $\delta E/E=z$ and $\delta {\Sigma}/{\Sigma}=z$, assuming negligible detector smearing, while for the hadronic miscalibration case, assuming negligible error on the electron energy, we will get $\delta E /E=0$ and $\delta {\Sigma}/{\Sigma}={z}/{y_h}$. These two different types of pattern allow ISR to be identified even in the presence of detector smearing, as studied on a complete simulation in the H1 detector, with typically 85% efficiency at high $Q^2$ for photon energies greater than 2 GeV. The exact conditions for ISR identification of a 2 GeV photons is $\delta E/E$ and $\delta {\Sigma}/{\Sigma}>7.3\%$. However, to take into account the detector smearing we use instead: $$\frac{\delta E}{E}>5\% \hspace*{0.5cm} \mbox{and} \hspace*{0.5cm} \frac{\delta {\Sigma}}{{\Sigma}}>5\% \hspace*{0.5cm} \mbox{and} \hspace*{0.5cm} \frac{\delta E}{E}+\frac{\delta {\Sigma}}{{\Sigma}}>15\%$$ These can be slightly varied depending on the ISR identification efficiency requested. These conditions will “misidentify” [high $Q^2$]{} non-radiative events in less than 3% of the cases in the H1 detector, implying that in the total sample of ISR identified events, the fraction of genuine ISR events is higher than the fraction of non-radiative events. This fraction increases with increasing $Q^2$ since the hadronic angle becomes more precise, i.e. the ISR identification becomes more efficient. In case of ISR identification, the equations 3 and 4 are solved again after recalculating $y_e$ and $y_h$ using $\sigma_{he} \cdot 2E_{\circ}$ instead of $2E_{\circ}$. The improvement obtained can be visualized in fig.4 which shows the same distributions as in fig.3 but on the sample of DIS high $Q^2$ events having radiated an ISR photon of 2 GeV or more. The dramatic improvement underlines the importance of controlling experimentally the radiation effect. 
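In code, the ISR tag and the subsequent correction can be sketched as follows (our transcription of the conditions quoted above; `omega_method` refers to the sketch of the previous subsection and the thresholds are those given in the text):

```python
def is_isr_candidate(dE_E, dS_S):
    """ISR identification criteria quoted in the text (2 GeV photon working point)."""
    return dE_E > 0.05 and dS_S > 0.05 and (dE_E + dS_S) > 0.15

def correct_for_isr(E, theta, Sigma, pt_h, sigma_he, E0=27.5):
    """For an event tagged as radiative, re-solve eqs. (3)-(4) with the reduced
    effective beam energy, i.e. with 2*E0 replaced by sigma_he * 2*E0."""
    return omega_method(E, theta, Sigma, pt_h, E0=sigma_he * E0)
```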
Furthermore, the soft ISR photons which cannot be identified below 2 GeV have a small effect on $M_{\omega}$ ($<2\%$) but still a sizeable one on $M_{DA}$, up to about 7%, or 4% on $M_e$. The Very High $Q^2$ Events at HERA ================================== Let us remind some characteristics of the excess of events at high $Q^2$ observed by the H1 and ZEUS collaborations in the data collected from 1994 to 1996 [@H1VHQ2; @ZEUSVHQ2]. At $Q^2>15000$ GeV$^2$, 24 events have been observed for an expectation of 13.4$\pm$1.0, representing a probability of 0.0074 [@h1zeus]. Furthermore some of these events exhibit the additional characteristics either to cluster around a rather precise value of the invariant mass of the produced system (M$_e=200\pm 12.5$ GeV, for 7 events out of the 12 of H1, while $1.0 \pm 0.2$ are expected), or to lie at very high invariant mass (M$_{DA}>223$ GeV and $y_{DA}>0.25$, for 4 events out of the 12 of ZEUS, while $0.9\pm 0.2$ are expected). Actually the messages of these two observations could be different as already pointed out [@drees; @ellis] and we will try here to explore further this point with the use of our new kinematic tool. Among the obvious possible reasons for the difference are the influence of ISR which can strongly distort a distribution based on a small number of events, or the effect of a specific miscalibration. The $\omega$ method having been devised to respond to these problems, we can now study in more details the 11 extreme events mentioned above for which the complete kinematic information has been published by the two collaborations. We add to this sample 2 events[^5] from ZEUS in order to include all events which satisfy the condition $Q^2>15000$ GeV$^2$ and M$>180$ GeV, since we want to test the significance of the H1 event clustering at masses around 200 GeV over the complete very high $Q^2$ HERA sample. Kinematic Properties -------------------- In table 1 are given the M, $y$ and $Q^2$ of the 13 events at very high $Q^2$ and high mass reconstructed with the four different methods discussed above. The values of M, $y$ and $Q^2$ are identical to those published for the $e$ and DA methods. The ${\Sigma}$ values have been released recently by H1 [@GREGQ2] and can be computed straightforwardly for the ZEUS events. The errors are also reproduced exactly except for those of the $e$ method to which we conservatively added quadratically an error of 1.5% (i.e. half of the absolute energy scale uncertainty of 3% given by the two experiments) to account for potential miscalibrations between the different regions of the detectors. As we will see below the overall energy calibration can be checked to be well under control. For the errors not published by the collaborations ($\omega$, ${\Sigma}$ for H1 and ZEUS, and DA for H1) we used the following prescription, which was checked to be consistent with the other published errors: They are obtained here using a possible error of $\pm 5$ mrad and $\pm 30$ mrad for the electron and hadronic angle respectively. For the error on the electron (hadronic) energy we used $\pm 3\%$ ($\pm 4\%$) for H1 and $\pm 5\%$ ($\pm 4\%$) for ZEUS, which includes for the electron case both resolution effects, dead material corrections and the 1.5% overall error just mentioned. These two values for the electron energy error have been checked on the data (see below). Additional sizeable errors as published by the collaborations, due for instance to special energy corrections, have been taken into account in two events (H–3,Z–1). 
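The significance quoted at the beginning of this section can be checked with elementary counting statistics; the sketch below (ours) gives the bare Poisson probability of observing 24 or more events when 13.4 are expected, which is of order half a percent, and a simple smearing over the $\pm1.0$ uncertainty on the expectation, which raises it somewhat. We do not know the exact prescription behind the quoted 0.0074, so only the order of magnitude should be compared.

```python
import numpy as np
from scipy.stats import poisson, norm

mu, n_obs, sigma_mu = 13.4, 24, 1.0   # expected events, observed events, error on expectation

# Bare Poisson tail probability P(N >= 24 | mu)
p_tail = poisson.sf(n_obs - 1, mu)

# Crude marginalization over a Gaussian uncertainty on the expectation
grid = np.linspace(mu - 4.0 * sigma_mu, mu + 4.0 * sigma_mu, 801)
weights = norm.pdf(grid, mu, sigma_mu)
p_smeared = np.trapz(poisson.sf(n_obs - 1, grid) * weights, grid) / np.trapz(weights, grid)

print(f"P(N >= {n_obs}): pure Poisson {p_tail:.4f}, with smeared expectation {p_smeared:.4f}")
```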
For the H1 events the errors of an event are similar for each method, with a slight advantage for the $e$ method at $y$ above 0.5. For the ZEUS events the DA errors are the smallest, since the error on the electron energy which is somewhat larger than in the H1 case has an influence on the 3 other methods. This also explains why the ZEUS collaboration chooses the DA method rather than the $e$ method favoured by H1. Also mentioned is the result of the $\omega$ ISR identification, and we can see that 2 events (H–5 and Z–4) are classified as radiative, and have been corrected accordingly. These 2 events indeed show the characteristics of a radiative event, i.e. there exists a value of $z \equiv E^{ISR}_{\gamma}/E_{\circ}$ for which the following equations, which hold exactly if there were no detector effects, are verified in a good approximation: $${{\delta E}/{E}} \simeq z \hspace*{1cm} {{\delta {\Sigma}}/{{\Sigma}}} \simeq z$$ $$Q^2_{{\Sigma}}/Q^2_{e} \simeq 1-z \hspace*{1cm} Q^2_{e}/Q^2_{DA} \simeq 1-z$$ $$M_{{\Sigma}} / M_{DA} \simeq 1-z \hspace*{1cm} x_{{\Sigma}} / x_{DA} \simeq (1-z)^2$$ $$y_{DA}\simeq y_{{\Sigma}} \hspace*{1cm} \sigma_{he} \simeq 1-z$$ Indeed in the case of H–5 a photon with an energy (2.9 GeV)consistent with the $z$ derived from the previous equations is observed in the special photon calorimeter located close to the beam pipe and designed to measure photons emitted collinearly to the incident electron, such as ISR photons. The ISR effect on the mass is sizeable, since $M_{DA}/M_{\omega}$ = 1.08 for H–5 and 1.11 for Z–4. Both values are consistent with the determined $\delta E / E $ and $\delta {\Sigma}/ {\Sigma}$ of these events (see tab.1). The reconstructed kinematics are consistent between the different methods except for the 2 radiative events and for the events displaying a significant difference between $M_e$ and $M_{DA}$ (events H–7,Z–1,Z–5), which indeed have the largest $\delta E / E $ and/or $\delta {\Sigma}/ {\Sigma}$. On these small samples of events, and after ISR corrections, the averages of the absolute value of the error on the hadronic energy are similar for H1 and ZEUS ($<|\frac{\delta{\Sigma}} {{\Sigma}}|>$= 6% compared to 5%), while on the electron energy the H1 average error is smaller: $<|\frac{\delta E} {E}|>$=3% compared to 5% for ZEUS. The uncertainty on the electromagnetic absolute energy scale can be estimated [*from the data*]{} using $<\frac{\delta E}{E}>$ (the 2 radiative events are excluded from these means) which gives $+1.1\%\pm 1.5\%$ for H1 and $+2.1\%\pm 3\%$ for ZEUS, both values in good agreement with the value of 3% given by the collaborations. The uncertainty on the hadronic absolute energy is obtained similarly and gives $-5.4\%\pm 3\%$ for H1 and $+3.3\%\pm 4\%$ for ZEUS, also in acceptable agreement with the values quoted by the collaborations. After all these consistency checks, the use of the $\omega$ method in the two samples will now allow a consistent mass distribution to be derived from the very high $Q^2$ events. 
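As a numerical cross-check of these relations (our arithmetic, using the Table 1 entries for event H–5, for which $M_{\omega}\simeq M_{{\Sigma}}$ after correction): with $z \simeq \delta E/E \simeq 0.08$, the relation $M_{{\Sigma}}/M_{DA} \simeq 1-z$ gives $$\frac{M_{DA}}{M_{\omega}} \simeq \frac{1}{1-z} \simeq 1.09\ ,$$ to be compared with the measured ratio $M_{DA}/M_{\omega} = 227/210 \simeq 1.08$ quoted above.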
---------------- ---------------------------------- ---------------------------- ------------------------ ---------------------------- ---------------------------- ---------------------------- ------------------------ ---------------------------- ---------------------------- ---------------------------- ------------------------ ---------------------------- ------------------------------- [**Evt**]{} [$\delta E / E $ ]{} $ M_{\omega} $ $ M_{DA}$ $ M_{e} $ $ M_{{\Sigma}}$ $ y_{\omega} $ $ y_{DA}$ $ y_{e} $ $ y_{{\Sigma}}$ $ Q^2_{\omega}$ $ Q^2_{DA}$ $ Q^2_e $ $Q^2_{{\Sigma}} $ $\sigma_{he} $ [$\delta {\Sigma}/ {\Sigma}$ ]{} [$ \delta M_{\omega} $]{} [$ \delta M_{DA} $]{} [$ \delta M_e $]{} [$ \delta M_{{\Sigma}}$]{} [$ \delta y_{\omega} $]{} [$ \delta y_{DA} $]{} [$ \delta y_e $]{} [$ \delta y_{{\Sigma}}$]{} [$ \delta Q^2_{\omega}$]{} [$ \delta Q^2_{DA}$]{} [$ \delta Q^2_{e} $]{} [$ \delta Q^2_{{\Sigma}} $]{} [**H–1**]{} $ +.01 $ 197 198 196 196 .435 .434 .439 .443 16.8 17.1 17.0 17.1 1.01 $ -.03 $ 7 7 6 7 .016 .016 .016 .032 0.9 0.5 0.5 1.2 [**H–2**]{} $ -.04 $ 208 200 208 209 .574 .582 .563 .592 24.8 23.3 24.4 25.9 1.06 $ -.07 $ 6 5 5 6 .013 .013 .014 .024 1.2 0.8 0.5 1.8 [**H–3**]{} $ -.01 $ 188 185 188 188 .568 .573 .566 .561 20.0 19.6 20.0 19.7 0.99 $ +.03 $ 12 5 12 12 .020 .012 .033 .028 2.0 0.6 1.4 2.2 [**H–4**]{} $ +.02 $ 197 199 198 196 .789 .787 .790 .786 30.7 31.3 30.9 30.2 0.98 $ +.02 $ 4 4 3 5 .009 .007 .009 .012 1.2 1.1 0.7 1.7 [**H–5**]{} $ +.08^* $ 210 227 211 210 .525 .526 .562 .525 23.1 27.1 25.0 23.1 0.92$^*$ $ +.08^* $ 6 7 5 6 .015 .016 .014 .031 1.2 0.9 0.5 1.8 [**H–6**]{} $ +.00 $ 193 190 192 190 .440 .443 .440 .501 16.3 16.1 16.1 18.1 1.12 $ -.22 $ 6 6 7 6 .015 .016 .018 .030 0.9 0.5 0.5 1.3 [**H–7**]{} $ +.10 $ 199 213 200 202 .778 .762 .783 .786 30.7 34.5 31.4 31.9 1.02 $ -.05 $ 5 5 3 6 .009 .008 .009 .013 1.2 1.3 0.7 2.0 [**Z–1**]{} $ -.07 $ 221 208 218 207 .856 .865 .854 .836 41.7 37.5 40.5 35.9 0.89 $ +.17 $ 10 8 10 12 .011 .008 .018 .019 3.4 2.6 3.2 4.5 [**Z–2**]{} $ +.03 $ 220 227 220 220 .497 .490 .505 .507 24.1 25.2 24.4 24.6 1.00 $ -.04 $ 11 6 10 11 .019 .010 .025 .034 1.9 0.7 1.2 2.4 [**Z–3**]{} $ +.02 $ 228 236 225 230 .306 .299 .319 .299 15.9 16.6 16.2 15.8 0.97 $ +.03 $ 14 10 21 17 .023 .017 .040 .042 1.4 0.5 0.9 1.6 [**Z–4**]{} $ +.12^* $ 227 253 233 228 .728 .721 .752 .731 37.5 46.1 41.0 37.8 0.92$^*$ $ +.07^* $ 9 6 12 10 .014 .008 .021 .021 2.5 1.6 3.1 3.4 [**Z–5**]{} $ +.10 $ 207 232 200 206 .305 .285 .350 .310 13.1 15.4 14.0 13.2 0.94 $ -.03 $ 13 10 15 15 .024 .017 .033 .044 1.2 0.4 0.7 1.4 [**Z–6**]{} $ +.03 $ 185 191 186 183 .608 .592 .610 .591 20.7 21.6 21.0 19.8 0.95 $ +.07 $ 12 11 12 12 .023 .028 .054 .054 2.1 1.6 1.5 3.2 ---------------- ---------------------------------- ---------------------------- ------------------------ ---------------------------- ---------------------------- ---------------------------- ------------------------ ---------------------------- ---------------------------- ---------------------------- ------------------------ ---------------------------- ------------------------------- : \[tab9\] *Kinematic properties of the 13 events observed by the H1 and ZEUS collaborations which have, at least in one method, $Q^2 > 15000$ GeV$^2$ and $M >$ 180 GeV. For event Z–6, only the DA values are accurate, the values of the other methods being extrapolated. The values of M, $y$ and $Q^2$ are identical to those of the original papers ($e$, DA, ${\Sigma}$ for H1 [@H1VHQ2; @GREGQ2], $e$, DA for ZEUS [@ZEUSVHQ2]). 
The new values of the table are for the ${\Sigma}$ method (ZEUS) and for the $\omega$ method (H1+ZEUS), and for the errors that were not available (see text for more details). Also given are $\sigma_{he}$, $\frac{\delta E}{E}$ and $\frac{\delta {\Sigma}}{{\Sigma}}$, which allow the size of the errors to be quantified and the presence of initial state radiation to be tagged and corrected for (marked with a “\*”). After ISR corrections, the ($\sigma_{he}$, $\frac{\delta E}{E}$, $\frac{\delta {\Sigma}}{{\Sigma}}$) values are for event H–5: (1.00,.00,.00) and for event Z–4: (1.00,+.04,–.01). The masses are given in GeV, the $Q^2$ in $10^3$ GeV$^2$.* Mass Distribution ----------------- In the following we will no longer consider event Z–5, since it survives the $Q^2$ cut only for the DA method. It might be a radiative event which could not be identified due to a large smearing in the hadronic energy. In any case (radiative or not) it would not survive the $Q^2_{\omega}$ cut, which is one of the 2 conditions the final sample must satisfy. Note “en passant” that its mass is reduced from $M_{DA}$=232 GeV to $M_{\omega}$=207 GeV. The mass distributions of the remaining 12 events, obtained with the $e$, DA, ${\Sigma}$ and $\omega$ methods, are displayed in fig.5a,b,d,e. The event Z–4 has the highest mass at $253\pm6$ GeV. It is identified as a radiative event and its mass is reduced by the $\omega$ determination to $226\pm 5$ GeV, thereby reducing the spread of the ZEUS events. Note that the error quoted on the DA mass is much smaller than the effect due to radiation. For the 5 ZEUS events the weighted average mass is decreased from M$^{avg}_{DA}$=226$\pm 9$ GeV to M$^{avg}_{\omega}$=216$\pm 7$ GeV. Actually with the $\omega$ determination 4 out of the 5 ZEUS events lie between 220 and 228 GeV, the 5$^{th}$ one (Z–6) being at 185 GeV, i.e. at a lower mass than any of the 7 H1 events (see fig.5b). The average mass[^6] of the 7 H1 events is essentially independent of the method used: M$^{avg}_{\omega}$=199$\pm 2.5$ GeV, M$^{avg}_{e}$=200$\pm 2.6$ GeV, M$^{avg}_{DA}$=201$\pm 5$ GeV. The 7 H1 events remain clustered between 188 and 211 GeV. Thus the H1 and ZEUS samples are concentrated at significantly different mass values and this splitting cannot be accounted for either by ISR or by detector effects. The small number of events involved prevents a definite interpretation of this effect. However the fact that no event among the 5 of ZEUS is found in the bin where the 7 H1 events are measured suggests that this specific accumulation is a statistical fluctuation. In fig.5c is shown the reconstruction of a narrow resonance generated at M=200 GeV with the LEGO [@lego] program in the H1 detector using the $\omega$ method[^7]. The width of the distribution, which includes experimental and QED/QCD radiation effects, cannot accommodate the tails of the measured experimental distributions (fig.5a,b,d,e). If we make an ideogram from histogram 5e, i.e. if we apply Gaussian smearing to each of the 12 events using their $\omega$ mass as a mean, and their error as r.m.s., we obtain fig.5f, which also shows the incompatibility with fig.5c. To reconcile the 2 “peaks” visible in fig.5f would require uniformly miscalibrating the electron energy of the H1 events by at least +6% and that of the ZEUS events by $-6$%, values completely incompatible with the absolute scale uncertainties found in the previous section. Note, however, that the H1 events alone support the narrow resonance hypothesis.
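The averages quoted here are weighted means; a standard inverse-variance combination (our sketch, applied to the $\omega$ masses and errors of the five retained ZEUS events from Table 1) is shown below. It reproduces the quoted numbers only approximately, since, as noted in the footnotes, the published averages depend on the detailed error treatment.

```python
import numpy as np

def weighted_mean(values, errors):
    """Inverse-variance weighted mean and its uncertainty."""
    v = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(errors, dtype=float) ** 2
    mean = np.sum(w * v) / np.sum(w)
    return mean, 1.0 / np.sqrt(np.sum(w))

# omega masses (GeV) and errors of the 5 retained ZEUS events (Z-1,2,3,4,6) from Table 1
m_zeus = [221.0, 220.0, 228.0, 227.0, 185.0]
dm_zeus = [10.0, 11.0, 14.0, 9.0, 12.0]
print(weighted_mean(m_zeus, dm_zeus))   # about 217.5 +- 4.8; the text quotes 216 +- 7
```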
Since after the $\omega$ kinematic treatment none of the 12 events migrates outside the very high $Q^2$ and mass region, we confirm that the visible excess published by the collaborations at very high $Q^2$ is not due to detector (calibration) or radiation (ISR) effects. Conclusion ========== We have introduced a new reconstruction method (“$\omega$”) which allows the kinematic variables of the high $Q^2$ events to be determined in a more precise way. It uses the kinematic constraints on an event by event basis to calibrate the hadronic energy and to identify and correct for the presence of QED initial state radiation. This method has been applied to the 12 HERA events observed by the H1 and ZEUS collaborations (for an expectation of about 5) at $Q^2>15000$ GeV$^2$ and M$>180$ GeV. The accumulation of the 7 H1 events around 200$\pm12.5$ GeV is confirmed. The average mass of the 5 ZEUS events is decreased from the double-angle value of 226$\pm 9$ GeV to 216$\pm 7$ GeV. However, none of the ZEUS events enters the H1 accumulation region. Taking the estimation of the systematic errors at face value, this suggests that, if the observed excess at very high $Q^2$ is due to physics beyond the Standard Model, it is unlikely to be explained by the decay of a single narrow resonance such as a leptoquark. The increase of the integrated luminosity in the current and future years at HERA will clarify the origin of this effect. [**Acknowledgments**]{} We would like to thank the two collaborations for the data we have studied in this paper. In particular this work has taken place within the H1 collaboration, and some of the results obtained were the outcome of the efforts of many people of the “Beyond the Standard Model” group to get the analysis of the very high $Q^2$ events completed. We would also like to thank J. Dainton, R. Eichler, J. Gayler, D. Haidt, B. Straub and G. Wolf for a careful reading of the manuscript and for their useful remarks. [**Appendix:**]{} The four kinematic reconstruction methods used in the text are defined by:

| method | $y$ | $Q^2$ |
|:---:|:---:|:---:|
| $e$ | $1-\frac{E}{E_{\circ}}\sin^{2}\frac{\theta}{2}$ | $\frac{p_{T,e}^2}{1-y_{e}}$ |
| $h$ | $\frac{\Sigma}{2 E_{\circ}}$ | $\frac{p_{T,h}^2}{1-y_h}$ |
| DA | $\frac{\tan\frac{\gamma}{2}}{\tan\frac{\gamma}{2}+\tan\frac{\theta}{2}}$ | $4E^{2}_{\circ}\,\frac{\cot\frac{\theta}{2}}{\tan\frac{\gamma}{2}+\tan\frac{\theta}{2}}$ |
| $\Sigma$ | $\frac{\Sigma}{\Sigma+E(1-\cos\theta)}$ | $\frac{p_{T,e}^2}{1-y_{\Sigma}}$ |

H1 Collab., C. Adloff et al., Z. Phys C74 (1997) 191. ZEUS Collab., J. Breitweg et al., Z. Phys C74 (1997) 207-220. Yu. L. Dokshitzer, and references therein, to appear in the Proceedings of the 5$^{th}$ International Workshop on Deep Inelastic Scattering and QCD, Chicago (1997). A. Blondel, F. Jacquet, Proceedings of the Study of an $ep$ Facility for Europe, ed. U. Amaldi, DESY 79/48 (1979) 391-394. S. Bentvelsen et al., Proceedings of the Workshop Physics at HERA, vol. 1, eds. W. Buchmüller, G. Ingelman, DESY (1992) 23-40.\ C. Hoeger, ibid., 43-55. U. Bassler, G. Bernardi, Nucl. Instr. and Meth. A361 (1995) 197. U. Bassler, G. Bernardi, DESY 97-137 (1997) G.
Wolf, hep-ex/9704006, DESY 97-047 (1997) 26. H1 Collab., T. Ahmed et al., Nucl. Phys. [**B439**]{} (1995) 471.\ ZEUS Collab., M. Derrick et al., Z. Phys. [**C65**]{} (1995), 379.\ H1 Collab., S. Aid et al., Nucl. Phys. [**B470**]{} (1996) 3.\ ZEUS Collab., M. Derrick et al., Z. Phys. [**C72**]{} (1996) 399. G. A. Schuler and H. Spiesberger, Proceedings of the Workshop Physics at HERA, vol. 3, eds. W. Buchmüller, G. Ingelman, DESY (1992) 1419. L. Lönnblad, Computer Phys. Comm. [**71**]{} (1992) 15. R. Brun et al., GEANT3 User’s Guide, CERN–DD/EE 84–1, Geneva (1987). Joint H1+ZEUS table of high $Q^2$ events. Private communication. M. Drees, hep-ph/9703332, APCTP 97-03 (1997) 5. G. Altarelli et al., hep-ph/9703276, CERN-TH/97-40 (1997) 28. G. Bernardi, T. Carli, to appear in the Proceedings of the 5$^{th}$ International Workshop on Deep Inelastic Scattering and QCD, Chicago (1997). K. Rosenbauer, Ph.D thesis RWTH Aachen, PITHA-95/16 (1995). [^1]: The positive $z$ axis is defined at HERA as the incident proton beam direction. [^2]: At very high $Q^2$, the $\gamma$ resolution improves to 30 mrad. We use here the r.m.s., rather than the standard deviation obtained from a gaussian fit to the distribution, to take into account non-gaussian tails which contribute in the propagation of the systematic errors. [^3]: In figures 1 and 2, the radiative processes of the DJANGO [@django] program which is used to generate the events have been turned off. The complete simulation, based on DJANGO, ARIADNE [@cdm] and GEANT [@geant] as used in the H1 simulation program, is further described in [@H1VHQ2]. [^4]: The normalized E-p$_z$ for the electron and for the hadronic final state satisfy: $\sigma_e=1-y_e$; $\sigma_h=y_h$. [^5]: For one of these 2 events (called Z–6 in table 1) only the DA variables are available (from fig.1 of ref. [@ZEUSVHQ2]). The additional kinematic properties were deduced from the DA values using the average $E-p_z$ and systematic shifts between $e$ and DA variables of the 5 other ZEUS published events. This assumption has a negligible influence on the conclusions drawn below. [^6]: The fact that the average masses are slightly different from the original publications is due to the difference in the errors which weights the events in a different way. This difference is irrelevant in the current discussion. [^7]: All the methods give similar distributions on these events, except for the DA which has a larger r.m.s. due to radiative tails. In units of GeV the (mean; r.m.s.) are for the $\omega$, $e$, DA and ${\Sigma}$ respectively: (195.5; 7.5), (195.3; 7.4), (195.2; 9.6), (195.3; 7.8). The bias of about 5 GeV can be removed by taking into account the mass of the jet, but is present when using inclusive methods.
--- abstract: 'The importance of HST for the study of quasar absorption lines and of the nature of the intergalactic medium is illustrated by reviewing selected results from past HST observations. Topics reviewed include the study of Ly$-\alpha$ absorbers at low redshift and the search for a diffuse IGM at high redshifts.' author: - 'Buell T. Jannuzi' title: Quasar Absorption Lines and the Intergalactic Medium --- Opening the Ultra-violet Window =============================== Soon after quasars were recognized as extragalactic sources (Schmidt 1963; Greenstein and Matthews 1963; Schmidt 1965) it was pointed out that as their light travels to Earth any intervening matter will leave its imprint on the spectra. This is true whether the intervening medium is full of diffuse hydrogen causing a uniform decrease of the quasar continuum (Gunn & Peterson 1965; Scheuer 1965) or discrete clouds producing separate absorption lines (by hydrogen, possibly associated with galaxies, Bahcall & Salpeter 1965; by hydrogen and other species in galactic halos, Bahcall & Salpeter 1966, Bahcall & Spitzer 1969). However, even before the discovery of quasars, Lyman Spitzer (1956) had pointed out the importance of UV spectroscopy for understanding the physical conditions of the gaseous content of the Galaxy, and by implication the halos of other galaxies and the gaseous content of the universe in general. As he pointed out, the majority of the strong resonance absorption lines occur in the rest frame UV (see Figure 1). At high redshifts intervening absorption systems can be studied from the ground (for an example of the current state of the art see Figure 2). However, the gaseous content of the nearby universe and the far-UV lines (e.g. He II) occurring at high redshift can only be observed in the UV. Although HST is not the first telescope with UV sensitive spectrographs, it is the first to provide both the spectral resolution and the sensitivity to allow the extensive observation of the quasar absorption lines. Here is a list of three of the many important studies that the observation of quasar absorption lines at UV wavelengths makes possible: 1.) The evolution of the gaseous content of the universe can now be traced by observing the changing number density per unit redshift of Ly$-\alpha$ absorbers from the present (using HST, from $z_{\rm abs} \approx 0.0$ to $z_{\rm abs} \approx 1.6$) back to when the universe was 10% of its current age (using groundbased telescopes like Keck to observe the most distant quasars). Such data are important for the study of cosmology, star and galaxy formation, development of large scale structures, and the composition of the ISM. The dramatic changes that occur from high to low redshifts are illustrated by comparing Figures 2-4. Quantifying and understanding these changes in detail is the continued focus of the HST quasar absorption line key project and other research efforts as well. 2.) For low redshift absorbers it is now possible to study their relation to individual galaxies, groups, or clusters – i.e. test the 1969 proposition of Bahcall and Spitzer, “that most of the absorption lines observed in quasi-stellar sources with multiple absorption redshifts are caused by gas in extended halos of normal galaxies.” 3.) 
Although efforts to detect a diffuse intergalactic medium (IGM) through continuous absorption by neutral hydrogen have failed, we can now extend the search by looking for He II absorption that may be easier to detect since the lower level of the ionizing background at short wavelengths means that the He II fraction is larger than the fraction of H I. In the following sections I will give examples of how HST has contributed to each of these problems. The examples are drawn from the work of the HST quasar absorption line key project and a few other groups to illustrate this progress, with apologies to the many other researchers that have also contributed to the study of these and other problems through the use of quasar absorption lines observed with HST. The HST Quasar Absorption Line Key Project ========================================== Design and Goals of the Survey ------------------------------ The HST quasar absorption line survey was an HST key project for cycles 1-3, with carryover observations extending into cycle 4. Led by John Bahcall, the survey had the ambitious goal of obtaining a large and homogeneous catalogue of absorbers suitable for the study of the nature of gaseous systems and their evolution (Bahcall et al. 1993). While the well known telescope problems in effect prior to the servicing mission reduced the original scope of the survey, the key project still successfully observed 89 quasars with the higher resolution (R$=1300$) gratings of the Faint Object Spectrograph. A small subset of the quasars were observed from 1150–3300 Å, but the majority were observed only between 2200–3300 Å  or 1600–3300 Å, depending on the redshift of the quasar. Targets were selected to be bright and have low Galactic extinction ($b>20$ degrees). The distribution of the targets is shown in Figure 5. Redshifts of the observed quasars range between 0.25 and 2.0. Details of the data calibration and analysis can be found in Schneider et al. 1993, Jannuzi & Hartig 1994, and Bahcall et al. 1996. In all of our analysis (from line measurements to line identifications) we have tried to remove subjective decision making from the process and replace it with well tested algorithms implemented through computer software. This allows us to run the same software on simulated data in order to improve our understanding of the limitations of both our data and our analysis techniques. Past Results ------------ The first nice surprise that HST presented to us was a larger number of low redshift Ly$-\alpha$ absorption systems in the spectrum of 3C 273 than might have been expected from a simple extrapolation of the evolution in the number density (per unit redshift) of such systems observed at high redshift (Bahcall et al. 1991; Morris et al. 1991). Early results have also been produced by HST on the nature and evolution of metal line systems (e.g. Reimers & Vogel 1993; Bergeron et al. 1994). Our understanding of the evolution of Lyman-limit systems from high redshifts down to $z=0.4$ has been improved (Storrie-Lombardi et al. 1994; Stengler-Larrea et al. 1995) and a first attempt has been made to measure the proximity effect in the spectra of low redshift quasars (Kulkarni & Fall 1993). The key project catalogue of Ly$-\alpha$ absorbers makes it easier to investigate the extent and nature of the relationship between Ly$-\alpha$ absorbers and individual galaxies, groups, or clusters. Many groups are actively working on this problem (e.g. incomplete or single field surveys: Bahcall et al. 
1991, 1992; Morris et al. 1993; Spinrad et al. 1993; to more extensive surveys in progress that have presented partial results: Lanzetta et al. 1995; Stocke et al. 1995; Le Brun et al. 1995), but I have chosen to adapt a figure from Morris et al.’s (1993) study of the field of 3C 273 to illustrate both the progress that has been made and how much more needs to be done (Figure 6). Despite a complete redshift survey of even the faintest galaxies in the field, the study of the 3C 273 field gives a mixed signal. While some absorbers appear to be associated with the same structures as the galaxies (as suggested by Lanzetta et al., actually part of the halos of the galaxies) other lines appear in voids (see also Stocke et al.) with no detected galaxy within 1 Mpc. The Morris et al. study is limited by the small number of Ly$-\alpha$ systems along the line of sight toward 3C 273, resulting in a limited comparison between the distribution of galaxies and absorbers. In fact no single line of sight provides enough Ly$-\alpha$ absorbers to allow the accurate determination of the fraction of all absorbers which are associated with galaxies or larger structures. For some of the other papers listed above the problem is similar or the galaxy redshift surveys that they use are incomplete. Some of the other surveys are also not able to address the relationship between the absorbers and groups or clusters of galaxies because the galaxy survey does not cover a large enough angular area to be able to identify a cluster or group. To determine accurately the fraction of Ly$-\alpha$ absorbers associated with galaxies and large scale structures requires both the completion of the key project catalogue of absorbers and an increase in the number of fields for which galaxy redshifts are available (e.g. Sarajedini et al. 1996). The same key project spectra have also provided valuable information on our own Galaxy’s halo and ISM (Savage et al. 1993), the emission line properties and spectral energy distributions of quasars (Espey et al. 1994; Weymann et al. 1996; Laor et al. 1994, 1995; Sulentic this conference), and warm x-ray absorbers (in the quasar 3C 351, Mathur et al. 1994). Some New and Future Results --------------------------- A continuing focus of the key project is to determine the nature and evolution of the low redshift Ly$-\alpha$ absorbers. The number density of such systems as a function of redshift is summarized in Figure 7 (see Bahcall et al. 1996 for details). At low redshift ($z< 1.3$), the key project data analyzed to date (about 10% of the expected final catalogue) is consistent with no evolution for $\gamma = 0.58 \pm 0.50$ and $ dN/dz \propto (1+z)^{\gamma} $. This result is derived from a maximum likelihood estimation for the observed lines in those spectral regions where the 4.5 $\sigma $ detection limit is less than 0.24 Å. We further find that the slope of the observed low-redshift $dN/dz$ relation differs at the $2-4.5\sigma$ level of significance from the slope deduced from various ground-based samples that refer to redshifts $z > 1.6$ (Lu et al. 1991; Press et al. 1993; Bechtold 1994). As the number of absorbers in the analyzed catalogue increases it becomes possible to study the clustering properties of Ly$-\alpha$ absorbers. 
While we have yet to detect any signal in the two-point correlation function, we have found evidence that about half of the extensive metal line systems seen at redshifts between 0.4 and 1.3 are accompanied by highly-clustered clumps of Ly$-\alpha$ lines which are physically associated with the metal-line systems (details in Bahcall et al. 1996). Our understanding of both the redshift evolution of all absorption systems and of their clustering properties will improve as we complete the catalogue of absorption systems. The last observation of the key project was made in May of 1995. At the time of this meeting all of the quasar spectra have been reduced, lines measured, and the lines are being identified. The key project results I have reviewed have been based on only part of the total absorption line data set (see Figure 8). While we have analyzed one sixth of the objects, the remaining five sixths include most of the higher redshift objects and four fifths of the observed redshift path length. Expected improvements upon completion of the catalogue include: 1.) examining the evolution of Ly$-\alpha$ systems not only as a function of redshift, but also as a function of neutral column density and 2.) confirming or refuting the preliminary evidence for clustering of Ly$-\alpha$ absorbers around metal line systems. Has the IGM Finally Been Detected? ================================== We now leave the universe at low redshift behind and examine the Herculean efforts that have been made to detect the diffuse intergalactic medium with HST (a complete and detailed account of this exciting area can be found in the contribution to these proceedings by Dr. Jakobsen). Excluding the detection of absorption assigned to weak individual “clouds” (the low column density end of the Ly$-\alpha$ forest; the Bahcall-Salpeter effect), all efforts to detect absorption by diffuse neutral hydrogen (the Gunn-Peterson effect) at ANY redshift have failed. The ionizing background radiation reduces the fraction of H I and He I (Sargent et al. 1980) and removes them as probes of the diffuse IGM (note, He I has been observed in high column density systems, the first detection being made with HST by Reimers & Vogel 1993). The lower ionizing background at short wavelengths might leave a higher fraction of He II and provide a means of detecting the diffuse IGM, but the short wavelength of He II (304 Å) means that it can only be observed at high redshifts. This means that “clear” quasars, as Jakobsen call them, must be found. Such quasars must be bright, have redshifts greater than 3 (to have He II observable with HST), and be free of significant absorption from intervening “clouds”, particularly the high column density systems whose Lyman-Limit absorption would preclude the observation of He II. Jakobsen et al. (1994) and Tytler et al. (personal communication) both conducted searches during cycles 1-3 using HST and respectively the FOC and FOS to find candidate “clear” quasars. In cycle 4 both groups succeeded in detecting He II absorption in the spectra of distant quasars. While the details can be found in Dr. Jakobsen’s contribution, here are some bottom lines. There are now three detections of absorption due to He II. Two quasars (Q0302$-$003 and PKS1935$-$692) exhibit black, continuous absorption blueward of the expected wavelength of He II at the redshift of the observed quasars ($z > 3$ for the two observed with HST, see figures in Jakobsen et al. 1994 and Jakobsen’s contribution; Jakobsen and Tytler 1996). 
The lower limits on the optical depth of He II absorption are 1.7 in both cases. A third quasar, HS1700$+$6414, was successfully observed by HUT (Davidsen 1995) and shows He II absorption beginning at $z=2.7$, but the absorption is not as strong as at the higher redshifts observed with HST. It appears that both the HST and HUT observations can be interpreted as consistent with each other given the possible evolution between redshifts of 3.2 to 2.7 (Jakobsen, this meeting). One problem with any search for a diffuse component in the IGM is that as we are able to detect and resolve lower column density systems we remove absorption from the previously unresolved “diffuse” component and move it into the “cloud” component. Q0302$-$003 has been observed with Keck and the HIRES echelle spectrograph and a population of very low column density Ly$-\alpha$ clouds has been detected by Songaila et al. (1995) and they report that the detected population is extensive enough that it is possible to explain the observed He II absorption without invoking a diffuse IGM. Neither the HUT nor the HST FOC and FOS observations have the spectral resolution necessary to distinguish directly between the He II “forest” and a more diffuse and uniform absorption. The issue is likely to remain unsettled until the existing lines of sight (or additional new detections, hopefully with brighter background quasars) are successfully observed at a high enough spectral resolution that the He II associated with the hydrogen forest clouds can be resolved. There is a second complication. Cosmological simulations of the universe at intermediate and high redshifts (e.g. Katz et al. 1996) indicate that we should now expect a complex distribution for the gas in the IGM with filamentary structures covering a large range of physical scales and conditions. There might not exist any component that matches our expectations of a smooth or uniform “diffuse IGM”. It might be that the distinction between numerous, closely packed, very diffuse (low column density) “clouds” and a more uniform diffuse medium is purely a question of semantics, but the resolution of this issue has implications for a variety of issues, including understanding the physical conditions that exist during the formation of galaxies (see Jakobsen, these proceedings for further discussion). End Matters =========== The 1990’s is the epoch of two revolutions in the the study of quasar absorption lines. Prior to HST and the Keck telescopes, quasar absorption lines have been discussed and studied in distinct subgroups, roughly separated by column density. At the extremes were the Ly$-\alpha$ forest lines that were observed to be unclustered and possibly composed of primordial material (based in part on the lack of any detected metal line absorption) and the damped systems with their high column densities and large gas masses identified as the progenitors of spiral galaxies (e.g. Wolfe 1988). Such divisions, while still useful, are getting fuzzy as new results rapidly blur distinctions. Just one example (of many) is the detection of weak CIV absorption associated with some fraction of low column density Ly$-\alpha$ absorbers, systems that would have previously been securely identified as part of the primordial “forest clouds” (e.g. Cowie et al. 1995; Womble et al. 1996). Such wonderful observations require modification of the pre-HST-Keck picture of absorption line systems. How should we modify the old “standard picture”? I am not sure. 
But I do think that a second revolution is going to provide critical guidance in the development of the new more complex and detailed models. The second revolution is the progress theorists have made in leaving behind spherical cloud, slab, or mini-halo models and replacing them with the help of super computers to generate full hydrodynamic and SPH simulations of the evolution of the universe. Three groups are now able to not only generate simulations of large scale structures, but also simulated quasar absorption line spectra along numerous lines of sight through their simulations that can be compared to real observations (see Zhang, Y. et al. 1995; Hernquist et al. 1996 and Katz et al. 1996; Cen et al. 1994, Miralda-Escudé et al. 1996). The challenge ahead is to extract the best set of observables from both the simulations and the various data sets so that cosmological models might be discriminated against. Furthermore, enough simulations (and observational data!) need to be generated that the uniqueness of “good fit” models can be tested. In his introduction to the Hubble Deep Field project, Bob Williams ably described how HST has opened up the distant universe to our view. He speculated that one of the Hubble Space Telescope’s lasting and important legacies would be providing us our first “clear” images of the early history of the universe. In the future HST will also be remembered for making possible unique studies of the more evolved and nearby universe. WFPC-2 is providing exquisite images of galactic sources and nearby objects that reveal a wealth of previously unobservable detail (see for examples the contributions to these proceedings by Bally, Livio, Machetto, and O’Dell). But HST should also be remembered for the unique information provided by its spectrographs. By making it possible to study quasar absorption lines in the ultra-violet HST has already provided important data about the gaseous content of the universe at both low and high redshifts. This legacy will continue to grow as existing data is further analyzed and when STIS makes its appearance on HST. I thank Jill Bechtold, Simon Morris, Donna Womble and Wal Sargent, and the HST quasar absorption line key project team for providing data used in the figures and acknowledge valuable discussions with Jill Bechtold, Peter Jakobsen, David Weinberg, and the entire HST quasar absorption line survey key project team. Hans-Walter Rix and David Weinberg provided useful comments on an early version of this paper. I thank the meeting organizers, particularly the local organizers, for managing to host such an enjoyable conference given the logistical problems brought about by the Paris transit strike. Bahcall, J. N., Jannuzi, B. T., Schneider, D. P., Hartig, G. F., Bohlin, R., & Junkkarinen, B. 1991, , 377, L5 Bahcall, J. N., Jannuzi, B. T., Schneider, D. P., Hartig, G. F., & Green, R. F. 1992, , 397, 68 Bahcall, J. N., et al. 1993, , 87, 1 Bahcall, J. N., et al. 1996, , 451, 19 Bahcall, J. N., & Salpeter, E. E. 1965, , 142, 1677 Bahcall, J. N., & Salpeter, E. E. 1966, , 144, 847 Bahcall, J. N., & Spitzer, L. 1969, , 156, L63 Bechtold, J. 1994, , 91, 1 Bergeron, J., et al. 1994, , 436, 33 Cen, R., Miralda-Escudeé, J. Ostriker, J. P., & Rauch, M. 1994, , 437, L9 Davidsen, A. 1995, B.A.A.S., 186, 30.01 Greenstein, J. & Matthews, T. 1963, Nature, 197, 1041 Hernquist, L. Katz, N., Weinberg, D. H., & Miralda-Escudé 1996, , 457, L51 Jannuzi, B. T., & Hartig, G. F. 1994, in Calibrating Hubble Space Telescope, ed. J. C. Blades & S. J. 
Osmer (Baltimore: STScI), 215 Jannuzi, B. T. et al. 1996, in preparation Jakobsen, P., Boksenberg, A., Deharveng, J. M., Greenfield, P., Jedrzejewski, R., & Paresce, F. 1994, Nature, 370, 35 Jakobsen, P. & Tytler, D. 1996, in preparation Katz, N., Weinberg, D. H., Hernquist, L, & Miralda-Escudé, J. 1996, , 457, L57 Kulkarni, V. P., & Fall, S. M. 1993, , 413, L63 Laor, A. et al. 1994, , 420, 110 Laor, A. et al. 1995, , 99, 1 Lanzetta, K., Bowen, D. V., Tytler, D., & Webb, J. K. 1995, , 442, 538 Lu, L., Wolfe, A. M., & Turnshek, D. A. 1991, , 434, 493 Le Brun, V., Bergeron, J., & Boisseé, P. 1995, A&A, in press Mathur, S., Wilkes, B., Elvis, M., & Fiore, I. 1994, , 434, 493 Morris, S. L., et al. 1993, , 419, 524 Morris, S. L., Weymann, R. J., Savage, B. D., & Gilliland, R. L. 1991, , 377, L21 Miralda-Escudé, J., Cen, R. Y., Ostriker, J. P., & Rauch, M. 1996, , in press Press, W. H., Rybicki, G. B., & Schneider, D. P. 1993, , 414, 64 Reimers, D., & Vogel, S. 1993, A&A 276, L13 Savage, B. et al. 1993, , 413, 116 Sarajedini, V., Green, R. F., & Jannuzi, B. T. 1996, , in press Sargent, W. L. W., Young, P. J., Boksenberg, A. & Tytler, D. 1980, , 41 Schmidt, M. 1965, , 141, 1295 Schmidt, M. 1963, Nature, 197, 1040 Schneider, D. P., et al. 1993, , 87, 45 Songaila, A., Hu, E. M., & Cowie, L. L. 1995, Nature, 375, 124 Spinrad, H., et al. 1993, , 106, 1 Spitzer, L. 1956, , 124, 20 Stengler-Larrea, E., et al. 1995, , 444, 64 Storrie-Lombardi, L. J., McMahon, R. G., Irwin, M. J., & Hazard, C. 1994, , 427, L13 Stocke, J. T., Shull, J. M., Penton, S., Donahue, M., & Carilli, C. 1995, , 451 24 Weymann, R. et al. 1996, in preparation Wolfe, A. M. 1988, in QSO Absorption Lines, eds. J. C. Blades, D. Turnshek, & C. A. Norman, (Cambridge University Press: New York), 297 Womble, D., Sargent, W. L. W., & Lyons, R. S. 1996, to appear in “Cold Gas at High Redshift”, eds. M. Bremer, H. Rottgering, P. van der Werf, & C. Carrilli (Kluwer) Zhang, Y., Annino, P. & Norman, M. L. 1995, , 453, L57
--- abstract: 'In this paper we investigate algorithmic randomness on more general spaces than the Cantor space, namely computable metric spaces. To do this, we first develop a unified framework allowing computations with probability measures. We show that any computable metric space with a computable probability measure is isomorphic to the Cantor space in a computable and measure-theoretic sense. We show that any computable metric space admits a universal uniform randomness test (without further assumption).' address: - 'LIENS, Ecole Normale Supérieure, Paris. email: [email protected]' - 'LIENS, Ecole Normale Supérieure and CREA, Ecole Polytechnique, Paris. email: [email protected]' author: - Mathieu Hoyrup - Cristóbal Rojas bibliography: - 'bibliography.bib' title: | Computability of probability measures and\ Martin-Löf randomness over metric spaces --- [^1] Computability, computable metric spaces, computable measures, Kolmogorov complexity, algorithmic randomness, randomness tests. Introduction ============ The theory of algorithmic randomness begins with the definition of individual random infinite sequence introduced in 1966 by Martin-Löf [@MLof66]. Since then, many efforts have contributed to the development of this theory which is now well established and intensively studied, yet restricted to the Cantor space. In order to carry out an extension of this theory to more general infinite objects as encountered in most mathematical models of physical random phenomena, a necessary step is to understand what means for a probability measure on a general space to be computable (this is very simple expressed on the Cantor Space). Only then algorithmic randomness can be extended. The problem of computability of (Borel) probability measures over more general spaces has been investigated by several authors: by Edalat for compact spaces using domain-theory ([@Eda96]); by Weihrauch for the unit interval ([@Wei99]) and by Schröder for sequential topological spaces ([@Sch07]) both using representations; and by Gács for computable metric spaces ([@Gac05]). Probability measures can be seen from different points of view and those works develop, each in its own framework, the corresponding computability notions. Mainly, Borel probability measures can be regarded as points of a metric space, as valuations on open sets or as integration operators. We express the computability counterparts of these different views in a unified framework, and show them to be equivalent. Extensions of the algorithmic theory of randomness to general spaces have previously been proposed: on effective topological spaces by Hertling and Weihrauch (see [@HerWei98],[@HerWei03]) and on computable metric spaces by Gács (see [@Gac05]), both of them generalizing the notion of randomness tests and investigating the problem of the existence of a universal test. In [@HerWei03], to prove the existence of such a test, ad hoc computability conditions on the measure are required, which [*a posteriori*]{} turn out to be incompatible with the notion of computable measure. The second one ([@Gac05]), carrying the extension of Levin’s theory of randomness, considers *uniform tests* which are tests parametrized by measures. A computability condition on the basis of ideal balls (namely, recognizable Boolean inclusions) is needed to prove the existence of a universal uniform test. 
In this article, working in computable metric spaces with any probability measure, we consider both uniform and non-uniform tests and prove the following points: - uniformity and non-uniformity do not essentially differ, - the existence of a universal test is assured without any further condition. Another issue addressed in [@Gac05] is the characterization of randomness in terms of Kolmogorov Complexity (a central result in Cantor Space). There, this characterization is proved to hold (for a compact computable metric space $X$ with a computable measure) under the assumption that there exists a computable injective encoding of a full-measure subset of $X$ into binary sequences. In the real line for example, the base-two numeral system (or binary expansion) constitutes such encoding for the Lebesgue measure. This fact was already been (implicitly) used in the definition of random reals (reals with a random binary expansion, w.r.t the uniform measure). We introduce, for computable metric spaces with a computable measure, a notion of binary representation generalizing the base-two numeral system of the reals, and prove that: - such a binary representation always exists, - a point is random if and only if it has a unique binary expansion, which is random. Moreover, our notion of binary representation allows to identify any computable probability space with the Cantor space (in a computable-measure-theoretic sense). It provides a tool to directly transfer elements of algorithmic randomness theory from the Cantor space to any computable probability space. In particular, the characterization of randomness in terms of Kolmogorov complexity, even in a non-compact space, is a direct consequence of this. The way we handle computability on continuous spaces is largely inspired by representation theory. However, the main goal of that theory is to study, in general topological spaces, the way computability notions depend on the chosen representation. Since we focus only on Computable Metric Spaces (see [@Hem02] for instance) and *Enumerative Lattices* (introduced in setion 2.2) we shall consider only one *canonical* representation for each set, so we do not use representation theory in its general setting. Our study of measures and randomness, although restricted to computable metric spaces, involves computability notions on various sets which do not have natural metric structures. Fortunately, all these sets become enumerative lattices in a very natural way and the canonical representation provides in each case the right computability notions. In section \[basic\], we develop a language intended to express computability concepts, statements and proofs in a rigorous but still (we hope) transparent way. The structure of computable metric space is then recalled. In section \[section\_es\], we introduce the notion of enumerative lattices and present two important examples to be used in the paper. Section \[measures\] is devoted to the detailed study of computability on the set of probability measures. In section \[cps\] we define the notion of binary representation on any computable metric space with a computable measure and show how to construct such a representation. In section \[randomness\] we apply all this machinery to algorithmic randomness. Enumerative Lattices {#section_es} ==================== Definition ---------- We introduce a simple structure using basic order theory, on which a natural representation can be defined. 
The underlying ideas are those from domain theory, but the framework is lighter and (hence) less powerful. Actually, it is sufficient for the main purpose: proposition \[enum\_es\]. This will be applied in the last section on randomness. An *enumerative lattice* is a triple $(X,\leq,\P)$ where $(X,\leq)$ is a complete lattice and $\P\subseteq X$ is a numbered set such that every element $x$ of $X$ is the supremum of some subset of $\P$. We then define $\P_\downarrow(x):=\{p\in\P: p\leq x\}$ (note that $x=\sup \P_\downarrow(x)$). Any element of $X$ can be described by a sequence $\seq{p}$ of elements of $\P$. Note that the least element $\bot$ need not belong to $\P$: it can be described by the empty set, of which it is the supremum. \[representation\_es\] The canonical representation on an enumerative lattice $(X,\leq,\P)$ is the representation induced by the partial surjection $\delta_{\leq}(\seq{p})=\sup \seq{p}$ (where the sequence $\seq{p}$ may be empty). From here on, each set $X$ endowed with an enumerative structure $(X,\leq,\P)$ will be implicitly represented using the canonical representation. Hence, canonical constructivity notions derive directly from definition \[computability\_notions\]. Let us focus on an example: the identity function from $X$ to $X$ is computed by an algorithm outputting exactly what is provided by the oracle. Hence, when the oracle is empty, which describes $\bot$, the algorithm runs forever and outputs nothing, which is a description of $\bot$. $(\overline{\R},\leq,\Q)$ with $\overline{\R}=\R \cup \{-\infty,+\infty\}$: the constructive elements are the so-called *lower semi-computable* real numbers; $(2^\N,\subseteq,\{\mbox{finite sets}\})$: the constructive elements are the r.e sets from classical recursion theory; $(\{\bot,\top\},\leq,\{\top\})$ with $\bot<\top$. We recall that a real number $x$ is *computable* if both $x$ and $-x$ are lower semi-computable. Here is the main interest of enumerative lattices: \[enum\_es\] Let $(X,\leq,\P)$ be an enumerative lattice. There is an enumeration $(x_i)_{i\in\N}$ of all the constructive elements of $X$ such that $x_i$ is constructive uniformly in $i$. There is an enumeration $\FI$ of the r.e subsets of $\N$: for every r.e subset $E$ of $\N$, there is some $i$ such that $E=E_i:=\{\FI(\uple{i,n}):n\in\N\}$. Moreover, we can take $\FI$ such that whenever $E_i\neq\emptyset$ the function $\FI(\uple{i,.}):\N\to\N$ is total (this is a classical construction from recursion theory, see [@Rog87]). Then consider the associated algorithm $\A_\FI=\nu_\P\circ \FI$: for every constructive element $x$ there is some $i$ such that $\A_\FI(\uple{i,.}):\N\to \P$ enumerates $x$ ($\emptyset$ is an enumeration of $\bot$). \[scott\] Observe that on every enumerative lattice the Scott topology can be defined: a Scott open set $O$ is an upper subset ($x\in O, x\leq y \Rightarrow y\in O$) such that for each sequence $\seq{p}=(p_{n_i})_{i\in\N}$ such that $\sup \seq{p}\in O$, there is some $k$ such that $\sup\{p_{n_0},\ldots,p_{n_k}\}\in O$. If $Y$ and $Z$ have enumerative lattice structures, a function $f:Y\to Z$ is said to be Scott-continuous if it is monotonic and commutes with suprema of increasing sequences (one can prove that $f$ is Scott-continuous if and only if it is continuous for the Scott topologies on $Y$ and $Z$), and it is easy to see that a Scott-continuous function $f:Y\to Z$ such that all $f(\sup\{p_{n_1},\ldots,p_{n_k}\})$ are constructive uniformly in $\uple{n_1,\ldots,n_k}$, is in fact a constructive function. 
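As a concrete toy illustration of the first example above (this snippet is ours and not part of the formal development; the function names are hypothetical), a lower semi-computable real such as $\ln 2$ is described by a program enumerating rationals, and the only information available at any finite stage is the running supremum, which approaches the number from below:

```python
from fractions import Fraction
from itertools import islice

def enumerate_description():
    """A description of a point of the enumerative lattice (R-bar, <=, Q):
    an enumeration of rationals whose supremum is the described element.
    Toy example: ln(2) = 1 - 1/2 + 1/3 - ..., enumerating the even partial
    sums, which are rational lower bounds of ln(2)."""
    partial, sign, k = Fraction(0), 1, 1
    while True:
        partial += Fraction(sign, k)
        sign, k = -sign, k + 1
        if sign > 0:            # a negative term was just added: even partial sum
            yield partial

def running_supremum(description, n_terms):
    """All one can constructively extract at finite time: the supremum of the
    rationals enumerated so far (monotonically non-decreasing in time)."""
    sup = None
    for q in islice(description, n_terms):
        sup = q if sup is None else max(sup, q)
        yield sup

for approx in running_supremum(enumerate_description(), 5):
    print(float(approx))        # increasing lower bounds converging to ln(2)
```

Replacing the enumerated rationals by finite sets ordered by inclusion gives the second example, the r.e subsets of $\N$.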
Functions from a computable metric space to an enumerative lattice {#section_CXY} ------------------------------------------------------------------ Given a computable metric space $(X,d,\S)$ and an enumerative space $(Y,\leq,\P)$, we define the numbered set $\mathcal{F}$ of from $X$ to $Y$: $$f_\uple{i,j}(x)=\left\{ \begin{array}{rl} p_j & \mbox{ if $x\in B_i$} \\ \bot & \mbox{ otherwise} \end{array}\right.$$ We then define $\CXY$ as the closure of $\F$ under pointwise suprema, with the pointwise ordering $\sqsubseteq$. We have directly: $(\CXY,\sqsubseteq,\mathcal{F})$ is an enumerative lattice. **example:** the set $\Rplus=[0,+\infty)\cup\{+\infty\}$ has an enumerative lattice structure $(\Rplus,\leq,\Q^+)$, which induces the enumerative lattice $\C(X,\Rplus)$ of positive lower semi-continuous functions from $X$ to $\Rplus$. Its constructive elements are the positive *lower semi-computable* functions.\ We now show that the constructive elements of $\CXY$ are exactly the constructive functions from $X$ to $Y$. To each algorithm $\A$ we associate a constructive element of $\CXY$, enumerating a sequence of step functions: enumerate all $\uple{n,i_0,\ldots,i_k}$ with $d(s_{i_j},s_{i_{j+1}})<2^{-(j+1)}$ for all $j<k$ (prefix of a super-fast sequence). Keep only those for which the computation of $\A\orcl{i_0,\ldots,i_k,0,0,\ldots}(n)$ halts without trying to read beyond $i_k$. For each one, the latter computation outputs some element $p_l$: then output the step function $f_\uple{i,l}$ where $B_i=B(s_{i_k},2^{-k})$. We denote by $f_\A$ the supremum of the enumerated sequence of step functions. \[lemma\_extensional\] For all $x$ on which $\A$ is extensional, $f_\A(x)$ is the element of $Y$ described by $\A\orcl{x}$. let $y$ be the element described by $\A\orcl{x}$. For all $\uple{n,i_0,\ldots,i_k}$ for which some $f_\uple{i,j}$ is enumerated with $x\in B_i$, there is a fast sequence $\seq{s}$ converging to $x$ starting with $s_{i_0},\ldots,s_{i_k}$, for which $\A\orcl{\seq{s}}(n)=p_j$. Then $y\geq p_j=f_\uple{i,j}(x)$. Hence $y\geq f_\A(x)$. There is a super-fast sequence $\seq{s}$ converging to $x$: for all $n$, $\A\orcl{\seq{s}}(n)$ stops and outputs some $p_{j_n}$, so there is some $i_n$ with $x\in B_{i_n}$ such that $f_\uple{i_n,j_n}$ is enumerated. Hence, $y=\sup_n p_{j_n}=\sup f_\uple{i_n,j_n}(x)\leq f_\A(x)$. \[proposition\_constructive\_functions\] The constructive elements of $\CXY$ are exactly the (total) constructive functions from $X$ to $Y$. the supremum of a r.e subset $E$ of $\mathcal{F}$ is a total constructive function: semi-decide in dovetail $x\in B_i$ for all $f_\uple{i,j}\in E$, and enumerate $p_j$ each time a test stops. Given a total constructive function $f$, there is an algorithm $\A$ which on each $x\in X$ is extensional and describes $f(x)$, so $f=f_\A$. The proof even shows that the equivalence is constructive: the evaluation of any $f:X\to Y$ on any $x\in X$ can be achieved by an algorithm having access to any description of $f\in\C(X,Y)$, and any algorithm evaluating $f$ can be converted into an algorithm describing $f\in C(X,Y)$. More precisely: \[curry\]Let $X,X'$ be computable metric spaces and $Y$ be an enumerative lattice: The function $Eval:\C(X,Y)\times X \to Y$ is constructive, If a function $f:X'\times X\to Y$ is constructive then the function from $X'$ to $C(X,Y)$ mapping $x'\in X'$ to $f(x',.)$ is constructive. 
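The construction of $\CXY$ from step functions can also be mimicked concretely. The sketch below is our own illustration (not the paper's code), specialized to $X=\R$ with the usual metric and $Y=\Rplus$, whose bottom element is $0$; a partial description of an element of $\CXY$ is a finite list of step functions, each given by an ideal ball and a rational value, and evaluation takes their pointwise supremum:

```python
from fractions import Fraction

# A step function is (center, radius, value): it takes `value` on the open
# ideal ball B(center, radius) and the bottom element (here 0) elsewhere.
def step_value(step, x):
    center, radius, value = step
    return value if abs(x - center) < radius else Fraction(0)

def sup_of_steps(steps, x):
    """Pointwise supremum of finitely many step functions: a finite-stage
    approximation (from below) of the described function in C(X, Y)."""
    return max((step_value(s, x) for s in steps), default=Fraction(0))

# Example: two step functions whose supremum approximates a "bump" at 0.
steps = [
    (Fraction(0), Fraction(1), Fraction(1, 2)),   # value 1/2 on B(0, 1)
    (Fraction(0), Fraction(1, 2), Fraction(1)),   # value 1   on B(0, 1/2)
]
for x in [Fraction(0), Fraction(1, 4), Fraction(3, 4), Fraction(2)]:
    print(x, sup_of_steps(steps, x))
```

Enumerating further step functions can only raise the value at each point, mirroring how constructive elements of $\CXY$ arise as suprema of r.e sets of step functions.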
Lemma \[lemma\_extensional\] and proposition \[proposition\_constructive\_functions\] imply: \[corollary\_x\_constructive\] The $x$-constructive elements of $Y$ are exactly the images of $x$ by *total* constructive functions from $X$ to $Y$. This is a particular property of the enumerative lattice structure: a partial constructive function from some represented space to another cannot in general be extended to a total constructive one. The Open Subsets of a computable metric space {#opensets} --------------------------------------------- Following [@BraWei99], [@BraPre03], we define constructivity notions on the open subsets of a computable metric space. The topology $\tau$ induced by the metric has the numbered set $\B$ of ideal balls as a countable basis: any open set can then be described as a countable union of ideal balls. Actually $(\tau,\subseteq,\B)$ is an enumerative lattice (cf section \[section\_es\]), the supremum operator being union. The canonical representation on enumerative lattices (definition \[representation\_es\]) induces constructivity notions on $\tau$, a constructive open set being called a r.e open set. On the integers, it may be unnatural to show that some subset is recursively enumerable, and the equivalent notion of semi-decidable set is often used. This notion can be extended to subsets of a computable metric space, and it happens to be very useful in applications. We recall from section \[section\_es\] that $\{\bot,\top\}$ is an enumerative lattice, which induces canonically the enumerative lattice $\C(X,\{\bot,\top\})$. A subset $A$ of $X$ is said to be semi-decidable if its indicator function $1_A:X\to\{\bot,\top\}$ (mapping $x\in A$ to $\top$ and $x\notin A$ to $\bot$) is constructive. In other words, $A$ is semi-decidable if there is a recursive function $\FI$ such that for all $x\in X$ and every description $\seq{s}$ of $x$, $\FI\orcl{\seq{s}}$ stops if and only if $x\in A$. It is a well-known result (see [@BraPre03]) that the two notions are effectively equivalent: \[proposition\_semi-decidable\] A subset of $X$ is semi-decidable if and only if it is a r.e open set. Moreover, the enumerative lattices $(\tau,\subseteq,\B)$ and $\C(X,\{\bot,\top\})$ are constructively isomorphic. The isomorphism is the function $U\mapsto 1_U$ and its inverse $f\mapsto f^{-1}(\top)$. In other words, $f^{-1}(\top)$ is $f$-r.e uniformly in $f$ and $1_U$ is $U$-lower semi-computable uniformly in $U$. It implies in particular that: The intersection $(U,V)\mapsto U\cap V$ and union $(U,V)\mapsto U\cup V$ are constructive functions from $\tau\times \tau$ to $\tau$. For computable functions between computable metric spaces, we have the following useful characterization: \[functions\] Let $(X,d_X,S_X)$ and $(Y,d_Y,S_Y)$ be computable metric spaces. A function $f:X \vers Y$ is computable on $D\subseteq X$ if and only if the preimages of ideal balls are uniformly r.e open (in $D$) sets. That is, for all $i$, $f^{-1}(B_i)=U_i \cap D$ where $U_i$ is a r.e open set uniformly in $i$. We will use the following notion: A $\Pi_2^0$-set is a set of the form $\bigcap_n U_n$ where $(U_n)_n$ is a sequence of uniformly r.e open sets. Acknowledgments {#aknowledgments .unnumbered} ============== We would like to thank Stefano Galatolo, Peter Gács and Giuseppe Longo for useful comments and remarks. [^1]: partly supported by ANR Grant 05 2452 260
--- abstract: 'We use the largest open repository of public speaking—TED Talks—to predict the ratings of the online viewers. Our dataset contains over 2200 TED Talk transcripts (includes over 200 thousand sentences), audio features and the associated meta information including about 5.5 Million ratings from spontaneous visitors of the website. We propose three neural network architectures and compare with statistical machine learning. Our experiments reveal that it is possible to predict all the 14 different ratings with an average AUC of 0.83 using the transcripts and prosody features only. The dataset and the complete source code is available for further analysis.' author: - | Md Iftekhar Tanveer$^1$, Md Kamrul Hassan$^2$, Daniel Gildea$^3$, M. Ehsan Hoque$^4$\ University of Rochester\ [{$^1$itanveer,$^2$mhasan8,$^3$gildea,$^4$mehoque}@cs.rochester.edu]{} bibliography: - 'tanveer\_ted\_2019.bib' title: Predicting TED Talk Ratings from Language and Prosody --- Introduction ============ Imagine you are a teacher, or a corporate employee, or an entrepreneur. Which soft skill do you think would be the most valuable in your daily life? According to an article in Forbes [@Gallo2014a], 70% of employed Americans agree that public speaking skills are critical to their success at work. Yet, it is one of the most dreaded acts. Many people rate the fear of public speaking even higher than the fear of death [@Wallechinsky2005]. As a result, several commercial products are being available nowadays to come up with automated tutoring systems for training public speaking. Predicting the viewer ratings is an essential component for the systems capable of tutoring oral presentations. We propose a framework to predict the viewer ratings of TED talks from the transcript and prosody component of the speech. We use a dataset of $2233$ public speaking videos accompanying over $5$ million viewer ratings. The viewers rate each talk on 14 different categories. These are—*Beautiful*, *Confusing*, *Courageous*, *Fascinating*, *Funny*, *Informative*, *Ingenious*, *Inspiring*, *Jaw-Dropping*, *Long-winded*, *Obnoxious*, *OK*, *Persuasive*, and *Unconvincing*. Besides, the complete manual transcriptions of the talks are available. As a result, this dataset provides high-quality multimedia contents with rich ground truth annotations from a significantly large number of spontaneous viewers. We release the data and the complete source code for future scientific exploration [^1]. TED talks are edited production videos. They contain numerous changes in the camera angles, clips from the presentation slides, reactions from the audience, etc. To avoid these extraneous features and to focus only on the speech, we remove the visual elements from the data. We use only the transcripts and the processed audio features (pitch, loudness etc.) in our experiments. However, the links to the original TED talks are preserved in the dataset. Therefore, it is possible to retrieve the visual elements if necessary. We utilize three neural network architectures in our experiments. Our results show that the proposed solutions always outperform (AUC $0.83$) the baseline approaches (AUC $0.78$) for predicting the TED talk ratings. Background Research =================== An example of behavioral prediction research is to *automatically grade essays*, which has a long history [@valenti2003overview]. Recently, the use of deep neural network based solutions [@alikaniotis2016automatic; @taghipour2016neural] are becoming popular in this field. 
@farag2018neural proposed an adversarial approach for their task. @jin2018tdnn proposed a two-stage deep neural network based solution. Predicting *helpfulness* [@martin2014prediction; @yang2015semantic; @liu2017using; @chen2018cross] in the online reviews is another example of predicting human behavior. In general, behavioral prediction encompasses numerous areas such as predicting *outcomes in job interviews* [@Naim2016], *hirability* [@Nguyen2016], *presentation performance* [@Tanveer2015; @Chen2017a; @Tanveer2018] etc. Research has been conducted on predicting various aspects of the TED talks. @Chen2017 analyzed the TED Talks for humor detection. @Liu2017 analyzed the transcripts of the TED talks to predict audience engagement in the form of applause. @Haider2017 predicted user interest (engaging vs. non-engaging) from high-level visual features (e.g., camera angles) and audience applause. @Pappas:2013:SAU:2484028.2484116 proposed a sentiment-aware nearest neighbor model for a multimedia recommendation over the TED talks. @bertero2016long proposed a combination of Convolutional Neural Network (CNN) and Long-short Term Memory (LSTM) based framework to predict humor in the dialogues. @jaech2016phonological analyzed the detection performance of phonological puns using various natural language processing techniques. @weninger2013words predicted the TED talk ratings from the linguistic features of the transcripts. This work is similar to ours. However, they did not use neural networks and thus obtained similar performance to our baseline methods. ![Counts of all the 14 different rating categories (labels) in the dataset[]{data-label="fig:rating_counts"}](figures/totalrating_barplot){width="1\linewidth"} Dataset {#sec:DatContents} ======= The data for this study was gathered from the [ted.com](ted.com) website on November 15, 2017. We removed the talks published six months before the crawling date to make sure each talk has enough ratings for a robust analysis. More specifically, we filtered any talk that— was published less than 6 months prior to the crawling date, contained any of the following keywords: live music, dance, music, performance, entertainment, or, contained less than 450 words in the transcript. This left a total of 2231 talks in the dataset. We collect the manual transcriptions and the total view counts for each video. We also collect the “ratings” which is the counts of the viewer-annotated labels. The viewers can annotate a talk from a selection of 14 different labels provided in the website. The labels are not mutually exclusive. Viewers can choose at most 3 labels for each talk. If only one label is chosen, it is counted 3 times. We count the total number of annotations under each label as shown in Figure \[fig:rating\_counts\]. The ratings are treated as the ground truth about the audience perception. A summary of the dataset characteristics is shown in Table \[tab:datasize\]. **Property** **Quantity** ------------------------------- -------------- **Number of talks** 2,231 **Total length of all talks** 513.49 Hours **Total number of ratings** 5,574,444 **Minimum number of ratings** 88 **Average ratings per talk** 2498.6 **Total word count** 5,489,628 **Total sentence count** 295,338 : Dataset Properties[]{data-label="tab:datasize"} The longer a TED talk remains in the web, the more views it gets. Large number of views also result in a large number of annotations. As a result, older TED talks contain more annotations per rating category. 
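Before turning to the normalization that this age/view bias motivates, here is a minimal pandas sketch of the filtering step described above. It is ours, not the released pipeline: the column names (`published`, `description`, `transcript`) and the choice of which field is matched against the keyword list are assumptions.

```python
import pandas as pd

CRAWL_DATE = pd.Timestamp("2017-11-15")
EXCLUDE = ["live music", "dance", "music", "performance", "entertainment"]

def filter_talks(df: pd.DataFrame) -> pd.DataFrame:
    """Keep talks that (i) are at least 6 months old at crawl time,
    (ii) do not mention any performance-type keyword, and
    (iii) have transcripts longer than 450 words."""
    old_enough = df["published"] <= CRAWL_DATE - pd.DateOffset(months=6)
    no_keyword = ~df["description"].str.lower().apply(
        lambda text: any(kw in text for kw in EXCLUDE))
    long_enough = df["transcript"].str.split().str.len() > 450
    return df[old_enough & no_keyword & long_enough]
```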
However, an old speech does not necessarily imply better quality. We normalize the rating counts of each individual talk as in the following equation: $$\label{eq:scaled_score} r_{i,\text{scaled}} = \frac{r_i}{\sum_i{r_i}}$$ Where $r_i$ represents the count of the $i^{\text{th}}$ label in a talk. Let us assume that in a talk, $f_i$ fractions of the total viewers annotate for the rating category $i$. Then the scaled rating, $r_{i,\text{scaled}}$ becomes $\frac{f_iV}{\sum_i{f_iV}}=\frac{Vf_i}{V\sum_i{f_i}}$. This process removes the effect of *Total Views*, $V$ as evident in Table \[tab:corrcoef\]. Scaling the rating counts removes the effects of *Total Views* by reducing the average correlation from $0.56$ to $-0.03$. This also removes the effect of the *Age of the Talks* by reducing the average correlation from $0.15$ to $0.06$. Therefore, removing $V$ reduces the effect of the *Age of the Talks* in the ratings. ------------- ------------- ----------- ------------- ----------- **noscale** **scale** **noscale** **scale** **Beaut.** 0.52 0.01 0.03 -0.14 **Conf.** 0.39 -0.12 0.27 0.20 **Cour.** 0.52 -0.003 0.01 0.15 **Fasc.** 0.78 0.05 0.15 0.06 **Funny** 0.57 0.14 0.10 0.10 **Info.** 0.76 -0.08 0.07 -0.19 **Ingen.** 0.59 -0.06 0.18 0.10 **Insp.** 0.79 0.1 0.05 -0.15 **Jaw-Dr.** 0.51 0.1 0.18 0.23 **Long.** 0.44 -0.17 0.36 0.31 **Obnox.** 0.27 -0.11 0.19 0.17 **OK** 0.72 -0.16 0.21 0.14 **Pers.** 0.72 -0.01 0.12 0.02 **Unconv.** 0.29 -0.14 0.18 0.15 **Avg.** 0.56 -0.03 0.15 0.06 ------------- ------------- ----------- ------------- ----------- : Correlation coefficients of each category of the ratings with the *Total Views* and the *“Age” of Talks*[]{data-label="tab:corrcoef"} In our experiments, we scale and binarize the rating counts by thresholding over the median value which results in a $0$ and $1$ class for each category of the ratings. The dataset contains the complete original information as well as the scaled and binarized versions of the ratings. Network Architectures ===================== We implemented three neural networks for comparison of their performance with the statistical machine learning techniques in predicting the viewer ratings. The architectures of these models are described in the following subsections. All these models are multi-label binary classifiers designed to capture sentence-wise patterns in the TED talks that contribute to the prediction of the rating labels. Word Sequence Model ------------------- A pictorial illustration of this model is shown in Figure \[fig:model\_word\_seq\]. Each sentence, $s_j$ in the transcript is represented by a sequence of words-vectors[^2], $\mathbf{w}_1,\mathbf{w}_2,\mathbf{w}_3,\dots,\mathbf{w}_{n_j}$. Here, each $\mathbf{w}$ represents the pre-trained, 300-dimensional GLOVE word vectors [@pennington2014glove] corresponding to the words in the sentence. We use a Long-Short-Term-Memory (LSTM) [@Hochreiter1997] neural network to obtain an embedding vector, $\mathbf{h}_{s_j}$, for the $j^{\text{th}}$ sentence in the talk transcript. These vectors ($\mathbf{h}_{s_j}$) are averaged and passed through a feed-forward network to produce a 14-dimensional output vector corresponding to each category of the ratings. An element-wise sigmoid ($\sigma(x) = \frac{1}{1+e^{-x}}$) activation function is applied to the output vector. 
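As a complement to the formal description that follows, here is a minimal pyTorch-style sketch of this architecture. The sketch is ours: the class name and the hidden size of 128 are hypothetical choices, and the inputs are assumed to be pre-trained 300-dimensional GLOVE vectors per word.

```python
import torch
import torch.nn as nn

class WordSequenceModel(nn.Module):
    """Sentence-wise LSTM over GLOVE word vectors, averaged over the talk,
    followed by a feed-forward layer and an element-wise sigmoid."""
    def __init__(self, word_dim=300, hidden_dim=128, n_ratings=14):
        super().__init__()
        self.lstm = nn.LSTM(word_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_ratings)

    def forward(self, sentences):
        # sentences: list of tensors, each of shape (n_words_j, word_dim)
        sent_embeddings = []
        for s in sentences:
            _, (h_last, _) = self.lstm(s.unsqueeze(0))  # last hidden state
            sent_embeddings.append(h_last.squeeze())
        h = torch.stack(sent_embeddings).mean(dim=0)    # average over sentences
        return torch.sigmoid(self.out(h))               # 14 rating probabilities
```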
The mathematical description of the model can be given using the following equations: $$\begin{aligned} &\mathbf{h}_{s_j} = \text{LSTM}(\mathbf{w}_1,\mathbf{w}_2,\mathbf{w}_3,\dots,\mathbf{w}_{n_j})\\ &\mathbf{h} = \frac{1}{N}\sum_{j=1}^N\mathbf{h}_{s_j}\\ &\mathbf{r} = \sigma(\mathbf{W}\mathbf{h} + \mathbf{b}_r)\end{aligned}$$ Here, $\mathbf{h}_{s_j}$ represents the the last recurrent state for the sentence $j$. $N$ represents the total number of the sentences in the transcript. We use zero vectors to initialize the memory cell ($\mathbf{c}_0$) and the hidden state ($\mathbf{h}_0$). ![An illustration of the Word Sequence Model[]{data-label="fig:model_word_seq"}](figures/model1){width="1\linewidth"} Dependency Tree-based Model {#sec:deptree_model} --------------------------- We are interested to represent the sentences as hierarchical trees of dependent words. We use a freely available dependency parser named SyntaxNet[^3] [@Andor2016] to extract the dependency tree corresponding to each sentence. The child-sum TreeLSTM [@Tai2015] is used to process the dependency trees. As shown in Figure \[fig:model\_dep\_tree\], the parts-of-speech and dependency types of the words are used in addition to the GLOVE word vectors. We concatenate a parts-of-speech embedding ($\mathbf{p}_i$) and a dependency type embedding ($\mathbf{d}_i$) with the word vectors. These embeddings are learned through back-propagation along with other free parameters of the network. The complete mathematical description of this model is as follows: $$\begin{aligned} &\mathbf{x'}_t = [\mathbf{w}'_t, \mathbf{p}'_t, \mathbf{d}'_t]\label{eq:concat}\\ &\mathbf{\tilde{h}}_t = \sum_{k\in C(t)}\mathbf{h}_k\label{eq:treelstm_first}\\ &\mathbf{i}_t =\sigma(\mathbf{U}_i\mathbf{x}_t+\mathbf{V}_i\mathbf{\tilde{h}}_{t} + \mathbf{b}_i)\label{eq:Vstart}\\ &\mathbf{f}_{tk}=\sigma(\mathbf{U}_f\mathbf{x}_t+\mathbf{V}_f\mathbf{h}_k + \mathbf{b}_f)\\ &\mathbf{u}_t =\tanh(\mathbf{U}_u\mathbf{x}_t+\mathbf{V}_u\mathbf{\tilde{h}}_{t} + \mathbf{b}_u)\\ &\mathbf{o}_t =\sigma(\mathbf{U}_o\mathbf{x}_t+\mathbf{V}_o\mathbf{\tilde{h}}_t + \mathbf{b}_o)\label{eq:Vend}\\ &\mathbf{c}_t =\mathbf{f}_{tk}\odot \mathbf{c}_k + \mathbf{i}_t\odot\mathbf{u}_t\\ &\mathbf{h}_t = \mathbf{o}_t\odot\tanh(\mathbf{c}_t)\label{eq:treelstm_last} \\ &\mathbf{h}_{s_j} = \mathbf{h}_{ROOT}\\ &\mathbf{h} = \frac{1}{N}\sum_{j=1}^N\mathbf{h}_{s_j}\\ &\mathbf{r} = \sigma(\mathbf{W}\mathbf{h} + \mathbf{b}_r)\end{aligned}$$ Here, equation  refers to the fact that the input to the treeLSTM nodes are constructed by concatenating the pre-trained GLOVE word-vectors with the embeddings of the parts of speech and the dependency type of a specific word. $C(t)$ represents the set of all the children of node $t$. The parent-child relation of the treeLSTM nodes come from the dependency tree. Notably, the memory cell and hidden states flow hierarchically from the children to the parent. Each node contains a forget gate ($\mathbf{f}$) for each child. Zero vectors are used as the children of the leaf nodes and the sentence embedding vector is obtained from the root node. ![An illustration of the Dependency Tree-based Model[]{data-label="fig:model_dep_tree"}](figures/model2){width="1\linewidth"} Capturing the Patterns in Prosody --------------------------------- We align the TED talk audio with its corresponding transcripts using forced alignment method [^4]. PRAAT [^5] is used to extract the pitch, loudness, and first three formants (frequency and bandwidth) sampled at a rate of 10Hz. 
We normalize these signals by subtracting the mean and dividing by the standard deviation over the whole video. These signals are then sentence-wise cropped based on the alignment data. We pad all the sentence-wise signal-clips to a length equal to the longest sentence in the transcript. This process constructs a signal of length $M$; where $M$ is the number of samples in the signal corresponding to the longest sentence. Each sample in the signal is an $8$ dimensional vector. We use one dimensional Convolutional Neural Network (CNN) [@LeCun2015] to extract the patterns within the pitch, loudness and formant as follows: $$\begin{aligned} \mathbf{S}_{\text{out}}[f_{o},m] = &\sum_{f_i=1}^{F_{\text{in}}}\sum_{k=1}^K \mathbf{W}_F[f_o,f_i,k] \mathbf{S}_{\text{in}}[f_i,m-k]\\& + \mathbf{b}[f_o] \\ &\forall f_{o}\in{1,2, ..., F_{\text{out}}}\\ &\forall m\in{1,2, ..., M}\end{aligned}$$ Here $\mathbf{S_{\text{in}}}$ is the input signal, $\mathbf{S_{\text{out}}}$ is the output signal, $\mathbf{W}_F$ is the filter weights, $K$ is the receptive fields of the filters, $F_{\text{in}}$ is the dimension of the input signal, $F_{\text{out}}$ is the number of filters and $M$ is the signal length. $\mathbf{b}$ is a bias term. Both $\mathbf{W}_F$ and $\mathbf{b}$ are learned in training time through back-propagation. We use one dimensional Convolutional Neural Network (CNN) [@LeCun2015] to extract the patterns within the *prosody signal*—i.e. pitch, loudness, and the first three formants computed over small segments of the audio. The network consists of four 1D convolutional layers, each having a receptive field of 3. We use element-wise RELU ($\mathcal{R}(x) = \max(0,x)$) activation function to the output of each convolution layer. The lowest (closest to the input signal) two layers consist of 16 filters, and the upper two layers have 32 and 64 filters respectively. The second and third convolution layers are followed by max-pool layers of window size 2. The final convolution layer is followed by a max-pool layer having the window size equal to the length of the signal. Thus, the CNN outputs a 64-dimensional vector. This vector is concatenated with the sentence embedding vector obtained from the dependency tree-based model discussed in section \[sec:deptree\_model\]. The concatenated vector is passed through two layers of fully connected networks to produce the probabilities of the ratings. Training the Networks ===================== We implemented the networks in pyTorch [^6]. Details of the training procedure are described in the following subsections. Optimization {#sec:optimization} ------------ We use multi-label Binary Cross-Entropy loss as defined below for the backpropagation of the gradients: $$\ell(\mathbf{r},\mathbf{y}) = -\frac{1}{n}\sum_{i=1}^{n}(y_i\log(r_i) + (1-y_i)\log(1-r_i))$$ Here $\mathbf{r}$ is the model output and $\mathbf{y}$ is the ground truth label obtained from data. $r_i$ and $y_i$ represent the $i^{\text{th}}$ element of $\mathbf{r}$ and $\mathbf{y}$. $n=14$ represents the number of the rating categories. We randomly split the training dataset into 9:1 ratio and name them training and development subsets respectively. The networks are trained over the training subset. We use the loss in the development subset to tune the hyper-parameters, to adjust the learning rate and regularization strength, and to select the best model for final evaluation, etc. The training loop is terminated when the loss over the development subset saturates. 
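A bare-bones sketch of this training procedure is given below, using the Adagrad setting reported later in this section. The sketch is our own simplification: `model`, `train_loader` and `dev_loader` are assumed to exist, targets are float tensors of shape (batch, 14), and the saturation criterion is reduced to a simple patience counter.

```python
import copy
import torch
import torch.nn as nn

def train(model, train_loader, dev_loader, lr=0.01, patience=5):
    criterion = nn.BCELoss()                 # multi-label binary cross-entropy
    optimizer = torch.optim.Adagrad(model.parameters(), lr=lr)
    best_dev, best_state, stale = float("inf"), None, 0
    while stale < patience:                  # stop when dev loss saturates
        model.train()
        for x, y in train_loader:            # y: (batch, 14) binarized ratings
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        model.eval()
        with torch.no_grad():
            dev_loss = sum(criterion(model(x), y).item() for x, y in dev_loader)
        if dev_loss < best_dev:              # keep only the best parameters
            best_dev = dev_loss
            best_state = copy.deepcopy(model.state_dict())
            stale = 0
        else:
            stale += 1
    model.load_state_dict(best_state)
    return model
```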
The model parameters are saved only when the loss over the development subset is lower than any previous iteration. We experiment with two optimization algorithms: Adam [@Kingma2014] and Adagrad [@Duchi2011]. The learning rate is varied in an exponential range from $0.0001$ to $1$. The optimization algorithms are evaluated with mini-batches of size $10$, $30$, and $50$. We obtain the best results using Adagrad with learning rate $0.01$ and in Adam with a learning rate of $0.00066$. The training loop ran for $50$ iterations which mostly saturates the development set loss. We conducted around $100$ experiments with various parameters. Experiments usually take about 48 hours to make 50 iterations over the dataset when running in an Nvidia K20 GPU. ![Effect of Weight-Drop regularization on the training and development subset loss[]{data-label="fig:weight-drop"}](figures/regularization_effect){width="0.7\linewidth"} Regularization -------------- Neural networks are often regularized using Dropout [@Hinton2012] to prevent overfitting—where the elements of a layer’s output are set to zero with a probability $p$ during the training time. A naive application of dropout to LSTM’s hidden state disrupts its ability to retain long-term memory. We resolve this issue using the weight-dropping technique proposed by @Merity2017. In this technique, instead of applying the dropout operation between time-steps, it is applied to the hidden-to-hidden weight matrices [@Wan2013]. The dropout probability, $p$ is set to $0.2$. Effect of the regularization is shown in Figure \[fig:weight-drop\]. Baseline Methods ================ We compare the performance of the neural network models against several popular statistical classifiers. Feature Extraction {#sec:feat_extraction} ------------------ We use language, prosody, and narrative trajectory features that are used in similar tasks in the relevant literature. ### Language Features We use a psycholinguistic lexicon named “Linguist Inquiry Word Count” (LIWC) [@Pennebaker-liwc01] for extracting language features. We count the total number of words under the 64 word categories provided in the LIWC lexicon and normalize these counts by the total number of words in the transcript. The LIWC categories include words describing function word categories (e.g., articles, quantifiers, pronouns), various content categories (e.g., anxiety, insight), positive emotions (e.g., happy, kind), negative emotions (e.g., sad, angry), etc. These features have been used in several related works [@Ranganath2009; @Zechner2009; @Naim2016; @Liu2017]. ### Prosodic Features {#subsec:audio_feat_extraction} We extract several summary statistics from the pitch, loudness, and the first three formants extracted from the audio. These statistics are min, max, mean, 25th percentile, median, 75th percentile, standard deviation, kurtosis, and skewness. Additionally, we collect pause duration, the percentage of unvoiced frames, jitter (irregularities in pitch), shimmer (irregularities in vocal intensity), and percentage of breaks in speech. These features are used in several related works as well [@Soman2009; @Naim2016]. ### Narrative Trajectory {#subsec:audio_feat_extraction} Tanveer et al.  proposed a set of features that can capture the “narrative trajectory” of the TED Talks. 
These features are constructed by extracting sentence-wise emotion (anger, disgust, fear, joy, or sadness), language (analytical, confidence, and tentative) and personality (openness, conscientiousness, extraversion, emotional range, and agreeableness) scores from a standard machine learning toolbox and then interpolating the sentence-wise scores to a signal of fixed size (e.g., 100 samples). These signals form several interesting clusters that can capture patterns of storytelling. The summary statistics of these signals are found to be good predictors of the TED talk ratings as well. We use the min, max, mean, standard deviation, kurtosis, and skewness of these signals. We use IBM Tone Analyzer [^7] to extract the sentence-wise scores. Baseline Classifiers -------------------- We use the Linear Support Vector Machine (SVM) [@Vapnik1964] and LASSO [@Tibshirani1996] as the baseline classifiers. In SVM, the following objective function is minimized: $$\begin{aligned} & \underset{\mathbf{w}, \xi_i, b}{\text{minimize}} & & \frac{1}{2} \| \mathbf{w} \| + C \sum_{i = 1}^N \xi_i\\ & \text{subject to} & & y_i \left(\mathbf{w}' \mathbf{x}_i - b\right) \geq 1 - \xi_i, \ \forall i \\ &&& \xi_i, \geq 0, \ \forall i \\ \end{aligned}$$ Where $\mathbf{w}$ is the weight vector and $b$ the bias term. $\|\mathbf{w}\|$ refers to the $\ell2$ norm of the vector $\mathbf{w}$. In these equations, we assume that the “higher than median” and “lower than median” classes are represented by $1$ and $-1$ values respectively. We adapt the original Lasso [@Tibshirani1996] regression model for classification purposes. It is equivalent to Logistic regression with $\ell1$ norm regularization. It works by solving the following optimization problem: $$\begin{aligned} &\underset{\mathbf{w},b}{\text{minimize}} \quad \| \mathbf{w} \|_1 + k\\ & k=C\sum_{i=1}^N \log\left(\exp\left(-y_i\left(\mathbf{w}' \mathbf{x}_i + b \right)\right)+1\right) \\ \end{aligned}$$ where $C > 0$ is the inverse of the regularization strength, and $\| \mathbf{w} \|_1 = \sum_{j=1}^d |w_j|$ is the $\ell1$ norm of $\mathbf{w}$. The $\ell1$ norm regularization is known to push the coefficients of the irrelevant features down to zero, thus reducing the predictor variance. Finally, the Ridge regression is essentially same as logistic regression with $\ell2$ regularization. The objective function is as below: $$\begin{aligned} &\underset{\mathbf{w},b}{\text{minimize}} \quad \frac{1}{2}\| \mathbf{w} \|+k \\ & k=C\sum_{i=1}^N \log\left(\exp\left(-y_i\left(\mathbf{w}' \mathbf{x}_i + b \right)\right)+1\right) \end{aligned}$$ **Model** ---------------- ------ ------ ------ ------ **Word Seq** 0.83 0.76 0.76 0.76 **D.Tree** 0.83 0.77 0.77 0.77 **D.Tree+Pr.** 0.83 0.72 0.75 0.73 **** 0.76 0.70 0.68 0.68 **LinearSVM** 0.78 0.71 0.71 0.71 **Ridge** 0.78 0.71 0.71 0.71 **LASSO** 0.77 0.70 0.70 0.70 **Weninger** – 0.71 – – : Average of several prediction performance metrics over 14 different ratings of TED talks[]{data-label="tab:avg_metrics"} Experimental Results {#sec:exp_res} ==================== We allocated $150$ randomly sampled TED talks from the dataset as a reserved test subset. All the results shown in this section are computed over this test subset. We evaluate the models by computing the values of four performance metrics—Area Under the ROC Curve (AUC), Precision, Recall, and F-score for all the 14 categories of the ratings. We compute averages of these metrics over all the rating categories that are shown in Table \[tab:avg\_metrics\]. 
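For concreteness, the per-category metrics and their averages can be computed as in the following sketch. It is ours, using scikit-learn; `y_true` and `y_score` are assumed to be arrays of shape `(n_talks, 14)` holding the binarized labels and predicted probabilities, and the 0.5 decision threshold is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_fscore_support

def rating_metrics(y_true, y_score, threshold=0.5):
    """Per-category AUC/precision/recall/F-score, then their average over
    the 14 rating categories as in Table [tab:avg_metrics]."""
    y_pred = (y_score >= threshold).astype(int)
    rows = []
    for k in range(y_true.shape[1]):
        auc = roc_auc_score(y_true[:, k], y_score[:, k])
        p, r, f, _ = precision_recall_fscore_support(
            y_true[:, k], y_pred[:, k], average="binary")
        rows.append((auc, p, r, f))
    rows = np.array(rows)
    return rows, rows.mean(axis=0)
```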
The first three rows represent the average performances of the Word Sequence model, the Dependency Tree based model, and the Dependency Tree model combined with CNN respectively. It is evident from the table that the neural networks outperform the baseline models in all the four metrics. These models were trained and tested on the scaled rating counts ($R_\text{scaled}$). We also trained and tested the dependency tree model with the unscaled rating counts ($4^\text{rd}$ row in Table \[tab:avg\_metrics\]). Notably, the networks perform worse for predicting the unscaled ratings. We believe this is due to the fact that unscaled ratings are biased with the amount of time the TED talks remain online. This mixture of additional information makes it difficult for the neural networks to predict the ratings from transcript and prosody only. We are surprised that adding the prosody does not improve the prediction performance. We think it is because TED Talks are highly rehearsed public speeches. It is likely that the change of prosody in most of the talks are acted, and therefore, it does not carry much information in addition to the talk transcripts. We believe it is a global artifact of the TED talk dataset. **Ratings** **** **** **** ------------------ ---------- ---------- ---------- **Beautiful** 0.88 **0.91** 0.80 **Confusing** 0.70 **0.74** 0.56 **Courageous** 0.84 **0.89** 0.79 **Fascinating** 0.75 0.76 **0.80** **Funny** **0.78** 0.77 0.76 **Informative** 0.81 **0.83** 0.78 **Ingenious** 0.80 **0.81** 0.74 **Inspiring** 0.72 **0.77** 0.72 **Jaw-dropping** 0.68 **0.72** **0.72** **Longwinded** **0.73** 0.70 0.63 **Obnoxious** **0.64** **0.64** 0.61 **OK** **0.73** 0.70 0.61 **Persuasive** 0.83 **0.84** 0.78 **Unconvincing** **0.70** **0.70** 0.61 **Average** 0.76 0.77 0.71 : Recalls for various rating categories. The reason we choose recall is for making comparison with the results reported by @weninger2013words.[]{data-label="tab:ratingwise_metric"} Table \[tab:ratingwise\_metric\] provides a clearer picture how the dependency tree based neural network performs better than the word sequence neural network. The former achieves a higher recall for most of the rating categories ($9$ out of $14$). Only in three cases (*Funny*, *Longwinded*, and *OK*) the word sequence model achieved higher performance than the dependency tree model. Both these models performed equally well for the *Obnoxious* and *Unconvincing* rating category. It is important to realize that the dependency trees we extracted were not manually annotated. They were extracted using SyntaxNet which itself introduces some error. Andor et al.  described their model accuracy to be approximately $0.95$. We expected to notice an impact of this error in the results. However, the results show that the additional information (Parts of Speech tags and the dependency structure) benefited the prediction performance despite the error in annotating the dependency trees. We think the hierarchical tree structure resolves many ambiguities in the sentence semantics which is not available to the word sequence model. We also compare our results with @weninger2013words. However, this comparison is just an approximation because the number of TED talks are different in our experiment than in @weninger2013words. The results show that the neural network models perform better for almost every rating category except *Fascinating* and *Obnoxious*. 
A neural network is a universal function approximator [@cybenko1989; @hornik1991] and thus expected to perform better. Yet we think another reason for its excel is its ability to process a faithful representation of the transcripts. In the baseline methods, the transcripts are provided as words without any order. In the neural counterparts, however, it is possible to maintain a more natural representation of the words—either the sequence, or the syntactic relationship among them through a dependency tree. In addition, neural networks intrinsically capture the correlations among the rating categories. The baseline methods, on the other hand, considers each category as a separate classification problem. These are a few reasons why neural networks are a better choice for the TED talk prediction task. Conclusion ========== In summary, we presented neural network architectures to predict the TED talk ratings from the speech transcripts and prosody. We provide domain specific information such as psycho-linguistic language features, prosody and narrative trajectory features to the baseline classifiers. The neural networks, on the other hand, were designed to consume mostly the raw data with a few high-level assumptions on human cognition. The neural network architectures provide state of the art prediction performance, outperforming the competitive baseline method in the literature. The average AUC of the networks are $0.83$ compared to the baseline method’s AUC of $0.78$. The results also show that dependency tree based networks perform better in predicting the TED talk ratings. Furthermore, inclusion of prosody does not help as much as we expect it to be. The exact reason why this happens, however, remains to be explored in the future. The dataset and the complete source code of this work will be freely available to the scientific community for further evaluation.[^8] [^1]: Link to source code blinded for author anonymity [^2]: In this paper, we represent the column vectors as lowercase boldface letters; matrices or higher dimensional tensors as uppercase boldface letters and scalars as lowercase regular letters. We use a prime symbol ($'$) to represent the transpose operation. [^3]: https://opensource.google.com/projects/syntaxnet [^4]: https://github.com/JoFrhwld/FAVE/wiki/FAVE-align [^5]: http://www.fon.hum.uva.nl/praat/ [^6]: pytorch.org [^7]: https://www.ibm.com/watson/services/tone-analyzer/ [^8]: link blinded due to author anonymity
--- abstract: 'We study quantum systems on a discrete bounded lattice (lattice billiards). The statistical properties of their spectra show universal features related to the regular or chaotic character of their classical continuum counterparts. However, the decay dynamics of the open systems appear very different from the continuum case, their properties being dominated by the states in the band center. We identify a class of states (“lattice scars”) that survive for infinite times in dissipative systems and that are degenerate at the center of the band. We provide analytical arguments for their existence in any bipartite lattice, and give a formula to determine their number. These states should be relevant to quantum transport in discrete systems, and we discuss how to observe them using photonic waveguides, cold atoms in optical lattices, and quantum circuits.' address: - '$^1$ Instituto de Física Fundamental, IFF-CSIC, Serrano 113b, Madrid 28006, Spain' - '$^2$ Instituto de Estructura de la Materia, IEM-CSIC, Serrano 123, Madrid 28006, Spain' author: - 'Víctor Fernández-Hurtado$^1$, Jordi Mur-Petit$^2$, Juan José García-Ripoll$^1$ and Rafael A. Molina$^2$' bibliography: - 'scars.bib' title: 'Lattice scars: Surviving in an open discrete billiard' --- Introduction ============ Understanding and controlling quantum transport is essential for many different quantum technologies. By now, it has become clear than different quantum systems with very different sizes and time scales follow the same guiding principles as far as transport properties go [@Datta_Book; @Nazarov_Book]. One of these guiding principles is that the statistical properties of quantum transport and quantum decay are chiefly determined by the chaotic or regular properties of the classical analog. One of the most remarkable achievements in classical mechanics in the last century has been the establishment that the time evolution of certain dynamical systems is [*chaotic*]{}, i.e., it features an extreme sensitivity to initial conditions, usually portrayed by their Lyapunov exponents, which are a measure of an exponential divergence of trajectories in phase space. Even though the concept of trajectory no longer holds in quantum physics, [ *quantum chaos*]{}, the quantum-mechanical study of classically chaotic systems, has also flourished [@stockmann1999book]. Results of quantum chaos have been particularly remarkable in the study of billiards: domains wherein a particle moves ballistically except for elastic collisions with the boundary. One of the most surprising results in this field was the discovery by Heller [@heller1984] that the probability amplitude of certain wavefunctions—called “scarred wavefunctions” or, simply, “scars”—in a chaotic two-dimensional billiard is not uniform but concentrates along the trajectory of classical periodic orbits. This effect due to wave interference has now been observed in a number of systems, from microwaves in cavities [@sridar1991; @stein1992], to electrons in quantum dots [@marcus1992; @akis1997], to optical fibers [@michel2012]. The relevance on quantum transport of quantum-chaotic effects in general, and scarred states in particular, is widely supported by theory and experimental evidence [@beenakker1997rmp; @alhassid2000]. The chaotic or regular properties of the dynamics in the closed system have important consequences when the system is opened. 
In particular, the decay properties of the particles inside a leaking billiard depend strongly on the system being regular or chaotic, on the presence of marginally stable periodic orbits (*bouncing balls*) [@bauer1990], and on Sieber-Richter “paired” trajectories [@richter2002; @waltner2008]. With the development of atomic cooling and trapping techniques, beautiful experiments could be performed exploring different issues of quantum chaos [@raizen2011]. The group of Nir Davidson confined rubidium atoms to a billiard realized by rapidly scanning a blue-detuned laser beam following the shape of the desired domain [@milner2001; @friedman2001]. Opening a hole in the billiard, the number of atoms trapped as a function of time followed an exponential decay for chaotic domains, and a power-law decay for domains supporting stable trajectories [@milner2001]. They also showed the controlled appearance of islands of stability when the walls of chaotic billiards are softened [@kaplan2001] in agreement with theoretical arguments [@alt1996]. However, no scarred states were observed. In this work, we study regular and chaotic billiards where the particle motion is restricted to a square lattice of discrete points. This model is adequate to describe several systems, including ultra-cold atoms trapped in optical lattices [@bloch2012nphysrev; @lewenstein2012book], cf. Fig. \[fig:sketch\]a, as well as the propagation of light along photonic waveguides [@politi2008; @obrien2009], Fig. \[fig:sketch\]b. We consider billiards that in a continuum description feature regular and chaotic properties. By studying the statistics of their energy levels, we show that these behaviors are also present for the discrete case. Furthermore, we study the quantum dynamics in dissipative billiards with a leak localized on the border, and show that the population in [*both*]{} kinds of systems follows a similar trend: an initial exponential decay, followed by a power-law decay, until on occasions a final non-zero population is trapped in the system. We explain this unexpected behavior by the appearance of “lattice scars”: scarred wavefunctions supported on the lattice structure and whose energy is at the band center, $E=0$. Our numerical findings are supported with analytical arguments, which lay the necessary conditions for the appearance of these states, thus pointing a route for controlling the dissipation in finite lattice systems. Finally, we discuss the observability of this effect in several different atomic, photonic, and solid-state setups. ![ From a continuous region (a stadium, a rectangle, etc.), we obtain a [*discrete billiard*]{} by selecting only the sites (circles) that are inside it (shaded region). Particles are allowed to hop between sites with probability $J$, and there is a sink of particles (decay rate $\Gamma$) at a corner of the lattice. This model can be implemented using optical lattices (a), coupled photonic waveguides (b) or coupled superconducting microwave resonators (c). In the first case, the sink can be implemented with a focused, resonant laser. For coupled waveguides, it is a guide with losses, while for (c) the loss comes from a resistor or a semi-infinite transmission line coupled to a few resonators. 
[]{data-label="fig:sketch"}](fig1-setup.jpg){width="\linewidth"} Energy statistics ================= We start by computing the eigenvalues $E_n$ and eigenfunctions $\psi_n$ of a lattice Hamiltonian $$H = -\sum_{\langle l,m\rangle} J_{lm}c^{\dagger}_l c_m \label{eq:ham}$$ where $J_{lm}~(l,m=1,\ldots,N)$ is the hopping amplitude from site $m$ to site $l$, $c_m~(c^{\dagger}_m)$ destroys (creates) a particle at site $m$, and the sum runs over all pairs of nearest neighbors of the $N$-points lattice. The topology of the billiard is hence encoded in the hopping amplitudes or, equivalently, on the set of neighbors of a given site. We calculate the eigenvalues by exact diagonalization. The Hamiltonian presents chiral symmetry meaning that the Schrödinger equation can be written as $$\begin{aligned} \label{chiralEq} H\Psi= \left( \begin{array}{cc} 0 & C \\ C^T & 0 \end{array} \right) \left( \begin{array}{c} \Psi_A \\ \Psi_B \end{array} \right) = E \left( \begin{array}{c} \Psi_A \\ \Psi_B \end{array} \right),\end{aligned}$$ with $A$ and $B$ representing the two sublattices in which the square lattice can be divided. Sites in the $A$ sublattice only connect with sites in the $B$ sublattice and vice versa. This bipartite property of the square lattice translates into a symmetry of the eigenenergies around the band center $E=0$. Following the standard procedure and taking into account the symmetries in the spectrum, we unfold the set of eigenenergies into $s_n=(E_{n+1}-E_n)/\langle E_{n+1}-E_n \rangle$, where the brackets $\langle \cdot \rangle$ denote a local average. We have used different unfolding procedures and checked that the spacing distribution, $P(s)$, obtained is the same, including a local unfolding with different energy windows [@haake], as well as using a smooth functional form that takes into account the logarithmic divergence of the density of states at the band center. The normalized level spacing distribution, $P(s)$, for a continuum regular billiard follows a Poisson distribution, $P_{\mathrm{P}}(s) = \exp(-s)$ [@berry1977], while for chaotic billiards it follows the Wigner surmise, $$P_{\mathrm{W}}(s) = \frac{\pi}{2}se^{-\pi s^2/4} \:, \label{eq:wigner}$$ from Random Matrix Theory (RMT) [@bohigas1984; @stockmann1990]. In the case of the lattice billiards with a square lattice that we are studying the proper Random Matrix Ensemble is the chiral Gaussian Orthogonal Ensemble (ch-GOE, or BD I in the Cartan classification of symmetric spaces) for systems with time reversal and chiral symmetries [@altland97]. However, besides the symmetry of the spectrum around the band center, the statistical spectral properties, including the $P(s)$, are the same as for the usual GOE. We show in Fig. \[fig:PdeS\] our numerical results for a rectangular billiard of $50\times 37$ sites \[Fig \[fig:PdeS\](a)\] and for a desymmetrized Bunimovich stadium with a total of 5238 lattice sites \[Fig. \[fig:PdeS\](b)\]. Similarly to the continuum case, we see that the former agrees well with a Poisson distribution (*dashed line*), while the stadium presents a distribution in agreement with RMT (*solid*). It is worth mentioning that we do not find any indication of the semi-Poisson behavior that was found for the very similar spin stadium billiard in Ref. [@montangero2009]. We have also performed a more stringent test, based on an analysis of the long-range correlations, calculated through the Power Spectrum, $P_{\delta}(k)$, of the $\delta_n$ statistics (as defined in Refs. [@relano2002; @faleiro2004]). 
Our numerical results are presented in the inset of Fig. \[fig:PdeS\]. The comparison with the theoretical expectations is very good, including the decrease in the average value of the Power Spectrum for small values of the frequency $k$, which can be understood as the effect of bouncing ball orbits [@faleiro2006].

![ Level spacing statistics for (top) rectangle and (bottom) stadium billiards. Numerical data are plotted with bars while the lines are the theoretical predictions of the Poisson distribution (dashed) and RMT (solid). Inset: long-range energy statistics averaged over 300 states close to the band center of 10 rectangular (squares) and 10 stadium (dots) billiards with similar total number of sites, and the theoretical predictions with the same line coding.[]{data-label="fig:PdeS"}](fig2.jpg){width="0.8\linewidth"}

Dynamics in open systems
========================

Having established the static properties of the discrete rectangular and stadium billiards, we proceed now to analyze their dynamics in the presence of dissipation, which we include in the form of a leaking hole on the border of the system. We have studied the evolution of a localized wavepacket with initial momentum $\bm{p}_0$ and width $w$, described by a pure state $\psi_i(t=0) \propto \exp[-(\bm{x}_i -\bm{x}_0)^2/2w^2 - i \bm{p}_0 \cdot \bm{x}_i]$. For a weak dissipation, the dynamics of the resulting mixed state is given by the master equation, $$\frac{\partial\rho}{\partial t} = -\frac{i}{\hbar}[H,\rho] + \sum_k \frac{\gamma_k}{2\hbar} \left( 2 c_k \rho c^{\dagger}_k - c^{\dagger}_k c_k \rho - \rho c^{\dagger}_k c_k \right) \:. \label{eq:master}$$ Here, $H$ is given by Eq. [(\[eq:ham\])]{} while $\gamma_k$ describes the loss rate: $\gamma_k=\Gamma$ within the leak located on the billiard boundary, and $\gamma_k=0$ otherwise. This is equivalent to the evolution under an effective non-Hermitian Hamiltonian with imaginary on-site energies $\gamma_k$. We used a value $\Gamma=2$ (in units of nearest-neighbor hopping $J$) for the decay rate, and a leak radius $\sigma=2$ (in units of the lattice constant). We have verified that using a square-well or Gaussian profile for the leak does not substantially modify our findings. The number of particles remaining in the system after a time $t$ is $N(t) = \sum_k \mathrm{Tr}(\rho(t) c^{\dagger}_k c_k)$. The average of $N(t)$ over all possible positions of the hole, and over a range of initial momenta $\bm{p}_0$ is shown in Fig. \[fig:NdeT\]. For classical systems, one expects very different population dynamics for the two billiards [@alt1996; @bauer1990; @dettmann2009]: a rapid exponential decay for the chaotic one, and a power-law decay for the regular one. For quantum systems, unless there is a large number of decay channels or holes, a purely algebraic decay is expected [@alt1995; @alt1996]. These predictions have been confirmed in previous experiments in a large variety of continuous systems, from microwave billiards [@alt1995] to cold atoms in optical billiards [@friedman2001]. Here, we observe two features that strikingly contradict these expectations: (i) the population dynamics is similar for [*both*]{} discrete billiards, and (ii) there is a fraction of population that remains trapped for arbitrarily long times. Indeed, $N(t)$ decays rapidly at short times $tJ\lesssim 1000$, then it levels off, and finally saturates for $tJ\gtrsim 10^4$ \[cf. Fig. \[fig:NdeT\], inset\]. The numerical data are accurately fitted by the formula (compare Eq. (1) in Ref.
[@alt1996]) $$N(t) = {\cal E} \exp(-\lambda t) + {\cal A} (1+\alpha t)^{-\beta} + {\cal S} \:. \label{eq:NdeT}$$ ![ Fraction of population trapped in the lattice for the rectangle (top curve) and stadium (bottom curve), averaged over the position of the leak around the billiard. Symbols are simulation data while lines stand for least-squares fits to Eq. [(\[eq:NdeT\])]{}. Inset: behavior for long times.[]{data-label="fig:NdeT"}](fig3-NdeT.jpg){width="\linewidth"} For a given initial wavepacket and position of the hole, this can be rationalized in terms of the decomposition of $\psi_i(0)$ over the eigenstates of the closed billiard with rapid (exponential) decay for short times, ${\cal E}$, those with algebraic decay, ${\cal A}$, and those that survive the presence of the leak for $t\gtrsim 10^4$, ${\cal S}=N(t=0)-{\cal E - A}$. The rapidly decaying eigenstates correspond to those that overlap the site where the leak opens, or to trajectories of the wavepacket that reach the hole after only a few bounces off the walls; classically, this can be expected to be most relevant for chaotic systems, where (almost) any initial trajectory will quickly approach the leak. Algebraic decay is associated with orbits that go through many bounces before leaking out [@dettmann2009]. We do not expect Sieber-Richter paired trajectories [@richter2002; @waltner2008] to be relevant here, as discretized systems do not support exponentially close pairs of trajectories. Following this idea, we have performed a quantitative analysis of the eigenenergies, $E_n^{\mathrm{open}} = \varepsilon_n + i\Gamma_n$, of the non-Hermitian Hamiltonian with imaginary on-site energies, for rectangular and stadium billiards. In both cases, we find that the widths $\Gamma_n$ can be divided into three sets: (i) a very small number of states (2$-$8) with large imaginary parts, $\Gamma_n \geq 10^{-1}$, which we expect to decay for times $\leq 10^1$. (ii) A large fraction ($\gtrsim95\%$) of states with $\Gamma_n \approx 10^{-4}-10^{-1}$ which decay slowly and whose widths follow, in the chaotic billiard, a Porter-Thomas distribution as RMT predicts [@stockmann1999book] (see \[app:porter\]). Finally, (iii) a small number of states with very small decay rates; among these, a few $\Gamma_n$ are numerically equivalent to zero. ![ Probability density of some states in lattices with a leak at the top-right corner (yellow square) of a rectangular billiard with $M=35,N=26$ (top row) and a stadium with $M=32,N=25$ (bottom). Left column: lattice scars, i.e., $E_n^{\mathrm{open}}=0$ eigenstates of the non-Hermitian Hamiltonian: there are two for the rectangle (each localized on a different sublattice, shown on the same figure as they are symmetric upon reflection on the central (dashed) line), and four for the stadium (of which we show one). Right column: snapshots of the dynamical evolution with Eq. [(\[eq:master\])]{} at the indicated times. Lower density is indicated by blue (dark grey) and higher by lighter colors; maximum density is at red spots. Small white dots point the lattice sites, and the purple line is the circular edge of the stadium. See [@epaps].[]{data-label="fig:scars"}](fig4-scars.jpg){width="\linewidth"} Lattice scars ============= The probability density of one of these non-decaying eigenstates, $|\psi_n^{\mathrm{open}}|^2$, in the rectangle (stadium) is shown on the left panel of Fig. \[fig:scars\] (top, resp. bottom) when the leak is at the top-right corner of the billiard. 
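The width classification above can be reproduced with a few lines of linear algebra. The sketch below is ours: it assumes a single particle in a pure state, for which Eq. [(\[eq:master\])]{} reduces to evolution under the effective non-Hermitian Hamiltonian mentioned in the text, and the commented usage reuses the illustrative `build_hamiltonian` and `sites` from the previous sketch; thresholds and names are illustrative choices, not the authors' values.

```python
import numpy as np
from scipy.linalg import eig, inv

def open_billiard(H, gamma):
    """Effective non-Hermitian Hamiltonian H_eff = H - (i/2) diag(gamma)."""
    w, V = eig(H - 0.5j * np.diag(gamma))
    return w, V, inv(V)           # complex eigenvalues and (right) eigenvectors

def survival(w, V, Vinv, psi0, times, hbar=1.0):
    """N(t) = |psi(t)|^2 for one particle, with psi(t) = V exp(-i w t) V^-1 psi0."""
    c = Vinv @ psi0
    return np.array([np.linalg.norm(V @ (np.exp(-1j * w * t / hbar) * c)) ** 2
                     for t in times])

# Illustrative usage (gamma = Gamma on the leak sites, 0 elsewhere):
#   gamma = np.zeros(len(sites)); gamma[:3] = 2.0
#   w, V, Vinv = open_billiard(build_hamiltonian(sites), gamma)
#   widths = -2 * w.imag          # decay rates Gamma_n (our sign convention)
#   print(np.sum(widths > 1e-1), np.sum(widths < 1e-8))  # fast decayers / trapped states
```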
The long lifetime of these states is quickly understood by noting their vanishing densities at the hole position. Their spatial distribution on the billiard, however, is far from a “bouncing ball” orbit [@alt1995; @alt1996; @dettmann2009; @loeck2012]. This is due to the lattice constraint in $J_{lm}$ that only allows to hop from one site to its nearest neighbor. This, together with the geometry of the square lattice, amounts to the system being bipartite on two disjoint sublattices, $A,B$, as mentioned in the discussion about spectral statistics. Application of a theorem by Inui [*et al.*]{} [@inui1994] implies then that the closed system has $n$ solutions to the Schrödinger equation at the band center, $E=0$, which vanish on one of the sublattices, say $\psi_{k \in B}=0$. Here, $n$ is the number of sites on the occupied lattice (here, $N_A$) minus the rank, $r$, of the matrix $J_{lm}$ [@inui1994], i.e., $r+n=N_A$. Once we open the system, some of these degenerate states in the middle of the band stay at $E=0$, while the others acquire a purely imaginary eigenvalue with a large width. The number of the latter is given directly by the number of $A$ sites overlapping the leak. These large-width states dominate the decay at short times. States remaining at $(\varepsilon_n=0,\Gamma_n=0)$, on the other hand, will dwell in the billiard for very long times. We refer to them as [*lattice scars*]{}. We have calculated the number of lattice scars for a wide range of system sizes, as shown in Fig. \[fig:scar\_geometry\]. We see that most rectangles feature no lattice scars: they appear only when $M/N\approx q,~q\in\mathbb{Z}$. In contrast, almost all stadia have at least one lattice scar, with a larger number when $M\approx2N$, a trace itself of the embedded rectangle [^1]. The spatial distribution of the bound $E^{\mathrm{open}}=0$ states will hence reside in sites $k \in A$, which on a square lattice are linked by $45^{\circ}$ lines. This requirement, besides the boundary conditions appropriate to each billiard, which restrict the allowed “bounces" off the walls, results in trapped eigenfunctions such as those in Fig. \[fig:scars\]. Indeed, we have independently checked that [*all*]{} states with $\Gamma_n=0$ do have $\varepsilon_n = 0$. Moreover, we have also verified the prediction derived from the theorem in [@inui1994] that the number of states with $\varepsilon_n=\Gamma_n=0$ equals $n$ as defined above. ![ Number of lattice scars for a given billiard. Squares represent data for $M\times N$ rectangular billiards; circles show data for stadia with circular ends of radius $N$ and straight edges of length $M-N$. The dashed line separates the data sets, which have been slightly shifted to ease visualization of the points along $M=N$. The number of lattice scars is indicated by color darkness, with black the maximum; empty symbols correspond to billiards with no lattice scars.[]{data-label="fig:scar_geometry"}](fig5-scars_vs_geometry.jpg){width="\linewidth"} From the preceding arguments, we conclude that an initial state on a dissipative discrete billiard will evolve until at times $t\gg 1/J$, all probability amplitude is concentrated on a (superposition of) lattice scarred state(s), i.e., *dissipation selects this class of eigenstates*, removing all the other components of the initial wavepacket. This prediction is confirmed by looking at the probability density of an initial state after numerically evolving it according to Eq. [(\[eq:master\])]{}, see the right panels in Fig. 
\[fig:scars\]: the resemblance of these with the eigenstate probability densities on the left panels is evident [@epaps] and we verified that these stationary wavefunctions are pure superpositions of states with $\varepsilon_n=\Gamma_n=0$. Physical implementations ======================== Photonic waveguides ------------------- Paired trajectories in the sense of [@richter2002] strongly affect the conductance through quantum dots [@richter2002; @alhassid2000]. Analogously, we expect lattice scars to influence transport through discrete systems. As a first example, this effect can be studied with infrared or visible light in photonic lattices, which are optical circuits with tens of waveguides imprinted on a substrate using a laser [@fleischer2003; @obrien2009; @krimer2011]. These waveguides can be arranged on parallel rows forming a square lattice which are evanescently coupled, i.e., light may tunnel between neighboring guides, cf. Fig. \[fig:sketch\]b. Doping one or more of the waveguides, or coupling them to outgoing lines, would result in a sink realizing the desired amount of dissipation. The paraxial propagation of light in this setup is described by Eq. [(\[eq:master\])]{}, the final population distribution corresponding to the distribution of light at the substrate’s end. Typical numbers for such a system are a guide length $L=1-10$ cm, width $W \approx 200~\mu$m, and inter-guide distance of $d \approx 20~\mu$m, resulting in a coupling $J \approx 1-10~\mathrm{cm}^{-1}$ [@szameit2010jpb; @garanovich2012; @corrielli2013]. An initial wavepacket corresponds in this system to a spatial intensity distribution over the waveguides on the $z=0$ plane, $|\psi(x,y;z=0)|^2$, which then propagates along the guides’ length, $z$, that amounts to the time variable in our simulations. Then, the distance, $z_{\mathrm{scar}}$, after which the intensity profile has taken the shape of the lattice scar equals the characteristic time, $t_{\mathrm{scar}}$, required for scar appearance as seen in the simulations. For systems with $\sim 10\times10$ guides (as afforded by $W$), we find $z_{\mathrm{scar}} \equiv t_{\mathrm{scar}}\approx10^{3} J^{-1}$. Hence, to get $z_{\mathrm{scar}} < L$ one requires $J \gtrsim 10^3 L^{-1} \gtrsim 10^2~\mathrm{cm}^{-1}$. For waveguides written on fused silica using fs laser pulses [@szameit2010jpb], this amounts to creating a refractive index variation $\Delta n = J \lambda/2\pi \approx 1\times 10^{-3}$ for visible light, which is well within present capabilities [@szameit2010jpb]. Under these conditions, for $z_{\mathrm{scar}} \lesssim z \leq L$ the light beam propagates with a constant intensity profile, similarly as in lattice solitons [@christo2003] but without nonlinear effects. Cold atoms in optical lattices ------------------------------ Our predictions can also be investigated using cold atoms trapped in optical lattices [@ng2009] (Fig. \[fig:sketch\]a), where single-site resolution for preparation and measurement has already been demonstrated in several labs [@bakr2009; @wurtz2009; @sherson2010]. Dissipation here would be realized via a focused blue-detuned laser beam, which can be pointed either on the system boundary or even inside the billiard. A major challenge in these systems is to produce a lattice with a customized boundary, a task that can be achieved thanks to the improved optics in recent experiments, which allows projecting arbitrary optical potentials onto the trapping plane [@bakr2009]. 
Detection of quantum transport modifications due to the lattice topology in these atomic systems would contrast with the observations in graphene, where weak localization is strongly suppressed [@morozov2006; @dassarma2011rmp]. Approximate time and length scales in these setups are an inter-site separation $d\approx\lambda/2 \approx 500$ nm ($\lambda$ being the optical wavelength) and hopping energy $J \approx \hbar \times (1-100)$ kHz [@bloch2012nphysrev; @lewenstein2012book], leading to $t_{\mathrm{scar}} \approx 10^3 \hbar/J \approx 10\,\mathrm{ms}-1$ s, which lies within the typical lifetime of these systems.

Superconducting microwave circuits
----------------------------------

Finally, the same ideas can be studied using microwave quantum optics. Inspired by recent designs of coupled harmonic oscillators [@houck2012], we suggest creating a lattice of capacitively coupled microwave LC resonators, Fig. \[fig:sketch\]c. When the capacitive coupling is weaker than the on-site energy, the rotating-wave approximation applies [@Peropadre2013] and the hopping of microwave photons in the array is described once more by Eq. [(\[eq:master\])]{}. The leak can be introduced using either resistive elements or outgoing wires that extract energy from a few sites. The distribution of energy can be measured using a probe antenna that is moved over the circuit to scan the electromagnetic field. The timescale of such experiments is much faster than in the atomic case. Both the oscillator energy and the coupling between oscillators can be within the range of GHz to tens of MHz, to ensure that the dynamics is much faster than the typical decoherence times of the cavities. Assuming that the dissipation has the same rate as in the other setups, once more the observation timescale of the decay is $t_{\mathrm{scar}}\sim 1\,{\mathrm{ns}}-0.1\,\mu\mathrm{s}$, which would allow for a fast preparation of the state while being able to monitor the decay of the electromagnetic field.

Conclusions
===========

In summary, we studied two quantum billiards on a bipartite lattice: a rectangle and the Bunimovich stadium. We have shown that their level statistics agree with those of a regular and a chaotic billiard, respectively. However, the dynamics of a wavepacket on the open billiards turns out to be rather similar for both cases, and presents a number of, to the best of our knowledge, so-far unnoticed features. The most remarkable is the appearance of lattice scars: states at the band center whose probability density collapses around spatially-concentrated orbits that live on only one of the two sublattices. This allows them to survive the presence of localized decay channels for very long times. We determined analytically the surviving population in terms of the lattice geometry and hoppings. Finally, we discussed three experimental setups, within current capabilities, to test our predictions. Propagation through periodic lattices is a subject of interest in fields as diverse as biological molecules [@plenio2008; @seeman2005], optical waveguides [@fleischer2003; @obrien2009], nanophysics [@mello2004; @datta2005] and cold-atom systems [@bloch2012nphysrev; @lewenstein2012book], and we expect that this work will enable new perspectives in the study and control of quantum dynamics in classically-chaotic regimes.
These results should also be relevant in quantum simulations of interacting systems [@bloch2012nphysrev], quantum walks [@perets2008; @karski2009], and quantum-enhanced computational techniques such as boson sampling [@broome2013; @spring2013; @tillmann2013; @crespi2013]. We thank V. A. Gopar for bringing Ref. [@inui1994] to our attention. This work has been funded by Spanish Government projects FIS2012-33022, FIS2009-07277, FIS2012-34479, CAM research consortium QUITEMAD (S2009-ESP-1594), COST Action IOTA (MP1001), EU Programme PROMISCE, ESF Programme POLATOM, and the JAE-Doc and JAE-Intro programs (CSIC and European Social Fund).

Statistics of eigenstate widths: Porter-Thomas distribution {#app:porter}
===========================================================

Porter and Thomas derived the distribution of partial widths of the resonances for an open quantum system assuming a Gaussian distribution of the eigenstate amplitudes. The Porter-Thomas distribution can also be derived from Random Matrix Theory [@Porter56; @Brody81]: $$P_{\mathrm{PT}}(z)=\frac{1}{\sqrt{2 \pi z}} \exp{(-z/2)}, \label{eq:PT}$$ where $z=\Gamma/\left<\Gamma\right>$. Eigenstates following the random wave model, conjectured to be valid for the statistical properties of chaotic quantum systems by Berry [@Berry77], fulfill this property. We have calculated the distribution of the widths (imaginary parts of the energy) of the eigenstates corresponding to the effective Hamiltonian of an opened stadium lattice billiard. In order to make a fair comparison, the average width value $\left<\Gamma\right>$ was calculated by averaging over a window of $12$ neighboring resonances in energy [@Meredith93] and over 10 different positions of the opening for a stadium billiard of 2784 sites, without taking into account the states in the band center. The results are shown in Fig. \[fig:PT\].

![ Comparison of the distribution of widths for the stadium billiard (bars) and the Porter-Thomas distribution, Eq. [(\[eq:PT\])]{} (solid line).[]{data-label="fig:PT"}](figA1-ptsta.jpg){width="0.8\linewidth"}

The agreement is quite good, taking into account that small deviations are expected due to a breakdown of the random wave model caused by the walls and/or the discreteness of the lattice. From the results shown we can conclude that the random wave model is a good approximation for the behavior of the wave functions outside the band center in chaotic lattice billiards. A more careful analysis of the differences between the calculated width distribution and the Porter-Thomas distribution is beyond the scope of this work.

References {#references .unnumbered}
==========

[^1]: Note that for $M=N$ a Bunimovich stadium actually corresponds to a circle of radius $M$, which is classically a regular billiard.
--- address: | School of Physics, Research Centre for High Energy Physics, The University of Melbourne, Victoria 3010, Australia\ $^*$E-mail: [email protected] author: - 'R. R. VOLKAS$^*$' title: | $A_4$ SYMMETRY BREAKING SCHEME FOR UNDERSTANDING\ QUARK AND LEPTON MIXING ANGLES --- Tribimaximal mixing =================== The current neutrino oscillation data are well described by the following MNSP mixing matrix: $$\left(\begin{array}{ccc} \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} & 0 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} \end{array}\right)$$ This is called “tribimaximal mixing”[@tribi]. Complex phases can also be introduced. It is significant that the entries are square roots of fractions formed from [*small*]{} integers[@zee-small-integer], and it is suggestive of a flavour symmetry. It also motivates that the flavour structure required to understand mixing should be divorced from whatever physics is needed to understand the mass eigenvalues, because the latter do not at this stage seem to show suggestive patterns. We shall call a matrix “form diagonalisable (FD)” if its (left) diagonalisation matrix is formed from definite numbers while its eigenvalues are free parameters.[@fd] A simple $2 \times 2$ example is $$\left(\begin{array}{cc} m_1 & m_2 \\ m_2 & m_1 \end{array}\right)$$ whose diagonalisation matrix gives two-fold maximal mixing, while its eigenvalues are arbitrary and depend on $m_{1,2}$. This matrix has a $Z_2$ structure, and arose in the mirror matter model.[@mm] A relevant $3 \times 3$ example is $$\left( \begin{array}{ccc} m_1 & \ \ m_2 & \ \ m_3 \\ m_1 & \ \ \omega\, m_2 & \ \ \omega^2\, m_3 \\ m_1 & \ \ \omega^2\, m_2 & \ \ \omega\, m_3 \end{array} \right)$$ where $\omega \equiv e^{i2\pi/3}$ is a cube root of unity. It is equal to $$U(\omega) \left( \begin{array}{ccc} \sqrt{3}m_1 & 0 & 0 \\ 0 & \sqrt{3} m_2 & 0 \\ 0 & 0 & \sqrt{3}m_3 \end{array} \right)$$ where the left-diagonalisation matrix is “trimaximal”: $$U(\omega) = \frac{1}{\sqrt{3}} \left( \begin{array}{ccc} 1 & 1 & 1 \\ 1 & \omega & \omega^2 \\ 1 & \omega^2 & \omega \end{array} \right).$$ Since the MNSP matrix $V_{MNSP} = V^{e\dagger}_L\, V^{\nu}_L$ is the product of two diagonalisation matrices, we observe that tribimaximal mixing is obtained from $U(\omega)^{\dagger} V_L^{\nu}$ when $$V_L^{\nu} = \frac{1}{\sqrt{2}} \left( \begin{array}{ccc} 1 & 0 & -1 \\ 0 & \sqrt{2} & 0 \\ 1 & 0 & 1 \end{array} \right).$$ which is the previous $Z_2$ structure in the $(1,3)$ subspace. $A_4$ scheme and tree-level results =================================== $A_4$ is the set of even permutations of four objects.[@a4; @af] It has 12 elements: $1, c, a=c^{-1}, r_{1,2,3}=r_{1,2,3}^{-1}, r_i c r_i, r_i a r_i$, where $\{1,c,a\}$ form $C_3 = Z_3$ subgroup, $\{1,r_i\}$ form $Z_2$ subgroups (see[@zee-small-integer] for notation). Its irreducible reps are ${\mbox{${\bf \underline{3}}$}}$, ${\mbox{${\bf \underline{1}}$}}$, ${\mbox{${\bf \underline{1}'}$}}$ and ${\mbox{${\bf {\underline{1}''}}$}}$ with ${\mbox{${\bf \underline{3}}$}}\otimes {\mbox{${\bf \underline{3}}$}}= {\mbox{${\bf \underline{3}}$}}_s \oplus {\mbox{${\bf \underline{3}}$}}_a \oplus {\mbox{${\bf \underline{1}}$}}\oplus {\mbox{${\bf \underline{1}'}$}}\oplus {\mbox{${\bf {\underline{1}''}}$}},\quad {\rm and}\quad {\mbox{${\bf \underline{1}'}$}}\otimes {\mbox{${\bf \underline{1}'}$}}= {\mbox{${\bf {\underline{1}''}}$}}$. 
Under the group element corresponding to $c (a)$, ${\mbox{${\bf \underline{1}'}$}}\to \omega (\omega^2) {\mbox{${\bf \underline{1}'}$}}$ and ${\mbox{${\bf {\underline{1}''}}$}}\to \omega^2 (\omega) {\mbox{${\bf {\underline{1}''}}$}}$. Let $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ denote the basis vectors for two ${\mbox{${\bf \underline{3}}$}}$’s. Then $$\begin{aligned} ({\mbox{${\bf \underline{3}}$}}\otimes {\mbox{${\bf \underline{3}}$}})_{{\mbox{${\bf \underline{3}}$}}s,a} & = & ( x_2 y_3 \pm x_3 y_2\, ,\, x_3 y_1 \pm x_1 y_3\, ,\nonumber\\ & \, & x_1 y_2 \pm x_2 y_1 ) \nonumber \\ ({\mbox{${\bf \underline{3}}$}}\otimes {\mbox{${\bf \underline{3}}$}})_{{\mbox{${\bf \underline{1}}$}}} & = & x_1 y_1 + x_2 y_2 + x_3 y_3 \nonumber\\ ({\mbox{${\bf \underline{3}}$}}\otimes {\mbox{${\bf \underline{3}}$}})_{{\mbox{${\bf \underline{1}'}$}}} & = & x_1 y_1 + \omega\, x_2 y_2 + \omega^2\, x_3 y_3 \nonumber\\ ({\mbox{${\bf \underline{3}}$}}\otimes {\mbox{${\bf \underline{3}}$}})_{{\mbox{${\bf {\underline{1}''}}$}}} & = & x_1 y_1 + \omega^2\, x_2 y_2 + \omega\, x_3 y_3\end{aligned}$$ Under $SU(3) \otimes SU(2) \otimes U(1) \otimes A_4$, choose:[@xg] $$\begin{aligned} & Q_L \sim \left( 3,2,\frac{1}{3} \right) \left( {\mbox{${\bf \underline{3}}$}}\right) & \nonumber\\ & u_R \sim \left( 3,1,\frac{4}{3} \right)\left({\mbox{${\bf \underline{1}}$}}\oplus {\mbox{${\bf \underline{1}'}$}}\oplus {\mbox{${\bf {\underline{1}''}}$}}\right) & \nonumber\\ &d_R \sim \left( 3,1,-\frac{2}{3} \right)\left({\mbox{${\bf \underline{1}}$}}\oplus {\mbox{${\bf \underline{1}'}$}}\oplus {\mbox{${\bf {\underline{1}''}}$}}\right)& \nonumber\\ & \ell_L \sim \left( 1,2,-1 \right) \left( {\mbox{${\bf \underline{3}}$}}\right), & \nonumber\\ & \nu_R \sim \left( 1,1,0 \right)\left( {\mbox{${\bf \underline{3}}$}}\right),& \nonumber\\ & e_R \sim \left( 1,1,-2 \right)\left({\mbox{${\bf \underline{1}}$}}\oplus {\mbox{${\bf \underline{1}'}$}}\oplus {\mbox{${\bf {\underline{1}''}}$}}\right)&\end{aligned}$$ for the fermions, and for the Higgs multiplets: $$\begin{aligned} & \Phi \sim \left( 1,2,-1 \right) \left( {\mbox{${\bf \underline{3}}$}}\right),\ \phi \sim \left( 1,2,-1 \right) \left( {\mbox{${\bf \underline{1}}$}}\right),& \nonumber \\ & \chi \sim \left(1,1,0 \right) \left( {\mbox{${\bf \underline{3}}$}}\right).& \end{aligned}$$ The required spontaneous symmetry breaking pattern is given by the VEVs: $$\begin{aligned} & \langle\Phi^0\rangle = (v,v,v),\qquad A_4 \to Z_3 & \nonumber\\ & \langle\chi\rangle = (0,v_\chi,0),\qquad A_4 \to Z_2 &\nonumber \\ & \langle\phi\rangle = v_\phi,\qquad A_4 \to A_4 &\end{aligned}$$ The quark mass matrices come from $\langle\Phi\rangle$ and have the form $U(\omega)$ multiplied by a diagonal matrix of arbitrary eigenvalues, so at tree level $U_{CKM} = 1$. The charged lepton mass matrices also come from $\langle\Phi\rangle$, so the left diagonalisation matrix is $U(\omega)$. The neutrino Dirac masses arise from $\langle\phi\rangle$: $m_\nu^D {\rm diag}(1,1,1)$. The neutrino RH Majorana masses are driven by $\langle\chi\rangle$ plus bare masses. 
The overall $\nu$ mass matrix is $$\left( \begin{array}{cccccc} 0 & 0 & 0 & m_{\nu}^D & 0 & 0 \\ 0 & 0 & 0 & 0 & m_{\nu}^D & 0 \\ 0 & 0 & 0 & 0 & 0 & m_{\nu}^D \\ m_{\nu}^D & 0 & 0 & M & 0 & M_\chi \\ 0 & m_{\nu}^D & 0 & 0 & M & 0 \\ 0 & 0 & m_{\nu}^D & M_\chi & 0 & M \end{array} \right),$$ and the effective light $\nu$ mass matrix is $$\begin{aligned} M_L & = & - M_\nu^D M_R^{-1} (M_\nu^D)^T \\ & = & - \frac{(m_\nu^D)^2}{M} \left( \begin{array}{ccc} \frac{M^2}{M^2-M^2_\chi} & 0 & - \frac{M M_\chi}{M^2-M^2_{\chi}} \\ 0 & 1 & 0 \\ - \frac{M M_\chi}{M^2-M^2_{\chi}} & 0 & \frac{M^2}{M^2-M^2_\chi} \end{array} \right).\nonumber\end{aligned}$$ Note the $Z_2$ structure in the $(1,3)$ subspace. So, at tree-level we have tribimaximal mixing (up to phases): $$V_{MNSP} = U(\omega)^{\dagger} V_L^{\nu} = \left( \begin{array}{ccc} \frac{2}{\sqrt{6}} & \frac{1}{\sqrt{3}} & 0 \\ -\frac{\omega^2}{\sqrt{6}} & \frac{\omega^2}{\sqrt{3}} & -\frac{e^{-i\pi/6}}{\sqrt{2}} \\ -\frac{\omega}{\sqrt{6}} & \frac{\omega}{\sqrt{3}} & \frac{e^{i\pi/6}}{\sqrt{2}} \end{array} \right)$$ In the neutrino sector, the mixing pattern is driven by $\langle\chi\rangle: A_4 \to Z_2$. For the rest of the fermions, the patterns are driven by $\langle\Phi\rangle: A_4 \to Z_3$. This dual symmetry breaking structure gives trivial CKM and tribimaximal MNSP. For the theory as a whole, of course, $A_4 \to$ nothing. We can describe this situation as “parallel worlds of $A_4$ symmetry breaking.”[@xg]

Corrections after flavour symmetry breaking
===========================================

The above mixing matrix results hold only at lowest order. After spontaneous $A_4$ symmetry breaking, deviations are induced. Because of the parallel worlds of symmetry breaking, it is useful to classify these effects into those within each sector (the neutrino sector and the charged-fermion sector), and those acting between sectors.[@xg] Within each sector, we can write down the mass entries permitted by the unbroken symmetry in that sector, not all of which are generated at tree-level. For quarks and charged leptons, the tree-level form is not changed, so the left diagonalisation matrices are still $U(\omega)$. This means the CKM matrix is still trivial. This is ensured by the unbroken $Z_3$ in the quark sector. But, the effective light $\nu$ matrix changes: $$\begin{aligned} M_L & \to & M_L \nonumber\\ & + & \left. \left( \begin{array}{ccc} \delta_{11} & 0 & \delta_{13} \\ 0 & \delta_{22} & 0 \\ \delta_{13} & 0 & \delta_{33} \end{array} \right) \right|_{\rm h.o.}\end{aligned}$$ where h.o. denotes higher order. This means that $V_L^{\nu}$ becomes $$\begin{aligned} \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & e^{i\beta} \end{array} \right) & \times &\nonumber\\ \left( \begin{array}{ccc} \cos\theta & \ 0 & \ -\sin\theta \\ 0 & \ 1 & \ 0 \\ \sin\theta & \ 0 & \ \cos\theta \end{array} \right) &\times& \nonumber\\ \left( \begin{array}{ccc} e^{i\alpha_1} & 0 & 0 \\ 0 & e^{i\alpha_2} & 0 \\ 0 & 0 & e^{i\alpha_3} \end{array} \right)& &\end{aligned}$$ where $\theta = \frac{\pi}{4} + \delta$ and $|\delta| \ll 1$ if the h.o. corrections are small.
Hence, $$\begin{aligned} V_{MNSP} & = & U(\omega)^{\dagger} V_L^{\nu} \\ & = & \frac{1}{\sqrt{3}} \left( \begin{array}{ccc} c + s e^{i\beta} & 1 & c e^{i\beta} - s \\ c + \omega s e^{i\beta} & \omega^2 & \omega c e^{i\beta} - s \\ c + \omega^2 s e^{i\beta} & \omega & \omega^2 c e^{i \beta} - s \end{array} \right) \nonumber\end{aligned}$$ There are deviations from tribimaximal mixing in the first and third columns, including a nonzero $U_{e3}$. We now turn to interactions between the sectors. To generate realistic CKM mixing, we need to break the $Z_3$. But in the theory overall, it [*is*]{} broken. Hence one way that CKM mixing might be generated is through the mediation of $Z_3$ breaking in the neutrino sector to the quark sector, for example through effective operators like $$\begin{aligned} & \overline{Q}_L \, u_R \, \Phi \, \chi,\ \overline{Q}_L \, u'_R \, \Phi \, \chi,\ \overline{Q}_L \, u''_R \, \Phi \, \chi & \nonumber\\ & \overline{Q}_L \, d_R \, \tilde{\Phi} \, \chi,\ \overline{Q}_L \, d'_R \, \tilde{\Phi} \, \chi,\ \overline{Q}_L \, d''_R \, \tilde{\Phi} \, \chi, &\end{aligned}$$ There is enough freedom at this level to generate a realistic CKM matrix. But it is not yet clear if that is the best way to do it, though it seems like a natural feature to have.

Challenges and conclusions
==========================

The theory needs to be “completed”, as the above is a symmetry scheme without a fully specified dynamics. The default possibility is a standard Higgs potential. But this raises a non-trivial problem: How to keep the parallel worlds of symmetry breaking controllably intact? Unfortunately, the Higgs potential interactions between $\Phi$ and $\chi$ tend to spoil the different VEV patterns required. So, at least some of these interactions need to be eliminated. What are the logical possibilities?[@xg] Normal internal symmetries do not work, because a term like $\Phi^{\dagger} \Phi \chi^2$ is always invariant. One possibility is to decouple $\chi$ or $\nu_R, \chi$ from the rest of the theory by making certain parameters very small (hidden sector). It is not known if this can be made to work. A second possibility is supersymmetry, because terms like $\Phi^{\dagger} \Phi \chi^2$ come from superpotential terms $\Phi_u\, \Phi_d\, \chi$ and the latter [*can*]{} be forbidden by an internal symmetry. We have constructed an inelegant existence proof for this. A third possibility is to sequester $\chi$ on a different brane from $\Phi$.[@af] There may be others. Overall, we conclude that $A_4$ has the potential to simultaneously explain the quark and lepton mixing matrices, while leaving the masses arbitrary. This works at a symmetry level – which is my main point – but a dynamically complete theory is a work-in-progress.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work was supported by the Australian Research Council.

[9]{} P. F. Harrison, D. H. Perkins and W. G. Scott, ; P. F. Harrison and W. G. Scott, ; ; Z.-Z. Xing, ; X.-G. He and A. Zee, ; . A. Zee, . C. I. Low and R. R. Volkas, . R. Foot, H. Lew and R. R. Volkas, ; R. Foot, ; R. Foot and R. R. Volkas, . E. Ma and G. Rajasekaran, ; K. S. Babu, E. Ma and J. W. F. Valle, ; E. Ma, ; hep-ph/0409075; ; . G. Altarelli and F. Ferruglio, ; hep-ph/0512103. X.-G. He, Y.-Y. Keum and R. R. Volkas, .
---
abstract: 'In the present paper, a new type of ruled surfaces called osculating-type (OT)-ruled surface is introduced and studied. First, a new orthonormal frame is defined for OT-ruled surfaces. The Gaussian and the mean curvatures of these surfaces are obtained and the conditions for an OT-surface to be flat or minimal are given. Moreover, the Weingarten map of an OT-ruled surface is obtained and the normal curvature, the geodesic curvature and the geodesic torsion of any curve lying on the surface are obtained. Finally, some examples related to helices and slant helices are introduced.'
author:
- |
    Onur Kaya$^{1}$, Tanju Kahraman$^{1}$, Mehmet Önder$^{2}$\
    [*$^{1}$ Manisa Celal Bayar University, Department of Mathematics, 45140, Manisa, Turkey*]{}\
    [E-mails: [email protected], [email protected]]{}\
    [*$^{2}$ Delibekirli Village, Tepe Street, No:63, 31440, Kırıkhan, Hatay, Turkey*]{}\
    [E-mail: [email protected]]{}
title: 'Osculating-type Ruled Surfaces in the Euclidean 3-space'
---

**AMS Classification:** 53A25, 53A05.

**Keywords:** Osculating-type ruled surface, minimal surface, geodesic.

Introduction
============

In the study of the fundamental theory of curves and surfaces, the special curves and surfaces that satisfy particular conditions have been of significant value. In curve theory, the most famous of such special curves is the general helix, for which the tangent vector of the curve always makes a constant angle with a constant direction. The necessary and sufficient condition for a curve to be a general helix is that the ratio of the second curvature $\tau$ to the first curvature $\kappa$ is constant, i.e., $\tau / \kappa$ is constant along the curve [@Barros]. If the principal normal vector of a curve makes a constant angle with a constant direction, then that curve is called a slant helix, and the necessary and sufficient condition for a curve to be a slant helix is that the function $\sigma (s) = \left( \frac{\kappa^2}{\left( \kappa^2 + \tau^2 \right)^{3/2}} \left( \frac{\tau}{\kappa} \right)' \right) (s)$ is constant [@IzuTakeSlant]. In surface theory, the surfaces constructed in the simplest way are important. The well-known example of such surfaces is the ruled surface, which is generated by a continuous movement of a line along a curve. These surfaces are widely used in technology and architecture [@Emmer]. Furthermore, some special types of these surfaces have particular relationships with helices and slant helices [@IzuTakeSlant; @IzuTakeSpec; @IzuTakeGeom; @Izumiyaetal]. In [@OnderSlant], Önder considered the notion of “slant helix” for ruled surfaces and defined slant ruled surfaces by the property that the components of the frame along the striction curve of the ruled surface make constant angles with fixed lines. He has proved that helices or slant helices are the striction curves of developable slant ruled surfaces. Also, he has defined a new kind of ruled surfaces called general rectifying ruled surface for which the generating line of the surface always lies on the rectifying plane of the base curve, and he has given many properties of such surfaces [@OnderRect]. This study introduces a new type of ruled surfaces called osculating-type (OT)-ruled surfaces. First, a new orthonormal frame and new curvatures for OT-ruled surfaces are obtained and many properties of the surface are given by considering the new frame and its curvatures. Later, the Gaussian curvature $K$ and the mean curvature $H$ of OT-ruled surfaces are given.
The set of singular points of such surfaces is introduced and some differential equations characterizing special curves lying on the surface are obtained. Finally, some examples related to helices and slant helices are given.

Preliminaries
=============

A ruled surface in $\mathbb{R}^3$ is constructed by a continuous movement of a straight line along a space curve $\alpha$. For an open interval $I \subset \mathbb{R}$, the parametric equation of a ruled surface is given by $\varphi_{(\alpha, q)} (s,u) : I \times \mathbb{R} \rightarrow \mathbb{R}^3$, $\vec{\varphi}_{(\alpha, q)} (s,u) = \vec{\alpha} (s) + u \vec{q} (s)$ where $q: I \rightarrow \mathbb{R}^3$, $\lVert \vec{q} \hspace{1pt} \rVert = 1$ is called the director curve and $\alpha: I \rightarrow \mathbb{R}^3$ is called the base curve of the surface $\varphi_{(\alpha, q)}$. The straight lines of the surface defined by $u \rightarrow \vec{\alpha} (s) + u \vec{q} (s)$ are called rulings [@IzuTakeSpec]. The ruled surface $\varphi_{(\alpha, q)}$ is called cylindrical if $\vec{q}\hspace{2pt}' = 0$ and non-cylindrical otherwise where $\vec{q}\hspace{2pt}' = \frac{d\vec{q}}{ds}$ [@KargerNovak]. A curve $c$ lying on $\varphi_{(\alpha, q)}$ with the property that $\langle \vec{c}\hspace{2pt}', \vec{q} \hspace{2pt}' \rangle = 0$ is called the striction line of $\varphi_{(\alpha, q)}$. The parametric representation of the striction line is given by $$\label{strictionline} \vec{c} (s) = \vec{\alpha} (s) - \frac{\langle \vec{\alpha }' (s), \vec{q} \hspace{2pt}' (s) \rangle}{\langle \vec{q}\hspace{2pt}' (s), \vec{q} \hspace{2pt}' (s) \rangle} \vec{q} (s)$$ The striction line is geometrically important because it is the locus of special points called central points: considering the common perpendicular between two consecutive rulings, the foot of the common perpendicular on the main ruling is a central point [@KargerNovak]. The unit surface normal or Gauss map $U$ of the ruled surface $\varphi_{(\alpha, q)}$ is defined by $$\vec{U} (s,u) = \frac{\frac{\partial \vec{\varphi}_{(\alpha, q)}}{\partial s} \times \frac{\partial \vec{\varphi}_{(\alpha, q)}}{\partial u}}{\left\| \frac{\partial \vec{\varphi}_{(\alpha, q)}}{\partial s} \times \frac{\partial \vec{\varphi}_{(\alpha, q)}}{\partial u} \right\|} .$$ If $\frac{\partial \vec{\varphi}_{(\alpha, q)}}{\partial s} \times \frac{\partial \vec{\varphi}_{(\alpha, q)}}{\partial u} = 0$ for some points $({{s}_{0}},{{u}_{0}})\in \,I\times \mathbb{R}$, then such points are called singular points of the ruled surface $\varphi_{(\alpha, q)}$. Otherwise, they are called regular points. The surface $\varphi_{(\alpha, q)}$ is called developable if the unit surface normal $U$ along any ruling does not change its direction. Otherwise, $\varphi_{(\alpha, q)}$ is called non-developable or skew. A ruled surface $\varphi_{(\alpha, q)}$ is developable if and only if $\det (\vec{\alpha }', \vec{q}, \vec{q}\hspace{2pt}') = 0$ holds [@KargerNovak]. The unit vectors $\vec{h}= \vec{q} \hspace{2pt}' / \left\| \vec{q} \hspace{2pt}' \right\|$ and $\vec{a} = \vec{q} \times \vec{h}$ are called central normal and central tangent of $\varphi_{(\alpha, q)}$, respectively. Then, the orthonormal frame $\left\{ \vec{q}, \vec{h}, \vec{a} \right\}$ is called the Frenet frame of the ruled surface $\varphi_{(\alpha, q)}$. \[qha-slant\] [@OnderSlant] A ruled surface $\varphi_{(\alpha, q)}$ is called $q$-slant or $a$-slant (resp. $h$-slant) ruled surface if its ruling $\vec{q}$ (resp.
central normal $\vec{h}$) always makes a constant angle with a fixed direction. The first fundamental form $I$ and second fundamental form $II$ of $\varphi_{(\alpha, q)}$ are defined by $$I = E d{{s}^{2}} + 2F ds du + G d{{u}^{2}}, \hspace{8pt} II = L d{{s}^{2}} + 2M ds du + N d{{u}^{2}},$$ respectively, where $$\label{EFGformulas} E=\left\langle \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial s},\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial s} \right\rangle, \hspace{6pt} F=\left\langle \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial s},\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial u} \right\rangle, \hspace{6pt} G=\left\langle \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial u},\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial u} \right\rangle,$$ $$\label{LMNformulas} L=\left\langle \frac{{{\partial }^{2}}{{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial {{s}^{2}}},\vec{U} \right\rangle, \hspace{6pt} M=\left\langle \frac{{{\partial }^{2}}{{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial s\partial u},\vec{U} \right\rangle, \hspace{6pt} N=\left\langle \frac{{{\partial }^{2}}{{{\vec{\varphi }}}_{(\alpha ,q)}}}{\partial {{u}^{2}}},\vec{U} \right\rangle .$$ The Gaussian curvature $K$ and the mean curvature $H$ are defined by $$\label{GaussianCurvatureFormula} K=\frac{LN-{{M}^{2}}}{EG-{{F}^{2}}},$$ $$\label{MeanCurvatureFormula} H=\frac{EN-2FM+GL}{2(EG-{{F}^{2}})}.$$ respectively. An arbitrary surface is called minimal if $H=0$ at all points of the surface. Furthermore, a ruled surface is developable (or flat) if and only if $K=0$ [@doCarmo]. (Catalan Theorem) [@FomenkoTuzhilin] Among all ruled surfaces except planes only the helicoid and fragments of it are minimal. Osculating-type Ruled Surfaces ============================== In this section, we define the osculating-type ruled surface of a curve $\alpha$ such that the ruling of the surface always lies in the osculating plane of $\alpha$ and also $\alpha$ is the base curve of the surface. Such a surface is defined as follows: \[OT-ruledSurfaceDefinition\] Let $\alpha : I \subset \mathbb{R} \rightarrow {{\mathbb{R}}^{3}}$ be a smooth curve in the Euclidean 3-space $\mathbb{E}^3$ with arc-length parameter $s$, curvature $\kappa (s)$, torsion $\tau (s)$ and Frenet frame $\left\{ \vec{T}(s),\vec{N}(s),\vec{B}(s) \right\}$. Then, the ruled surface ${{\varphi }_{(\alpha ,{{q}_{o}})}}:I\times \mathbb{R}\to {{\mathbb{R}}^{3}}$ given by the parametric form $$\label{OT-ruledSurfaceEquation} {{\vec{\varphi }}_{(\alpha ,{{q}_{o}})}}(s,u)=\vec{\alpha }(s)+u{{\vec{q}}_{o}}(s), \hspace{8pt} {{\vec{q}}_{o}}(s)=\cos \theta \vec{T}(s)+\sin \theta \vec{N}(s)$$ is called the osculating-type (OT)-ruled surface of $\alpha$ where $\theta =\theta (s)$ is ${{C}^{\infty}}$-scalar angle function of arc-length parameter $s$ between unit vectors $\vec{q}_o$ and $\vec{T}$. Here, we use the index “$o$” to emphasize that the ruling always lies on the osculating plane $sp\left\{ \vec{T},\vec{N} \right\}$ of base curve $\alpha$. As we see from equation (\[OT-ruledSurfaceEquation\]), when $\theta (s)=k\pi, (k \in \mathbb{Z})$, for all $s \in I$, the ruling becomes ${{\vec{q}}_{o}}=\pm \vec{T}$ and the OT-ruled surface $\varphi_{(\alpha, q_o)}$ becomes the developable tangent surface $\varphi_{(\alpha, T)}$ of $\alpha$. 
Similarly, when $\theta (s)=\pi /2+k\pi, (k \in \mathbb{Z})$ for all $s \in I$, the ruling becomes ${{\vec{q}}_{o}} = \pm \vec{N}$ and the OT-ruled surface $\varphi_{(\alpha, q_o)}$ becomes the principal normal surface $\varphi_{(\alpha, N)}$ of $\alpha$. \[remark\] If $\alpha$ is a straight line, then $\varphi_{(\alpha, T)}$ is not a surface, only a line. So, for the case ${{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$, we always assume that $\alpha$ is not a straight line, i.e., $\kappa \ne 0$. Considering (\[OT-ruledSurfaceEquation\]) and the fact that the binormal vector $\vec{B}$ of $\alpha$ is perpendicular to $sp\left\{ \vec{T},\vec{N} \right\}$, we get $\left\langle {{{\vec{q}}}_{o}},\vec{B} \right\rangle =0$. Therefore, we can define a unit vector $\vec{r} (s)$ as follows, $$\label{r-vector} \vec{r}={{\vec{q}}_{o}}\times \vec{B}=\sin \theta \,\vec{T}-\cos \theta \vec{N}.$$ Then, the frame $\left\{ {{{\vec{q}}}_{o}},\vec{B},\vec{r} \right\}$ is an orthonormal moving frame along $\alpha$ on the OT-ruled surface $\varphi_{(\alpha, q_o)}$. From equations (\[OT-ruledSurfaceEquation\]) and (\[r-vector\]), the relations between that frame and Frenet frame of $\alpha$ are given by $\vec{T}=\cos \theta \,{{\vec{q}}_{o}}+\sin \theta \,\vec{r}$ and $\vec{N}=\sin \theta \,{{\vec{q}}_{o}}-\cos \theta \,\vec{r}$. After some computations, for the derivative formulae of the new frame $\left\{ {{{\vec{q}}}_{o}},\vec{B},\vec{r} \right\}$, we get $$\begin{bmatrix} {{{{\vec{q}}\hspace{2pt}'_{\hspace{-2pt}o}}}} \\ {{\vec{B}}'} \\ {{\vec{r}}\hspace{2pt}'} \\ \end{bmatrix} = \begin{bmatrix} 0 & \mu & -\eta \\ -\mu & 0 & \xi \\ \eta & -\xi & 0 \\ \end{bmatrix} \begin{bmatrix} {{{\vec{q}}}_{o}} \\ {\vec{B}} \\ {\vec{r}} \\ \end{bmatrix}$$ where $\eta (s)={\theta }'+\kappa $, $\mu (s)=\tau \sin \theta $, $\xi (s)=\tau \cos \theta $ are called the curvatures of the OT-ruled surface $\varphi_{(\alpha, q_o)}$ according to the frame $\left\{ {{{\vec{q}}}_{o}},\vec{B},\vec{r} \right\}$. Then, the relationships between the curvatures $\kappa$, $\tau$ of the base curve $\alpha$ and the curvatures $\eta$, $\mu$, $\xi$ of the OT-ruled surface $\varphi_{(\alpha, q_o)}$ are obtained as $\kappa =\eta -{\theta }'$, $\tau =\pm \sqrt{{{\mu }^{2}}+{{\xi }^{2}}}$. Now, using these relationships and considering the characterizations for general helix and slant helix, the following theorem is obtained: \[alphahelixslanthelix\] For the OT-ruled surface $\varphi_{(\alpha, q_o)}$, we have that (i) $\alpha$ is a plane curve if and only if both $\mu$ and $\xi$ vanish. (ii) $\alpha$ is a general helix if and only if the function $\rho (s)=\pm \frac{\sqrt{{{\mu }^{2}}+{{\xi }^{2}}}}{\eta - \theta'}$ is constant. (iii) $\alpha$ is a slant helix if and only if the function $$\sigma (s)=\pm \frac{(\mu {\mu }'+\xi {\xi }')(\eta -{\theta }')-({{\mu }^{2}}+{{\xi }^{2}})({\eta }'-{\theta }'')}{{{\left[ {{(\eta -{\theta }')}^{2}}+{{\mu }^{2}}+{{\xi }^{2}} \right]}^{3/2}}{{\left( {{\mu }^{2}}+{{\xi }^{2}} \right)}^{1/2}}}$$ is constant. Let us now consider the special case that the base curve $\alpha$ is a plane curve, i.e., $\tau = 0$. Then, $\alpha$ lies on the osculating plane $sp\left\{ \vec{T},\vec{N} \right\}$ and has constant binormal vector $\vec{B}$. Since the unit surface normal $\vec{U}$ of OT-ruled surface $\varphi_{(\alpha, q_o)}$ is always perpendicular to both $\vec{q}_o$ and $\vec{T}$, we have that $\vec{U}=\pm \vec{B}$. Then, the OT-ruled surface has a constant unit normal, that is, it is a plane.
Conversely, if the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is a plane with constant unit normal $\vec{U}$, since $\vec{U}\bot sp\left\{ {{{\vec{q}}}_{o}},\vec{T} \right\}$, from (\[OT-ruledSurfaceEquation\]) we get $\vec{U}\bot sp\left\{ \vec{T},\vec{N} \right\}$ which gives $\vec{U}=\pm \vec{B}$ is a constant vector. Then, $\tau = 0$, i.e., $\alpha$ is a plane curve and we have the followings: \[PlaneOT\] The OT-ruled surface $\varphi_{(\alpha, q_o)}$ is a plane if and only if the base curve $\alpha$ is a plane curve. Clearly, Theorem 3.4 gives the following corollary: \[notTN\] If ${{\varphi}_{(\alpha ,{{q}_{o}})}}\ne {{\varphi}_{(\alpha ,T)}}$ and ${{\varphi}_{(\alpha ,{{q}_{o}})}}\ne {{\varphi}_{(\alpha ,N)}}$, the followings are equivalent $$\begin{split} & (i) \hspace{4pt} \alpha \hspace{4pt} \textit{is a plane curve}. \hspace{16pt} (ii) \hspace{4pt} \textit{The OT-ruled surface} \hspace{4pt} \varphi_{(\alpha, q_o)} \hspace{4pt} \textit{is a plane}. \\ & (iii) \hspace{4pt} \mu=0. \hspace{65pt} (iv) \hspace{4pt} \xi=0. \end{split}$$ Now, we will give other characterizations and geometric properties of the OT-ruled surfaces. The set of the singular points of OT-ruled surface $\varphi_{(\alpha, q_o)}$ is given by $$S = \left\{ ({{s}_{0}},{{u}_{0}}) \in I \times \mathbb{R}: \theta ({{s}_{0}})=k\pi, {{u}_{0}}=0, k\in \mathbb{Z} \right\}.$$ From the partial derivatives of ${{\vec{\varphi }}_{(\alpha, {{q}_{o}})}}(s,u)=\vec{\alpha }(s)+u{{\vec{q}}_{o}}(s)$, we get $$\label{partialderivativesofvarphi} \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}=\cos \theta \,{{\vec{q}}_{o}}+u\mu \vec{B}+(\sin \theta -u\eta )\vec{r}, \hspace{8pt} \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u}={{\vec{q}}_{o}}.$$ Therefore, the direction of surface normal is given by the vector $$\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}\times \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u}=(\sin \theta -u\eta )\vec{B}-u\mu \vec{r}.$$ Then, the OT-ruled surface $\varphi_{(\alpha, q_o)}$ has singular points if and only if the system $$\label{SingularSystem} \begin{cases} \sin \theta -u\eta = 0,\\ u \mu = 0 \end{cases}$$ holds. Let now assume that $u=0$. Then, from the first equality, it follows $\theta ({{s}_{0}})=k\pi, (k \in \mathbb{Z}, {{s}_{0}} \in I)$. When this satisfies for all $s \in I$, we have ${{\varphi }_{(\alpha, {{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$ and the locus of the singular points is the base curve $\alpha$. If $u\ne 0$, from the system (\[SingularSystem\]), we get $u(s)=\frac{\sin \theta }{\eta }$ and $\mu =0$. Since we assume that singular points exist, from Theorem \[PlaneOT\], we have $\tau \ne 0$. Otherwise, the surface is a plane and regular. Then, $\mu=0$ implies that $\sin \theta =0$ which is a contradiction with the assumption that $u \ne 0$. And so, the system (\[SingularSystem\]) only holds if and only if $u=0$, $\theta ({{s}_{0}})=k\pi, (k\in \mathbb{Z}, {{s}_{0}}\in I)$. Hereafter, for the sake of simplicity, we will take $f=\sin \theta -u\eta $ and $g=u\mu $. \[developable\] The OT-ruled surface $\varphi_{(\alpha, q_o)}$ is developable if and only if $\varphi_{(\alpha, q_o)}$ is a plane or ${{\varphi }_{(\alpha, {{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$. For the surface $\varphi_{(\alpha, q_o)}$, we have $\det (\vec{\alpha }', \vec{q}_o, \vec{q}\hspace{2pt}'_{\hspace{-2pt} o}) = \mu \sin \theta$. Considering Theorem \[PlaneOT\], we have the desired result. 
\[cylindrical\] Among all OT-ruled surfaces $\varphi_{(\alpha, q_o)}$, only the plane is cylindrical. Since a ruled surface is called cylindrical if and only if the direction of the ruling is a constant vector, we get $\vec{q}\hspace{2pt}'_{\hspace{-2pt} o} = 0$ if and only if $$\label{diffq_o} -\eta \sin \theta \vec{T} + \eta \cos \theta \vec{N} + \tau \sin \theta \vec{B} = 0.$$ If ${{\varphi }_{(\alpha, {{q}_{o}})}}={{\varphi }_{(\alpha, T)}}$, then $\theta (s)=k \pi$ for all $s\in I$ and (\[diffq\_o\]) gives $\eta =0$, which implies that $\kappa =0$, which is a contradiction with Remark \[remark\]. If ${{\varphi }_{(\alpha ,{{q}_{o}})}}\ne {{\varphi }_{(\alpha ,T)}}$, then from (\[diffq\_o\]) we have $\tau =0$, $\eta =0$ which gives $\theta (s)=-\int_{0}^{s}{\kappa (s)ds}$ and Theorem \[PlaneOT\] gives that $\varphi_{(\alpha, q_o)}$ is a plane. Proposition \[developable\] and Proposition \[cylindrical\] give the following corollary: If the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is cylindrical, then it is a plane with the parametric form $${{\vec{\varphi }}_{(\alpha ,{{q}_{o}})}}(s,u)=\vec{\alpha }(s)+u\left( \cos \left( \int_{0}^{s}{\kappa (s)ds} \right)\vec{T}(s)-\sin \left( \int_{0}^{s}{\kappa (s)ds} \right)\vec{N}(s) \right)$$ \[strictionlineprop\] The base curve $\alpha$ of the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is also its striction line if and only if $\theta (s)=-\int_{0}^{s}{\kappa (s)ds}$ or ${{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$. The base curve $\alpha$ is the striction line of $\varphi_{(\alpha, q_o)}$ if and only if $\langle \vec{\alpha}', \vec{q}\hspace{2pt}'_{\hspace{-2pt} o} \rangle = 0$. Therefore, we get $\langle \vec{\alpha}', \vec{q}\hspace{2pt}'_{\hspace{-2pt} o} \rangle = - \eta \sin \theta$ which gives the desired result. From Proposition \[strictionlineprop\], it is clear that the set of the intersection points of base curve $\alpha$ and striction curve $c$ is $V = S \cup Y$, where $S$ is the set of singular points of $\varphi_{(\alpha, q_o)}$ and $$Y=\left\{ ({{s}_{0}},{{u}_{0}}) \in I \times \mathbb{R}: {\theta }'({{s}_{0}})=-\kappa ({{s}_{0}}), {{u}_{0}}=0 \right\}.$$ It is clear that the points of $Y$ are non-singular. Let now investigate the special curves lying on the OT-surface $\varphi_{(\alpha, q_o)}$. The Gauss map (or the unit surface normal) of the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is given by $$\label{GaussMap} \vec{U}(s,u)=\frac{1}{\sqrt{{{f}^{2}}+{{g}^{2}}}}\left( f\vec{B}-g\,\vec{r} \right).$$ Then, for the base curve $\alpha$ we have the followings: The base curve $\alpha$ is a geodesic on the OT-ruled surface $\varphi_{(\alpha, q_o)}$ if and only if $\alpha$ is a straight line. We know that $\alpha$ is a geodesic on $\varphi_{(\alpha, q_o)}$ if and only if the condition $$\label{geodesiccondition} \vec{U} \times {\vec{\alpha }}'' = 0$$ satisfies. Then, by using (\[GaussMap\]), from (\[geodesiccondition\]) we get $$\vec{U}\times {\vec{\alpha }}''=\frac{1}{\sqrt{{{f}^{2}}+{{g}^{2}}}}\left( -\kappa f\,\vec{T}-g\kappa \sin \theta \,\vec{B} \right)$$ and that $\alpha$ is a geodesic curve on $\varphi_{(\alpha, q_o)}$ if and only if the system $$\begin{cases} \kappa f = 0\\ g \kappa \sin \theta = 0 \end{cases}$$ holds. If we assume ${{\varphi }_{(\alpha ,{{q}_{o}})}}\ne {{\varphi }_{(\alpha ,T)}}$, from the last system it follows $$\kappa f=0,\hspace{4pt} g\kappa =0,$$ which gives that $\kappa=0$, i.e., $\alpha$ is a straight line or the system $$f=0, \hspace{4pt} g=0,$$ holds. 
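As an independent check of the identity $\det (\vec{\alpha }', \vec{q}_o, \vec{q}\hspace{2pt}'_{\hspace{-2pt} o}) = \mu \sin \theta$ used in this proof, the following short sympy sketch (ours; the circular helix and the angle function $\theta(s)=s/3$ are arbitrary illustrative choices, not an example from the paper) evaluates both sides at one parameter value.

```python
import sympy as sp

s = sp.symbols('s', real=True)
a, b = sp.S(1), sp.Rational(1, 2)          # helix radius and pitch (illustrative)
c = sp.sqrt(a**2 + b**2)
theta = s / 3                              # an arbitrary smooth angle function

alpha = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])   # unit-speed circular helix
T = alpha.diff(s)
Tp = T.diff(s)
kappa = sp.sqrt(Tp.dot(Tp))                # curvature a/c^2
N = Tp / kappa                             # principal normal
tau = -T.cross(N).diff(s).dot(N)           # torsion b/c^2

q_o = sp.cos(theta)*T + sp.sin(theta)*N                    # ruling of the OT-ruled surface
lhs = sp.Matrix.hstack(T, q_o, q_o.diff(s)).det()          # det(alpha', q_o, q_o')
rhs = tau * sp.sin(theta)**2                               # mu*sin(theta), with mu = tau*sin(theta)
print((lhs - rhs).subs(s, sp.Rational(7, 10)).evalf())     # ~0 up to rounding
```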
But for the last system, considering (\[SingularSystem\]), it follows that the system has a solution as a curve if and only if ${{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$ which is a contradiction by the assumption ${{\varphi }_{(\alpha ,{{q}_{o}})}}\ne {{\varphi }_{(\alpha ,T)}}$ and so, we eliminate this case. If ${{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$, considering Remark \[remark\], we should take $\kappa \ne 0$. But for this case, the system gives that $\eta =\kappa =0$, which is a contradiction. Then we have that $\alpha$ is a geodesic on $\varphi_{(\alpha, q_o)}$ if and only if $\alpha$ is a straight line. Let $\alpha$ have non-vanishing curvature $\kappa$. Then, $\alpha$ is an asymptotic curve on the OT-ruled surface $\varphi_{(\alpha, q_o)}$ if and only if one of the followings hold: $$(i) \hspace{4pt} \varphi_{(\alpha, q_o)} \hspace{4pt} \textit{is a plane} \hspace{12pt} (ii) \hspace{4pt} {{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,T)}} \hspace{12pt} (iii) \hspace{4pt} {{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,N)}}.$$ $\alpha$ is an asymptotic curve on $\varphi_{(\alpha, q_o)}$ if and only if $\langle \vec{U},{\vec{\alpha }}'' \rangle =0.$ Then, we get $$\label{asymptoticcondition} \left\langle \vec{U},{\vec{\alpha }}'' \right\rangle =\frac{u\kappa \tau \cos \theta \sin \theta }{\sqrt{{{f}^{2}}+{{g}^{2}}}}$$ From (\[asymptoticcondition\]), we obtain that $\langle \vec{U},{\vec{\alpha }}'' \rangle = 0$ if and only if $\tau=0$ or $\sin \theta = 0$ or $\cos \theta = 0$. The base curve $\alpha$ is a line of curvature on the OT-ruled surface $\varphi_{(\alpha, q_o)}$ if and only if $\varphi_{(\alpha, q_o)}$ is a plane. The curve $\alpha$ is a line of curvature on the OT-ruled surface $\varphi_{(\alpha, q_o)}$ if and only if $\vec{U}_\alpha' \times \vec{\alpha}' = 0$ holds where $\vec{U}_\alpha$ is the unit surface normal along the curve $\alpha$ and for which we have ${{\vec{U}}_{\alpha }}=\vec{B}$. Then, it follows $$\label{lineofcurvaturecondition} {{{\vec{U}}'}_{\alpha }}\times {\vec{\alpha }}'=-\tau \vec{B}$$ The equation (\[lineofcurvaturecondition\]) is equal to zero if and only if $\tau = 0$ and from Theorem \[PlaneOT\], we have that $\varphi_{(\alpha, q_o)}$ is a plane. Now, let us examine first and second fundamental coefficients of the OT-ruled surface $\varphi_{(\alpha, q_o)}$. From (\[EFGformulas\]) and (\[LMNformulas\]), we get $$\label{EFG} E={{f}^{2}}+{{g}^{2}}+{{\cos }^{2}}\theta, \hspace{4pt} F=\cos \theta, \hspace{4pt} G=1$$ $$\label{LMN} L=\frac{-({{f}^{2}}+{{g}^{2}})\xi +\mu \sin \theta \cos \theta -{{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}} }{\sqrt{{{f}^{2}}+{{g}^{2}}}}, \hspace{4pt} M=\frac{\mu \sin \theta }{\sqrt{{{f}^{2}}+{{g}^{2}}}}, \hspace{4pt} N=0$$ By using the fundamental coefficients computed in (\[EFG\]) and (\[LMN\]), from (\[GaussianCurvatureFormula\]) and (\[MeanCurvatureFormula\]) the Gaussian curvature $K$ and the mean curvature $H$ of OT-ruled surface $\varphi_{(\alpha, q_o)}$ are obtained as $$\label{KandH} K=-\frac{{{\mu }^{2}}{{\sin }^{2}}\theta }{{{\left( {{f}^{2}}+{{g}^{2}} \right)}^{2}}}, \hspace{4pt} H=-\frac{({{f}^{2}}+{{g}^{2}})\xi +\mu \sin \theta \cos \theta +{{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}}{2{{\left( {{f}^{2}}+{{g}^{2}} \right)}^{3/2}}}$$ respectively. From (\[KandH\]), it follows that $K=0$ if and only if $\tau = 0$ or $\sin \theta = 0$. This result coincides with Proposition \[developable\]. 
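As a quick consistency check of Eq. (\[KandH\]), the Gaussian and mean curvatures can be recomputed directly from the coefficients (\[EFG\]) and (\[LMN\]) with the standard formulas $K=\frac{LN-M^{2}}{EG-F^{2}}$ and $H=\frac{EN-2FM+GL}{2\left( EG-F^{2} \right)}$. The SymPy fragment below is a sketch of this algebra only: $f$, $g$, $\theta$, $\xi$, $\mu$ and the derivative ${{\left( \frac{f}{g} \right)}_{s}}$ are treated as independent symbols, which is sufficient for the purely algebraic identity.

```python
# Consistency check of Eq. (KandH) from the fundamental coefficients (EFG) and (LMN).
# Sketch only: f, g, theta, xi, mu and dfg := d/ds (f/g) are independent symbols here.
import sympy as sp

f, g, theta, xi, mu, dfg = sp.symbols('f g theta xi mu dfg', real=True)
w = sp.sqrt(f**2 + g**2)

E, F, G = f**2 + g**2 + sp.cos(theta)**2, sp.cos(theta), sp.Integer(1)
L = (-(f**2 + g**2)*xi + mu*sp.sin(theta)*sp.cos(theta) - g**2*dfg)/w
M = mu*sp.sin(theta)/w
N = sp.Integer(0)

K = sp.simplify((L*N - M**2)/(E*G - F**2))
H = sp.simplify((E*N - 2*F*M + G*L)/(2*(E*G - F**2)))

K_paper = -mu**2*sp.sin(theta)**2/(f**2 + g**2)**2
H_paper = -((f**2 + g**2)*xi + mu*sp.sin(theta)*sp.cos(theta) + g**2*dfg) \
          / (2*(f**2 + g**2)**sp.Rational(3, 2))
print(sp.simplify(K - K_paper))   # 0
print(sp.simplify(H - H_paper))   # 0
```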
It is clear that if $\tau = 0$, then $H=0$ and the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is minimal. If $\tau \ne 0$ and $\sin \theta =0$, then from (\[KandH\]) we get $H=\frac{\tau }{2u\eta }\ne 0$ Therefore, in this case, the tangent surface ${{\varphi }_{(\alpha ,T)}}$ cannot be minimal. Then, followings are obtained: (i) The OT-ruled surface $\varphi_{(\alpha, q_o)}$ is minimal if and only if the equality $$({{f}^{2}}+{{g}^{2}})\xi +\mu \sin \theta \cos \theta +{{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}=0$$ satisfies. (ii) If $\tau \ne 0$, there is no minimal tangent surface ${{\varphi }_{(\alpha ,T)}}$. (iii) The principal normal surface ${{\varphi }_{(\alpha ,N)}}$ is minimal if and only if $fu{\mu }'-g{{f}_{s}}=0$, where ${{f}_{s}}=\partial f/\partial s$. Furthermore, considering Catalan Theorem, we have the following corollary: If the base curve $\alpha$ is not a plane curve, the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is a helicoid if and only if $({{f}^{2}}+{{g}^{2}})\xi +\mu \sin \theta \cos \theta +{{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}=0$ holds. Now, we will consider the special curves lying on an OT-ruled surface $\varphi_{(\alpha, q_o)}$. Let us consider the tangent space ${{T}_{p}}{{\varphi }_{(\alpha ,{{q}_{o}})}}$ and its base $\left\{ \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s},\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u} \right\}$ at a point $p\in {{\varphi }_{(\alpha ,{{q}_{o}})}}$. For any tangent vector ${{\vec{v}}_{p}}\in {{T}_{p}}{{\varphi }_{(\alpha ,{{q}_{o}})}}$, the Weingarten map of the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is defined by ${{S}_{p}}=-{{D}_{p}}\vec{v}:{{T}_{p}}{{\varphi }_{(\alpha ,{{q}_{o}})}}\to {{T}_{{{{\vec{v}}}_{p}}}}{{S}^{2}}$ where ${{S}^{2}}$ is unit sphere with center origin. Therefore, we have $$\begin{split} S_p \left( \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s} \right) & = - D_{\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}} \vec{U}(s,u),\\ & = A_1 (s,u) \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s} + A_2 (s,u) \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u}, \end{split}$$ and $$\begin{split} S_p \left( \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u} \right) & = - D_{\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u}} \vec{U}(s,u),\\ & = B_1 (s,u) \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s} + B_2 (s,u) \frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u}, \end{split}$$ where $$\begin{split} {{A}_{1}}&=\frac{-1}{{{\left( {{f}^{2}}+{{g}^{2}} \right)}^{3/2}}}\left[ {{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}+\left( {{f}^{2}}+{{g}^{2}} \right)\xi \right],\\ {{A}_{2}}&=\frac{1}{{{\left( {{f}^{2}}+{{g}^{2}} \right)}^{3/2}}}\left[ \left( {{f}^{2}}+{{g}^{2}} \right)(f\mu +g\eta +\xi \cos \theta )+{{g}^{2}}\cos \theta {{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}} \right],\\ {{B}_{1}}&=\frac{\mu \sin \theta }{{{\left( {{f}^{2}}+{{g}^{2}} \right)}^{3/2}}}, \hspace{4pt} {{B}_{2}}=\frac{-\mu \cos \theta \sin \theta }{{{\left( {{f}^{2}}+{{g}^{2}} \right)}^{3/2}}}. 
\end{split}$$ Thus, the matrix form of the Weingarten map can be given by $$\label{WeingartenMap} {S}_{p} = \begin{bmatrix} {{A}_{1}} & {{B}_{1}}\\ {{A}_{2}} & {{B}_{2}}\\ \end{bmatrix}$$ From (\[WeingartenMap\]), one can easily compute the Gaussian curvature $K$ and the mean curvature $H$ by considering the equalities $K=\det ({{S}_{p}})$ and $H=\frac{1}{2}\mathrm{tr}({{S}_{p}})$, and the results given in (\[KandH\]) are obtained. Moreover, from these results, for the parameter curves, we have the following corollary: (i) The parameter curves ${{\vec{\varphi }}_{(\alpha, {{q}_{o}})}}(s,{{u}_{0}})$, ($u_0$ is constant) are lines of curvature if and only if ${{A}_{2}}=0$ or equivalently, $\left( {{f}^{2}}+{{g}^{2}} \right)(f\mu +g\eta +\xi \cos \theta )+{{g}^{2}}\cos \theta {{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}=0$ holds. (ii) The parameter curves ${{\vec{\varphi }}_{(\alpha ,{{q}_{o}})}}({{s}_{0}},u)$, ($s_0$ is constant) are lines of curvature if and only if ${{B}_{1}}=0$ or equivalently, the OT-ruled surface $\varphi_{(\alpha, q_o)}$ is a plane or ${{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha, T)}}$. Considering the characteristic equation $\det \left( {{S}_{p}}-\lambda \text{I} \right)=0$, where $\text{I}$ is the $2 \times 2$ identity matrix, the principal curvatures of the OT-ruled surface $\varphi_{(\alpha, q_o)}$ are obtained as $${{\lambda }_{1,2}}=\frac{{{A}_{1}}+{{B}_{2}}\pm \sqrt{{{\left( {{A}_{1}}-{{B}_{2}} \right)}^{2}}+4{{A}_{2}}{{B}_{1}}}}{2}.$$ Then, the principal directions are obtained as $${{\vec{e}}_{1}}=\frac{1}{{{B}_{1}}}\left( {{B}_{1}}\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}+k{{A}_{2}}\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u} \right), \hspace{4pt} {{\vec{e}}_{2}}=\frac{1}{m{{A}_{2}}}\left( {{B}_{1}}\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}+m{{A}_{2}}\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u} \right),$$ where $k,m$ are scalar functions such that $$\frac{{{\lambda }_{1}}-{{A}_{1}}}{{{A}_{2}}}=\frac{{{B}_{1}}}{{{\lambda }_{1}}-{{B}_{2}}}=k, \hspace{4pt} \frac{{{\lambda }_{2}}-{{A}_{1}}}{{{A}_{2}}}=\frac{{{B}_{1}}}{{{\lambda }_{2}}-{{B}_{2}}}=m.$$ Now, let $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( s(t),u(t) \right)$ be a unit speed curve on $\varphi_{(\alpha, q_o)}$ with arc length parameter $t$ and unit tangent vector ${{\vec{v}}_{p}}\in {{T}_{p}}{{\varphi }_{(\alpha ,{{q}_{o}})}}$ at the point $\beta ({{t}_{o}})=p$ on $\varphi_{(\alpha, q_o)}$. The derivative of $\beta$ with respect to $t$ has the form $$\dot{\vec{\beta }}(t)=\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}\frac{ds}{dt}+\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u}\frac{du}{dt},$$ where $\dot{\vec{\beta }}=\frac{d\vec{\beta }}{dt}$. For this tangent vector, we can write $$\label{TangentVector} {{\vec{v}}_{p}}=C(s,u)\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial s}+D(s,u)\frac{\partial {{{\vec{\varphi }}}_{(\alpha ,{{q}_{o}})}}}{\partial u},$$ where $C,D$ are smooth functions defined by $C(t)=C\left( s(t),u(t) \right)=\frac{ds}{dt}=\dot{s}$ and $D(t)=D\left( s(t),u(t) \right)=\frac{du}{dt}=\dot{u}$. Substituting (\[partialderivativesofvarphi\]) in (\[TangentVector\]) gives $${{\vec{v}}_{p}}=\left( C\cos \theta +D \right){{\vec{q}}_{o}}+Cg\vec{B}+Cf\vec{r},$$ where ${{\left( C\cos \theta +D \right)}^{2}}+{{C}^{2}}({{f}^{2}}+{{g}^{2}})=1$. 
Also, by using the linearity of the Weingarten map, we get $${{S}_{p}}({{\vec{v}}_{p}})=\left[ \cos \theta \left( C{{A}_{1}}+D{{B}_{1}} \right)+\left( C{{A}_{2}}+D{{B}_{2}} \right) \right]{{\vec{q}}_{o}}+\left( C{{A}_{1}}+D{{B}_{1}} \right)\left( g\vec{B}+f\vec{r} \right)$$ and so on, the normal curvature ${{k}_{n}}$ in the direction ${{\vec{v}}_{p}}$ is computed as $$\label{normalcurvature} \begin{split} {{k}_{n}}({{{\vec{v}}}_{p}})&=\left\langle {{S}_{p}}({{{\vec{v}}}_{p}}),{{{\vec{v}}}_{p}} \right\rangle\\ &=C\left[ \left( C\cos \theta +D \right)\left( {{A}_{1}}\cos \theta +{{A}_{2}} \right)+\left( C{{A}_{1}}+D{{B}_{1}} \right)\left( {{f}^{2}}+{{g}^{2}} \right) \right]. \end{split}$$ Then, from (\[normalcurvature\]), we have the following theorem: The surface curve $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( s(t),u(t) \right)$ with unit tangent ${{\vec{v}}_{p}}$ is an asymptotic curve if and only if $\beta (t)$ is a ruling or $\left( C\cos \theta +D \right)\left( {{A}_{1}}\cos \theta +{{A}_{2}} \right)+\left( C{{A}_{1}}+D{{B}_{1}} \right)\left( {{f}^{2}}+{{g}^{2}} \right)=0$ holds. Similarly, the geodesic curvature ${{\kappa }_{g}}$ and the geodesic torsion ${{\tau }_{g}}$ of the curve $\beta (t)={{\varphi }_{(\alpha, {{q}_{o}})}}\left( s(t),u(t) \right)$ are computed as $$\begin{split} {{\kappa }_{g}}=\frac{1}{\sqrt{{{f}^{2}}+{{g}^{2}}}}&\left[ \left( C\cos \theta +D \right) \right.\left( -\left( {{f}^{2}}+{{g}^{2}} \right)\dot{C} \right. \\ & \left. +C\left( \eta f-\mu g \right)\left( 2D+\cos \theta \right)-\frac{1}{2}C{{\left( {{f}^{2}}+{{g}^{2}} \right)}_{s}} \right) \\ & \,+Cg\left( \dot{C}g\cos \theta -Cg{\theta }'\sin \theta -\mu C{{g}^{2}}+fC\eta g+\dot{D}g \right) \\ & \left. \,+Cf\left( \dot{C}f\cos \theta -Cf{\theta }'\sin \theta -\mu fgC+C{{f}^{2}}\eta +\dot{D}f \right) \right] \\ \end{split}$$ and $${{\tau }_{g}}=\sqrt{{{f}^{2}}+{{g}^{2}}}\left[ C\left( C{{A}_{2}}+D{{B}_{2}} \right)-D\left( C{{A}_{1}}+D{{B}_{1}} \right) \right]$$ respectively. Then, we have the followings: The surface curve $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( s(t),u(t) \right)$ with unit tangent ${{\vec{v}}_{p}}$ is a geodesic if and only if $$\begin{split} &\left( C\cos \theta +D \right)\left( -\left( {{f}^{2}}+{{g}^{2}} \right)\dot{C}+C\left( \eta f-\mu g \right)\left( 2D+\cos \theta \right)-\frac{1}{2}C{{\left( {{f}^{2}}+{{g}^{2}} \right)}_{s}} \right) \\ & \,\,\,\,\,\,\,+Cg\left( \dot{C}g\cos \theta -Cg{\theta }'\sin \theta -\mu C{{g}^{2}}+fC\eta g+\dot{D}g \right) \\ & \,\,\,\,\,\,\,+Cf\left( \dot{C}f\cos \theta -Cf{\theta }'\sin \theta -\mu fgC+C{{f}^{2}}\eta +\dot{D}f \right)=0 \\ \end{split}$$ holds. Now, we can investigate some special cases: *Case 1:* Let $\varphi_{(\alpha, q_o)}$ be $\varphi_{(\alpha, T)}$. Then, $$\begin{split} {{k}_{n}}&={{C}^{2}}u\kappa \tau \\ {{\kappa }_{g}}&=C\left( C+D \right)\left[ u{\kappa }'+\kappa \left( 2D+1 \right) \right]+u\kappa \left( \dot{C}D-C\dot{D} \right)+{{C}^{2}}{{u}^{2}}{{\kappa }^{3}} \\ {{\tau }_{g}}&=C\tau \left( C+D \right) \end{split}$$ and for the curve $\beta (t)={{\varphi }_{(\alpha ,T)}}\left( s(t),u(t) \right)$, we have followings: (i) $\beta (t)={{\varphi }_{(\alpha ,T)}}\left( s(t),u(t) \right)$ is an asymptotic curve if and only if $\beta (t)$ is a ruling or $\alpha$ is a plane curve. 
(ii) $\beta (t)={{\varphi }_{(\alpha ,T)}}\left( s(t),u(t) \right)$ is a geodesic if and only if $$C\left( C+D \right)\left[ u{\kappa }'+\kappa \left( 2D+1 \right) \right]+u\kappa \left( \dot{C}D-C\dot{D} \right)+{{C}^{2}}{{u}^{2}}{{\kappa }^{3}}=0$$ holds. (iii) $\beta (t)={{\varphi }_{(\alpha ,T)}}\left( s(t),u(t) \right)$ is a line of curvature if and only if one of the followings holds (a) $\beta (t)$ is a ruling, (b) $\alpha$ is a plane curve, (c) $s(t)=-u(t)+c$, where $c$ is integration constant. *Case 2*: Let $\varphi_{(\alpha, q_o)}$ be $\varphi_{(\alpha, N)}$. Then, $$\begin{split} {{k}_{n}}&=\frac{C\left[ C\left( {{u}^{2}}{\kappa }'\tau +\left( 1-u\kappa \right)u{\tau }' \right)+2D\tau \right]}{\sqrt{{{\left( 1-u\kappa \right)}^{2}}+{{u}^{2}}{{\tau }^{2}}}}\\ {{\kappa }_{g}}&=\frac{1}{\sqrt{{{\left( 1-u\kappa \right)}^{2}}+{{u}^{2}}{{\tau }^{2}}}}\left[ \dot{C}D \right.\left( -\left( {{\left( 1-u\kappa \right)}^{2}}+{{u}^{2}}{{\tau }^{2}} \right) \right. \\ & \hspace{50pt} \left. +2CD\left( \kappa -u\left( {{\kappa }^{2}}+{{\tau }^{2}} \right) \right)-C\left( \left( 1-u\kappa \right){\kappa }'+{{u}^{2}}\tau {\tau }' \right) \right) \\ & \hspace{50pt} +Cu\tau \left( -\tau C{{u}^{2}}{{\tau }^{2}}+C\left( 1-u\kappa \right)u\kappa \tau +\dot{D}u\tau \right) \\ & \hspace{50pt} \left. +C\left( 1-u\kappa \right)\left( -C\left( 1-u\kappa \right)u{{\tau }^{2}}+C\kappa {{\left( 1-u\kappa \right)}^{2}}+\dot{D}\left( 1-u\kappa \right) \right) \right]\\ {{\tau }_{g}}&={{C}^{2}}\tau -\frac{D\left[ Cu\left( u{\kappa }'\tau +\left( 1-u\kappa \right){\tau }' \right)-D\tau \right]}{\sqrt{{{\left( 1-u\kappa \right)}^{2}}+{{u}^{2}}{{\tau }^{2}}}} \end{split}$$ and for the curve $\beta (t)={{\varphi }_{(\alpha ,N)}}\left( s(t),u(t) \right)$ with unit tangent ${{\vec{v}}_{p}}$, we have followings: (i) $\beta (t)={{\varphi }_{(\alpha ,N)}}\left( s(t),u(t) \right)$ is an asymptotic curve if and only if $\beta (t)$ is a ruling or $$C\left( {{u}^{2}}{\kappa }'\tau +\left( 1-u\kappa \right)u{\tau }' \right)+2D\tau =0$$ holds. (ii) $\beta (t)={{\varphi }_{(\alpha ,N)}}\left( s(t),u(t) \right)$ is a geodesic if and only if $$\begin{split} & \dot{C}D\left( -\left( {{\left( 1-u\kappa \right)}^{2}}+{{u}^{2}}{{\tau }^{2}} \right) \right. \\ & \hspace{30pt} \left. +2CD\left( \kappa -u\left( {{\kappa }^{2}}+{{\tau }^{2}} \right) \right)-C\left( \left( 1-u\kappa \right){\kappa }'+{{u}^{2}}\tau {\tau }' \right) \right) \\ & \hspace{30pt} +Cu\tau \left( -\tau C{{u}^{2}}{{\tau }^{2}}+C\left( 1-u\kappa \right)u\kappa \tau +\dot{D}u\tau \right) \\ & \hspace{30pt} +C\left( 1-u\kappa \right)\left( -C\left( 1-u\kappa \right)u{{\tau }^{2}}+C\kappa {{\left( 1-u\kappa \right)}^{2}}+\dot{D}\left( 1-u\kappa \right) \right)=0 \\ \end{split}$$ holds. (iii) $\beta (t)={{\varphi }_{(\alpha ,N)}}\left( s(t),u(t) \right)$ is a line of curvature if and only if $$\frac{{{C}^{2}}}{D}=\frac{Cu\left( u{\kappa }'\tau +\left( 1-u\kappa \right){\tau }' \right)-D\tau }{\tau \sqrt{{{\left( 1-u\kappa \right)}^{2}}+{{u}^{2}}{{\tau }^{2}}}}$$ holds. *Case 3*: Let $s={{s}_{0}}$ be constant. Then, $C=\dot{s}=0$ and we get that ${{\vec{v}}_{p}}={{\vec{q}}_{o}}$, i.e., $\beta (t)$ is a ruling. Then, we have followings: $${{k}_{n}}=0, \hspace{4pt} {{\kappa }_{g}}=0, \hspace{4pt} {{\tau }_{g}}=-\frac{{{D}^{2}}\mu \sin \theta }{{{f}^{2}}+{{g}^{2}}}$$ which give us (i) All rulings are asymptotic. (ii) All rulings are geodesic. 
(iii) The ruling $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( {{s}_{0}},u(t) \right)$is a line of curvature if and only if $\mu \sin \theta =0$ which suggests that either ${{\varphi }_{(\alpha ,{{q}_{o}})}}={{\varphi }_{(\alpha ,T)}}$ or $\alpha$ is a plane curve. *Case 4*: Let $u={{u}_{0}}$ be constant. Then, $D=\dot{u}=0$ and we have followings: $$\begin{split} {{k}_{n}}&=\frac{{{C}^{2}}}{\sqrt{{{f}^{2}}+{{g}^{2}}}}\left[ \left( f\mu +g\eta \right)\cos \theta -{{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}-\left( {{f}^{2}}+{{g}^{2}} \right)\xi \right]\\ {{\kappa }_{g}}&=\frac{1}{\sqrt{{{f}^{2}}+{{g}^{2}}}}\left[ C\cos \theta \left( -\left( {{f}^{2}}+{{g}^{2}} \right)\dot{C} \right. \right. \\ & \hspace{30pt} \left. +C\cos \theta \left( \eta f-\mu g \right)-\frac{1}{2}C{{\left( {{f}^{2}}+{{g}^{2}} \right)}_{s}} \right) \\ & \hspace{30pt} +Cg\left( \dot{C}g\cos \theta -Cg{\theta }'\sin \theta -\mu C{{g}^{2}}+fC\eta g \right) \\ & \hspace{30pt} \left. \,+Cf\left( \dot{C}f\cos \theta -Cf{\theta }'\sin \theta -\mu fgC+C{{f}^{2}}\eta \right) \right] \\ {{\tau }_{g}}&=\frac{{{C}^{2}}}{{{f}^{2}}+{{g}^{2}}}\left[ \left( {{f}^{2}}+{{g}^{2}} \right)(f\mu +g\eta +\xi \cos \theta )+{{g}^{2}}\cos \theta {{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}} \right] \end{split}$$ (i) The parameter curve $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( s(t),{{u}_{0}} \right)$ is an asymptotic curve if and only if $$\left( f\mu +g\eta \right)\cos \theta -{{g}^{2}}{{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}-\left( {{f}^{2}}+{{g}^{2}} \right)\xi =0$$ holds. (ii) The parameter curve $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( s(t),{{u}_{0}} \right)$is a geodesic if and only if $$\begin{split} & C\cos \theta \left( -\left( {{f}^{2}}+{{g}^{2}} \right)\dot{C} + C\cos \theta \left( \eta f-\mu g \right)-\frac{1}{2}C{{\left( {{f}^{2}}+{{g}^{2}} \right)}_{s}} \right) \\ & \hspace{15pt} +Cg\left( \dot{C}g\cos \theta -Cg{\theta }'\sin \theta -\mu C{{g}^{2}}+fC\eta g \right) \\ & \hspace{15pt} +Cf\left( \dot{C}f\cos \theta -Cf{\theta }'\sin \theta -\mu fgC+C{{f}^{2}}\eta \right)=0 \\ \end{split}$$ holds. (iii) The parameter curve $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( s(t),{{u}_{0}} \right)$is a line of curvature if and only if $$\left( {{f}^{2}}+{{g}^{2}} \right)(f\mu +g\eta +\xi \cos \theta )+{{g}^{2}}\cos \theta {{\left( \frac{f}{g} \right)}_{\hspace{-2pt}s}}=0$$ holds. *Case 5*: Let $C=\dot{s}$, $D=\dot{u}$ be non-zero constants. Then, the curve has the parametric form $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( {{c}_{1}}t+{{c}_{2}},{{d}_{1}}t+{{d}_{2}} \right)$ where ${{c}_{i}}, {{d}_{i}}, (i=1,2)$ are constants and we have $$\begin{split} {{k}_{n}}&=C\left[ \left( C\cos \theta +D \right)\left( {{A}_{1}}\cos \theta +{{A}_{2}} \right)+\left( C{{A}_{1}}+D{{B}_{1}} \right)\left( {{f}^{2}}+{{g}^{2}} \right) \right]\\ {{\kappa }_{g}}&=\frac{1}{\sqrt{{{f}^{2}}+{{g}^{2}}}}\left[ \left( C\cos \theta +D \right) \right.\left( C\left( \eta f-\mu g \right)\left( 2D+\cos \theta \right)-\frac{1}{2}C{{\left( {{f}^{2}}+{{g}^{2}} \right)}_{s}} \right) \\ & \hspace{15pt} +Cg\left( -Cg{\theta }'\sin \theta -\mu C{{g}^{2}}+fC\eta g \right)\left. 
\,+Cf\left( -Cf{\theta }'\sin \theta -\mu fgC+C{{f}^{2}}\eta \right) \right],\\ {{\tau }_{g}}&=\sqrt{{{f}^{2}}+{{g}^{2}}}\left[ C\left( C{{A}_{2}}+D{{B}_{2}} \right)-D\left( C{{A}_{1}}+D{{B}_{1}} \right) \right] \end{split}$$ which give followings: (i) $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( {{c}_{1}}t+{{c}_{2}},{{d}_{1}}t+{{d}_{2}} \right)$ is an asymptotic curve if and only if $$\left( C\cos \theta +D \right)\left( {{A}_{1}}\cos \theta +{{A}_{2}} \right)+\left( C{{A}_{1}}+D{{B}_{1}} \right)\left( {{f}^{2}}+{{g}^{2}} \right)=0$$ holds. (ii) $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( {{c}_{1}}t+{{c}_{2}},{{d}_{1}}t+{{d}_{2}} \right)$ is a geodesic if and only if $$\begin{split} &\left( C\cos \theta +D \right)\left( +C\left( \eta f-\mu g \right)\left( 2D+\cos \theta \right)-\frac{1}{2}C{{\left( {{f}^{2}}+{{g}^{2}} \right)}_{s}} \right) \\ & \hspace{15pt} +Cg\left( -Cg{\theta }'\sin \theta -\mu C{{g}^{2}}+fC\eta g \right)+Cf\left( -Cf{\theta }'\sin \theta -\mu fgC+C{{f}^{2}}\eta \right)=0 \\ \end{split}$$ holds. (iii) $\beta (t)={{\varphi }_{(\alpha ,{{q}_{o}})}}\left( {{c}_{1}}t+{{c}_{2}},{{d}_{1}}t+{{d}_{2}} \right)$ is a line of curvature if and only if $$\frac{C{{A}_{1}}+D{{B}_{1}}}{C{{A}_{2}}+D{{B}_{2}}}=\text{constant}$$ holds. Let now consider the Frenet frame of a non-cylindrical OT-ruled surface $\varphi_{(\alpha, q_o)}$. Differentiating the ruling ${{\vec{q}}_{o}}=\cos \theta \vec{T}+\sin \theta \vec{N}$, it follows $$\label{derivativeofruling} \vec{q}\hspace{2pt}'_{\hspace{-2pt} o} = -\eta \sin \theta \,\vec{T}+\eta \cos \theta \,\vec{N}+\tau \sin \theta \,\vec{B}$$ Then, the central normal and central tangent vectors of OT-ruled surface $\varphi_{(\alpha, q_o)}$ are computed as $$\label{handavectors} \begin{split} \vec{h}&=\frac{1}{\sqrt{{{\eta }^{2}}+{{\tau }^{2}}{{\sin }^{2}}\theta }}\left( -\eta \sin \theta \,\vec{T}+\eta \cos \theta \,\vec{N}+\tau \sin \theta \,\vec{B} \right) \\ \vec{a}&=\frac{1}{\sqrt{{{\eta }^{2}}+{{\tau }^{2}}{{\sin }^{2}}\theta }}\left( \tau {{\sin }^{2}}\theta \,\vec{T}-\tau \cos \theta \sin \theta \,\vec{N}+\eta \vec{B} \right) \\ \end{split}$$ respectively. From the equations (\[derivativeofruling\]) and (\[handavectors\]), we have following theorem: For the OT-ruled surface $\varphi_{(\alpha, q_o)}$ the followings are equivalent: (i) The angle between the vectors ${{\vec{q}}_{o}}$ and $\vec{T}$ is given by $\theta =-\int_{0}^{s}{\kappa ds}$. (ii) The central normal vector $\vec{h}$ coincides with the binormal vector $\vec{B}$ of $\alpha$. (iii) The central tangent vector $\vec{a}$ lies in the osculating plane of $\alpha$. Let the angle $\theta $ be given by $\theta =-\int_{0}^{s}{\kappa d}s$. Then, we get $\eta =0$. Thus, the proof is clear from (\[handavectors\]). Let the angle between the vectors ${{\vec{q}}_{o}}$ and $\vec{T}$ is given by $\theta =-\int_{0}^{s}{\kappa d}s$. Then, $\alpha $ is a general helix if and only if the OT-ruled surface ${{\varphi }_{(\alpha ,{{q}_{o}})}}$ is an $h$-slant ruled surface. The Frenet frame $\left\{ {{{\vec{q}}}_{o}},\vec{h},\vec{a} \right\}$ of OT-ruled surface ${{\varphi }_{(\alpha ,{{q}_{o}})}}$ coincides with the Frenet frame $\left\{ \vec{T},\vec{N},\vec{B} \right\}$ of base curve $\alpha $ if and only if ${{\varphi }_{(\alpha ,{{q}_{o}})}}$ is the tangent surface ${{\varphi }_{(\alpha ,T)}}$ of $\alpha$. 
Examples ======== Let consider the general helix curve ${{\alpha }_{1}}$ given by the parametrization $${{\vec{\alpha }}_{1}}(s)=\left( \cos \left( \frac{s}{\sqrt{2}} \right),\sin \left( \frac{s}{\sqrt{2}} \right),\frac{s}{\sqrt{2}} \right)$$ For the required Frenet elements of ${{\alpha }_{1}}$, we obtain $$\begin{split} & \vec{T}(s)=\left( -\frac{1}{\sqrt{2}}\sin \left( \frac{s}{\sqrt{2}} \right),\frac{1}{\sqrt{2}}\cos \left( \frac{s}{\sqrt{2}} \right),\frac{1}{\sqrt{2}} \right), \hspace{4pt} \vec{N}(s)=\left( -\cos \left( \frac{s}{\sqrt{2}} \right),-\sin \left( \frac{s}{\sqrt{2}} \right),0 \right), \\ & \kappa (s)=\frac{1}{2}, \hspace{4pt} \tau (s)=\frac{1}{2}. \end{split}$$ By choosing $\theta (s)=s$, we get $$\begin{split} {{{\vec{q}}}_{o}}(s)&=\left( -\frac{1}{\sqrt{2}}\cos (s)\sin \left( \frac{s}{\sqrt{2}} \right)-\sin (s)\cos \left( \frac{s}{\sqrt{2}} \right) \right., \\ & \hspace{20pt} \left. \frac{1}{\sqrt{2}}\cos (s)\cos \left( \frac{s}{\sqrt{2}} \right)-\sin (s)\sin \left( \frac{s}{\sqrt{2}} \right),\,\,\frac{1}{\sqrt{2}}\cos (s) \right).\\ \end{split}$$ and the OT-ruled surface ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$ has the parametrization $$\begin{split} {{{\vec{\varphi }}}_{_{1}({{\alpha }_{1}},{{q}_{o}})}}&=\left( \cos \left( \frac{s}{\sqrt{2}} \right)+u\left( -\frac{1}{\sqrt{2}}\cos (s)\sin \left( \frac{s}{\sqrt{2}} \right)-\sin (s)\cos \left( \frac{s}{\sqrt{2}} \right) \right) \right., \\ & \hspace{20pt} \sin \left( \frac{s}{\sqrt{2}} \right)+u\left( \frac{1}{\sqrt{2}}\cos (s)\cos \left( \frac{s}{\sqrt{2}} \right)-\sin (s)\sin \left( \frac{s}{\sqrt{2}} \right) \right), \\ & \hspace{20pt} \left. \frac{s}{\sqrt{2}}+\frac{1}{\sqrt{2}}u\cos (s) \right).\\ \end{split}$$ From (\[strictionline\]), the equation of the striction line of OT-ruled surface ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$ is given by $$\begin{split} {{{\vec{c}}}_{1}}(s)&=\left( \frac{\frac{3\sqrt{2}}{2}\sin (2s)\sin \left( \frac{\sqrt{2}}{2}s \right)-\cos \left( \frac{\sqrt{2}}{2}s \right)\left( 5{{\cos }^{2}}(s)+4 \right)}{{{\cos }^{2}}(s)-10}, \right. \\ & \hspace{20pt} -\frac{\frac{3\sqrt{2}}{2}\sin (2s)\cos \left( \frac{\sqrt{2}}{2}s \right)+\sin \left( \frac{\sqrt{2}}{2}s \right)\left( 5{{\cos }^{2}}(s)+4 \right)}{{{\cos }^{2}}(s)-10}, \\ & \hspace{20pt} \left. \frac{\sqrt{2}\left( s{{\cos }^{2}}(s)-3\sin (2s)-10s \right)}{2\left( {{\cos }^{2}}(s)-10 \right)} \right). \\ \end{split}$$ The curvatures of ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$ are computed as $\eta (s)=\frac{3}{2}$, $\xi (s)=\frac{1}{2}\cos (s)$, $\mu (s)=\frac{1}{2}\sin (s)$ and the functions $f$ and $g$ are given by $f(s,u)=\sin (s)+\frac{3}{2}u$, $g(s,u)=\frac{1}{2}u\sin (s)$. The graph of ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$ for the intervals $s\in \left[ 0,3\pi \right]$, $u\in \left[ -1,1 \right]$ is given in Figure \[fig1\]. From Proposition \[strictionlineprop\], the base curve ${{\alpha }_{1}}$ (red) and striction line ${{c}_{1}}$ (blue) intersect at the points ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}} (0,0)$, ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}} (\pi,0)$, ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}} (2\pi,0)$, ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}} (3\pi,0)$ which are also singular points of ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$ and shown with black color in Figure \[fig1\]. 
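A quick numerical check of these statements is given by the sketch below (not part of the paper; the finite-difference step and the sample points are arbitrary choices). It builds ${{\vec{\varphi }}_{_{1}({{\alpha }_{1}},{{q}_{o}})}}(s,u)$ directly from $\vec{\alpha}_1$, $\vec{T}$, $\vec{N}$ and $\theta(s)=s$, and evaluates the surface normal $\frac{\partial {{{\vec{\varphi }}}}}{\partial s}\times \frac{\partial {{{\vec{\varphi }}}}}{\partial u}$ by finite differences: the normal degenerates at $(s,u)=(\pi,0)$ and $(2\pi,0)$, i.e. at $\theta=k\pi$, $u=0$, but not at nearby regular points.

```python
# Numerical check of the singular points of the OT-ruled surface of Example 1.
# Sketch only: the finite-difference step h and the tested points are arbitrary.
import numpy as np

def T(s):                                    # unit tangent of alpha_1
    return np.array([-np.sin(s/np.sqrt(2))/np.sqrt(2),
                      np.cos(s/np.sqrt(2))/np.sqrt(2),
                      1.0/np.sqrt(2)])

def N(s):                                    # principal normal of alpha_1
    return np.array([-np.cos(s/np.sqrt(2)), -np.sin(s/np.sqrt(2)), 0.0])

def alpha(s):
    return np.array([np.cos(s/np.sqrt(2)), np.sin(s/np.sqrt(2)), s/np.sqrt(2)])

def phi(s, u):        # phi_1(s,u) = alpha + u (cos(theta) T + sin(theta) N), theta(s) = s
    return alpha(s) + u*(np.cos(s)*T(s) + np.sin(s)*N(s))

def normal(s, u, h=1e-6):                    # finite-difference normal d_s phi x d_u phi
    dphi_s = (phi(s + h, u) - phi(s - h, u))/(2*h)
    dphi_u = (phi(s, u + h) - phi(s, u - h))/(2*h)
    return np.cross(dphi_s, dphi_u)

for s, u in [(np.pi, 0.0), (2*np.pi, 0.0), (np.pi/2, 0.0), (np.pi, 0.3)]:
    print((round(s, 3), u), np.linalg.norm(normal(s, u)))
# The norm is ~0 at (pi, 0) and (2 pi, 0), i.e. theta = k*pi and u = 0, and of order one
# otherwise, in agreement with the singular set S and the black points of Figure [fig1].
```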
![The OT-ruled surface ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$[]{data-label="fig1"}](fig1a.png){width="\textwidth"} ![The OT-ruled surface ${{\varphi }_{_{1}({{\alpha }_{1}},{{q}_{o}})}}$[]{data-label="fig1"}](fig1b.png){width="\textwidth"} Let the curve ${{\alpha }_{2}}$ be given by the parametrization $${{\vec{\alpha }}_{2}}(s)=\left( \frac{3}{2}\cos \left( \frac{s}{2} \right)+\frac{1}{6}\cos \left( \frac{3s}{2} \right),\frac{3}{2}\sin \left( \frac{s}{2} \right)+\frac{1}{6}\sin \left( \frac{3s}{2} \right),\sqrt{3}\cos \left( \frac{s}{2} \right) \right)$$ whose required Frenet elements are $$\begin{split} \vec{T}(s)&=\left( -\frac{3}{4}\sin \left( \frac{s}{2} \right)-\frac{1}{4}\sin \left( \frac{3s}{2} \right),\frac{3}{4}\cos \left( \frac{s}{2} \right)+\frac{1}{4}\cos \left( \frac{3s}{2} \right),-\frac{\sqrt{3}}{2}\sin \left( \frac{s}{2} \right) \right),\\ \vec{N}(s)&=\left( -\frac{\sqrt{3}}{2}\cos (s),-\frac{\sqrt{3}}{2}\sin (s),-\frac{1}{2} \right),\\ \kappa (s)&=\frac{\sqrt{3}}{2}\cos \left( \frac{s}{2} \right), \hspace{4pt} \tau (s)=-\frac{\sqrt{3}}{2}\sin \left( \frac{s}{2} \right), \end{split}$$ where we calculate $$\frac{{{\kappa }^{2}}}{{{\left( {{\kappa }^{2}}+{{\tau }^{2}} \right)}^{3/2}}}{{\left( \frac{\tau }{\kappa } \right)}^{\prime }}=-\frac{\sqrt{3}}{3}=\textnormal{constant}$$ Therefore, we obtain that ${{\alpha}_{2}}$ is a slant helix. By choosing $\theta (s)=\frac{s}{2}$, we get $$\begin{split} {{{\vec{q}}}_{o}}(s)&=\left( -\frac{1}{2}\sin \left( \frac{s}{2} \right)\left( 2{{\cos }^{2}}\left( \frac{s}{2} \right)\left( \cos \left( \frac{s}{2} \right)+\sqrt{3} \right)+\cos \left( \frac{s}{2} \right)-\sqrt{3} \right), \right. \\ & \hspace{20pt} \left. \cos \left( \frac{s}{2} \right)\left( {{\cos }^{2}}\left( \frac{s}{2} \right)\left( \sqrt{3}+\cos \left( \frac{s}{2} \right) \right)-\sqrt{3} \right),-\frac{1}{2}\sin \left( \frac{s}{2} \right)\left( \sqrt{3}\cos \left( \frac{s}{2} \right)+1 \right) \right). \\ \end{split}$$ Then, the parametrization of the OT-ruled surface ${{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}}$ and its striction line ${{c}_{2}}$ can be written easily by using the equalities (\[OT-ruledSurfaceEquation\]) and (\[strictionline\]), respectively. The curvatures of that surface are $$\eta (s)=\frac{1}{2}+\frac{\sqrt{3}}{2}\cos \left( \frac{s}{2} \right), \hspace{4pt} \xi (s)=-\frac{\sqrt{3}}{4}\sin (s), \hspace{4pt} \mu (s)=-\frac{\sqrt{3}}{2}{{\sin }^{2}}\left( \frac{s}{2} \right).$$ Furthermore, the functions $f$ and $g$ are calculated as $$f(s,u)=\sin \left( \frac{s}{2} \right)+u\left( \frac{1}{2}+\frac{\sqrt{3}}{2}\cos \left( \frac{s}{2} \right) \right), \hspace{4pt} g(s,u)=-\frac{\sqrt{3}}{2}u{{\sin }^{2}}\left( \frac{s}{2} \right).$$ The graph of ${{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}}$ for intervals $s\in \left[ -2\pi ,2\pi \right]$ and $u\in \left[ -1,1 \right]$ is given in Figure \[fig2\]. From Proposition \[strictionlineprop\], the base curve ${{\alpha }_{2}}$ (red) and striction line ${{c}_{2}}$ (blue) intersect at the points $$\begin{split} p_1 &= {{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}} (-2\pi,0) = {{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}} (\pi,0), \hspace{4pt} p_2 = {{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}} (0,0)\\ p_3 &= {{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}} \left( 2\left( \pi - \arccos \left( \frac{\sqrt{3}}{3} \right) \right), 0 \right) , \hspace{4pt} p_4 = {{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}} \left( 2\left( \pi + \arccos \left( \frac{\sqrt{3}}{3} \right) \right), 0 \right). 
\end{split}$$ Here, ${{p}_{1}},{{p}_{2}}\in S$ are singular points of ${{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}}$ ${{p}_{3}},{{p}_{4}}\in Y$ are non-singular points which are given black and green in Figure \[fig2\], respectively. ![The OT-ruled surface ${{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}}$[]{data-label="fig2"}](fig2a.png){width="\textwidth"} ![The OT-ruled surface ${{\varphi }_{_{2}({{\alpha }_{2}},{{q}_{o}})}}$[]{data-label="fig2"}](fig2b.png){width="\textwidth"} Let ${{\alpha }_{3}}$ be given by the parametrization $$\begin{split} {{{\vec{\alpha }}}_{3}}(s)&=\frac{5\sqrt{26}}{26}\left( \frac{\left( \sqrt{26}-26 \right)\sin \left( \left( 1+\frac{\sqrt{26}}{13} \right)s \right)}{104+8\sqrt{26}}+\frac{\left( \sqrt{26}+26 \right)\sin \left( \left( 1-\frac{\sqrt{26}}{13} \right)s \right)}{-104+8\sqrt{26}}-\frac{1}{2}\sin (s) \right., \\ & \hspace{50pt} \frac{\left( 26-\sqrt{26} \right)\cos \left( \left( 1+\frac{\sqrt{26}}{13} \right)s \right)}{104+8\sqrt{26}}-\frac{\left( \sqrt{26}+26 \right)\cos \left( \left( 1-\frac{\sqrt{26}}{13} \right)s \right)}{-104+8\sqrt{26}}+\frac{1}{2}\cos (s),\, \\ & \hspace{50pt} \left. \frac{5}{4}\cos \left( \frac{\sqrt{26}}{13}s \right) \vphantom{\frac{\left( \sqrt{26}-26 \right)\sin \left( \left( 1+\frac{\sqrt{26}}{13} \right)s \right)}{104+8\sqrt{26}}} \right) \\ \end{split}$$ which is a special chosen of general Salkowski curve defined in [@Monterde]. The required Frenet elements are $$\begin{split} \vec{T}(s)&=\left( -\cos (s)\cos \left( \frac{\sqrt{26}}{26}s \right)-\frac{\sqrt{26}}{26}\sin (s)\sin \left( \frac{\sqrt{26}}{26}s \right) \right., \\ & \hspace{20pt} \left. -\sin (s)\cos \left( \frac{\sqrt{26}}{26}s \right)+\frac{\sqrt{26}}{26}\cos (s)\sin \left( \frac{\sqrt{26}}{26}s \right),-\frac{5\sqrt{26}}{26}\sin \left( \frac{\sqrt{26}}{26}s \right) \right) \\ \vec{N}(s)&=\left( \frac{5\sqrt{26}}{26}\sin (s),-\frac{5\sqrt{26}}{26}\cos (s),-\frac{\sqrt{26}}{26} \right),\\ \kappa (s)&=1, \hspace{4pt} \tau (s)=\tan \left( \frac{\sqrt{26}}{26}s \right). \end{split}$$ By choosing $\theta (s)=\frac{s}{\sqrt{26}}$, we get $$\begin{split} {{{\vec{q}}}_{o}}(s)&=\left( -\frac{\sqrt{26}}{26}\cos \left( \frac{\sqrt{26}}{26}s \right)\sin (s)\sin \left( \frac{\sqrt{26}}{26}s \right)-\cos (s){{\cos }^{2}}\left( \frac{\sqrt{26}}{26}s \right)+\frac{5\sqrt{26}}{26} \right.\sin (s)\sin \left( \frac{\sqrt{26}}{26}s \right), \\ & \hspace{22pt} \frac{\sqrt{26}}{26}\cos \left( \frac{\sqrt{26}}{26}s \right)\cos (s)\sin \left( \frac{\sqrt{26}}{26}s \right)-\frac{5\sqrt{26}}{26}\cos (s)\sin \left( \frac{\sqrt{26}}{26}s \right)-\sin (s){{\cos }^{2}}\left( \frac{\sqrt{26}}{26}s \right), \\ & \hspace{20pt} \left. -\frac{\sqrt{26}}{26}\sin \left( \frac{\sqrt{26}}{26}s \right)\left( 5\cos \left( \frac{\sqrt{26}}{26}s \right)+1 \right) \right) \\ \end{split}$$ Then the parametrization of the OT-ruled surface ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}}$ and the equation of striction line ${{c}_{3}}$ can be written easily from the equalities (\[OT-ruledSurfaceEquation\]) and (\[strictionline\]), respectively. 
This surface has the curvatures $$\eta (s)=1+\frac{\sqrt{26}}{26}, \hspace{4pt} \xi (s)=\sin \left( \frac{\sqrt{26}}{26}s \right), \hspace{4pt} \mu (s)=\tan \left( \frac{\sqrt{26}}{26}s \right)\sin \left( \frac{\sqrt{26}}{26}s \right),$$ and the functions $f$ and $g$ are calculated as $$f(s,u)=\sin \left( \frac{\sqrt{26}}{26}s \right)+u\left( 1+\frac{\sqrt{26}}{26} \right), \hspace{4pt} g(s,u)=u\tan \left( \frac{\sqrt{26}}{26}s \right)\sin \left( \frac{\sqrt{26}}{26}s \right).$$ The graph of ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}}$ for the intervals $s\in \left[ -\frac{\sqrt{26}}{2}\pi ,\frac{\sqrt{26}}{2}\pi \right]$ and $u \in \left[ -0.5,0.5 \right]$ is given in Figure \[fig3\]. From Proposition \[strictionlineprop\], the base curve ${{\alpha }_{3}}$ (red) and the striction line ${{c}_{3}}$ (blue) intersect at the points ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}} \left( -\frac{\sqrt{26}}{2}\pi, 0 \right) $, ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}} \left( 0, 0 \right) $ and ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}} \left( \frac{\sqrt{26}}{2}\pi, 0 \right) $. All these points are singular points of ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}}$ and are shown in black in Figure \[fig3\]. ![The OT-ruled surface ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}}$[]{data-label="fig3"}](fig3a.png){width="\textwidth"} ![The OT-ruled surface ${{\varphi }_{_{3}({{\alpha }_{3}},{{q}_{o}})}}$[]{data-label="fig3"}](fig3b.png){width="\textwidth"} Conclusions =========== A new type of ruled surface has been defined according to the position of the ruling. Taking the ruling in the osculating plane of a curve, such a surface has been called an osculating type ruled surface, or OT-ruled surface. Many properties of these surfaces have been obtained. Of course, this subject can also be considered in other spaces, such as Lorentzian space and Galilean space, and the properties of OT-ruled surfaces can be given in these spaces according to the characters of the base curve and the ruling. [99]{} Barros, M., General helices and a theorem of Lancret, Proc. Amer. Math. Soc., 125(5): 1503-1509 (1997). do Carmo, M.P., Differential Geometry of Curves and Surfaces. Prentice-Hall, New Jersey (1976). Emmer, M., Imagine Math Between Culture and Mathematics. Springer, 2012 ed. (2012). Fomenko, A.T., Tuzhilin, A.A., Elements of the geometry and topology of minimal surfaces in three-dimensional space. American Mathematical Society, Providence, Rhode Island (2005). Izumiya, S., Takeuchi, N., New special curves and developable surfaces. Turkish Journal of Mathematics, 28:153-163 (2004). Izumiya, S., Takeuchi, N., Special curves and ruled surfaces. Beiträge zur Algebra und Geometrie (Contributions to Algebra and Geometry), 44(1):203-212 (2003). Izumiya, S., Takeuchi, N., Geometry of ruled surfaces. Applicable Mathematics in the Golden Age (ed., J.C. Misra), Narosa Publishing House, New Delhi, 305-338 (2003). Izumiya, S., Katsumi, H., Yamasaki, T., The rectifying developable and the spherical Darboux image of a space curve. Banach Center Publications, 50:137-149 (1999). Karger, A., Novak, J., Space Kinematics and Lie Groups. STNL Publishers of Technical Lit., Prague, Czechoslovakia (1978). Monterde, J., Salkowski curves revisited: A family of curves with constant curvature and non-constant torsion, Computer Aided Geometric Design, 26:271–278 (2009). Önder, M., Slant ruled surfaces. Transnational Journal of Pure and Applied Mathematics, 1(1):63-82 (2018). Önder, M., Rectifying ruled surfaces, Kuwait Journal of Science (In press).
--- abstract: 'We study 1D fermions with photoassociation or with a narrow Fano-Feshbach resonance described by the Boson-Fermion resonance model. Using the bosonization technique, we derive a low-energy Hamiltonian of the system. We show that at low energy, the order parameters for the Bose Condensation and fermion superfluidity become identical, while a spin gap and a gap against the formation of phase slips are formed. As a result of these gaps, charge density wave correlations decay exponentially in contrast with the phases where only bosons or only fermions are present. We find a Luther-Emery point where the phase slips and the spin excitations can be described in terms of pseudofermions. This allows us to provide closed form expressions of the density-density correlations and the spectral functions. The spectral functions of the fermions are gapped, whereas the spectral functions of the bosons remain gapless. The application of a magnetic field results in a loss of coherence between the bosons and the fermion and the disappearance of the gap. Changing the detuning has no effect on the gap until either the fermion or the boson density is reduced to zero. Finally, we discuss the formation of a Mott insulating state in a periodic potential. The relevance of our results for experiments with ultracold atomic gases subject to one-dimensional confinement is also discussed.' author: - 'E. Orignac' - 'R. Citro' title: 'Phase transitions in the boson-fermion resonance model in one dimension' --- Introduction ============ Since the discovery of Bose-Einstein Condensation (BEC) of atoms in optical traps, the field of ultracold atoms has experienced tremendous developments in the recent years.[@dalfovo99_bec_review] A first important step has been the use of Fano-Feshbach resonances[@fano_resonance; @feshbach_resonance] to tune the strength of atom-atom interaction.[@stwalley_feshbach; @tiesinga_feshbach] Fano-Feshbach resonances take place when the energy difference between the molecular state in the closed channel and the threshold of the two-atom continuum in the open channel, known as the detuning $\nu$, is zero[@duine_feshbach_review]. Near a Fano-Feshbach resonance, the atom-atom scattering length possesses a singularity. For $\nu>0$, atoms are stable, but the existence of the virtual molecular state results in an effective attraction. For $\nu<0$, the molecules are formed and possess a weakly repulsive interaction. Since the value of $\nu$ can be controlled by an applied magnetic field, this allows to tune the sign and strength of the atomic and molecular interactions.[@inouye98_feshbach_na; @roberts98_feshbach_rb; @courteille98_feshbach_rb; @vuletic99_feshbach_cs] In particular, the use of Fano-Feshbach resonances has allowed the observation of pairs of fermionic[@jochim_bec; @regal_bec; @strecker_bec; @cubizolles_bec] or bosonic[@donley_feshbach_exp; @duerr04_molecules_rb; @chin03_molecules_cs; @xu03_molecules_bec] atoms binding together to form bosonic molecules. At sufficiently low temperature, for $\nu<0$, these molecules can form a Bose-Einstein condensate. In the case of a fermionic system, for $\nu>0$, due to attractive interactions a BCS superfluid is expected. Since the BEC and the BCS state break the same U(1) symmetry, a smooth crossover between the two states is expected as $\nu$ is tuned through the resonance. 
Indeed, the BEC of molecules[@jochim_bec; @greiner_bec; @zwierlein_bec] and the crossover to a strongly degenerate Fermi gas[@bartenstein_bec; @zwierlein04_bec; @regal_bec_pairs; @bourdel04_bcs_bec] have been observed as a gas of cold fermionic atoms is swept through the Fano-Feshbach resonance. Measurement of the radio-frequency excitation spectra[@chin04_gap_bcs] and of the specific heat[@kinast05_Cp] as well as observation of vortices in a rotating system[@zwierlein05_vortices] on the $\nu>0$ side revealed the presence of a superfluid BCS gap, thus proving the existence of a BEC-BCS crossover. Such a crossover is naturally described by the boson-fermion model, [@timmermans01_bosefermi_model; @holland01_bosefermi_model; @ohashi02_bcsbec; @ohashi03_transition_feshbach; @ohashi03_collective_feshbach; @stajic04_bcs_bec; @chen_bcs_bec_review; @domanski05_feshbach]first introduced in the 1950s in the context of the theory of superconductivity[@schafroth54_preformed_pairs; @blatt64_preformed_pairs] and later reinvestigated in the 1980s in the context of polaronic[@alexandrov81] and high-Tc superconductivity theory.[@ranninger85_bosefermi; @friedberg89_bosefermi; @geshkenbein97_preformed_pairs] A second important parallel development has been the possibility to form quasi-1D condensates using anisotropic traps[@grimm_potential_review; @hellweg01_bec1d; @goerlitz01_bec1d; @richard03_bec1d], two-dimensional optical lattices[@greiner01_2dlattice; @moritz03_bec1d; @kinoshita_tonks_experiment; @paredes_toks_experiment; @stoeferle_coldatoms1d; @koehl_1dbose] or atoms on chips.[@reichel] In one dimensional systems interactions are known to lead to a rich physics.[@giamarchi_book_1d] In particular, strongly correlated states of fermions, where individual particles are replaced by collective spin or density excitations, are theoretically expected.[@giamarchi_book_1d; @cazalilla_1d_bec; @recati03_fermi1d] When the interactions between the fermions are repulsive, both the spin and density fluctuations are gapless with linear dispersion and this state is known as the Luttinger liquid[@luther_bosonisation; @haldane_luttinger; @giamarchi_book_1d]. For attractive interactions between the fermions, the spin degrees of freedom develop a gap, yielding a state known as the Luther-Emery liquid.[@giamarchi_book_1d; @luther_exact] Similarly, bosons are expected to be found in a Luttinger liquid state, with individual particles being replaced by collective density excitations[@giamarchi_book_1d; @cazalilla_1d_bec; @haldane_bosons; @petrov04_bec_review]. Moreover, strong repulsion can lead to the fermionization of interacting bosons i.e. the density matrix becomes identical to that of a non-interacting spinless fermion system, the so-called Tonks-Girardeau (TG) regime.[@girardeau_bosons1d; @schultz_1dbose] Experiments in elongated traps have provided evidence for one-dimensional fluctuations[@hellweg01_bec1d; @goerlitz01_bec1d; @richard03_bec1d]. However, in these systems, the bosons remain weakly interacting. With two-dimensional optical lattices, it is possible to explore a regime with stronger repulsion. In particular, it was possible to observe the TG regime with ${}^{87}$Rb atoms[@kinoshita_tonks_experiment] by increasing the transverse confinement. The TG regime can also be reached by applying a 1D periodic potential along the tubes to increase the effective mass of the bosons[@paredes_toks_experiment]. 
Using a stronger 1D potential, it is possible to drive a one-dimensional Mott transition between the superfluid state and an insulating state[@stoeferle_coldatoms1d]. Another characteristic of atoms in a one-dimensional trap is that transverse confinement can give rise to a type of Fano-Feshbach resonance as a function of the trapping frequency called the confinement induced resonance (CIR).[@olshanii_cir; @bergeman_cir; @yurovsky_feshbach] Recently, experiments have been performed on ${}^{40}$K fermionic atoms in a one dimensional trap forming bound states either as a result of Fano-Feshbach resonances or of CIR.[@moritz05_molecules1d] Both types of bound states have been observed and the results can be described using the Boson-Fermion model.[@dickerscheid_comment] This prompts the question of whether a one dimensional analogue of the BEC-BCS crossover could be observed in such a system. It is well known that in one dimension, no long range BEC or BCS order can exist.[@mermin_wagner_theorem; @mermin_theorem] However, quasi-long range superfluid order is still possible. For fermions with attractive interactions, it was shown using the exactly solved Gaudin-Yang model[@gaudin_fermions; @yang_fermions] that for weakly attractive interactions, a Luther-Emery state with gapless density excitations and gapful spin excitations was formed, whereas for strongly attractive interactions the system would crossover to a Luttinger liquid of bosons.[@tokatly_bec_bcs_crossover1d; @fuchs_bcs_bec] The boson-fermion model was also considered in the case of a broad Fano-Feshbach resonance.[@fuchs04_resonance_bf] In that case only bosons or fermions are present (depending on which side of the resonance the system is) and the results are analogous to those obtained with the Gaudin-Yang model. In fact, in the three dimensional case, it is possible to derive a mapping of the boson-fermion model with a broad resonance to a model with only fermions and a two-body interaction.[@simonucci05_becbcs] In the narrow resonance case, such a mapping is valid only very close to the resonance. It was therefore interesting to investigate what happens in one dimension in the case of a narrow resonance. Indeed, in the latter case, it has been shown previously[@sheehy_feshbach; @citro05_feshbach] that a richer phase diagram could emerge with a phase coherence between a fluid of atoms and a fluid of molecules at weak repulsion and a decoupling transition for stronger repulsion. Analogous effects have been discussed in the context of bosonic atoms with a Fano-Feshbach resonance in[@lee05_feshbach]. Due to the concrete possibility of forming 1D Fermi and Bose gas with optical lattices [@koehl_1dbose; @petrov04_bec_review] some of the theoretical predictions in the narrow resonance case may become testable experimentally in the future. Experimental signature of the phase coherence between the two fluids include density response and momentum distribution function. In the present paper, we investigate in more details the phase in which the atomic and the molecular fluid coexist. In particular, we study the equilibrium between the atomic and the molecular fluid as the detuning is varied. Also, we investigate the effect of placing the system in a periodic potential and show that the phase coherence between the atomic and molecular fluid hinders the formation of the Mott state in systems at commensurate filling. Such conclusion is in agreement with a study in higher dimension[@zhou05_mott_bosefermi]. 
The plan of the paper is the following. In Sec.\[sec:hamiltonian\] we introduce the boson-fermion Hamiltonian both in the lattice representation and in the continuum. We discuss its thermodynamics in the limit of an infinitesimal boson-fermion conversion term and show under which conditions atoms and molecules can coexist. In Sec.\[sec:boson-appr\] we derive the bosonized expression for the boson-fermion Hamiltonian valid in the region where atoms and molecules coexist. This Hamiltonian is valid for a system in an optical lattice provided it is at an incommensurate filling (i.e. with a number of atoms per site which is not integer). We show that for not too strong repulsion in the system, a phase where the atomic and the molecular superfluid become coherent can be obtained. This phase possesses a spin gap. We show that in this phase the order parameter for the BEC and the BCS superfluidity order parameter are identical, while charge density wave correlations present an exponential decay. We discuss the phase transitions induced by the detuning, the magnetic field and the repulsion. We also exhibit a solvable point where some correlation functions can be obtained exactly. In Sec. \[sec:mott-insul-state\], we consider the case where the number of atoms per site in the optical lattice is integer. We show that a phase transition to a Mott insulating state can be obtained in that case. However, there is no density wave order in this Mott state. Finally, in Sec. \[sec:param-boson-hamilt\], we discuss the applicability of our results to experiments. Hamiltonians and thermodynamics {#sec:hamiltonian} =============================== Hamiltonians {#sec:lattice-continuum} ------------ We consider a system of 1D fermionic atoms with a Fano-Feshbach resonance.[@maccurdy_feschbach; @stwalley_feshbach; @tiesinga_feshbach; @holland01_bosefermi_model] This 1D system can be obtained by trapping the fermions in a two dimensional or a three dimensional optical lattice. In the first case, the fermions are trapped into 1D tubes, in the second case, a periodic potential is superimposed along the direction of the tubes. In the case in which the fermions are injected in a uniform potential, the Hamiltonian of the system reads: $$\begin{aligned} \label{eq:nolattice} H=&&-\int dx \sum_\sigma \psi^\dagger_\sigma \frac {\nabla^2}{2m_F} \psi_\sigma + \int dx \psi_b^\dagger \left(-\frac {\nabla^2} {2m_B} +\nu\right)\psi_b + \lambda \int dx (\psi_b^\dagger \psi_\uparrow \psi_\downarrow + \psi^\dagger_\downarrow \psi^\dagger_\uparrow \psi_b) \nonumber \\ &&+ \frac 1 2 \int dx dx^{\prime}\left[V_{BB}(x-x^{\prime}) \rho_b(x) \rho_b(x^{\prime}) + V_{FF}(x-x^{\prime}) \sum_{\sigma,\sigma^{\prime}} \rho_\sigma(x) \rho_{\sigma^{\prime}}(x^{\prime}) +2V_{BF}(x-x^{\prime}) \sum_{\sigma} \rho_\sigma(x) \rho_b(x^{\prime})\right],\end{aligned}$$ where $\psi_b$ annihilates a molecule, $\psi_\sigma$ a fermion of spin $\sigma$, $m_F$ is the mass of the isolated fermionic atom, $m_B=2m_F$ the mass of the molecule, $V_{BB},V_{BF},V_{FF}$ are (respectively) the molecule-molecule, atom-molecule and atom-atom interactions. Since these interactions are short ranged, it is convenient to assume that they are of the form $V_{\alpha\beta}(x)=g_{\alpha\beta} \delta(x)$. The term $\nu$ is the detuning. Finally, the term $\lambda$ allows the transformation of a pair of fermions into a Fano-Feshbach molecule and the reverse process. 
This term can be viewed as a Josephson coupling[@tinkham_book_superconductors] between the order parameter of the BEC of the molecules and the order parameter for the superfluidity of the fermions. As a result of the presence of this term, pairs of atoms are converted into molecules and vice-versa, as in a chemical reaction[@schafroth54_preformed_pairs]. As a result of this, only the total number of atoms (paired and unpaired), $\mathcal{N}=2N_b+N_f$ (where $N_b$ is the number of molecules and $N_f$ is the number of unpaired atoms), is a conserved quantity. In the case where atoms are injected in a periodic potential, $V(x)=V_0 \sin^2 (\pi x/d)$, it is convenient to introduce the Wannier orbitals[@ziman_solid_book] of this potential. In the single band approximation the Hamiltonian reads: [@orso05_feshbach1d; @jaksch05_coldatoms; @dickerscheid_feshbach_lattice; @dickerscheid_feschbach_bf] $$\begin{aligned} \label{eq:lattice} H&=&-t\sum_{j,\sigma} (f^\dagger_{j+1,\sigma} f_{j,\sigma} +f^\dagger_{j,\sigma} f_{j+1,\sigma}) + U \sum_j n_{f,j,\uparrow} n_{f,j,\downarrow} \nonumber \\ && -t^{\prime}\sum_j (b^\dagger_{j+1} b_j +b^\dagger_j b_{j+1}) + U^{\prime}\sum_j(n_{b,j})^2 +\nu \sum_j b^\dagger_j b_j \nonumber \\ && + \bar{\lambda} \sum_j (b^\dagger_j f_{j,\uparrow}f_{j,\downarrow} +f^\dagger_{j,\uparrow}f^\dagger_{j,\downarrow} b_j ) + V_{bf} \sum_j n_{b,j}(n_{f,j,\uparrow} + n_{f,j,\downarrow}),\end{aligned}$$ where $f_{j,\sigma}$ annihilates a fermion of spin $\sigma$ on site $j$, $n_{f,j,\sigma}=f^\dagger_{j,\sigma} f_{j,\sigma}$, $b^\dagger_{j}$ creates a Fano-Feshbach molecule (boson) on the site $j$, and $n_{b,j}=b^\dagger_j b_j$. The hopping integrals of the fermions and bosons are respectively $t$ and $t^{\prime}$. The quantity $\nu$ is the detuning. The parameters $U$, $U^{\prime}$ and $V_{bf}$ measure (respectively) the fermion-fermion, boson-boson, and fermion-boson repulsion. The case of hard core bosons corresponds to $U'\to \infty$. The conversion of atoms into molecules is measured by the term $\bar{\lambda}$. Again, only the sum ${\cal N}=2N_b+N_f$ is conserved. We note that within the single band approximation, there should exist a hard core repulsion between the bosons. Thermodynamics of the boson-fermion model in the limit of $\lambda \to 0$ {#sec:continuum-case} ------------------------------------------------------------------------- In this Section, we wish to study the behavior of the density of unpaired atoms $\rho_f$ and of the density of molecules $\rho_b$ as a function of the total density of atoms (paired and unpaired) $\rho_{\text{tot.}}$ in the limit of $\lambda \to 0_+$. In such a limit, the fermion-boson conversion does not affect the spectrum of the system compared to the case without fermion-boson conversion. However, it imposes that only the total number of atoms ${\cal N}=2N_b+N_f$ is conserved. Therefore, in this limit there is a single chemical potential $\mu$ and the partition function reads: $$\begin{aligned} \label{eq:partition-conversion} Z_\lambda[\mu]=\mathrm{Tr}[e^{-\beta[H_\lambda-\mu(N_f+2N_b)]}], \end{aligned}$$ and: $$\begin{aligned} \label{eq:total-number} N_f+2N_b=\frac{1}{\beta Z_\lambda} \frac{\partial Z_\lambda}{\partial \mu}. \end{aligned}$$ In the absence of fermion-boson conversion, $N_b$ and $N_f$ would be separately conserved, and one would have a chemical potential $\mu_b$ for the molecules and $\mu_f$ for the atoms. A minimal numerical illustration of this conservation structure is sketched below.
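The fragment below is such a sketch (not part of the analytical treatment; the two-site geometry, the bosonic truncation to at most two molecules per site, and all coupling values are arbitrary illustrative choices): it builds Eq. (\[eq:lattice\]) exactly on two sites and checks that ${\cal N}=N_f+2N_b$ commutes with $H$ while $N_f$ alone does not when $\bar{\lambda }\neq 0$.

```python
# Two-site illustration of the conservation law of Eq. (eq:lattice):
# [H, N_f + 2 N_b] = 0, while [H, N_f] != 0 once the conversion term is present.
# Sketch only: all couplings are arbitrary numbers and the bosons are truncated
# to at most n_max = 2 molecules per site.
import numpy as np
from functools import reduce

kron_all = lambda ops: reduce(np.kron, ops)

# fermions: 4 modes (site1-up, site1-dn, site2-up, site2-dn) via Jordan-Wigner
I2, Z = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])              # annihilates |1>
nmodes = 4
f = [kron_all([Z]*i + [sm] + [I2]*(nmodes - i - 1)) for i in range(nmodes)]

# bosons: 2 sites, occupations 0..n_max
n_max = 2
b_site = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)   # truncated annihilation operator
Ib = np.eye(n_max + 1)
b = [np.kron(b_site, Ib), np.kron(Ib, b_site)]

# full Hilbert space = fermions (dim 16) x bosons (dim 9)
dimF, dimB = 2**nmodes, (n_max + 1)**2
F = [np.kron(op, np.eye(dimB)) for op in f]
B = [np.kron(np.eye(dimF), op) for op in b]
up, dn = [F[0], F[2]], [F[1], F[3]]

t, tp, U, Up, Vbf, nu, lam = 1.0, 0.7, 2.0, 3.0, 0.5, 0.3, 0.8   # arbitrary values
H = np.zeros((dimF*dimB, dimF*dimB))
for s in (up, dn):                                    # fermion hopping
    H += -t*(s[1].T @ s[0] + s[0].T @ s[1])
H += -tp*(B[1].T @ B[0] + B[0].T @ B[1])              # boson hopping
for j in range(2):                                    # on-site terms
    nup, ndn, nb = up[j].T @ up[j], dn[j].T @ dn[j], B[j].T @ B[j]
    H += U*(nup @ ndn) + Up*(nb @ nb) + nu*nb + Vbf*(nb @ (nup + ndn))
    H += lam*(B[j].T @ up[j] @ dn[j] + dn[j].T @ up[j].T @ B[j])   # conversion term

Nf = sum(op.T @ op for op in F)
Nb = sum(op.T @ op for op in B)
comm = lambda A, C: A @ C - C @ A
print(np.linalg.norm(comm(H, Nf + 2*Nb)))   # numerically zero: N_f + 2 N_b is conserved
print(np.linalg.norm(comm(H, Nf)))          # of order one: N_f alone is not
```

Setting $\bar{\lambda }=0$ (`lam = 0.0` in the sketch) restores two separately conserved numbers $N_f$ and $N_b$, which is precisely the hypothetical decoupled situation considered next.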
The partition function of this hypothetical system would read: $$\begin{aligned} \label{eq:partition-no-conversion} Z_0[\mu_f,\mu_b]=\mathrm{Tr}[e^{-\beta[H_0-\mu_f N_f- \mu_b N_b]}], \end{aligned}$$ and thus: $$\begin{aligned} \lim_{\lambda \to 0_+} Z_\lambda[\mu]=Z_0[\mu,2\mu]. \end{aligned}$$ If we further assume that $V_{BF}=0$, we have $H_0=H_f+H_b$, where $H_f$ is the Hamiltonian of the fermion subsystem and $H_b$ is the Hamiltonian of the bosonic subsystem, and the partition function (\[eq:partition-no-conversion\]) factorizes as $Z_0[\mu_f,\mu_b]=Z_f[\mu_f] Z_b[\mu_b]$, where $Z_{\nu}[\mu_\nu]=\mathrm{Tr}[e^{-\beta[H_\nu-\mu_\nu N_\nu]}]$ for $\nu=f,b$. Thus, in the limit $\lambda,V_{BF}\to 0$, we obtain the following expressions for the number of unpaired atoms $N_f$ and the number of molecules $N_b$: $$\begin{aligned} \label{eq:atom-number} N_f&=&\frac{1}{\beta Z_f} \left(\frac{\partial Z_f}{\partial \mu_f}\right)_{\mu_f=\mu}, \\ \label{eq:molecule-number} N_b&=&\frac{1}{\beta Z_b} \left(\frac{\partial Z_b}{\partial \mu_b}\right)_{\mu_b=2\mu}. \end{aligned}$$ We now use these equations (\[eq:atom-number\]) and (\[eq:molecule-number\]) to study the coexistence of bosons and fermions as the detuning $\nu$ is varied. Two simple cases can be considered to illustrate this problem of coexistence. First, one can consider bosonic molecules with hard core repulsion and noninteracting fermionic atoms. In such a case, the thermodynamics of the gas of molecules is reduced to that of a system of spinless fermions by the Jordan-Wigner transformation[@jordan_transformation; @girardeau_bosons1d; @schultz_1dbose], and the expression of the densities of unpaired atoms and molecules can be obtained in closed form. In this simple case, it is straightforward to show that for sufficiently negative detuning all atoms are paired into molecules, and for sufficiently positive detuning all the atoms remain unpaired. The case of intermediate detuning is more interesting, as coexistence of unpaired atoms with atoms paired into molecules becomes possible. The physical origin of this coexistence is of course the molecule-molecule repulsion, which makes the chemical potential of the gas of molecules increase with the density, so that in a sufficiently dense gas of molecules it becomes energetically favorable to create unpaired atoms. To show that the above result is not an artifact of having a hard core repulsion, we have also considered a slightly more realistic case of molecules with contact repulsion and non-interacting atoms. Although in that case we can no longer obtain closed form expressions of the density of molecules, we can still calculate numerically the density of molecules using the Lieb-Liniger solution[@lieb_bosons_1D]. We will see that having a finite repulsion between the molecules indeed does not eliminate the regime of coexistence. ### The case of bosons with hard core repulsion In that case, we assume that the boson-boson repulsion ($U'$ in the lattice case and $g_{BB}$ in the continuum case) goes to infinity. Using the Jordan-Wigner transformation[@jordan_transformation], one shows that the partition function of these hard core bosons is equal to that of free spinless fermions.
For positive temperature, the number of unpaired atoms and the number of atoms paired into molecules read: $$\begin{aligned} \label{eq:Nf-hard-core} N_f&=&2 L \int \frac{dk}{2\pi} \frac 1 {e^{\beta(\epsilon_f(k)-\mu)}+1} \\ \label{eq:Nb-hard-core} N_b&=& L\int \frac{dk}{2\pi} \frac 1 {e^{\beta(\epsilon_b(k)+\nu -2\mu)}+1} \end{aligned}$$ For $T\to 0$, these equations reduce to: $$\begin{aligned} \label{eq:hard-core-conditions} && \rho_F=\frac{N_f}{L}=\frac {2k_F}{\pi}, \nonumber \\ && \rho_B=\frac{N_b}{L}=\frac{k_B}{\pi},\nonumber \\ && \mu=\epsilon_F(k_F)=\frac {\nu+\epsilon_b(k_B)} 2 , \end{aligned}$$ where $k_F$ is the Fermi momentum of the atoms and $k_B$ is the Fermi momentum of the spinless fermions (i.e. the pseudo-Fermi momentum of the molecules). Up to now, we have not specified the dispersion of the atoms and of the molecules. In the lattice case, these dispersion are obtained from Eq. (\[eq:lattice\]) as $\epsilon_f(k)=-2t \cos(k)$ and $\epsilon_b(k)=-2t' \cos(k)$. A graphical solution of (\[eq:hard-core-conditions\]) is shown on Fig. \[fig:chemical\] for three different values of the chemical potential $\mu$ and $\nu>0$. Three different regimes are obtained. In the first one, for $\mu=\mu_A$, only unpaired atoms are present. In the second one for $\mu=\mu_B$, unpaired atoms and molecules coexist. In the last one, for $\mu=\mu_C$, all the available levels of unpaired atoms are filled, and the available levels for molecules are partially filled. As a result, the system behaves as if only molecules were present. This last phase is in fact a degenerate Tonks-Girardeau gas of molecules[@girardeau_bosons1d; @schultz_1dbose]. In the intermediate regime, the fermions form a two-component Luttinger liquid[@cazalilla_1d_bec; @recati03_fermi1d] and the bosons form a single component Luttinger liquid.[@petrov04_bec_review] Similar calculations can be performed in the case of fermions and bosons in the continuum described by Eq. (\[eq:nolattice\]). With free fermions and hard core bosons in the continuum Eqs. (\[eq:hard-core-conditions\]) become: $$\begin{aligned} \label{eq:conditions-hardcore-continuum} &&\frac{k_F^2}{2m_F}=\mu,\nonumber \\ &&\frac{k_B^2}{4m_F}+\nu = 2 \mu, \nonumber \\ &&\frac \pi 2 \rho_{\text{tot.}}= (k_F + k_B),\end{aligned}$$ with $\rho_{\text{tot.}}=2\rho_B+\rho_F$ the total density of atoms. Eliminating $k_F$ in Eq. (\[eq:conditions-hardcore-continuum\]), the problem is reduced to solving a second degree equation: $$\begin{aligned} \label{eq:kb-hardcore} 3 k_B^2 -4\pi \rho k_B +\pi^2 \rho^2 - 4 m_F \nu =0.\end{aligned}$$ The solutions of Eq. (\[eq:kb-hardcore\]) are: $$\begin{aligned} \label{eq:densities-hardcore} k_F&=&\frac 1 3 \sqrt{\pi^2 \rho^2 +12 m_F\nu} -\frac \pi 6 \rho,\nonumber \\ k_B &=& \frac{2\pi \rho -\sqrt{\pi^2 \rho^2 + 12 m_F\nu}}{3},\end{aligned}$$ and these solutions are physical when they yield both $k_F$ and $k_B$ positive. For $\nu>0$, Eq. (\[eq:densities-hardcore\]) yields $k_B>0$ provided $\rho_{\text{tot.}}>\rho^{(1)}_{\text{tot.},c}=\frac 2 \pi \sqrt{m_F \nu}$. When $\rho<\rho^{(1)}_{\text{tot.},c}$, the density of molecules is vanishing and $\rho_{\text{tot.}}=\rho_F$. Above the critical density, atoms and molecules coexist, with densities given by Eq. (\[eq:conditions-hardcore-continuum\]). At the critical density, the slope of $k_B$ versus $\rho$ is discontinuous, being $0$ below the critical density and $\frac \pi 2$ above the critical density. 
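For positive detuning, Eqs. (\[eq:conditions-hardcore-continuum\])–(\[eq:densities-hardcore\]) are simple enough to evaluate directly. The sketch below (ours; $\hbar=1$, and the values of $m_F$ and $\nu$ are arbitrary) computes $k_F$, $k_B$ and $\mu$ across the critical density, so that the slope discontinuity and the softening of $\partial\mu/\partial\rho$ discussed next can be read off numerically.

```python
import numpy as np

# Sketch (not from the paper): k_F, k_B and mu versus total density for the
# continuum mixture of free atoms and hard-core molecules, following
# Eqs. (conditions-hardcore-continuum)-(densities-hardcore); hbar = 1.
m_F, nu = 1.0, 0.1                         # illustrative mass and detuning (nu > 0)
rho_c = 2.0 * np.sqrt(m_F * nu) / np.pi    # critical total density for nu > 0

rho = np.linspace(0.01, 4 * rho_c, 400)
s = np.sqrt((np.pi * rho) ** 2 + 12 * m_F * nu)
k_F = np.where(rho < rho_c, np.pi * rho / 2, s / 3 - np.pi * rho / 6)
k_B = np.where(rho < rho_c, 0.0, (2 * np.pi * rho - s) / 3)
mu = k_F ** 2 / (2 * m_F)                  # mu = eps_F(k_F) for the free atoms

i = np.searchsorted(rho, rho_c)            # first grid point above the critical density
print((k_B[i + 1] - k_B[i]) / (rho[i + 1] - rho[i]))         # ~ pi/2: slope of k_B above rho_c
print(rho[i] ** 2 * (mu[i + 1] - mu[i]) / (rho[i + 1] - rho[i]))  # ~ 0: 1/chi just above rho_c
```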
The Fermi wavevector $k_F$ also possesses a slope discontinuity at the critical density, the slope being zero above the critical density. The behavior of $k_F$ and $k_B$ as a function of the density is represented in Fig. \[fig:kf-kb1\]. For $\nu<0$, Eq. (\[eq:densities-hardcore\]) yields $k_F>0$ provided $\rho>\rho^{(2)}_{\text{tot.},c}=\frac 4 \pi \sqrt{m_F|\nu|}$. When $\rho<\rho^{(2)}_{\text{tot.},c}$, the density of unpaired atoms vanishes, and $\rho_{\text{tot.}}=2\rho_B$. Above the critical density, atoms and molecules coexist with densities given by Eq. (\[eq:conditions-hardcore-continuum\]). As before, the slope of the curve $k_F$ versus $\rho$ is discontinuous at the critical density, being zero below and $\pi/2$ above. The behavior of $k_F$ and $k_B$ as a function of the density for $\nu<0$ is represented in Fig. \[fig:kf-kb2\]. The slope discontinuities in $k_B$ and $k_F$ have important consequences for the compressibility. Indeed, using Eq. (\[eq:conditions-hardcore-continuum\]), it is easy to see that above the critical density, the chemical potential deviates from its critical value only as $O\left((\rho-\rho_{\text{tot.},c})^2\right)$. Since the compressibility $\chi$ is defined as $1/\chi=\rho^2\frac {\partial \mu}{\partial \rho}$, this implies that the compressibility of the system becomes infinite as the critical density is approached from above, signalling a first-order phase transition. Such first-order transitions associated with the emptying of a band have been analyzed in the context of Luttinger liquid theory in Refs. [@nomura96_ferromagnet; @cabra_instabilityLL].

![The different cases in the Bose-Fermi mixture with hardcore bosons when $\protect\nu>0$. For $\protect\mu=\protect\mu_A$, the fermion band is partially filled, and the hardcore boson band is empty. For $\protect\mu=\protect\mu_C$, the hardcore boson band is partially filled, and the fermion band is totally filled. In the case $\protect\mu=\protect\mu_B$, both bands are partially filled. In the rest of the paper we will only consider the latter case.[]{data-label="fig:chemical"}](chemical.eps){width="9cm"}

![The behavior of $k_F$ and $k_B$ for positive detuning $\nu>0$ as a function of the total density $\rho$. For low densities, only atoms are present ($k_B=0$). At higher densities such that $\pi\rho >2\sqrt{m_F\nu}$, a nonzero density of molecules appears. At the critical density, the slopes of $k_F$ and $k_B$ versus $\rho$ are discontinuous. On the figure we have taken $m_F\nu=1$.[]{data-label="fig:kf-kb1"}](kf-kb.eps)

![The behavior of $k_F$ and $k_B$ for negative detuning $\nu<0$. For low densities, only the molecules are present ($k_F=0$). For $\pi\rho>4\sqrt{m_F|\nu|}$, molecules coexist with atoms. At the critical density, the slopes of $k_F$ and $k_B$ versus $\rho$ are discontinuous. []{data-label="fig:kf-kb2"}](kf-kb2.eps)

### The case of bosons with finite repulsion

We have seen in the previous Section that, in the case of hard core repulsion between the molecules, both in the lattice case and in the continuum case, having $\nu<0$ did not prevent the formation of unpaired atoms provided the total density of atoms was large enough. This was related to the increase of the chemical potential of the bosons as a result of repulsion when the density was increased. In this section, we want to analyze a slightly more realistic case where the repulsion between bosons is finite and check that coexistence remains possible.
In the lattice case, the problem is untractable by analytic methods and one needs to rely on numerical approaches.[@batrouni_bosons_numerique; @kuhner_bosehubbard] In the continuum case, however, it is well known that bosons with contact repulsions are exactly solvable by Bethe Ansatz techniques.[@lieb_bosons_1D] The density of molecules can therefore be obtained by solving a set of integral equations.[@lieb_bosons_1D; @takahashi_tba_review] They read: $$\begin{aligned} \label{eq:lieb_equations} \epsilon(k)&=&\frac{\hbar^2 k^2}{2m_B} +\nu -\mu_B +\frac c \pi \int_{-q_0}^{q_0} \frac{dq}{c^2+(q-k)^2} \epsilon(q), \\ 2\pi \rho(k)&=&1+ 2c \int_{-q_0}^{q_0} \frac{\rho(q) dq}{c^2 +(k-q)^2},\end{aligned}$$ where: $$\begin{aligned} c=\frac{m_B g_{BB}}{\hbar^2}, \\ \rho_B=\int_{-q_0}^{q_0} \rho(q) dq,\end{aligned}$$ $g_{BB}$ being the boson-boson interaction defined in Eq. (\[eq:nolattice\]). The parameter $q_0$ plays the role of a pseudo Fermi momentum. For $q>q_0$, we have $\rho(q)=0$. We also have $\epsilon(\pm q_0)=0$.[@lieb_bosons_1D] It is convenient to introduce dimensionless variables[@lieb_bosons_1D]: $$\begin{aligned} \lambda=\frac c {q_0}\; ; \; \gamma = \frac c {\rho_B},\end{aligned}$$ and rewrite $k=q_0 x$, $q=q_0 y$, $\rho(q_0 x)=g(x)$, $\epsilon(q_0 x)=\frac{\hbar^2 q_0^2}{2m} \bar{\epsilon}(x)$. The dimensionless integral equations read: $$\begin{aligned} \label{eq:lieb_dimensionless} \bar{\epsilon}(x)&=& x^2+ \frac{2m(\nu -\mu_B)}{\hbar^2 q_0^2} +\frac \lambda \pi \int_{-1}^{1} \frac{dy}{\lambda^2+(x-y)^2} \bar{\epsilon(y)}, \\ 2\pi g(x)&=&1+ 2\lambda \int_{-1}^{1} \frac{g(y) dy}{\lambda^2 +(x-y)^2}.\end{aligned}$$ Using $\epsilon(\pm q_0)=0$ one has the following integral equation for $\bar{\epsilon}(x)$: $$\begin{aligned} \bar{\epsilon}(x)= x^2 -1 +\frac \lambda \pi \int_{-1}^{1} dy \bar{\epsilon}(y) \left[ \frac{1}{\lambda^2 +(x-y)^2} - \frac{1}{\lambda^2 +(1-y)^2}\right]\end{aligned}$$ Once this equation has been solved, the chemical potential of the bosons is obtained by: $$\begin{aligned} \mu_B = \nu+ \frac{\hbar^2 q_0^2}{2m_B} \left[1+ \frac {\lambda}{\pi} \int_{-1}^{1} \frac{1}{\lambda^2+(x-1)^2} \bar{\epsilon}(x) dx\right]. \end{aligned}$$ Knowing $\mu_B$ gives immediately $\mu_F=\mu_B/2$. From $\mu_F$ one finds $k_F=\sqrt{2m_F \mu_F}$ and $\rho_F=2k_F/\pi$. Finally, using the definition of the total density $\rho=2\rho_B +\rho_F$ one can map the molecule density and the free atom density as a function of the total density of atoms. The resulting equation of state can be written in terms of dimensionless parameters as: $$\begin{aligned} \label{eq:eq-state-adim} \frac{\hbar^2 \rho_B}{m_B g_{1D}} = {\cal F}\left(\frac{\hbar^2 \rho}{m_B g_{1D}},\frac{\hbar^2 \nu}{m_F g_{1D}^2}\right)\end{aligned}$$ The behavior of the boson density $\rho_B$ and fermion density $\rho_F$ as a function of total density $\rho$ can be understood in qualitative terms. Let us first discuss the case of negative detuning. For sufficiently low densities, only bosons are present. However, in that regime, the boson-boson repulsion is strong, and the boson chemical potential is increasing with the boson density. As a result, when the density exceeds a critical density $\rho_c$, the fermion chemical potential becomes positive, and the density of fermions becomes non-zero. The appearance of fermions is causing a cusp in the boson density plotted versus the total density. 
When the density of particles becomes higher, the boson-boson interaction becomes weaker, and the boson chemical potential barely increases with the density. As a result, the fermion density becomes almost independent of the total density. In the case of positive detuning, for low density, only fermions are present. Again, the increase of fermion density results in an increase of chemical potential and above a certain threshold in fermion density, bosons start to appear, creating a cusp in the dependence of the fermion density upon the total density. At large density, the detuning becomes irrelevant, and the fermion density barely increases with the total density. To illustrate this behavior, we have solved numerically the integral equations (\[eq:lieb\_dimensionless\]), and calculated the resulting fermion and boson densities. A plot of the density of bosons as well as the density of fermions is shown on Fig. \[fig:density-positive\] for $\nu>0$ and on Fig. \[fig:density-negative\] for $\nu<0$. The slope discontinuities at the critical density remain visible. Obviously, this implies that the divergence of the compressibility is still present when the repulsion between the molecule is not infinite. ![The density of molecules $\rho_B$ and unpaired atoms $\rho_F$ as a function of the total density $\rho$ in the case of a repulsion $c=100$ between the bosons and for positive detuning $\nu=0.1$. At large density, the fermion density is increasing more slowly than the boson density. Inset: the behavior of the boson and fermion densities near the origin. Note the cusp in the fermion density as the boson density becomes nonzero as in the $c=\infty$ case.[]{data-label="fig:density-positive"}](cusp_nu_posit.eps) ![The density of molecules $\rho_B$ and unpaired atoms $\rho_F$ as a function of the total density $\rho$ in the case of a repulsion $c=100$ between the bosons and for negative detuning $\nu=-0.1$. At large density, the fermion density is increasing more slowly than the boson density. Inset: the behavior of the boson and fermion densities near the origin. Note the cusp in the boson density as the fermion density becomes nonzero as in the $c=\infty$ case.[]{data-label="fig:density-negative"}](cusp_nu_negat.eps) We have thus seen that generally we should expect a coexistence of fermionic atoms and bosonic molecules as soon as repulsion between the molecules is sufficiently strong. Moreover, the repulsion between the molecules results in a finite velocity for sound excitations in the molecule Bose gas. As a result, we can expect that the gas of molecules will behave as a Luttinger liquid. Till now however, we have assumed that the term converting atoms into molecules was sufficiently small not to affect significantly the spectrum of the system. In the following, we will treat the effect of a small but not infinitesimal conversion term in Eqs. (\[eq:lattice\]) and (\[eq:nolattice\]) using bosonization techniques. We will show that this term can lead to phase coherence between the atoms and the molecules, and we will discuss the properties of the phase in which such coherence is observed. Phase diagram and correlation functions {#sec:boson-appr} ======================================= Derivation of the bosonized Hamiltonian {#sec:deriv-boson-hamilt} --------------------------------------- In this Section, we consider the case discussed in Sec. \[sec:hamiltonian\] where neither the density of molecules nor the density of atoms vanishes. As discussed in Sec. 
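Before turning to bosonization, we give a minimal sketch (ours, not the authors' code) of the numerical procedure used above; $\hbar=1$, and the values of $m_F$, $c$, $\nu$ and $q_0$ are illustrative. It solves the dimensionless Lieb-Liniger equations (\[eq:lieb\_dimensionless\]) by Gauss-Legendre quadrature, extracts $\mu_B$, sets $\mu_F=\mu_B/2$, and returns the corresponding point of the equation of state.

```python
import numpy as np

# Sketch: one point of the equation of state for molecules with contact
# repulsion plus free atoms.  Solve the Lieb-Liniger integral equations at a
# given pseudo-Fermi momentum q0, then mu_B -> mu_F = mu_B/2 -> densities.
m_F, c, nu, q0 = 1.0, 100.0, -0.1, 1.0     # illustrative choices (c = m_B g_BB)
m_B = 2.0 * m_F
lam = c / q0                               # dimensionless Lieb-Liniger parameter

x, w = np.polynomial.legendre.leggauss(200)   # quadrature on [-1, 1]

def lorentz(a, b):
    return 1.0 / (lam ** 2 + (a[:, None] - b[None, :]) ** 2)

# epsilon_bar(x) = x^2 - 1 + (lam/pi) \int dy eps_bar(y) [L(x,y) - L(1,y)]
K = lorentz(x, x) - lorentz(np.array([1.0]), x)
eps = np.linalg.solve(np.eye(x.size) - (lam / np.pi) * K * w[None, :], x ** 2 - 1.0)

# 2 pi g(x) = 1 + 2 lam \int dy g(y) L(x,y)
g = np.linalg.solve(2 * np.pi * np.eye(x.size) - 2 * lam * lorentz(x, x) * w[None, :],
                    np.ones(x.size))

mu_B = nu + q0 ** 2 / (2 * m_B) * (1 + (lam / np.pi) * np.sum(w * eps / (lam ** 2 + (x - 1) ** 2)))
rho_B = q0 * np.sum(w * g)
mu_F = mu_B / 2.0
k_F = np.sqrt(2 * m_F * mu_F) if mu_F > 0 else 0.0
rho_F = 2 * k_F / np.pi
print("rho_tot =", 2 * rho_B + rho_F, " rho_B =", rho_B, " rho_F =", rho_F)
```

Sweeping $q_0$ in this sketch traces out curves of the type shown in Figs. \[fig:density-positive\] and \[fig:density-negative\].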
\[sec:hamiltonian\], this requires a sufficiently large initial density of atoms. As there is both a non-zero density of atoms and of molecules, they both form Luttinger liquids[@petrov04_bec_review; @recati03_fermi1d; @cazalilla_1d_bec]. These Luttinger liquids are coupled by the repulsion between atoms and molecules $V_{BF}$ and via the conversion term or Josephson coupling $\lambda$. To describe these coupled Luttinger liquids, we apply bosonization[@giamarchi_book_1d] to the Hamiltonians (\[eq:lattice\])– (\[eq:nolattice\]). For the sake of definiteness, we discuss the bosonization procedure in details only in the case of the continuum Hamiltonian (\[eq:nolattice\]). For the lattice Hamiltonian (\[eq:lattice\]), the steps to follow are identical provided the system is not at a commensurate filling. At commensurate filling, umklapp terms must be added to the bosonized Hamiltonian and can result in Mott phases[@giamarchi_book_1d]. This case is treated in Sec. \[sec:mott-insul-state\]. To derive the bosonized Hamiltonian describing the low-energy spectrum of the Hamiltonian (\[eq:nolattice\]), we need first to consider the bosonized description of the system when all atom-molecule interactions are turned off. For $\lambda=0,V_{BF}=0$, both $N_f$ and $N_b$ are conserved and the bosonized Hamiltonian equivalent to (\[eq:lattice\]) or (\[eq:nolattice\]) is given by: $$\begin{aligned} \label{eq:bosonized-spin} H&=&H_b+H_\rho+H_\sigma \nonumber \\ H_b&=&\int \frac{dx}{2\pi} \left[ u_b K_b (\pi \Pi_b)^2 +\frac {u_b} {K_b}(\partial_x\phi_b)^2\right] \nonumber \\ H_\rho&=&\int \frac{dx}{2\pi} \left[ u_\rho K_\rho (\pi \Pi_\rho)^2 +\frac {u_\rho} {K_\rho}(\partial_x\phi_\rho)^2\right] \nonumber \\ H_\sigma&=&\int \frac{dx}{2\pi} \left[ u_\sigma K_\sigma (\pi \Pi_\sigma)^2 +\frac {u_\sigma} {K_\sigma}(\partial_x\phi_\sigma)^2\right] -\frac{% 2g_{1\perp}}{(2\pi\alpha)^2} \int dx \cos \sqrt{8}\phi_\sigma\end{aligned}$$ where $[\phi_{\nu}(x),\Pi_{\nu^{\prime}}(x^{\prime})]=i% \delta(x-x^{\prime})\delta_{\nu,\nu^{\prime}}$, ($\nu,\nu^{\prime}=b,% \sigma,\rho$). In the context of cold atoms, the Hamiltonian (\[eq:bosonized-spin\]) have been discussed in [@cazalilla_1d_bec; @recati03_fermi1d; @petrov04_bec_review]. The parameters $% K_\rho$, the Luttinger exponent, and $u_\rho,u_\sigma$, the charge and spin velocities, are known functions of the interactions [@gaudin_fermions; @schulz_hubbard_exact; @frahm_confinv], with $K_\rho=1$ in the non-interacting case, $g_{1\perp}$ is a marginally irrelevant interaction, and at the fixed point of the RG flow $K_{\sigma}^*=1$. For the bosonic system, the parameters $u_b,K_b$ can be obtained from numerical calculations[@kuhner_bosehubbard] in the lattice case or from the solution of the Lieb-Liniger model[@lieb_bosons_1D] in the continuum case. In the case of non-interacting bosons $K_b\to \infty$ and in the case of hard core bosons $K_b=1$.[@girardeau_bosons1d; @schultz_1dbose; @haldane_bosons] An important property of the parameters $K_b$ and $K_\rho$ is that they decrease as (respectively) the boson-boson and fermion-fermion interaction become more repulsive. The bosonized Hamiltonian (\[eq:bosonized-spin\]) is also valid in the lattice case (\[eq:lattice\]) provided that both $N_f$ and $N_b$ do not correspond to any commensurate filling. 
The fermion operators can be expressed as functions of the bosonic fields appearing in (\[eq:bosonized-spin\]) as [@giamarchi_book_1d]: $$\begin{aligned} \label{eq:fermion-bosonized} \psi_\sigma(x)= \sum_{r=\pm} e^{i rk_F n \alpha} \psi_{r,\sigma}(x=n\alpha) \\ \psi_{r,\sigma}(x) =\frac{e^{\frac{i}{\sqrt{2}} [\theta_{\rho}-r\phi_{\rho} +\sigma (\theta_{\sigma}-r\phi_{\sigma})](x)}}{\sqrt{2\pi\alpha}},\end{aligned}$$ where the index $r=\pm$ indicates the right/left movers, $\alpha$ is a cutoff equal to the lattice spacing in the case of the model Eq. (\[eq:lattice\]). Similarly, the boson operators are expressed as[@giamarchi_book_1d]: $$\begin{aligned} \label{eq:boson-bosonized} \frac{b_{n}}{\sqrt{\alpha}} = \Psi_b(x=n\alpha) \\ \Psi_b(x)=\frac {e^{i\theta_b}}{\sqrt{2\pi\alpha}} \left[ 1 + A \cos (2\phi_b -2 k_B x) \right].\end{aligned}$$ In Eqs. (\[eq:fermion-bosonized\])-(\[eq:boson-bosonized\]), we have introduced the dual fields[@giamarchi_book_1d] $\theta_\nu(x) =\pi \int^x \Pi_\nu(x^{\prime})dx^{\prime}$ ($\nu=\rho,\sigma,b$), $k_F=\pi N_f/2L$, and $k_B=\pi N_b/L$ where $L$ is the length of the system. The fermion density is given by[@giamarchi_book_1d]: $$\label{eq:fermion-density-bosonized} \sum_\sigma \frac{n_{f,n,\sigma}}{\alpha}=\rho_f(x=n\alpha)=-\frac{\sqrt{2}}{\pi}% \partial_x\phi_\rho +\frac{\cos (2k_F x -\sqrt{2}\phi_{\rho})}{\pi \alpha}% \cos \sqrt{2}\phi_\sigma,$$ and the boson density by[@giamarchi_book_1d]: $$\label{eq:boson-density-bosonized} \frac{n_{b,n}}{a}=\rho_b(x)=-\frac{1}{\pi}\partial_x\phi_b +\frac{\cos (2k_B x - 2\phi_b)}{\pi \alpha}.$$ The detuning term in (\[eq:nolattice\]) is thus expressed as: $$\begin{aligned} \label{eq:detun-bosonized} H_{detuning}=-\frac{\nu}{\pi}\int dx \partial_x\phi_b\end{aligned}$$ We now turn on a small $\lambda$ and a small $V_{BF}$. The effect of a small $V_{BF}$ on a boson-fermion mixture has been investigated previously[@cazalilla03_mixture; @mathey04_mix_polarons]. The forward scattering contribution is: $$\begin{aligned} \label{eq:V-term-bosonized} \frac{V_{BF}\sqrt{2}} {\pi^2} \int \partial_x \phi_b \partial_x \phi_{\rho},\end{aligned}$$ and as discussed in [@cazalilla03_mixture], it can give rise to a phase separation between bosons and fermions if it is too repulsive. Otherwise, it only leads to a renormalization of the Luttinger exponents. The atom molecule repulsion term also gives a backscattering contribution: $$\begin{aligned} \label{eq:cdw-locking} \frac{2V_{BF}}{(2\pi\alpha)^2} \int dx \cos (2\phi_b -\sqrt{2}\phi_\rho -2 (k_F-k_B) x)\cos \sqrt{2}\phi_\sigma,\end{aligned}$$ however in the general case, $k_F\ne k_B$ this contribution is vanishing. In the special case of $k_B=k_F$, the backscattering can result in the formation of a charge density wave. This effect will be discussed in Sec. \[sec:quantum-ising\]. The contribution of the $\lambda$ term is more interesting.[@sheehy_feshbach; @citro05_feshbach] Using Eqs. (\[eq:fermion-bosonized\])-(\[eq:boson-bosonized\]), we find that the most relevant contribution reads: $$\begin{aligned} \label{eq:lambda-bosonized} H_{bf}=\frac{2 \lambda}{\sqrt{2\pi^3 \alpha^3}} \int dx \cos (\theta_b -\sqrt{2}% \theta_\rho) \cos \sqrt{2}\phi_{\sigma}\end{aligned}$$ In the next section, we will see that this term gives rise to a phase with atom-molecule coherence when the repulsion is not too strong. 
Phase diagram {#sec:bf-coupling} ------------- ### phase with atom-molecule coherence The effect of the term (\[eq:lambda-bosonized\]) on the phase diagram can be studied by renormalization group techniques[@giamarchi_book_1d]. A detailed study of the renormalization group equations has been published in [@sheehy_feshbach]. Here, we present a simplified analysis, which is sufficient to predict the phases that can be obtained in our system. The scaling dimension of the boson-fermion coupling term (\[eq:lambda-bosonized\]) is: $\frac{1}{4K_b} +\frac 1 {2 K_\rho} + \frac 1 {2} K_\sigma$. For small $\lambda$ it is reasonable to replace $K_\sigma$ with its fixed point value $K_\sigma^*=1$. Therefore, the RG equation for the dimensionless coupling $\tilde{\lambda}=\frac{\lambda\alpha^{1/2}}{u}$ (where $u$ is one of the velocities $u_\rho,u_\sigma,u_b$) reads: $$\begin{aligned} \label{eq:RG-coupling} \frac{d\tilde{\lambda}}{d\ell} = \left(\frac 3 2 -\frac 1 {2K_{\rho}} -\frac 1 {4 K_b}\right) \tilde{\lambda},\end{aligned}$$ where $\ell$ is related to the renormalized cutoff $\alpha(\ell)=\alpha e^{\ell}$. We thus see that for $\frac 1 {2K_{\rho}}+\frac 1 {4 K_b} <3/2$, this interaction is relevant. Since for hardcore bosons[@girardeau_bosons1d; @schultz_1dbose] $% K_b=1$ and for non-interacting bosons $K_b=\infty$, while for free fermions $% K_\rho=1$ and in the lattice case for $U=\infty$ one has $K_\rho=1/2$[@kawakami_hubbard], we see that the inequality is satisfied unless there are very strongly repulsive interactions both in the boson system and in the fermion system. When this inequality is not satisfied, for instance in the case of fermions with nearest-neighbor repulsion[@mila_hubbard_etendu; @sano_extended_hubbard_1d], in which one can have $1/4<K_\rho<1/2$ and hardcore bosons with nearest neighbor repulsion[@shankar_spinless_conductivite; @haldane_luttinger], in which one can have $K_b=1/2$, the atoms and the molecules decouple. This case is analogous to that of the mixture of bosons and fermions[@mathey04_mix_polarons; @cazalilla03_mixture]and charge density waves can be formed if $k_B$ and $k_F$ are commensurate. The phase transition between this decoupled phase and the coupled phase belongs to the Berezinskii-Kosterlitz-Thouless (BKT) universality class.[@kosterlitz_thouless] As pointed out in [@sheehy_feshbach], in the decoupled phase, the effective interaction between the fermions can be attractive. In that case, a spin gap is formed [@luther_exact; @giamarchi_book_1d] and the fermions are in a Luther-Emery liquid state with gapless density excitations. Let us consider the coupled phase in more details. The relevance of the interaction (\[eq:lambda-bosonized\]) leads to the locking of $\phi_\sigma$, i.e. it results in the formation of a spin gap. To understand the effect of the term $\cos (\theta_b -\sqrt{2}\theta_\rho)$, it is better to perform a rotation: $$\label{eq:rot} \left( \begin{array}{c} \theta_{-} \\ \theta_{+} \end{array} \right) =\left( \begin{array}{cc} \frac 1 {\sqrt{3}} & -\frac{\sqrt{2}}{\sqrt{3}} \\ \frac{\sqrt{2}} {\sqrt{3}} & \frac 1{\sqrt{3}} \end{array} \right) \left( \begin{array}{c} \theta_b \\ \theta_\rho \end{array} \right),$$ and the same transformation for the $\phi_\nu$. This transformation preserves the canonical commutation relations between $\phi_{\pm}$ and $\Pi_{\pm}$. 
Under this transformation, $H_b+H_\rho$ becomes: $$\begin{aligned}
\label{eq:b-plus-rho}
H_b+H_\rho&=&\int \frac{dx}{2\pi}\sum_{\nu=\pm} \left[ u_\nu K_\nu (\pi \Pi_\nu)^2 + \frac{u_\nu}{K_\nu}(\partial_x \phi_\nu)^2\right] \nonumber \\
&& + \int \frac{dx}{2\pi} [g_1 (\pi \Pi_+)(\pi \Pi_-) + g_2 \partial_x\phi_+ \partial_x\phi_-],\end{aligned}$$ where: $$\begin{aligned}
\label{eq:corresp-uK}
u_+ K_+ &=& \frac 2 3 u_b K_b + \frac 1 3 u_\rho K_\rho, \nonumber \\
u_- K_- &=& \frac 1 3 u_b K_b + \frac 2 3 u_\rho K_\rho, \nonumber \\
g_1 &=& \frac{\sqrt{8}} 3 (u_b K_b -u_\rho K_\rho), \nonumber \\
\frac{ u_+} {K_+}&=& \frac {2 u_b} {3 K_b} + \frac {u_\rho} {3 K_\rho} + \frac{4V}{3\pi}, \\
\frac{ u_-} {K_-}&=& \frac { u_b} {3 K_b} + \frac {2 u_\rho} {3 K_\rho} - \frac{4V}{3\pi}, \\
g_2 &=& \frac{\sqrt{8}} 3 \left(\frac{u_b} {K_b} -\frac{u_\rho} {K_\rho} -\frac V \pi\right),\end{aligned}$$ while $H_{bf}$, defined in (\[eq:lambda-bosonized\]), becomes: $$\begin{aligned}
\label{eq:lambda-change}
H_{bf}=\frac{\lambda}{\sqrt{2\pi^3}\alpha} \int dx \cos \sqrt{3}\theta_{-} \cos \sqrt{2}\phi_\sigma.\end{aligned}$$ After the rotation, we see that when $\lambda$ is relevant, the field $\theta_-$ is also locked, but $\phi_+$ remains gapless. Since the field $\theta_-$ is the difference of the superfluid phase of the atoms and the one of the molecules, this means that when $\lambda$ becomes relevant, unpaired atoms and molecules share the same superfluid phase, i.e. they become coherent. The gap induced by the term $\lambda$ can be estimated from the renormalization group equation (\[eq:RG-coupling\]). Under the renormalization group, the dimensionless parameter $\tilde{\lambda}(\ell)$ grows until it becomes of order one at a scale $\ell=\ell^*$ where the perturbative approach breaks down. Beyond the scale $\ell^*$, the fields $\theta_-$ and $\phi_\sigma$ behave as classical fields. Therefore, the associated energy scale $u/(\alpha e^{\ell^*})$ is the scale of the gap. From this argument, we obtain that the gap behaves as: $$\label{eq:gap-RG}
\Delta \sim \frac {u}{\alpha} \left(\frac{\lambda \alpha^{1/2}}{u}\right)^{\frac 1 {\frac 3 2 -\frac 1 {2 K_\rho} -\frac {1}{4 K_b}}}.$$ The gapful excitations have a dispersion $\epsilon(k)=\sqrt{(uk)^2+\Delta^2}$ and are kinks and antikinks of the fields $\theta_-$ and $\phi_\sigma$.[@rajaraman_instanton] More precisely, since a kink must interpolate between degenerate classical ground states of the potential (\[eq:lambda-change\]), we find that when a kink is present $\theta_-(+\infty)-\theta_-(-\infty)=\pm \pi/\sqrt{3}$ and $\phi_{\sigma}(+\infty) -\phi_{\sigma}(-\infty)=\pi/\sqrt{2}$. This indicates that a kink carries a spin $1/2$, and makes the phase $\theta_b$ of the bosons jump by $\pi/3$ and the phase of the superfluid order parameter $\sqrt{2}\theta_\rho$ of the fermions jump by $-2\pi/3$. Since the current of bosons is $j_b = u_b K_b \pi \Pi_b = u_b K_b \partial_x \theta_b$ and the current of fermions is $j_F=\sqrt{2} u_\rho K_\rho \pi \Pi_\rho=\sqrt{2} u_\rho K_\rho \partial_x \theta_\rho$, this indicates that counterpropagating supercurrents of atoms and molecules exist in the vicinity of the kinks. Therefore, we can view the kinks and antikinks as composite objects formed of vortices bound with a spin 1/2. We note that the kinks and antikinks may not exhaust all the possible gapful excitations of the system. In particular, bound states of kinks and antikinks, known as breathers, may also be present[@rajaraman_instanton].
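Returning to the gap estimate, Eq. (\[eq:gap-RG\]) is easily evaluated once the Luttinger parameters are known. The following sketch (ours; the bare coupling and the pairs $(K_\rho,K_b)$ are illustrative choices) applies the relevance criterion obtained from Eq. (\[eq:RG-coupling\]) and prints the corresponding gap scale.

```python
import numpy as np

# Sketch: relevance criterion and gap estimate for the conversion term.
# The term is relevant when 1/(2 K_rho) + 1/(4 K_b) < 3/2; when it is,
# Eq. (gap-RG) gives Delta ~ (u/alpha) (lambda alpha^{1/2}/u)^{1/expo}.
u, alpha, lam_bare = 1.0, 1.0, 0.05       # illustrative units and bare coupling

for K_rho, K_b in [(1.0, 1.0), (0.5, 1.0), (0.3, 0.5)]:
    expo = 1.5 - 1.0 / (2 * K_rho) - 1.0 / (4 * K_b)
    if expo > 0:
        gap = (u / alpha) * (lam_bare * np.sqrt(alpha) / u) ** (1.0 / expo)
        print(f"K_rho={K_rho}, K_b={K_b}: relevant, gap ~ {gap:.3g}")
    else:
        print(f"K_rho={K_rho}, K_b={K_b}: irrelevant (decoupled Luttinger liquids)")
```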
These breathers, however, have a larger gap than the single kinks. Let us now turn to the gapless field $\phi_+$. This field has a simple physical interpretation. Consider the integral $$\begin{aligned}
\label{eq:connection-to-N}
-\frac 1 \pi \int_{-\infty}^{\infty} dx \partial_x \phi_+ &=& -\frac 1 {\pi\sqrt{3}} \int_{-\infty}^{\infty} dx\, \partial_x(\sqrt{2}\phi_b +\phi_\rho)=\frac{\mathcal{N}}{\sqrt{6}},\end{aligned}$$ which shows that $\phi_+(\infty)-\phi_+(-\infty)$ measures the total number of particles in the system $\mathcal{N}$. Thus $(\Pi_+,\phi_+)$ describe the total density excitations of the system. The resulting low-energy Hamiltonian describing the gapless total density modes reads: $$\begin{aligned}
\label{eq:gapless-modes}
H_+=\int \frac{dx}{2\pi} \left[ u^*_+ K^*_+ (\pi \Pi_+)^2 +\frac {u^*_+} {K^*_+}(\partial_x\phi_+)^2\right],\end{aligned}$$ where $u^*_+,K^*_+$ denote renormalized values of $u_+,K_+$. This renormalization is caused by the residual interactions between the gapless modes and the gapped modes measured by $g_1,g_2$ in Eq. (\[eq:b-plus-rho\]). Since $\phi_+$ measures the total density, the Hamiltonian (\[eq:gapless-modes\]) describes the propagation of sound modes in the 1D fluid with dispersion $\omega(k)=u|k|$. We note that in Refs. [@zhou05_mott_bosefermi; @zhou05_mott_bosefermi_long], dispersion relations similar to ours were derived for the sound modes and the superfluid phase difference modes using different methods.

### effect of the detuning and applied magnetic field

Having understood the nature of the ground state and the low-lying excited states when $\lambda$ is relevant, we turn to the effect of the detuning term. Eqs. (\[eq:detun-bosonized\]) and (\[eq:rot\]) show that the detuning term can be expressed as a function of $\phi_+,\phi_-$ as: $$\begin{aligned}
\label{eq:detun-pm}
H_{detun.}=-\frac{\nu}{\pi} \int dx\, \partial_x \left(\sqrt{\frac 2 3}\phi_+ + \frac{\phi_-}{\sqrt{3}} \right).\end{aligned}$$ This shows that the detuning does not affect the boson-fermion coupling (\[eq:lambda-bosonized\]) since it can be eliminated from the Hamiltonian by a canonical transformation $\phi_\pm \to \phi_\pm +\lambda_{\pm} x$, where $\lambda_+=\nu \sqrt{\frac 2 3}$ and $\lambda_-=\nu \frac{1}{\sqrt{3}}$. For a fixed total density, changing the detuning only modifies the wavevectors $k_B$ and $k_F$. As discussed extensively in Sec. \[sec:continuum-case\], for a detuning sufficiently large in absolute value, only molecules or only atoms are present, and near the critical value of the detuning, the compressibility of the system is divergent. We therefore conclude that in one dimension, the crossover from the molecular Bose condensate to the atomic superfluid state, as the detuning is varied, is the result of band-filling transitions at which either the density of the atoms or that of the molecules goes to zero. At such band filling transitions, $u_{\rho,\sigma}\to 0$ (respectively $u_b\to 0$) and bosonization breaks down [@nomura96_ferromagnet; @cabra_instabilityLL; @yang04_ferromagnet]. The two band filling transitions are represented in Fig. \[fig:band-filling\]. The cases at the extreme left and the extreme right of the phase diagram have been analyzed in [@fuchs04_resonance_bf], where it was shown that in the case of a broad Fano-Feshbach resonance, the zone of coexistence was very narrow. In the narrow Fano-Feshbach resonance case we are investigating, the zone of coexistence can be quite wide.
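In the hard-core limit of Sec. \[sec:continuum-case\], the two band-filling transitions of Fig. \[fig:band-filling\] can be located explicitly at fixed total density. The sketch below (ours; $\hbar=1$, hard-core molecules and free atoms are assumed, and the densities are arbitrary) gives the coexistence window in the detuning.

```python
import numpy as np

# Sketch: coexistence window in the detuning at fixed total density, read off
# from the hard-core results of the thermodynamics section (hbar = 1).
# Only atoms survive for nu > nu_plus, only molecules for nu < nu_minus.
m_F = 1.0
for rho_tot in [0.5, 1.0, 2.0]:                        # illustrative total densities
    nu_plus = (np.pi * rho_tot) ** 2 / (4 * m_F)        # from rho_c^(1) = 2 sqrt(m_F nu)/pi
    nu_minus = -((np.pi * rho_tot) ** 2) / (16 * m_F)   # from rho_c^(2) = 4 sqrt(m_F|nu|)/pi
    print(rho_tot, nu_minus, nu_plus)   # atoms and molecules coexist for nu_minus < nu < nu_plus
```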
![The band filling transitions as a function of the detuning[]{data-label="fig:band-filling"}](transitions.eps){width="9cm"} Application of a magnetic field can also induce some phase transitions. The interaction with the magnetic field reads: $$\begin{aligned} \label{eq:magfield} H_{magn}=-\frac{h}{\pi \sqrt{2}} \int dx \partial_x\phi_\sigma,\end{aligned}$$ The effect of the magnetic field is to lower the gap for the creation of kink excitations (remember that they carry a spin $1/2$). As a result, when it becomes larger than the gap, the magnetic field induces a commensurate incommensurate transition[@japaridze_cic_transition; @pokrovsky_talapov_prl; @schulz_cic2d; @horowitz_renormalization_incommensurable] that destroys the coherence between atoms and molecules and gives back decoupled Luttinger liquids[@chitra_spinchains_field]. In that regime, the behavior of the system is described by the models of Ref. . Commensurate-incommensurate transitions have been already discussed in the context of cold atoms in [@buechler03_cic_coldatoms]. In the problem we are considering, however , since two fields are becoming gapless at the same time, $\theta_-$ and $\phi_\sigma$ , there are some differences[@orignac_spintube; @yu_spin_orbital] with the standard case[@buechler03_cic_coldatoms], in particular the exponents at the transition are non-universal. To conclude this section, we notice that we have found three types of phase transitions in the system we are considering. We can have Kosterlitz-Thouless phase transitions as a function of interactions, where we go from a phase with locked superfluid phases between the bosons and the fermions at weak repulsion to a phase with decoupled bosons and fermions at strong repulsion. We can have band-filling transitions as a function of the detuning between the phase in which atoms and molecule coexist and phases where only atoms or only molecules are present. Finally, we can have commensurate-incommensurate transitions as a function of the strength of the magnetic field. In the following section, we discuss the correlation functions of superfluid and charge density wave order parameters in the phase in which molecules and atoms coexist with their relative superfluid phase $\theta_-$ locked. ### Quantum Ising phase transition for $k_F=k_B$ {#sec:quantum-ising} In the case of $k_F=k_B$, the backscattering term (\[eq:cdw-locking\]) is non-vanishing. This term induces a mutual locking of the densities of the bosons and the fermions[@cazalilla03_mixture] and favors charge density wave fluctuations. This term is competing with the Josephson term (\[eq:lambda-bosonized\]) which tends to reduce density wave fluctuations. 
For $k_F=k_B$ the relevant part of the Hamiltonian given by the combination of the terms (\[eq:cdw-locking\]) and (\[eq:lambda-bosonized\]) reads: $$\begin{aligned}
\label{eq:competing}
H_{\text{Josephson}+\text{CDW Lock.}} =\int dx \left[\frac{2\lambda}{\sqrt{2\pi^3 \alpha^3}} \cos ( \theta_b -\sqrt{2}\theta_\rho) +\frac{2V_{BF}}{(2\pi \alpha)^2} \cos (2\phi_b -\sqrt{2}\phi_\rho)\right] \cos \sqrt{2}\phi_\sigma\end{aligned}$$ Using a transformation $\phi_b =\tilde{\phi}_b/\sqrt{2}$, $\theta_b =\tilde{\theta}_b \sqrt{2}$, and introducing the linear combinations $$\begin{aligned}
\phi_1 =\frac{\tilde{\phi}_b +\phi_\rho}{\sqrt{2}} \\
\phi_2 = \frac{\tilde{\phi}_b - \phi_\rho}{\sqrt{2}}\end{aligned}$$ and similar combinations for the dual fields, we can rewrite the interaction term (\[eq:competing\]) as: $$\begin{aligned}
\label{eq:quantum-ising}
H_{\text{Josephson}+\text{CDW Lock.}} =\int dx \left[\frac{2\lambda}{\sqrt{2\pi^3 \alpha^3}} \cos 2\theta_2 + \frac{2V_{BF}}{(2\pi \alpha)^2} \cos 2\phi_2 \right] \cos \sqrt{2}\phi_\sigma\end{aligned}$$ From this Hamiltonian, it is immediately seen that a quantum Ising phase transition occurs between the density wave phase $\phi_2$ and the superfluid phase $\theta_2$ at a critical point $\lambda_c=\frac{V_{BF}}{\sqrt{8\pi \alpha}}$.[@finkelstein_2ch; @schulz_2chains; @fabrizio_dsg] Indeed, the field $\phi_\sigma$ being locked, we can replace $\cos \sqrt{2}\phi_\sigma$ by its expectation value in Eq. (\[eq:quantum-ising\]), and rewrite (\[eq:quantum-ising\]) as a Hamiltonian of free massive Majorana fermions.[@finkelstein_2ch; @schulz_2chains; @fabrizio_dsg] At the point $\lambda_c$, the mass of one of these Majorana fermions vanishes, giving a quantum critical point in the Ising universality class[@sachdev_book]. On one side of the transition, when $\lambda>\lambda_c$, the system is in the superfluid state discussed in Sec. \[sec:bf-coupling\]; on the other side, $\lambda<\lambda_c$, the charge density wave state discussed in [@cazalilla03_mixture] is recovered.

Correlation functions
---------------------

In order to better characterize the phase in which $\lambda$ is relevant, we need to study the correlation functions of the superfluid and the charge density wave operators. Let us begin by characterizing the superfluid order parameters. First, let us consider the order parameter for BEC of the molecules. As a result of the locking of the fields $\theta_-$ and $\phi_\sigma$, the boson operator Eq. (\[eq:boson-bosonized\]) becomes at low energy: $$\Psi_B(x)\sim \frac{e^{i\sqrt{\frac 2 3} \theta_+}}{\sqrt{2\pi\alpha}}\left\langle e^{-i\frac{\theta_-}{\sqrt{3}}}\right\rangle.$$ An order of magnitude of $\langle e^{-i\frac{\theta_-}{\sqrt{3}}}\rangle$ can be obtained from a scaling argument similar to the one giving the gap. Since the scaling dimension of the field $e^{-i\frac{\theta_-}{\sqrt{3}}}$ is $1/(12K_{-})$, and since the only lengthscale in the problem is $\alpha e^{\ell^*}$, we must have $\langle e^{-i\frac{\theta_-}{\sqrt{3}}}\rangle \sim e^{- \ell^*/(12K_{-})} \sim (\lambda \alpha^{1/2}/u)^{1/(12K_{-})}$.
Similarly, the order parameter for s-wave superconductivity of the atoms $O_{SS} = \sum_\sigma \psi_{r,\sigma} \psi_{-r,-\sigma}$ becomes: $$O_{SS} = \frac{e^{i\sqrt{2}\theta_\rho}}{\pi \alpha} \cos \sqrt{2} \phi_\sigma \sim \frac{e^{i\sqrt{\frac 2 3} \theta_+}}{\pi \alpha} \left\langle e^{\frac{2i}{\sqrt{3}}\theta_-} \cos \sqrt{2}\phi_\sigma \right\rangle,$$ thus indicating that the order parameters of the BEC and the BCS superfluidity have become identical in the low energy limit[@ohashi03_transition_feshbach; @ohashi03_collective_feshbach]. This is the signature of the coherence between the atom and the molecular superfluids. The boson correlator behaves as: $$\begin{aligned}
\langle \Psi_B(x,\tau) \Psi^\dagger_B(0,0)\rangle=\left(\frac{\alpha^2}{x^2 +(u \tau)^2}\right)^{\frac 1 {6K_+}}\end{aligned}$$ As a result, the molecule momentum distribution becomes $n_B(k)\sim |k|^{1/(3K_+)-1}$. One can see that the tendency towards superfluidity is strongly enhanced since the divergence of $n_B(k)$ for $k\to 0$ is increased by the coherence between the molecules and the atoms. This boson momentum distribution can, in principle, be measured in a condensate expansion experiment[@gerbier05_phase_mott_coldatoms; @altman04_exploding_condensates]. Having seen that superfluidity is enhanced in the system, with BEC and BCS order parameters becoming identical, let us turn to the density wave order parameters. These order parameters are simply the staggered components of the atom and molecule density in Eqs. (\[eq:fermion-density-bosonized\])–(\[eq:boson-density-bosonized\]). In terms of $\phi_\pm$, the staggered component of the molecule density is reexpressed as: $$\begin{aligned}
\label{eq:boson-staggered-gap}
\rho_{2k_B,b}(x)\sim \cos \left[2\left(\frac{\phi_-}{\sqrt{3}} +\frac{\sqrt{2}}{\sqrt{3}} \phi_+\right) -2k_B x\right],\end{aligned}$$ and the staggered component of the fermion density as: $$\begin{aligned}
\label{eq:fermion-staggered-gap}
\rho_{2k_F,f}(x)\sim \cos \left[\sqrt{2} \left(-\frac{\sqrt{2}}{\sqrt{3}}\phi_- +\frac{1}{\sqrt{3}} \phi_+\right) -2k_F x\right],\end{aligned}$$ where we have taken into account the long range ordering of $\phi_\sigma$. We see that the correlations of both $\rho_{2k_B,b}(x)$ and $\rho_{2k_F,f}(x)$ decay exponentially due to the presence of the disorder field $\phi_-$ dual to $\theta_-$. In more physical terms, the exponential decay of the density-wave correlations in the system results from the constant conversion of molecules into atoms and the reciprocal process, which prevents the buildup of a well-defined atomic Fermi surface or molecular pseudo-Fermi surface. The exponential decay of the density wave correlations in this system must be contrasted with the power-law decay of these correlations in a system with only bosons or in a system of fermions with attractive interactions.[@giamarchi_book_1d] In fact, if we consider that our new particles correspond to the operator $\psi_b\sim \psi_\uparrow \psi_\downarrow$, we can derive an expression of the density operators of these new particles by considering the product $\psi_b^\dagger \psi_\uparrow \psi_\downarrow$.
Using the Haldane expansion of the boson creation and annihilation operators[@haldane_bosons; @lukyanov_xxz_asymptotics], we can write this product as: $$\begin{aligned} \psi^\dagger_b \psi_\uparrow \psi_\downarrow &\sim& e^{-i\theta_b} \left[\sum_{m=0}^\infty \cos (2 m \phi - 2 m k_B x) \right] \times e^{i\sqrt{2} \theta_\rho}\left[\cos \sqrt{2}\phi_\sigma + \cos (\sqrt{2} \phi_\rho - 2 k_F x)\right] \nonumber \\ &\sim& \langle e^{-i(\theta_b-\sqrt{2}\theta_\rho)} \cos \sqrt{2} \phi_\sigma \rangle \cos (\sqrt{6} \phi_+ -2 (k_F+k_B) x),\end{aligned}$$ where $(k_F+k_B)=\pi (2N_b+N_f)/2L=\pi \rho_{pairs}$ can be interpreted as the pseudo Fermi wavevector of composite bosons. A scaling argument shows that the prefactor in the expression varies as a power of $\lambda$. As a result, when there is coherence between atoms and molecule, power-law correlations appear in the density-density correlator near the wavevector $2k_F+2k_B$ and the intensity of these correlations is proportional to the $ |\langle e^{-i(\theta_b-\sqrt{2}\theta_\rho)} \cos \sqrt{2} \phi_\sigma \rangle|^2$. The resulting behavior of the Fourier transform of the density-density correlator is represented on Fig. \[fig:chi-q\]. ![Fourier transform of the static density density correlations. In the decoupled phase (dashed line), two peaks are obtained at twice the Fermi wavevector of the unpaired atoms and at twice the pseudo-Fermi wavevector of the molecules. In the coupled phase (solid line), the peaks are replaced by maxima at $Q=2k_F$ and $Q=2k_B$. A new peak at $Q=2(k_F+k_B)$ is obtained as a result of Boson-Fermion coherence. []{data-label="fig:chi-q"}](density-correlations.eps){width="9cm"} Another interesting consequence of the existence of atom/molecule coherence is the possibility of having non-vanishing cross-correlations of the atom and the molecule density. In the three-dimensional case such cross correlations have been studied in [@ohashi05_bcs_bec_collective]. If we first consider cross-correlations $\langle T_\tau \rho_{2k_B,b}(x,\tau) \rho_{2k_F,f}(0,0)\rangle$ we notice that due to the presence of different exponentials of $\phi_+$ in Eqs. (\[eq:boson-staggered-gap\])– (\[eq:fermion-staggered-gap\]), this correlator vanishes exactly. Therefore, no cross correlation exists between the staggered densities. However, if we consider the cross correlations of the uniform densities, we note that since they can all be expressed as functions of $% \partial_x\phi_+,\partial_x\phi_-$, such cross correlations will be non-vanishing. More precisely, since: $$\begin{aligned} \rho_F &=& -\frac{\sqrt{2}}{\pi\sqrt{3}}\partial_x\phi_+ +\frac{2}{\pi\sqrt{3}}\partial_x\phi_- \\ \rho_B &=& -\frac{\sqrt{2}}{\pi\sqrt{3}}\partial_x\phi_+ -\frac{1}{\pi\sqrt{3}}\partial_x\phi_-,\end{aligned}$$ at low energy we have: $\rho_F \sim \rho_B \sim -\frac{\sqrt{2}}{\pi\sqrt{3}} \partial_x\phi_+$. The Luther Emery point {#sec:luther_emery} ---------------------- In this Section, we will obtain detailed expressions for these correlation functions at a special exactly solvable point of the parameter space. At this point, the kinks of the fields $\phi_\sigma$ and $\theta_-$ become free massive fermions. This property, and the equivalence of free massive fermions in 1D with the 2D non-critical Ising model[@luther_ising; @zuber_77; @schroer_ising; @kadanoff_gaussian_model; @ogilvie_ising; @boyanovsky_ising; @itzykson-drouffe-1] allows one to find exactly the correlation functions. 
### mapping on free fermions

As we have seen, after the rotation (\[eq:rot\]), if we neglect the interaction terms of the form $\Pi_+\Pi_-$ or $\partial_x\phi_+\partial_x\phi_-$, the Hamiltonian of the massive modes $\phi_-,\phi_\sigma$ can be rewritten as: $$\begin{aligned}
\label{eq:massive}
H&=&\int \frac{dx}{2\pi} \left[ u_\sigma^* K_\sigma^* (\pi \Pi_\sigma)^2 +\frac {u_\sigma^*} {K_\sigma^*}(\partial_x\phi_\sigma)^2\right] \nonumber \\
&& + \int \frac{dx}{2\pi} \left[ u_{-} K_{-} (\pi \Pi_{-})^2 +\frac {u_{-}} {K_{-}}(\partial_x\phi_{-})^2\right] \nonumber \\
&& + \frac{\lambda}{\sqrt{2\pi^3\alpha^3}} \int dx \cos \sqrt{3}\theta_- \cos \sqrt{2}\phi_\sigma,\end{aligned}$$ where $K_\sigma^*=1$. When the Luttinger exponent is $K_-=3/2$, it is convenient to introduce the fields: $$\begin{aligned}
\overline{\phi}=\sqrt{\frac 3 2}\theta_- \\
\overline{\theta}=\sqrt{\frac 2 3}\phi_-,\end{aligned}$$ and rewrite the Hamiltonian (\[eq:massive\]) as: $$\begin{aligned}
\label{eq:massive-bar}
H&=&\int \frac{dx}{2\pi} \left[ u_\sigma^* (\pi \Pi_\sigma)^2 +{u_\sigma^*} (\partial_x\phi_\sigma)^2\right] \nonumber \\
&& + \int \frac{dx}{2\pi} \left[ u_{-} (\pi \overline{\Pi})^2 + {u_{-}} (\partial_x\overline{\phi})^2\right] \nonumber \\
&& + \frac{\lambda}{\sqrt{2\pi^3}\alpha} \int dx \cos \sqrt{2} \overline{\phi} \cos \sqrt{2}\phi_\sigma.\end{aligned}$$ If we neglect the velocity difference, i.e. assume that $u_\sigma^*=u_-=u$, and introduce the pseudofermion fields: $$\begin{aligned}
\label{eq:pseudofermions}
\Psi_{r,\sigma}=\frac {e^{\frac{i}{\sqrt{2}}[(\overline{\theta}-r \overline{\phi}) + \sigma(\theta_\sigma-r \phi_\sigma)]}}{\sqrt{2\pi\alpha}},\end{aligned}$$ we see immediately that the Hamiltonian (\[eq:massive-bar\]) is the bosonized form of the following free fermion Hamiltonian: $$\begin{aligned}
\label{eq:fermionized-ham}
H= \sum_\sigma \int dx \left[ -i u \sum_{r=\pm} r \Psi_{r,\sigma}^\dagger \partial_x \Psi_{r,\sigma} + \frac{\lambda}{\sqrt{2\pi\alpha}} \sum_{r=\pm} \Psi_{r,\sigma}^\dagger \Psi_{-r,\sigma}\right].\end{aligned}$$ As a result, for the special value of $K_-=3/2$, the excitations can be described as massive free fermions with dispersion $\epsilon(k)=\sqrt{(u k)^2 +m^2}$, where $m=\frac{|\lambda|}{\sqrt{2\pi\alpha}}$. This is known as the Luther-Emery solution.[@luther_exact; @coleman_equivalence] One can see that the fermions carry a spin $1/2$ and a jump of the phase $\theta_-$ equal to $\frac{\pi}{\sqrt{3}}$. Therefore they can be identified with the kinks obtained in the semiclassical treatment of Sec. \[sec:bf-coupling\]. Also, making all velocities equal and $V_{BF}=0$ in (\[eq:corresp-uK\]), we find the relation $3/K_-=1/K_b+2/K_\rho$ and thus the gap given by the RG varies as $\Delta \sim \frac{u}{\alpha} \left(\frac{\lambda \alpha^{1/2}}{u}\right)^{\frac 1 {\frac 3 2 - \frac{3}{4K_-}}}$. For $K_-=3/2$ this expression reduces to the one given by the fermion mapping.
### Correlation functions

To calculate the correlation functions, it is convenient to introduce the fields: $$\begin{aligned}
\Phi_\sigma &=& \frac 1 {\sqrt{2}} (\overline{\phi}+\sigma\phi_\sigma) \\
\Theta_\sigma &=& \frac 1 {\sqrt{2}} (\overline{\theta}+\sigma\theta_\sigma)\end{aligned}$$ and to reexpress the operators as: $$\begin{aligned}
e^{i\frac 2 {\sqrt{3}} \phi_-} &=& e^{i(\Theta_\uparrow + \Theta_\downarrow)} \nonumber \\
e^{i\sqrt{2} \phi_\sigma} e^{- i\frac 2 {\sqrt{3}} \phi_-} &=& e^{i(\Phi_\uparrow - \Phi_\downarrow)}e^{-i(\Theta_\uparrow + \Theta_\downarrow)} \sim \Psi^\dagger_{+,\uparrow} \Psi^\dagger_{-,\downarrow} \nonumber \\
e^{-i\sqrt{2} \phi_\sigma} e^{- i\frac 2 {\sqrt{3}} \phi_-} &=& e^{-i(\Phi_\uparrow - \Phi_\downarrow)}e^{-i(\Theta_\uparrow + \Theta_\downarrow)} \sim \Psi^\dagger_{-,\uparrow} \Psi^\dagger_{+,\downarrow}\end{aligned}$$ This leads us to the following expression of the fermion density $\rho_{2k_F,f}(x)$: $$\begin{aligned}
\label{eq:rho}
\rho_{2k_F,f}(x) =e^{i\left[\sqrt{\frac 2 3} \phi_+ -2k_F x\right]} ( \Psi^\dagger_{-,\uparrow} \Psi^\dagger_{+,\downarrow} +\Psi^\dagger_{+,\uparrow} \Psi^\dagger_{-,\downarrow} ) + \text{H. c.} ,\end{aligned}$$ which enables us to find its correlations exactly. We introduce $u=u_\sigma^*$ to simplify the notation. The Green’s functions of the fermions read: $$\begin{aligned}
\label{eq:green-dirac-+}
G_{++}(x,\tau)&=&-\frac m {2\pi u} \frac{\tau+i\frac x u}{\sqrt{\tau^2+\frac{x^2}{u^2}}} K_1\left(m\sqrt{\tau^2+\left(\frac x u\right)^2}\right) \\
G_{--}(x,\tau)&=&-\frac m {2\pi u} \frac{\tau-i\frac x u}{\sqrt{\tau^2+\frac{x^2}{u^2}}} K_1\left(m\sqrt{\tau^2+\left(\frac x u\right)^2}\right) \\
G_{-+}(x,\tau)&=&G_{+-}(x,\tau)=-\frac m {2\pi u} K_0\left(m\sqrt{\tau^2+\left(\frac x u\right)^2}\right)\end{aligned}$$ Using (\[eq:rho\]) and Wick’s theorem we find that in real space the density density correlations read: $$\begin{aligned}
\label{eq:fermion-le-realspace}
\langle T_\tau \rho_{2k_F,f}(x,\tau) \rho_{-2k_F,f}(0,0)\rangle = 2 \left(\frac{m}{2\pi u}\right)^2 \left(\frac{\alpha^2}{x^2+(u\tau)^2}\right)^{\frac {K_+}{6}} \left[K_0^2\left(m\sqrt{\tau^2+\left(\frac x u\right)^2}\right) + K_1^2\left(m\sqrt{\tau^2+\left(\frac x u\right)^2}\right)\right].\end{aligned}$$ These correlation functions decay exponentially, with a correlation length $u/m=\xi$. Note that expression (\[eq:fermion-le-realspace\]) is exact. On the other hand, the boson density is given by: $$\begin{aligned}
\rho_{2k_B,b}(x) =e^{i\left[\sqrt{\frac 8 3} \phi_+ -2k_B x\right]} e^{i(\Theta_\uparrow +\Theta_\downarrow)} + H.
c.\end{aligned}$$ To calculate the correlation functions in this case, we can use the equivalence of the Dirac fermions in (1+1)D with the 2D non critical Ising model[@luther_ising; @zuber_77; @kadanoff_gaussian_model; @schroer_ising; @boyanovsky_ising] to express the boson fields in terms of the order and disorder parameters of two non-critical Ising models, $\sigma_{1,2},\mu_{1,2}$ respectively, by: $$\begin{aligned} \cos \Theta_\sigma &=& \sigma_{1} \mu_{2} \\ \sin \Theta_\sigma &=& \mu_{1} \sigma_{2} \\ \cos \Phi_\sigma &=& \sigma_{1} \sigma_{2} \\ \sin \Phi_\sigma &=& \mu_{1} \mu_{2}\end{aligned}$$ We find that: $$\begin{aligned} \langle T_\tau\rho_{2k_B,b}(x,\tau) \rho_{2k_B,b}(0,0)\rangle \sim \left(% \frac{\alpha^2}{x^2+(u\tau)^2} \right)^{\frac {2K_+}{3}} 4 \langle \sigma(x,\tau) \sigma(0,0) \rangle^2 \langle \mu(x,\tau) \mu(0,0) \rangle^2,\end{aligned}$$ where we have used $\langle \sigma_{1,2}(x,\tau) \sigma_{1,2}(0,0)\rangle=\langle \sigma(x,\tau) \sigma(0,0)\rangle$ and similarly for $\mu$. In the bosonic case, the mapping on the 2D Ising model allows to calculate the correlation functions using the results of Ref. [@wu_ising], where an exact expression of the correlation functions of the Ising model in terms of Painlevé III transcendants[@ince_odes] was derived. In fact, since we are interested in the low-energy, long-distance properties of the system, it is enough to replace the Painlevé transcendants with an approximate expression in terms of modified Bessel functions. The resulting approximate expression is: $$\begin{aligned} \label{eq:boson-le-realspace} \langle T_\tau\rho_{2k_B,b}(x,\tau) \rho_{2k_B,b}(0,0)\rangle \sim \left(% \frac{\alpha^2}{x^2+(u\tau)^2} \right)^{\frac {2K_+}{3}} K_0^2\left(m \sqrt{\tau^2+\left( \frac x u \right)^2}\right).\end{aligned}$$ Knowing the correlation function in Matsubara space allows us to obtain them in the reciprocal space via Fourier transforms. The Fourier transform of the density-density response functions (\[eq:fermion-le-realspace\]) and (\[eq:boson-le-realspace\]) can be obtained in Matsubara space from integrals derived in the Appendix \[app:integral\]. We find that the bosonic structure factor is: $$\begin{aligned} \label{eq:boson_structure_factor} \chi_{\rho\rho}^B(\pm 2k_B+q,\omega)=\frac {2\pi}{u} \left(\frac{m\alpha}{u}% \right)^{\frac{4K_+}{3}} \left(\frac m u\right)^2 \frac{\sqrt{\pi}% \Gamma\left(1-\frac{2K_+}{3}\right)^3}{4\Gamma\left(\frac 3 2 -\frac{2K_+}{3}% \right)} {}_3F_2\left(1-\frac{2K_+}{3},1-\frac{2K_+}{3},1-\frac{2K_+}{3}% ;\frac 3 2 -\frac{2K_+}{3},1;-\frac{\omega^2+(uq)^2}{4m^2}\right),\end{aligned}$$ where $\Gamma(x)$ is the Gamma function and ${}_3F_2(\ldots;\ldots;z)$ is a generalized hypergeometric function.[@erdelyi_functions_1] Of course, since Eq. (\[eq:boson-le-realspace\]) is a long distance approximation, the expression (\[eq:boson\_structure\_factor\]) is also approximate. More precisely, the exact expression possesses thresholds at higher frequencies associated with the excitation of more than one pair of kinks in the intermediate state. However, the expression (\[eq:boson\_structure\_factor\]) is *exact* as long as $\omega$ is below the lowest of these thresholds. 
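The closed forms (\[eq:fermion-le-realspace\]) and (\[eq:boson-le-realspace\]) are straightforward to evaluate numerically. The sketch below (ours; the values of $m$, $u$, $\alpha$ and $K_+$ are arbitrary) uses scipy's modified Bessel functions to compute the equal-time decay of the two density-wave correlators and extracts the exponential decay rate.

```python
import numpy as np
from scipy.special import kv

# Sketch: equal-time (tau = 0) decay of the 2k_F and 2k_B density-wave
# correlations at the Luther-Emery point, Eqs. (fermion-le-realspace) and
# (boson-le-realspace).  m, u, alpha and K_plus are illustrative numbers.
m, u, alpha, K_plus = 0.2, 1.0, 0.1, 0.8
x = np.linspace(0.5, 40.0, 200)
r = x / u                                    # sqrt(tau^2 + (x/u)^2) at tau = 0

fermion = (2 * (m / (2 * np.pi * u)) ** 2 * (alpha ** 2 / x ** 2) ** (K_plus / 6)
           * (kv(0, m * r) ** 2 + kv(1, m * r) ** 2))
boson = (alpha ** 2 / x ** 2) ** (2 * K_plus / 3) * kv(0, m * r) ** 2

# the squared Bessel functions make both correlators fall off essentially as
# exp(-2 m x / u) at large x; compare the fitted rate with -2 m / u
print(np.polyfit(x[-50:], np.log(fermion[-50:]), 1)[0], -2 * m / u)
```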
For the fermions, the expression of the structure factor is exact and reads: $$\begin{aligned} \label{eq:fermion_structure_factor} \chi_{\rho\rho}^F(\pm 2k_F + q,\omega)&=&\frac 1 {2\pi u} \left(\frac{m\alpha% }{u}\right)^{\frac{K_+}{3}} \left[ \frac{\Gamma\left(1-\frac{K_+}{6}\right)^3% }{\Gamma\left(\frac 3 2 -\frac{K_+}{6}\right)} {}_3F_2\left(1-\frac{K_+}{6}% ,1-\frac{K_+}{6},1-\frac{K_+}{6};\frac 3 2 -\frac{K_+}{6},1;-\frac{% \omega^2+(uq)^2}{4m^2}\right)\right. \nonumber \\ &&\left. + \frac{\Gamma\left(2-\frac{K_+}{6}\right)\Gamma\left(1-\frac{K_+}{6% }\right)\Gamma\left(-\frac{K_+}{6}\right)}{\Gamma\left(\frac 3 2 -\frac{K_+}{% 6}\right)} {}_3F_2\left(2-\frac{K_+}{6},1-\frac{K_+}{6},-\frac{K_+}{6};\frac 3 2 -\frac{K_+}{6},1;-\frac{\omega^2+(uq)^2}{4m^2}\right) \right]\end{aligned}$$ The response functions are then obtained by the substitution $i\omega\to \omega+i0$. Since the generalized hypergeometric functions $% {}_{p+1}F_p(\ldots;\ldots;z)$ are analytic for $|z|<1$ [@slater66_hypergeom_book], the imaginary part of the response functions is vanishing for $\omega<2m$. For $\omega>2m$, the behavior of the imaginary part is obtained from a theorem quoted in [@olsson66_gen_hypergeometric; @buehring01_hypergeometric]. According to the theorem, $$\begin{aligned} \frac{\Gamma(a_1)\ldots \Gamma(a_p)}{\Gamma(b_1)\ldots \Gamma(b_p)} {}% _{p+1}F_p(a_1\ldots a_{p+1};b_1\ldots b_p; z) =\sum_{m=0}^\infty g_m(0) (1-z)^m + (1-z)^{s_p} \sum_{m=0}^{\infty} g_m(s_p) (1-z)^m,\end{aligned}$$ provided that: $$\begin{aligned} s_p=\sum_{i=1}^p b_i -\sum_{i=1}^{p+1} a_i,\end{aligned}$$ is not an integer. Therefore, if $s_p<0$, the generalized hypergeometric function possesses power-law divergence as $z\to 1$. If $0<s_p<1$, it has a cusp singularity. In our case, for the fermion problem, we have $$\begin{aligned} s^F_2=1+\frac 3 2 -\frac{K_+}{6} -3 \left(1-\frac{K_+}{6}\right)= \frac{K_+}{% 3}-\frac 1 2\end{aligned}$$ and in the boson problem $$\begin{aligned} s_2^B=1+\frac 3 2 -\frac 2 3 K_+ -3\left(1-\frac 2 3 K_+\right) = \frac{4K_+% }{3}-\frac 1 2\end{aligned}$$ Therefore, for $K_+$ small, both the fermion and the boson density-density response functions show power law singularities for $\omega\to 2m$. For $% K_+=3/8$, the singularity in the boson density-density response is replaced by a cusp. This cusp disappears when $K_+=9/8$. For $K_+=3/2$ the singularity in the fermion density density correlator also disappears. The behavior of the imaginary part of the boson density-density correlation function is shown in Fig.\[fig:imcorrfun\]. An exact expression of the imaginary part of the ${}_3F_2$ function can be deduced from the calculations in [@olsson66_gen_hypergeometric]. Indeed, in [@olsson66_gen_hypergeometric], it was found that: $$\begin{aligned} {}_3F_2(a_1,a_2,a_3;b_1,b_2;z)=F_R(a_1,a_2,a_3;b_1,b_2;z) + \frac{% \Gamma(b_1)\Gamma(a_1+a_2+a_3-b_1-b_2)}{\Gamma(a_1)\Gamma(a_2)\Gamma(a_3)}% \xi(a_1,a_2,a_3;b_1,b_2;z),\end{aligned}$$ where $F_R$ is defined by a series that converges absolutely for $% \mathrm{Re}(z)>1/2$ and thus has no cut along $[1,+\infty]$ , and $\xi$ is singular along $[1,+\infty]$. 
$\xi$ can be expressed in terms of a higher hypergeometric function of two variables, the Appell function $F_3$, as: $$\begin{aligned} \xi(a_1,a_2,a_3;b_1,b_2;z) &=& z^{a_1-b_1-b_2+1} (1-z)^{b_1+b_2-a_1-a_2-a_3} \nonumber \\ && \times F_3(b_1-a_1,1-a_2,b_2-a_1,1-a_3,b_1+b_2-a_1-a_2-a_3+1,1-1/z,1-z).\end{aligned}$$ Since only $\xi$ has a cut along $[1,\infty]$, for $\omega^2>(vq)^2+4m^2$, $% z>1$, the imaginary part can be expressed as a function of $F_3$ only. This is particularly useful in the case of the fermion density correlators, because the expression (\[eq:fermion\_structure\_factor\]) is exact. In the case of the bosons, thresholds associated with the excitation of a larger number of Majorana fermions will appear at energies $\sim 4m$. The imaginary parts of correlation functions Eq. (\[eq:boson\_structure\_factor\])– (\[eq:fermion\_structure\_factor\]) can be measured by Bragg spectroscopy[@stenger99_bragg_bec; @stamper-kurn99_bragg_bec; @zambelli00_dynamical_structure_factor]. In Fig.\[fig:imcorrfun\] we plot the imaginary part of density correlation functions for the fermionic system with $K_+=1/4$ and $K_+=1/2$ as a function of frequency. In the first case, we obtain a divergence of the density-density response near the threshold, whereas in the second case, only a cusp is obtained. ![The imaginary part of the density-density correlation function for the fermionic system with $K_+=1/4,1/2$.[]{data-label="fig:imcorrfun"}](imcorrfun.eps) ### spectral functions of the fermions We also wish to calculate the spectral functions of the original fermions $% \psi_{r,\sigma}$ (not the pseudofermions $\Psi_{r,\sigma}$). To obtain these spectral functions, we express the operators $\psi_{r,\sigma}$ as a function of the fields $\phi_+,\Phi_{\uparrow,\downarrow}$ and their dual fields. We find: $$\begin{aligned} \psi_{+,\sigma}(x)&=&\frac 1 {\sqrt{2\pi\alpha}} e^{\frac i {\sqrt{6}} (\theta_+-\phi_+)} e^{i \left[-\Theta_{-\sigma} +\frac 5 6 \Phi_{-\sigma}% \right]} e^{-\frac i 6 \Phi_\sigma} \\ \psi_{-,\sigma}(x)&=&\frac 1 {\sqrt{2\pi\alpha}} e^{\frac i {\sqrt{6}} (\theta_++\phi_+)} e^{i \left[\Theta_{\sigma} +\frac 5 6 \Phi_{\sigma}\right]% } e^{-\frac i 6 \Phi_{-\sigma}}\end{aligned}$$ Therefore, the Green’s functions of the original fermions factorize as: $$\begin{aligned} -\langle T_\tau \psi_{+,\sigma}(x,\tau) \psi_{+,\sigma}(0,0)\rangle = G_{+}(x,\tau) G_{-\sigma}^{-1,5/6}(x,\tau) G_{\sigma}^{0,1/6}(x,\tau),\end{aligned}$$ where: $$\begin{aligned} G_{+}(x,\tau) = -\langle T_\tau e^{\frac i {\sqrt{6}} (\theta_+-\phi_+)(x,\tau)} e^{-\frac i {\sqrt{6}} (\theta_+-\phi_+)(0,0)} \rangle=\frac{\alpha}{u\tau -ix} \left(\frac{\alpha^2}{x^2+(u\tau)^2}% \right)^{\frac 1 {24}\left(\sqrt{K_+}-\frac 1 {\sqrt{K_+}}\right)^2},\end{aligned}$$ $$\begin{aligned} \label{eq:for-LZ} G_{-\sigma}^{-1,5/6}(x,\tau)=-\langle T_\tau e^{i \left[-\Theta_{-\sigma} +\frac 5 6 \Phi_{-\sigma}\right](x,\tau)} e^{i \left[\Theta_{\sigma} -\frac 5 6 \Phi_{\sigma}\right](0,0)}\rangle,\end{aligned}$$ and: $$\begin{aligned} \label{eq:for-Bernard} G_{-\sigma}^{0,1/6}(x,\tau)=-\langle T_\tau e^{-\frac i 6 \Phi_\sigma(x,\tau)} e^{\frac i 6 \Phi_\sigma(0,0)} \rangle.\end{aligned}$$ The correlator in Eq. (\[eq:for-Bernard\]) satisfies differential equations that were derived in [@bernard94_ising_equadiff]. However, since the fields $\Phi_\sigma$ are long range ordered, we can simply replace the terms $e^{\pm \frac i 6 \Phi_\sigma}$ by their expectation value $% \langle e^{\pm \frac i 6 \Phi_\sigma}\rangle$. 
This approximation only affects the behavior of the fermion correlator at high energy. Therefore, we are left with (\[eq:for-LZ\]) to evaluate. To do this, we can use exact results derived in [@lukyanov_soliton_ff; @essler_quarter_filled; @tsvelik_spectral_cdw] to obtain a form-factor expansion of Eq. (\[eq:for-LZ\]). The first term of the form-factor expansion yields: $$\begin{aligned} G_{\sigma}^{-1,5/6}(x,\tau)=\int \frac{d\psi}{2\pi} e^{\frac 5 6 \psi} e^{m(i\frac x u \sinh \psi - \tau \cosh \psi)} + O(e^{-3m\sqrt{\tau^2+(x/u)^2}})\end{aligned}$$ Writing $$\begin{aligned} u \tau =\rho \cos \varphi \\ x = \rho \sin \varphi\end{aligned}$$ we finally obtain that: $$\begin{aligned} G(x,\tau)\sim e^{i\varphi} \left(\frac \alpha \rho\right) ^{\frac 1 {12}\left( K_++\frac 1 {K_+}\right)} K_{\frac 5 6} \left(\frac m u \rho\right).\end{aligned}$$ The Fourier transform of $G(x,\tau)$ is given by a Weber-Schaefheitlin integral[@weber_erdelyi; @weber_gradshteyn; @weber_magnus; @tsvelik_spectral_cdw; @essler_quarter_filled]. In final form, the Fourier transform of the fermion Green’s function reads: $$\begin{aligned} \hat{G}(q,\omega_n)\sim {}_2F_1\left(\frac 7 4 -\frac 1 {24} (K_++K_+^{-1}),\frac {13} {12} -\frac 1 {24} (K_++K_+^{-1});2;-\frac{(uq)^2+\omega_n^2}{m^2}\right)\end{aligned}$$ When this Green’s function is analytically continued to real frequency, it is seen[@abramowitz_math_functions] that it has a power-law singularity for $\omega^2=(uq)^2+m^2$, and is analytic for $\omega$ below this threshold. As a result, the fermion Green’s function vanishes below the gap, as it would do in a superconductor[@abrikosov_book], and this could be checked experimentally. Note however that the anomalous Green’s function remains zero[@orignac03_gorkov], due to the fluctuations of the phase $\theta_+$. Mott insulating state {#sec:mott-insul-state} ===================== Until now, we have only considered the case of the continuum system (\[eq:nolattice\]) or incommensurate filling in the lattice system (\[eq:lattice\]). We now turn to a lattice system at commensurate filling. We first establish a generalization of the Lieb-Schultz-Mattis theorem[@lieb_lsm_theorem; @affleck_lieb; @rojo_lsm_generalized; @affleck_lsm; @yamanaka_luttinger_thm; @cabra_ladders] in the case of the boson-fermion mixture described by the Hamiltonian (\[eq:lattice\]). This will give us a condition for the existence of a Mott insulating state without spontaneous breakdown of translational invariance. Then, we will discuss the properties of the Mott state using bosonization. We note that Mott states have been studied in the boson-fermion model in Refs. [@zhou05_mott_bosefermi; @zhou05_mott_bosefermi_long], but not in a one-dimensional case. Finally, we will consider the case when the molecules or the atoms can form a Mott insulating state in the absence of boson-fermion conversion, and we will show that this Mott state is unstable. Generalized Lieb-Schultz-Mattis theorem {#sec:gener-lieb-schulz} --------------------------------------- A generalized Lieb-Schultz-Mattis theorem can be proven for the boson-fermion mixture described by the lattice Hamiltonian (\[eq:lattice\])[@lieb_lsm_theorem; @affleck_lieb; @affleck_lsm; @yamanaka_luttinger_thm]. Let us introduce the operator: $$\begin{aligned} U=\exp\left[i \frac {2\pi}{N} \sum_{j=1}^N (2 b^\dagger_jb_j + f^\dagger_{j,\downarrow}f_{j,\downarrow} + f^\dagger_{j,\uparrow}f_{j,\uparrow}) \right],\end{aligned}$$ such that $U^\dagger H_{bf} U=H_{bf}$. 
Following the arguments in Ref.[@yamanaka_luttinger_thm], one has: $$\begin{aligned} \langle 0 | U^\dagger H U - H|0 \rangle &=& O\left(\frac 1 N\right) \\ U^\dagger T U &=& T e^{i 2\pi \nu},\end{aligned}$$ where $|0\rangle$ is the ground state of the system, $T$ is the translation operator and $H$ is the full Hamiltonian. The quantity $\nu$ is defined by: $$\begin{aligned} \nu=\frac 1 N \sum_{j=1}^N (2 b^\dagger_j b_j + \sum_\sigma f^\dagger_{j,\sigma} f_{j,\sigma}) = \frac 1 N (2N_b + N_f)\end{aligned}$$ For noninteger $\nu$, it results from the analysis of [@yamanaka_luttinger_thm] that there is a state $U|0\rangle$ of momentum $2\pi\nu\ne 0\ [2\pi]$ which is orthogonal to the ground state $|0\rangle$ and is only $O(1/N)$ above the ground state. This implies either a ground state degeneracy (associated with a spontaneous breaking of translational symmetry) or the existence of gapless excitations (if the translational symmetry is unbroken and the ground state is unique). For integer $\nu$, the ground state and the state $U|0\rangle$ have the same momentum. In that case, a gapped state without degeneracy can be obtained. This state is analogous to the Mott insulating state in the half-filled Hubbard model in one dimension[@lieb_hubbard_exact] or the Mott insulating state in the Bose-Hubbard model with one boson per site[@kuhner_bosehubbard]. We note that for $\lambda=0$ in the Hamiltonian (\[eq:lattice\]) fermions and bosons are separately conserved, and the corresponding Fermi wavevectors are: $$\begin{aligned} k_B&=&\frac {\pi N_b}{N} \\ k_F&=&\frac {\pi N_f}{2N}\end{aligned}$$ The momentum of the state $U|0\rangle$ is thus equal to $4(k_B +k_F)$. The condition to have a Mott insulating state in the Hubbard model, $4k_F=2\pi$, is thus generalized to $4(k_B+k_F)=2\pi$, i.e. $2N_b+N_f=N$. umklapp term {#sec:umklapp} ------------ In this section, we provide a derivation of the umklapp term valid in the case of the lattice system (\[eq:lattice\]). Let us consider the $2k_F$ and $2k_B$ components of the atom and molecule charge density, given respectively by Eq. (\[eq:fermion-density-bosonized\]) and Eq. (\[eq:boson-density-bosonized\]). These terms yield an interaction of the form: $$\begin{aligned} \label{eq:comm-repuls} && C \int dx \cos (2\phi_b -2k_B x) \cos (\sqrt{2}\phi_\rho -2k_F x) \cos \sqrt{2} \phi_\sigma \nonumber \\ && = \frac C 2 \int \cos [2\phi_b +\sqrt{2}\phi_\rho -2(k_B+k_F) x]\cos \sqrt{2} \phi_\sigma \nonumber \\ && + \frac C 2 \int \cos [2\phi_b -\sqrt{2}\phi_\rho -2(k_B-k_F) x]\cos \sqrt{2} \phi_\sigma\end{aligned}$$ In Eq. (\[eq:comm-repuls\]), the last line is the backscattering term of (\[eq:cdw-locking\]), and the second line is the umklapp term. Let us consider a case with $k_F\ne k_B$, and let us concentrate on the effect of the umklapp term. Using the rotation (\[eq:rot\]), we can reexpress it as: $$\begin{aligned} \label{eq:umk-source} \frac C 2 \int \cos [\sqrt{6}\phi_+ -2(k_B+k_F) x]\cos \sqrt{2} \phi_\sigma.\end{aligned}$$ In the following we will consider the cases corresponding to one or two atoms per site. ### Mott insulating state with one atom per site Let us consider first the case of $(k_B+k_F)=\frac \pi {2\alpha}$. Then, the term (\[eq:umk-source\]) is oscillating. 
In second order perturbation theory, it gives rise to the umklapp term: $$\begin{aligned} \label{eq:umklapp} H_{umk.}^{1F}=\frac{2g_U}{(2\pi\alpha)^2} \int dx \cos \sqrt{24}\phi_+.\end{aligned}$$ The condition for the appearance of the umklapp term (\[eq:umklapp\]) can be seen to correspond to having one fermion atom per site of the atomic lattice. Let us briefly mention two alternative derivations of (\[eq:umklapp\]). A simple derivation can be obtained by considering the combination of the $4k_B$ term in the boson density with the $4k_F$ term in the fermion density in Haldane’s expansion[@haldane_bosons]. A second derivation can be obtained by considering the effect of a translation by one lattice parameter on the phases $\phi_\rho$ and $\phi_b$ [@yamanaka_luttinger_thm; @oshikawa_plateaus]. The expressions of the densities (\[eq:fermion-density-bosonized\]) and (\[eq:boson-density-bosonized\]) imply that upon a translation by a single site $ \phi_\rho \to \phi_\rho -\sqrt{2} k_F \alpha$ and $\phi_b \to \phi_b- k_B \alpha$. Therefore, the combination $\sqrt{6}\phi_+ = 2 \phi_b +\sqrt{2}\phi_\rho$ transforms as :$\sqrt{6}\phi_+\to \sqrt{6}\phi_+ -2 (k_B + k_F) \alpha$. For $2(k_F+k_B)=\pi/\alpha$, the term $\cos 2 \sqrt{6}\phi_+$ is invariant upon translation, thus leading again to (\[eq:umklapp\]). The presence of the umklapp term (\[eq:umklapp\]) in the Hamiltonian can result in the opening of a charge gap and the formation of a Mott insulating state. Since the umklapp term is of dimension $6K_+$ this implies that a Mott insulating state is possible only for $K_+<1/3$ i.e. very strong repulsion. For free fermions, the Mott transition would occur at $K_\rho=1$ i.e. for weakly repulsive interaction. Thus, we see that the Josephson coupling is very effective in destabilizing the Mott state. In the Mott insulating state, the superfluid fluctuations become short ranged. Since CDW fluctuations are also suppressed, the system shows some analogy with the Haldane gapped phase of spin-1 chains[@haldane_gap] in that it is totally quantum disordered. In fact, this analogy can be strengthened by exhibiting an analog of the VBS (valence bond solid) order parameter[@nijs_dof; @kennedy_z2z2_haldane]. In Haldane gapped chains, this nonlocal order parameter measures a hidden long range order in the system associated with the breakdown of a hidden discrete symmetry in the system. The equivalent nonlocal order parameter for the atom-molecule system is discussed in Appendix \[app:non\_loc\_ord\]. ### Mott insulating state with two atoms per site Another commensurate filling, where a Mott insulating state is possible is obtained for $(k_F+k_B)=\pi/\alpha$. This case corresponds to having one molecule (or two atoms) per site of the optical lattice. In that case, the term in (\[eq:umk-source\]) is non-oscillating, and it gives rise to an umklapp term of the form: $$\begin{aligned} \label{eq:umklapp-boson} H_{umk}^{1B}=\frac{2g_U}{(2\pi\alpha)^2} \int dx \cos \sqrt{6}\phi_+ \cos \sqrt{2}\phi_\sigma.\end{aligned}$$ We notice that this umklapp term is compatible with the spin gap induced by the Josephson term (\[eq:lambda-bosonized\]). When the Josephson coupling is large, we can make $\cos \sqrt{2}\phi_\sigma \to \langle \cos \sqrt{2}\phi_\sigma\rangle$ and we see that the term (\[eq:umklapp-boson\]) becomes relevant for $K_+=4/3$. For weaker Josephson coupling, the dimension becomes $1/2+3/2 K_+$, and this term is relevant only for $K_+<1$. 
Since $K_+=1$ corresponds to hard core bosons, this means that for weak Josephson coupling, the Mott state with a single boson per site becomes trivial. Interestingly, we note that increasing the Josephson coupling is *enhancing* the tendency of the system to enter a Mott insulating state as a result of the formation of a spin gap. If we compare with a system of bosons at commensurate filling, we note however that the Mott transition would obtain for $K_b=2$[@giamarchi_book_1d]. Therefore, the Josephson coupling still appears to weaken the tendency to form a Mott insulating state. Such tendency was also observed in [@zhou05_mott_bosefermi]. Commensurate filling of the atomic or molecular subsystem --------------------------------------------------------- When the atomic subsystem is at commensurate filling ($4k_F=\frac{2\pi}{a}$), an umklapp term: $$\label{eq:umklapp-fermions} \frac{-2 g_3}{(2\pi\alpha)^2} \cos \sqrt{8} \phi_\rho,$$ must be added to the Hamiltonian. Such umklapp term can create a gap in the density excitations of the unpaired atoms. However, we must also take into account the term (\[eq:lambda-bosonized\]). This term is ordering $\theta_-$ and thus competes with the umklapp term (\[eq:umklapp-fermions\]). To understand what happens when $\theta_-$ is locked, it is convenient to rewrite the umklapp term (\[eq:umklapp-fermions\]) as $\propto \cos \sqrt{8/3} (\sqrt{2} \phi_+ - \phi_-)$. The terms generated by the renormalization group are of the form $\cos n\sqrt{8/3} (\sqrt{2} \phi_+ - \phi_-)$, with $n$ an integer. When $\theta_-$ is locked, replacing the terms $e^{i\beta \phi_-}$ by their expectation values, we find that all these terms vanish. Therefore, no term $\cos \beta \phi_+$ can appear in the low energy Hamiltonian. A more formal justification of the absence of the $\cos \beta \phi_+$ term in the low energy Hamiltonian can be given by noting that when the Hamiltonian is expressed in terms of $\phi_{\pm}$ it has a continuous symmetry $\phi_+ \to \phi_++\alpha$ and $\phi_- \to \phi_-+\sqrt{2} \alpha$. As a result, terms of the form $\cos \beta \phi_+$ are forbidden by such symmetry. The consequence of the absence of $\cos \beta \phi_+$ terms in the Hamiltonian when $\theta_-$ is locked is that, even if the unpaired atom density is at a commensurate filling, the umklapp terms do not destabilize the coupled phase. However, in the opposite case of a strong umklapp term and a weak boson-fermion conversion term, it is the field $\phi_\rho$ that will be ordered. The previous arguments can be reversed and show that the formation of a Mott gap for the fermions will prevent the formation of the coupled phase. Using the method of Ref.[@jose_planar_2d], one can show that the phase transition between the coupled and the decoupled state is identical to the phase transition that occurs in two non-equivalent coupled two-dimensional XY models. This phase transition was studied by the renormalization group in [@nelson80_smectics; @granato86_xy_coupled]. It was found that in the case of interest to us, this phase transition was in the Ising universality class. Thus, one expects a quantum Ising phase transition between the state where the fermions are decoupled from the bosons and form a Mott insulator and the state where the fermions and bosons are coupled and form a superfluid. Of course, the same arguments can also be applied to the bosons at commensurate filling, the role of the fields $\phi_b$ and $\phi_\rho$ being simply reversed. 
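Before turning to experimental estimates, it is convenient to collect the relevance conditions obtained in this section. The short bookkeeping sketch below (plain Python; the sampled values of $K_+$ are arbitrary) evaluates the scaling dimensions quoted above and flags an operator as relevant when its dimension is smaller than $2$:

```python
# Scaling-dimension bookkeeping (a sketch); an operator is relevant when dim < 2.
dims = {
    "one atom per site, cos(sqrt(24) phi_+)":        lambda K: 6.0 * K,
    "two atoms per site, spin gap well developed":   lambda K: 1.5 * K,
    "two atoms per site, weak Josephson coupling":   lambda K: 0.5 + 1.5 * K,
}
for K in (0.25, 0.5, 1.0, 1.25, 1.5):
    print(f"K_+ = {K}")
    for name, dim in dims.items():
        d = dim(K)
        print(f"    {name}: dimension {d:.2f} -> {'relevant' if d < 2 else 'irrelevant'}")
```

The printed thresholds reproduce $K_+<1/3$ for one atom per site, $K_+<4/3$ for two atoms per site with a well-developed spin gap, and $K_+<1$ for two atoms per site with weak Josephson coupling.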
Relation with experiments {#sec:param-boson-hamilt} ========================= Without a potential along the tubes {#sec:expt-no-pot} ----------------------------------- To connect experiments in quasi-one-dimensional confining waveguides with theoretical models in 1D, it is necessary to obtain estimates of the parameters that enter the Hamiltonians (\[eq:lattice\]), (\[eq:nolattice\]) and the bosonized Hamiltonian (\[eq:bosonized-spin\]), (\[eq:lambda-bosonized\]) and (\[eq:detun-bosonized\]). Since the parameters in the Hamiltonian (\[eq:lattice\]) depend on the periodic optical trapping potential, we will mainly focus on the parameters that enter the continuum Hamiltonian Eq. (\[eq:nolattice\]) and the bosonized Hamiltonian, i.e. the Luttinger exponent $K_\rho$, the velocity, and the fermion-boson coupling $\lambda$. Before giving an estimate of the parameters, we first recall that, at the two-body level, there is a connection between the 1D boson-fermion model and the quasi-1D single channel model[@fuchs04_resonance_bf]; we will therefore use one or the other depending on the physical parameter in which we are interested. Experimentally, molecules have been formed from fermionic atoms ${}^6$Li[@zwierlein_bec; @jochim_bec; @strecker_bec; @cubizolles_bec] and ${}^{40}$K[@greiner_bec; @regal_bec; @moritz05_molecules1d]. For ${}^6$Li, the mass is roughly 6 times the mass of the proton, $m_F({}^6\mathrm{Li})=9.6\times 10^{-27}\,\mathrm{kg}$, and for ${}^{40}$K it is $m_F({}^{40}\mathrm{K})=6.4\times 10^{-26}\,\mathrm{kg}$. Then, we need to determine the effective interaction of atoms under cylindrical confinement. For the interaction, we will assume a contact form, i.e. $V(x-x')= g_{1D}\delta(x-x')$, and the Hamiltonian describing the confined atoms reads: $$\begin{aligned} \label{eq:atatham} H&=&-\frac{\hbar^2}{2m} \int d \mathbf{r} \psi^\dagger_\sigma (\mathbf{r} ,t) \triangle \psi_\sigma (\mathbf{r},t)+ \frac{m}{2} \int d \mathbf{r} \psi^\dagger_\sigma (\mathbf{r},t) (\omega_\perp^2\mathbf{r}_\perp^2 +\omega_z^2 z^2) \psi_\sigma (\mathbf{r},t) \nonumber \\ && + \frac 1 2 \int d \mathbf{r} d \mathbf{r'} \psi^\dagger_\sigma (\mathbf{r},t)\psi^\dagger_\sigma (\mathbf{r'},t) U(\mathbf{r}-\mathbf{r'}) \psi_\sigma(\mathbf{r'},t)\psi_\sigma(\mathbf{r},t),\end{aligned}$$ where $\mathbf{r}=(\mathbf{r_\perp},z)$, the second term represents the harmonic confinement potential, and: $$\label{eq:3dcontact} U(\mathbf{r})= g_{3D} \delta (\mathbf{r}),$$ is the atom-atom repulsion. The coupling constant $g_{3D}$ is expressed as a function of the atom-atom scattering length $a_s$ as[@abrikosov_book] $$\begin{aligned} \label{eq:g-scatt-length} g_{3D}= \frac{4 \pi \hbar^2 a_s}{m}.\end{aligned}$$ We introduce the following decomposition of the fermion annihilation operator: $$\psi (\mathbf{r},t)=\sum_m \phi_m(\mathbf{r_\perp}) \psi_{m\sigma}(z,t),$$ where $\{\psi_{m\sigma}(z),\psi_{m'\sigma'}(z')\}=\delta_{mm'}\delta_{\sigma,\sigma'}\delta(z-z')$, and the eigenstates of the transverse Hamiltonian $\phi_m$ satisfy the Schrödinger equation: $$\left( -\frac{\hbar^2}{2m} \triangle_\perp + \frac{m\omega_\perp^2}{2} \mathbf{r}_\perp^2\right) \phi_n (\mathbf{r}_\perp)=\hbar \omega_0 (n +\frac 1 2 ) \phi_n (\mathbf{r}_\perp),$$ and the orthonormality condition $\int d\mathbf{r}_\perp \phi_n (\mathbf{r}_\perp) \phi_m (\mathbf{r}_\perp)=\delta_{n,m}$. We will assume that we have a very elongated trap, with $\omega_z\ll \omega_\perp$, so that we can neglect the longitudinal confinement. 
The first line of the Hamiltonian (\[eq:atatham\]) can then be rewritten as: $$\begin{aligned} H_0=\sum_n \int dz \left[ -\frac{\hbar^2}{2m} \psi_{n,\sigma}^\dagger(z,t) \triangle \psi_{n,\sigma}(z,t) + \hbar \omega_0(n+1/2) \psi_{n,\sigma}^\dagger(z,t) \psi_{n,\sigma}(z,t) \right]\end{aligned}$$ If the transverse zero-point energy is much higher than the interaction energy per atom, the transverse motion is frozen in the ground state. It is known that virtual transitions to higher states can give rise to a divergence in 1D scattering length known as confinement induced resonance (CIR) which is a kind of Fano-Feshbach resonance[@olshanii_cir; @bergeman_cir; @yurovsky_feshbach; @astrakharchik_bose_1d]. We will ignore CIR for the moment and restrict ourselves to consider the lowest energy level $n=0$ for the transverse motion. The interaction in (\[eq:atatham\]) can be rewritten as: $$H_{int}=\frac 1 2 \int dz \int dz' \psi^\dagger_{0,\sigma}(z)\psi^\dagger_{0,\sigma}(z')V(z-z')\psi_{0,\sigma}(z')\psi_{0,\sigma}(z),$$ where the effective potential $V$ reads: $$\label{eq:ueff} V(z-z')=\int d\mathbf{r}_\perp d\mathbf{r'}_\perp |\phi_0(\mathbf{r}_\perp)|^2 |\phi_0(\mathbf{r'}_\perp)|^2 U(\mathbf{r}_\perp-\mathbf{r'}_\perp,z-z'),$$ Using the expression of the ground state wave function of the transverse motion: $$\phi_0(\mathbf{r}_\perp)=\sqrt{\frac{m\omega_\perp}{\pi\hbar}} e^{-\frac{m \omega_\perp}{2\hbar}\mathbf{r}_\perp^2},$$ substituting it into (\[eq:ueff\]), using the definition of the interaction (\[eq:3dcontact\]), and integrating over the transverse coordinate, we obtain $V(z)=g_{1D}\delta(z)$ with the effective one-dimensional coupling[@dunjko_bosons1d; @astrakharchik_bose_1d]: $$\label{eq:effcoup} g_{1D}=\frac{g_{3D}}{2\pi} \frac{m \omega_\perp}{\hbar}=2 \hbar a_s \omega_\perp.$$ Eq. (\[eq:effcoup\]) can be generalized to the case in which atoms of different species, with different trapping frequencies $\omega_{\perp,1}$ and $\omega_{\perp,2}$ are interacting with each other. In that case, $$\label{eq:effcoup-12} g_{1D}=4 \hbar a_{12} \frac{\omega_{\perp 1}\omega_{\perp 2}}{(\omega_{\perp 1}+\omega_{\perp 2})},$$ where $a_{12}$ is the atom-atom scattering length. Knowing $g_{1D}$, we can obtain the Luttinger exponent of fermions in (\[eq:bosonized-spin\]) as: $$\begin{aligned} K_\rho=\left(1+\frac{g_{1D}}{\pi \hbar v_F}\right)^{-1/2}.\end{aligned}$$ In the case of bosons, the Luttinger exponent $K_b$ must be extracted from the Lieb-Liniger equations[@lieb_bosons_1D; @takahashi_tba_review; @giamarchi_book_1d]. Having obtained the form of the effective interactions, we turn to the determination of the Josephson coupling $\lambda$ in the boson-fermion model (\[eq:nolattice\]). In the 3D case[@duine_feshbach_review; @romans04_bose_ising], the boson-fermion conversion factor is given by: $$\begin{aligned} \label{eq:BF-3D} \lambda_{3D}\int d^3\mathbf{r} \psi^\dagger_B(\mathbf{r}) \psi_\uparrow(\mathbf{r}) \psi_\downarrow(\mathbf{r}),\end{aligned}$$ with: $$\begin{aligned} \lambda_{3D}=\hbar \sqrt{\frac{4\pi a_{bg} \Delta \mu \Delta B}{m}},\end{aligned}$$ where $a_{bg}$ is the atom-atom scattering length far from resonance, $\Delta B$ is the width of the resonance and $\Delta\mu$ is the difference of magnetic moment between atom and molecule. 
Using the projection on the lowest level, we obtain: $$\begin{aligned} \lambda^2 =2\hbar \omega_\perp a_{bg} \Delta \mu \Delta B.\end{aligned}$$ Knowing the interaction $\lambda$ in terms of the relevant physical parameters, we finally have to determine the spatial cutoff $\alpha$ to use in the bosonization and comment on the validity of bosonization approach. In the case of the optical lattice Eq. (\[eq:lattice\]), the obvious spatial cutoff is the lattice spacing. The cutoff to use in bosonization for the continuum case of Eq. (\[eq:nolattice\]) is obtained in the following way. Bosonization is applicable as long as the kinetic energy of longitudinal motion of the particles is much smaller than the trapping energy $\hbar \omega_\perp$. Thus, we have to impose the condition: $\hbar v_F \Lambda \sim \hbar \omega_\perp$, where $\Lambda$ is the momentum cutoff[@dunjko_bosons1d]. The real space cutoff in the continuum case is thus $\alpha \sim \Lambda^{-1}\sim v_F/\omega_\perp$. The condition for perturbation theory to be valid is that the energy associated with the formation of molecules, $\lambda \alpha^{-1/2}$ is small with respect to the energy cutoff $\hbar \omega_\perp$. Therefore, perturbation theory is applicable when: $$\begin{aligned} \frac{\lambda}{\hbar (v_F \omega_\perp)^{1/2}} \ll 1,\end{aligned}$$ i.e. $$\begin{aligned} \frac{a_{bg} \mu \Delta B}{\hbar v_F} \ll 1\end{aligned}$$ Using the values given in Refs. [@strecker_bec; @bruun05_6li_interaction], we find that this parameter is small for $v_F \gg 3.2\times 10^{-2}$m/s. Since $v_F$ can be expected to be of the order of $10^{-3}$m/s, this is not unreasonable. In fact, using the values of the trapping frequency given by Moritz et al.[@moritz05_molecules1d] we find that: $$\begin{aligned} v_F&=&\sqrt{\frac{2N\hbar \omega_z}{m}} \\ &=&\sqrt{\frac{2\times 100 {\text m^{-3}}\times 10^{-34} {\text J}\cdot {\text s}\times 10^3 {\text Hz}}{6\times 1.6\times 10^{-27}{\text Kg}}} \\ &=& 4.6 \times 10^{-2} \mathrm{m/s}\end{aligned}$$ Therefore, we see that with ${}^6$Li at the narrow resonance, the ratio is of order $0.7$ and we can expect that our theory is valid qualitatively. Concerning $K_\rho$, we find that: $$\begin{aligned} K_\rho \sim 1-\frac{\omega_\perp a_{bg}}{v_F} &\simeq& 1-\frac{2\pi \times (69 \times 10^3) s^{-1} \times (80\times 0.5\times 10^{-10}) m}{4.6\times 10^{-2} m.s^{-1}},\\ &\simeq& 0.995\end{aligned}$$ i.e. interactions between fermions can be neglected. Since the interaction between the molecules [@petrov05_dimers_scattering] has a scattering length $a_{BB}=0.6a_{FF}$ one sees that molecules are only weakly interacting. With a potential along the tubes {#sec:expt-w-pot} -------------------------------- As we have seen in Sec. \[sec:expt-no-pot\], in the case of a two-dimensional optical lattice without periodic potential along the tubes, the repulsion between the bosonic molecules is weak, making the decoupling transition or the Mott transition impossible to observe. To increase the effect of the repulsion, one needs to increase the effective mass of the atoms by adding a periodic potential along the tubes. A periodic potential can be imposed along the tubes by placing the atoms in a three dimensional optical lattice. 
The atoms experience a potential: $$\begin{aligned} \label{eq:3d-potential} V(x,y,z)= V_x \sin^2\left(\frac {2\pi x}{\lambda_l}\right)+ V_y \sin^2\left(\frac {2\pi y}{\lambda_l}\right) + V_z \sin^2\left(\frac {2\pi z}{\lambda_l}\right),\end{aligned}$$ where $\lambda_l$ is the wavelength of the laser radiation, and $V_x\ll V_y,V_z$ so that the system remains quasi-one-dimensional. The strength of the potential is measured in units of the recoil energy $E_R=\frac{\hbar^2}{2m}\left(\frac{2\pi}{\lambda_l}\right)^2$ as $V_x=sE_R$. Typical values for $s$ are in the range $5$ to $25$. For lithium atoms[@anderson96_li_lattice], the typical value of $E_R$ is $76\,$kHz. If the potential is sufficiently strong, the atoms tend to localize in the lowest trap states near the minima of this potential. In our case, since the periodic potential along the tubes has shallower minima than in the transverse directions, the small overlap between the trap states in the longitudinal direction yields the single-band Hamiltonian (\[eq:lattice\])[@jaksch05_coldatoms; @dickerscheid_feshbach_lattice]. An expression of the parameters of the lattice model (\[eq:lattice\]) in terms of the microscopic parameters has been derived in[@dickerscheid_feshbach_lattice]. On the lattice, the Fermi velocity of the atoms and the pseudo-Fermi velocity of the molecules can be reduced by increasing the depth of the periodic potential in the longitudinal direction. In principle, this allows one to move the system close to the decoupling transition[@sheehy_feshbach] or to the Mott transition by reducing $K_\rho$. A second possible setup[@albus03_bosefermi] is to use a cigar-shaped potential: $$\begin{aligned} \label{eq:cigar-shape} V_{\text{cigar}}(x,y,z)=\frac 1 2 m \omega_0^2 (x^2 + \mu^2 \mathbf{r}_\perp^2),\end{aligned}$$ with $\mu \gg 1$, so that the atoms and the molecules are strongly confined in the transverse direction, and to apply a periodic potential: $$\begin{aligned} V_{\text{periodic}}=V_0 \sin^2 \left(\frac{\pi x} d\right),\end{aligned}$$ in order to form the one-dimensional structure described by the model (\[eq:lattice\]). The main difficulty of experiments in optical lattices is that the reduction of the bandwidth results in a reduction of the Fermi velocity $v_F$. Since the perturbative regime is defined by $\lambda \alpha^{1/2}\ll v_F$, this implies that by increasing the depth of the potential in the longitudinal direction one is also pushing the system into the regime where the boson-fermion conversion term must be treated non-perturbatively[@fuchs04_resonance_bf]. However, in that regime atoms and molecules no longer coexist and the decoupling transition does not exist. Moreover, in that regime, the Mott transition becomes the usual purely fermionic or purely bosonic Mott transition [@giamarchi_book_1d]. Conclusions =========== We have studied a one-dimensional version of the boson-fermion model using the bosonization technique. We have found that at low energy the system is described by two Josephson-coupled Luttinger liquids corresponding to the paired atomic and molecular superfluids. Due to the relevance of the Josephson coupling for not too strong repulsion, the order parameters for the Bose condensation and fermion superfluidity become identical, while a spin gap and a gap against the formation of phase slips are formed. 
As a result of these gaps, we have found that the charge density wave correlations decay exponentially, in contrast with the phases where only bosons or only fermions are present[@fuchs_bcs_bec; @fuchs04_resonance_bf]. We have discussed the application of a magnetic field, which results in a loss of coherence between the bosons and the fermions and in the disappearance of the gap, while changing the detuning has no effect on the existence of the gaps until either the fermion or the boson density is reduced to zero. We have discussed the effect of a backscattering term which induces a mutual locking of the densities of bosons and fermions, favoring charge density wave fluctuations and resulting in a quantum Ising phase transition between the density wave phase and the superfluid phase. We have found a Luther-Emery point where the phase slips and the spin excitations can be described in terms of pseudofermions. For this special point in the parameter space, we have derived closed-form expressions of the density-density correlations and the spectral functions. The spectral functions of the fermions are gapped, whereas the spectral functions of the bosons remain gapless but with an enhanced divergence for momentum close to zero. Finally, we have discussed the formation of a Mott insulating state in a periodic potential at commensurate filling. We have first established a generalization of the Lieb-Schultz-Mattis theorem, giving the condition for the existence of a Mott insulating state without spontaneous breakdown of translational invariance. Then, we have discussed the properties of the Mott state in the case of one or two atoms per site, showing that in the first case the Josephson coupling is very effective in destabilizing the Mott state. Finally, we have considered the case when the atoms or the molecules can form a Mott state in the absence of boson-fermion conversion and shown that this Mott state is unstable. To connect our results with experiments in quasi-one-dimensional confining waveguides, we have derived estimates of the parameters that enter the bosonized Hamiltonian, such as the Luttinger exponents, using the values of the trapping frequency and density used in experiments. We have seen that the bosons are only weakly interacting and that the small fermionic Luttinger parameter required to realize a strongly interacting system renders the Mott insulating and decoupled phases difficult to observe in experiments. A nontrivial challenge is the experimental realization of the coupled Luttinger-liquid phase with parameters tunable through the exactly solvable point (the Luther-Emery point). We suggest that a Fano-Feshbach resonantly interacting atomic gas confined in a highly anisotropic (1d) trap and subject to a periodic optical potential is a promising candidate for an experimental measurement of the physical quantities (correlation functions) discussed here. Finally, we would like to comment on the fact that interesting edge-state physics is expected when open boundary conditions (or a cut one-dimensional boson-fermion system) are considered. The existence of edge states at the end of the system could lead to a significant contribution to the density profile that could be tested in experiments. The physics of the edge states will be similar to that of Haldane-gap systems, like the valence bond solid model, and a study along this direction is in progress. Calculation of the integrals in Eqs. 
(\[eq:boson\_structure\_factor\])– (\[eq:fermion\_structure\_factor\]) {#app:integral} =========================================================================================================== In this appendix we will derive a slightly more general integral than those of Eqs. (\[eq:boson\_structure\_factor\]) and (\[eq:fermion\_structure\_factor\]). Namely, we will consider: $$\begin{aligned} \label{eq:integral-general} g(y)= \int_0^\infty K_\mu(u) K_\nu(u) J_\lambda(y u) u^\alpha du.\end{aligned}$$ To find (\[eq:integral-general\]) explicitly, we use the series expansion of the Bessel function $J_\lambda$ from [@abramowitz_math_functions] \[Eq. (9.1.10)\]. We find that $$\begin{aligned} g(y)=\left(\frac y 2 \right)^\lambda \sum_{k=0}^\infty \left( -\frac{y^{2}}{4} \right)^k \frac 1 {\Gamma(k+1) \Gamma(k+\lambda+1)} \int_0^\infty K_\mu(u) K_\nu(u) u^{2k+\lambda+\alpha} du.\end{aligned}$$ The integral that appears in the expansion in powers of $y^2$ is a well known Weber-Schaefheitlin integral[@weber_magnus; @weber_erdelyi; @weber_gradshteyn] with two modified Bessel functions. Its expression is: $$\begin{aligned} \int_0^\infty K_\mu(u) K_\nu(u) u^{2k+\alpha+\lambda} du &=& \frac{2^{2(k-1)+\alpha+\lambda}}{\Gamma\left(2k+\alpha+\lambda+1\right)} \Gamma\left(k +\frac{1+\nu+\mu+\alpha +\lambda}{2}\right)\Gamma\left(k +\frac{1+\nu-\mu+\alpha +\lambda}{2}\right) \nonumber \\ && \times \Gamma\left(k +\frac{1-\nu+\mu+\alpha +\lambda}{2}\right) \Gamma\left(k +\frac{1-\nu-\mu+\alpha +\lambda}{2}\right) ,\end{aligned}$$ The resulting expression of $g(y)$ can be rearranged using the duplication formula for the Gamma function, Eq. (6.1.18) in [@abramowitz_math_functions]. The final expression of $g$ is: $$\begin{aligned} g(y)=\left(\frac y 2 \right)^\lambda \frac{\pi^{1/2}}{4} \sum_{k=0}^\infty \frac{\Gamma\left(k +\frac{1+\nu+\mu+\alpha +\lambda}{2}\right)\Gamma\left(k +\frac{1+\nu-\mu+\alpha +\lambda}{2}\right) \times \Gamma\left(k +\frac{1-\nu+\mu+\alpha +\lambda}{2}\right) \Gamma\left(k +\frac{1-\nu-\mu+\alpha +\lambda}{2}\right)}{\Gamma\left(k+1+\frac {\alpha+\lambda} 2\right)\Gamma\left(k+\frac {\alpha+\lambda+1} 2\right) \Gamma(k+\lambda+1) } \frac 1 {k!} \left(-\frac {y^2}{4}\right)^k.\end{aligned}$$ This series expansion is readily identified with the definition of the generalized hypergeometric function ${}_4F_3$ given in [@slater66_hypergeom_book]. So we find finally that: $$\begin{aligned} &&g(y)=\frac{\sqrt{\pi}}{4} \left(\frac y 2 \right)^\lambda \frac {\Gamma\left(\frac{1+\nu+\mu+\alpha +\lambda}{2}\right) \Gamma\left(\frac{1+\nu-\mu+\alpha +\lambda}{2}\right) \Gamma\left(\frac{1-\nu+\mu+\alpha +\lambda}{2}\right) \Gamma\left(\frac{1-\nu-\mu+\alpha +\lambda}{2}\right)} {\Gamma\left(1+\frac {\alpha+\lambda} 2\right) \Gamma\left(\frac{\alpha+\lambda+1} 2\right) \Gamma(\lambda+1)} \\ && \times {}_4F_3\left(\frac{1+\alpha+\lambda+\nu+\mu}{2},\frac{1+\alpha+\lambda+\nu-\mu}{2},\frac{1+\alpha+\lambda-\nu+\mu}{2},\frac{1+\alpha+\lambda-\nu-\mu}{2};1+\frac {\alpha+\lambda} 2,\frac {\alpha+\lambda+1} 2,1+\lambda; -\frac {y^2} 4\right)\nonumber\end{aligned}$$ For $\nu=\mu$, the function ${}_4F_3$ reduces to a simpler ${}_3F_2$ function. This leads to Eqs. (\[eq:boson\_structure\_factor\]) and (\[eq:fermion\_structure\_factor\]). 
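Since the derivation above involves several Gamma-function manipulations, it is reassuring to test the closed form numerically. The sketch below (assuming Python with `mpmath`; the parameters are arbitrary values for which the integral converges) compares a direct quadrature of (\[eq:integral-general\]) with the ${}_4F_3$ expression:

```python
# A sketch, assuming Python with mpmath; the parameter values are arbitrary
# (they only need to keep the integral convergent).
import mpmath as mp

mp.mp.dps = 30

def g_quad(y, mu, nu, lam, alpha):
    # direct quadrature of the integral defining g(y)
    f = lambda u: mp.besselk(mu, u) * mp.besselk(nu, u) * mp.besselj(lam, y * u) * u**alpha
    return mp.quad(f, [0, 1, mp.inf])

def g_closed(y, mu, nu, lam, alpha):
    # the 4F3 closed form derived above
    a = [(1 + alpha + lam + nu + mu) / 2, (1 + alpha + lam + nu - mu) / 2,
         (1 + alpha + lam - nu + mu) / 2, (1 + alpha + lam - nu - mu) / 2]
    b = [1 + (alpha + lam) / 2, (alpha + lam + 1) / 2, 1 + lam]
    pref = mp.sqrt(mp.pi) / 4 * (y / 2)**lam
    pref *= mp.gamma(a[0]) * mp.gamma(a[1]) * mp.gamma(a[2]) * mp.gamma(a[3])
    pref /= mp.gamma(b[0]) * mp.gamma(b[1]) * mp.gamma(b[2])
    return pref * mp.hyper(a, b, -y**2 / 4)

print(g_quad(0.7, 0.3, 0.3, 0.0, 1.0))
print(g_closed(0.7, 0.3, 0.3, 0.0, 1.0))   # the two printouts should agree
```

The two evaluations agree to the working precision.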
Non-local order parameter for the Mott state {#app:non_loc_ord} ============================================ The Mott insulating state can be characterized by the expectation value of a non-local order parameter, in the same way as the Haldane gap state in a spin-1 chain[@kennedy_z2z2_haldane; @nijs_dof]. The non-local order parameter is defined as follows: $$\begin{aligned} \label{eq:VBS} O(k,l)=\langle b^\dagger_k b_k \prod_{j>k}^{l} e^{-i\frac{2\pi}{3}(2 b^\dagger_j b_j + \sum_\sigma f^\dagger_{j,\sigma} f_{j,\sigma})} b^\dagger_l b_l \rangle\end{aligned}$$ The string operator: $$\begin{aligned} \label{eq:string} O_{\text{string}}(k,l)= \prod_{j>k}^{l} e^{-i\frac{2\pi}{3}( 2 b^\dagger_j b_j + \sum_\sigma f^\dagger_{j,\sigma} f_{j,\sigma})}\end{aligned}$$ is a product of exponentials. As a result of its definition, it counts the number of fermions located between the sites $k$ and $l$, either unbound or forming a molecule. To derive a bosonized expression of this operator, we notice that $\exp(2i\pi b^\dagger_j b_j)=1$, since $b^\dagger_j b_j$ has only integer eigenvalues, and rewrite the string operator as: $$\begin{aligned} \label{eq:string-bis} \prod_{j>k}^{l} e^{i\frac{2\pi}{3} ( b^\dagger_j b_j - \sum_\sigma f^\dagger_{j,\sigma} f_{j,\sigma})}\end{aligned}$$ Using bosonization and Eq. (\[eq:string\]), we find: $$\begin{aligned} \label{eq:string-bosonized} O_{\text{string}}(x,x^{\prime})= \exp \left[ i\frac{2\pi}{3} (\rho_B-\rho_F) (x-x^{\prime}) -\frac{2}{\sqrt{3}} (\phi_-(x)-\phi_-(x^{\prime})) \right]\end{aligned}$$ Using (\[eq:boson-staggered-gap\]) and (\[eq:string-bosonized\]) we obtain the nonlocal order parameter (\[eq:VBS\]) as: $$\begin{aligned} \label{eq:vbs-bosonized} O(x,x^{\prime})=\langle e^{i\sqrt{\frac {8}{3}} (\phi_+(x)-\phi_+(x^{\prime})) -i\frac{2\pi}{3} (2\rho_B +\rho_F)(x-x^{\prime})}\rangle.\end{aligned}$$ In the Mott insulating state with one fermion per site, we have $4(k_F+k_B)=2\pi(2\rho_B+\rho_F)=2n \pi$, where $n$ is an integer. Taking $x,x'\to \infty$, we see that the expectation value of the order parameter is non-vanishing in the Mott state. A related VBS-type order parameter can be defined as: $$\begin{aligned} O^{\prime}(k,l)=\langle \left(\sum_\sigma f^\dagger_{k,\sigma} f_{k,\sigma}\right) \prod_{j>k}^{l} e^{i\frac{2\pi}{3}( b^\dagger_j b_j - \sum_\sigma f^\dagger_{j,\sigma} f_{j,\sigma})}\left(\sum_\sigma f^\dagger_{l,\sigma} f_{l,\sigma}\right) \rangle\end{aligned}$$ In bosonized form, we have: $$\begin{aligned} O^{\prime}(x,x^{\prime})=\langle e^{i\sqrt{\frac {4}{3}} (\phi_+(x)-\phi_+(x^{\prime})) +i\frac{\pi}{3} (2\rho_B +\rho_F)(x-x^{\prime})}\rangle,\end{aligned}$$ and again this order parameter is non-vanishing. The physical interpretation of the non-zero expectation value of these nonlocal order parameters is that both bosons and fermions possess a hidden charge density wave order in the Mott insulator. This charge density wave is hidden as a result of the fluctuations of the density of fermions and the density of bosons. 
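The elementary step used above, namely that $\exp(2i\pi b^\dagger_j b_j)=1$ on integer occupations so that (\[eq:string\]) and (\[eq:string-bis\]) define the same operator, can be checked on arbitrary occupation configurations (a sketch, assuming Python with numpy):

```python
# A sketch, assuming Python with numpy; the occupation numbers are arbitrary integers.
import numpy as np

rng = np.random.default_rng(0)
b = rng.integers(0, 3, size=20)                     # molecule occupations b_j
f = rng.integers(0, 2, size=(20, 2)).sum(axis=1)    # f_{j,up} + f_{j,down} per site

string     = np.exp(-2j * np.pi / 3 * np.sum(2 * b + f))   # phase of Eq. (string)
string_bis = np.exp(+2j * np.pi / 3 * np.sum(b - f))       # phase of Eq. (string-bis)
print(np.allclose(string, string_bis))   # True, since exp(2 i pi b_j) = 1
```

The two phases agree for any integer configuration, as they must.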
--- abstract: 'In this note we prove that a recent result stated by D. Y. Gao and R. W. Ogden on global minimizers and local extrema in a phase transition problem is false. Our goal is achieved by providing a thorough analysis of the context and result in question and counter-examples.' address: 'Towson University, Department of Mathematics, Towson, MD-21252, U.S.A.' author: - 'M. D. VOISEI' title: 'On D. Y. Gao and R. W. Ogden’s paper “Multiple solutions to non-convex variational problems with implications for phase transitions and numerical computation”' --- Introduction ============ The optimization problem we have in focus is introduced on [@Gao/Ogden:08 p.505] where one says “The primal variational problem (1.1) for the soft device can be written in the form $(\mathcal{P}_{s})$ : $\min\limits _{u\in\mathcal{U}_{s}}{\displaystyle \left\{ P_{s}(u)=\int_{0}^{1}\Bigl[\tfrac{1}{2}\mu u_{x}^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}u_{x}^{2}-\alpha u_{x}\right)^{2}\Bigr]dx-F(u)\right\} }$, $\quad$(3.2)” where (see [@Gao/Ogden:08 p. 501]) “$F(u)={\displaystyle \int_{0}^{1}}fudx+\sigma_{1}u(1)$ $\quad$(2.8)”, and (see [@Gao/Ogden:08 p. 505]) “$\mathcal{U}_{s}=\left\{ u\in\mathcal{L}(0,1)\mid u_{x}\in\mathcal{L}^{4}(0,1), \ u(0)=0\right\} $. $\quad(3.1)$” In Section 2 we explain the natural interpretation for the definition of $\mathcal{U}_{s}$. As mentioned on [@Gao/Ogden:08 p. 498], “$\mu$, $\nu$ and $\alpha$ are positive material constants”, and “we focus mainly on the case for which $\nu\alpha^{2}>2\mu$” (see [@Gao/Ogden:08 p.499]). Moreover (see [@Gao/Ogden:08 p. 498]), “To make the mixing of phases more dramatic, we introduce a distributed axial loading (body force) $f\in\mathcal{C}[0,1]$ per unit length of $I$”. These assumptions will be in force throughout this article. Therefore, from “$\sigma(x)={\displaystyle \int_{x}^{1}}f(s)ds+\sigma_{1}$ $\quad$(2.12)” one obtains that $\sigma\in\mathcal{C}^{1}[0,1]$ and “$F(u)={\displaystyle \int_{0}^{1}}\sigma(x)u_{x}dx$. $\quad$(2.13)” Furthermore (see [@Gao/Ogden:08 p. 501]), one says “... we obtain the Gao–Strang total complementary energy $\Xi(u,\zeta)$ (**16**) for this non-convex problem in the form $\Xi(u,\zeta)=\cdots={\displaystyle \int_{0}^{1}\left[\tfrac{1} {2}u_{x}^{2}(\zeta+\mu)-\alpha u_{x}\zeta-\tfrac{1}{2}\nu^{-1}\zeta^{2}\right]dx-\int_{0}^{1}}fudx-\sigma_{1}u(1)$, $\quad(2.7)$”. In the text above (**16**) is our reference [@Gao/Strang:89]. In [@Gao/Ogden:08 pp. 501, 502] one obtains “the so-called *pure complementary energy functional* (**7**, **17**) $P_{s}^{d}(\zeta)=-{\displaystyle \frac{1}{2}\int_{0}^{1}\left(\frac{ (\sigma+\alpha\zeta)^{2}}{\mu+\zeta}+\nu^{-1}\zeta^{2}\right)}dx$, $\quad$(2.14) which is well defined on the dual feasible space $\mathcal{S}_{a}=\left\{ \zeta\in\mathcal{L}^{2}\mid\zeta(x)+\mu\neq0,\ \zeta(x){\geqslant}-\tfrac{1}{2}\nu\alpha^{2},\ \forall x\in[0,1]\right\} .$” References (**7**, **17**) above are our references [@Gao:98] and [@Gao:99]. Probably, by “well defined on ... $\mathcal{S}_{a}$” the authors of [@Gao/Ogden:08] mean that $P_{s}^{d}(\zeta)\in\mathbb{R}$ for every $\zeta\in\mathcal{S}_{a}$. Note that $$\frac{(\sigma+\alpha\zeta)^{2}}{\mu+\zeta}+\nu^{-1}\zeta^{2}=\frac{\beta^{2}} {\zeta+\mu}+2\alpha\beta+\alpha^{2}(\zeta+\mu)+\nu^{-1}\zeta^{2}, \label{pdsc}$$ where (see [@Gao/Ogden:08 p. 502]) “$\beta(x)=\sigma(x)-\mu\alpha,\quad\eta=(\nu\alpha^{2}-2\mu)^{3}/27\nu.$ $\quad(2.21)$” and $\beta\in\mathcal{C}^{1}[0,1]$. 
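The rewriting (\[pdsc\]) is used repeatedly in what follows; it can be verified mechanically, for instance with the following sketch (assuming Python with sympy):

```python
# A sketch, assuming Python with sympy.
import sympy as sp

zeta, mu, nu, alpha, sigma = sp.symbols('zeta mu nu alpha sigma', positive=True)
beta = sigma - mu * alpha
lhs = (sigma + alpha * zeta)**2 / (mu + zeta) + zeta**2 / nu
rhs = beta**2 / (zeta + mu) + 2 * alpha * beta + alpha**2 * (zeta + mu) + zeta**2 / nu
print(sp.cancel(lhs - rhs))   # prints 0, confirming the identity
```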
Let us set $B_{0}:= \{s\in[0,1]\mid\beta(s)=0\}$, $B_{0}^{c}:=[0,1]\setminus B_{0}$. Let $\zeta\in\mathcal{L}^{2}:=\mathcal{L}^{2}[0,1]$ and set $E_{\zeta}:= \{x\in[0,1]\mid\zeta(x)+\mu=0\}$. In the sequel we use the convention $0/0:=0$, which agrees with the convention $0\cdot(\pm\infty):=0$ used in measure theory. With this convention in mind, from (\[pdsc\]), we obtain that $P_{s}^{d}(\zeta)\in\mathbb{R}$ if and only if $\frac{\beta^{2}}{\zeta+\mu}\in\mathcal{L}^{1}:=\mathcal{L}^{1}[0,1]$, which implicitly provides that $\frac{\beta^{2}}{\zeta+\mu}$ is well-defined almost everywhere (a.e. for short), i.e., $E_{\zeta}\setminus B_{0}$ is negligible. Consider $$A_{1}:=\left\{ \zeta\in\mathcal{L}^{2}\mid\frac{\beta^{2}}{\zeta+ \mu}\in\mathcal{L}^{1}\right\} \subset A_{2}:=\left\{ \zeta\in\mathcal{L}^{2}\mid\zeta(x)+\mu\neq0\text{ for a.e.\ }x\in B_{0}^{c}\right\} .$$ The set $A_{1}$ is the greatest subset of $\mathcal{L}^{2}$ for which $P_{s}^{d}(\zeta)\in\mathbb{R}$. Notice that, for $\zeta\in\mathcal{L}^{2}$, the quotient $\frac{\beta^{2}}{\zeta+\mu}$ is well-defined iff $\zeta\in A_{2}$. Also, note that $\mathcal{S}_{a}\subset A_{2}$. Denote by $\lambda$ the Lebesgue measure on $\mathbb{R}$. For $\zeta\in A_{1}$ we have $$P_{s}^{d}(\zeta)=-\frac{1}{2}\int_{[0,1]\setminus E_{\zeta}} \left(\frac{(\sigma+\alpha\zeta)^{2}}{\mu+\zeta}+\nu^{-1}\zeta^{2} \right)dx-\frac{1}{2}\nu^{-1}\mu^{2}\lambda(E_{\zeta}).\label{pdsc2}$$ Notice that in the trivial case $\beta=0$ we have $A_{1}=A_{2}=\mathcal{L}^{2}$ and so $P_{s}^{d}$ is well-defined on $\mathcal{S}_{a}$ because in this case $P_{s}^{d}$ is well-defined on $\mathcal{L}^{2}$. \[dom-gresit\] If $\beta\neq0$ then $\mathcal{S}_{a} \not\subset A_{1}$ and $P_{s}^{d}$ is not well-defined on $\mathcal{S}_{a}$. Because $\beta\neq0$ and $\beta\in\mathcal{C}^{1}[0,1]$, there exist $\gamma>0$ and $0{\leqslant}a<b{\leqslant}1$ such that $\beta^{2}(x){\geqslant}\gamma$ for every $x\in[a,b]$. Consider $\zeta(x):=x-a-\mu$ for $x\in(a,b)$ and $\zeta(x):=1-\mu$ for $x\in[0,1]\setminus(a,b)$. Then $\zeta(x)> -\mu>-\tfrac{1}{2}\nu\alpha^{2}$ for every $x\in[0,1]$ and $\zeta\in\mathcal{L}^{2}$; hence $\zeta\in\mathcal{S}_{a}$. Note that $\zeta\notin A_{1}$ since $\frac{\beta^{2}}{\zeta+\mu}\ge\frac{\gamma}{\zeta+\mu}>0$ and $\int_{0}^{1}\frac{1}{\zeta(x)+\mu}dx\ge\int_{a}^{b}\frac{dx}{x-a}=+\infty$. In the sequel $P_{s}^{d}$ is understood as being defined on $A_{1}$. Assume for the rest of this section that $\beta\neq0$. This yields that $\lambda(B_{0}^{c})>0$ since $\beta$ is continuous. The next result is surely known. We give the proof for easy reference. Assume that $\sum_{n{\geqslant}1}\alpha_{n}<\infty$, where $(\alpha_{n})_{n{\geqslant}1}\subset[0,\infty)$. Then there exists a non-decreasing sequence $(\beta_{n})_{n{\geqslant}1}\subset(0,\infty)$ with $\beta_{n}\rightarrow\infty$ and $\sum_{n{\geqslant}1}\alpha_{n}\beta_{n}<\infty$. Because the series $\sum_{n{\geqslant}1}\alpha_{n}$ is convergent, the sequence $(R_{n})$ converges to $0$, where $R_{n}:=\sum_{k=n+1}^{\infty}\alpha_{k}$. Hence there exists an increasing sequence $(n_{k})_{k{\geqslant}1}\subset\mathbb{N}^{\ast}$ such that $R_{n}<2^{-k}$ for all $k{\geqslant}1$ and $n{\geqslant}n_{k}$. Consider $\beta_{n}:=1$ for $n{\leqslant}n_{1}$ and $\beta_{n}:=k$ for $n_{k}<n{\leqslant}n_{k+1}$. Clearly, $(\beta_{n})$ is non-decreasing and $\lim\beta_{n}=\infty$. 
Moreover, $$\begin{aligned} \sum_{p=1}^{n_{m+1}}\alpha_{p}\beta_{p} & =\sum_{p=1}^{n_{1}} \alpha_{p}+\sum_{k=1}^{m}\sum_{p=n_{k}+1}^{n_{k+1}}\alpha_{p}\beta_{p} {\leqslant}\sum_{p=1}^{n_{1}}\alpha_{p}+\sum_{k=1}^{m}k\sum_{p=n_{k}+1}^{n_{k+1}}\alpha_{p}\\ & {\leqslant}\sum_{p=1}^{\infty}\alpha_{p}+\sum_{k=1}^{m}kR_{n_{k}}{\leqslant}\sum_{p=1}^{\infty}\alpha_{p}+\sum_{k=1}^{\infty}k2^{-k}<\infty.\end{aligned}$$ Therefore, the series $\sum_{n{\geqslant}1}\alpha_{n}\beta_{n}$ is convergent. Let us denote the algebraic interior (or core) of a set by “$\operatorname*{core}$”. Assume that $\beta\neq0$. Then $\operatorname*{core}A_{2}$ is empty. In particular, $\operatorname*{core}A_{1}=\operatorname*{core}\mathcal{S}_{a}=\emptyset$. Let $\overline{\zeta}\in A_{2}$ be fixed. Then there exists a sequence $(B_{n})_{n{\geqslant}1}$ of pairwise disjoint Lebesgue measurable sets (even intervals) such that $B_{0}^{c}=\cup_{n{\geqslant}1}B_{n}$ and $\lambda(B_{n})>0$ for $n{\geqslant}1$ (see e.g. [@roy-88 p. 42]). We have that $\sum_{n{\geqslant}1}\int_{B_{n}}\left\vert \overline{\zeta}(x)+\mu\right\vert ^{2}dx=\int_{B_{0}^{c}}\left\vert \overline{\zeta}(x)+\mu\right\vert ^{2}dx<\infty$, and so, from the previous lemma, there exists a non-decreasing sequence $(\beta_{n})_{n{\geqslant}1}\subset(0,\infty)$ with $\beta_{n}\rightarrow\infty$ and $$\sum_{n{\geqslant}1}\beta_{n}\int_{B_{n}}\left\vert \overline{\zeta}(x)+ \mu\right\vert ^{2}dx<\infty.\label{star}$$ Define $u:[0,1]\rightarrow\mathbb{R}$ by $u(x):=-\sqrt{\beta_{n}}(\overline{\zeta}(x)+\mu)$ for $x\in B_{n}$ and $u(x):=0$ for $x\in B_{0}$. From (\[star\]) we have that $u\in\mathcal{L}^{2}$. Moreover, for every $\delta>0$ there exists a sufficiently large $N{\geqslant}1$ such that $t=\beta_{N}^{-1/2}\in(0,\delta)$ and $\overline{\zeta}+tu\notin A_{2}$; this happens because $B_{N}\subset \{x\in B_{0}^{c}\mid\overline{\zeta}(x)+\beta_{N}^{-1/2}u(x)+\mu=0\}$ and $\lambda(B_{N})>0$. We proved that $\overline{\zeta}\not\in\operatorname*{core}A_{2}$. Hence $\operatorname*{core}A_{2}=\emptyset$. On page 502 of [@Gao/Ogden:08] it is said that “The criticality condition with respect to $\zeta$ leads to the ... ‘dual algebraic equation’ (DAE) for ... (2.14) ..., namely $\left(2\nu^{-1}\zeta+\alpha^{2}\right)(\mu+\zeta)^{2}=(\sigma-\mu\alpha)^{2}$. $\quad$(2.16)” To our knowledge, one can speak about Gâteaux differentiability of a function $f:E\subset X\rightarrow Y$, with $X,Y$ topological vector spaces, at $\overline{x}\in E$ only if $\overline{x}$ is in the core of $E$. As we have seen above, $P_{s}^{d}(\zeta)\in\mathbb{R}$ only for $\zeta\in A_{1}$ and $\operatorname*{core}A_{1}=\emptyset.$ *So what is the precise critical point notion for* $P_{s}^{d}$ *so that, when using that notion, one gets [@Gao/Ogden:08 (2.16)], other than just formal computation?* Taking into account the comment (see [@Gao/Ogden:08 p. 502]) “It should be pointed out that the integrand in each of $P_{s}^{d}(\zeta)$ and $P_{h}^{d}(\zeta)$ has a singularity at $\zeta=-\mu$, which explains the exclusion $\zeta\neq-\mu$ in the definition of $\mathcal{S}_{a}$”, we must point out that there is an important difference between the condition $\zeta\neq-\mu$ (as measurable functions) and $\zeta(x)\neq-\mu$ a.e. on $[0,1]$ since it is known that $\zeta\neq-\mu$ means that $\zeta(x)\neq-\mu$ on a set of positive measure. 
Alternatively, from the above considerations, $\mathcal{L}^{2}\setminus\{-\mu\}$ is a (nonempty) open set, while the set $A_{3}:=\left\{ \zeta\in\mathcal{L}^{2} \mid\lambda(E_{\zeta})=0\right\} $ has, as previously seen, empty core (in particular has empty interior). The quoted text from [@Gao/Ogden:08 p. 502] continues with: “In fact, it turns out that, in general, $\zeta=-\mu$ does not correspond to a critical point of either $P_{s}^{d}(\zeta)$ or $P_{h}^{d}(\zeta)$. Exceptionally, we may have $\zeta(x)=-\mu$ for some $x\in(0,1)$, but this is always associated with $\sigma(x)=\mu\alpha$. It is therefore important to note that when (2.16) holds, the integrand in (2.14) and (2.15) can be written as $2\alpha(\sigma+\alpha\zeta)+\nu^{-1}\zeta(3\zeta+2\mu)$, $\quad$(2.17) and when $\zeta=-\mu$ (and $\sigma=\mu\alpha$) this reduces to $\nu^{-1}\mu^{2}$, and the singularity in the integrand is thus removed.” This shows that the convention we used (namely $0/0=0$), our interpretation for $P_{s}^{d}(\zeta)$, and formula (\[pdsc2\]) are in agreement with the authors of [@Gao/Ogden:08] point of view. Problem reformulation ===================== Every $u$ in $\mathcal{U}_{s}$ is represented by an absolutely continuous function on $[0,1]$ with $u(0)=0$ and $u_{x}\in\mathcal{L}^{4}(0,1)$. More accurately, $\mathcal{U}_{s}=\left\{ u\in W^{1,4}(0,1)\mid u(0)=0\right\} $. In a different notation, denoting by $\mathcal{L}^{p}$ the space $\mathcal{L}^{p}[0,1]$, we have $$u\in\mathcal{U}_{s}\Longleftrightarrow\exists v\in\mathcal{L}^{4},\ \forall x\in[0,1]:u(x)=\int_{0}^{x}v(t)dt.$$ So, the problem $(\mathcal{P}_{s})$ above becomes $$(\widehat{\mathcal{P}}_{s}):\quad\min_{v\in\mathcal{L}^{4}}\widehat{P}_{s} (v)=\int_{0}^{1}\left[\tfrac{1}{2}\mu v^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}v^{2}-\alpha v\right)^{2}-\sigma v\right]dx$$ and $\Xi$ becomes $$\widehat{\Xi}(v,\zeta)=\int_{0}^{1}\left[\tfrac{1}{2}v^{2}(\zeta+\mu)- \alpha v\zeta-\tfrac{1}{2}\nu^{-1}\zeta^{2}-\sigma v\right]dx\quad(v\in\mathcal{L}^{4},\ \zeta\in\mathcal{L}^{2}).\label{def-Xi}$$ Note that $P_{s}(u)=\widehat{P}_{s}(v)$, $\Xi(u,\zeta)=\widehat{\Xi}(v,\zeta)$, for $u(x)=\int_{0}^{x}v(t)dt$, $x\in[0,1]$. It is easy to see that $\widehat{P}_{s}$ and $\widehat{\Xi}$ are Fréchet differentiable and $$\begin{gathered} d\widehat{P}_{s}(v)(h)=\int_{0}^{1}\left[\mu v+\nu\left(\tfrac{1}{2}v^{2}- \alpha v\right)(v-\alpha)-\sigma\right]hdx,\\ d\widehat{\Xi}(\cdot,\zeta)(v)(h)=\int_{0}^{1}\left[v(\zeta+\mu)-\alpha \zeta-\sigma\right]hdx,\\ d\widehat{\Xi}(v,\cdot)(\zeta)(k)=\int_{0}^{1}\left[\tfrac{1}{2}v^{2}- \alpha v-\nu^{-1}\zeta\right]kdx,\end{gathered}$$ for $v,h\in\mathcal{L}^{4}$ and $\zeta,k\in\mathcal{L}^{2}$. 
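The differentials listed above can also be checked against finite differences; the following sketch (assuming Python with numpy; the constants satisfy $\nu\alpha^{2}>2\mu$ and the test functions are arbitrary elements of $\mathcal{L}^{4}$) compares $d\widehat{P}_{s}(v)(h)$ with a symmetric difference quotient of $\widehat{P}_{s}$:

```python
# A sketch, assuming Python with numpy; constants satisfy nu*alpha**2 > 2*mu.
import numpy as np

mu, nu, alpha = 1.0, 1.0, 2.0
x = np.linspace(0.0, 1.0, 20001)
sigma = 0.3 + 0.2 * x
v = np.sin(3.0 * x)          # test point v
h = np.cos(2.0 * x)          # test direction h

def P_hat(w):
    # integrand of \widehat{P}_s, integrated over [0,1] on the uniform grid
    return (0.5 * mu * w**2 + 0.5 * nu * (0.5 * w**2 - alpha * w)**2 - sigma * w).mean()

t = 1e-6
finite_diff = (P_hat(v + t * h) - P_hat(v - t * h)) / (2 * t)
exact = ((mu * v + nu * (0.5 * v**2 - alpha * v) * (v - alpha) - sigma) * h).mean()
print(finite_diff, exact)    # the two numbers agree closely
```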
Therefore, $$\begin{gathered} \nabla \widehat{P}_{s}(v)=\mu v+\nu\left(\tfrac{1}{2}v^{2}-\alpha v\right) (v-\alpha)-\sigma\in\mathcal{L}^{4/3},\nonumber \\ \nabla \widehat{\Xi}(\cdot,\zeta)(v)=v(\zeta+\mu)-\alpha\zeta-\sigma\in \mathcal{L}^{4/3},\label{dzv}\\ \nabla \widehat{\Xi}(v,\cdot)(\zeta)=\tfrac{1}{2}v^{2}-\alpha v-\nu^{-1}\zeta \in\mathcal{L}^{2}.\nonumber \end{gathered}$$ Moreover, $$d^{2}\widehat{P}_{s}(v)(h,k)=\int_{0}^{1}\left[\mu+\nu\left(\tfrac{3} {2}v^{2}-3\alpha v+\alpha^{2}\right)\right]hkdx\quad(v,h,k\in\mathcal{L}^{4}).\label{d2ps}$$ Hence $v\in\mathcal{L}^{4}$ is a critical point of $\widehat{P}_{s}$ if and only if $$\mu v+\nu\left(\tfrac{1}{2}v^{2}-\alpha v\right)(v-\alpha)-\sigma=0, \label{cp-p}$$ and $(v,\zeta)\in\mathcal{L}^{4}\times\mathcal{L}^{2}$ is a critical point of $\widehat{\Xi}$ if and only if$$v(\zeta+\mu)-\alpha\zeta-\sigma=0,\quad\tfrac{1}{2}v^{2}-\alpha v-\nu^{-1} \zeta=0.\label{cp-xi}$$ From the expression of $\widehat{\Xi}$ we observe that $\widehat{\Xi}(v,\cdot)$ is concave on $\mathcal{L}^{2}$ for every $v\in\mathcal{L}^{4}$; furthermore, $\widehat{\Xi}(\cdot,\zeta)$ is convex (concave) for those $\zeta\in\mathcal{L}^{2}$ with $\zeta{\geqslant}-\mu$ $(\zeta{\leqslant}-\mu)$. \[zv\]Let $v\in\mathcal{L}^{4}$ and set $$\zeta_{v}:=\nu\left(\tfrac{1}{2}v^{2}-\alpha v\right).\label{zetav}$$ Then $\zeta_{v}\in\mathcal{L}^{2}$, $d\widehat{\Xi}(v,.)(\zeta)=0$ iff $\zeta=\zeta_{v}$, and $$\sup_{\zeta\in\mathcal{L}^{2}}\widehat{\Xi}(v,\zeta)=\widehat{\Xi} (v,\zeta_{v})=\widehat{P}_{s}(v).\label{max-xi-v}$$ The facts that for $(v,\zeta)\in\mathcal{L}^{4}\times\mathcal{L}^{2}$ we have $\zeta=\zeta_{v}$ iff $d\widehat{\Xi}(v,.)(\zeta)=0$ and $\zeta_{v}\in\mathcal{L}^{2}$ are straightforward. Equality (\[max-xi-v\]) is due to the fact that every critical point (namely $\zeta=\zeta_{v}$) of a concave function (namely $\widehat{\Xi}(v,\cdot)$) is a global maximum point of that function. Consider the set $$A_{0}:=\bigg\{ \zeta\in\mathcal{L}^{2}\mid\frac{\beta}{\zeta+\mu}\in \mathcal{L}^{4}\bigg\} =\bigg\{ \zeta\in\mathcal{L}^{2}\mid\frac{\sigma-\alpha\mu}{\zeta+\mu}\in\mathcal{L}^{4}\bigg\} ,$$ More precisely, $\zeta\in A_{0}$ iff $\zeta\in\mathcal{L}^{2}$, $E_{\zeta}\subset B_{0}$, and $\frac{\beta}{\zeta+\mu}\in\mathcal{L}^{4}([0,1] \setminus E_{\zeta})$. For $\zeta\in\mathcal{L}^{2}$ with $E_{\zeta}\subset B_{0}$ set $$v_{\zeta}:=\frac{\sigma+\alpha\zeta}{\zeta+\mu}=\alpha+\frac{\beta}{\zeta+\mu}. \label{vzeta}$$ More precisely $v_{\zeta}(x)=\alpha+\frac{\beta(x)}{\zeta(x)+\mu}$ for $x\in[0,1]\setminus E_{\zeta}$ and $v_{\zeta}(x)=\alpha$ for $x\in E_{\zeta}$. Notice that $\zeta\in A_{0}$ iff $v_{\zeta}\in\mathcal{L}^{4}$. In the sequel $\chi_{E}$ denotes the characteristic function of $E\subset[0,1]$, that is, $\chi_{E}(x)=1$ for $x\in E$ and $\chi_{E}(x)=0$ for $x\in[0,1]\setminus E$. \[vz\] For all $\zeta\in A_{0}$ and $v\in\mathcal{L}^{4}$ we have that $d\widehat{\Xi}(\cdot,\zeta)(v_{\zeta}+\chi_{E_{\zeta}}v)=0$ and $\widehat{\Xi}(v_{\zeta}+\chi_{E_{\zeta}}v,\zeta)=P_{s}^{d}(\zeta)$. According to (\[dzv\]), we have $$d\widehat{\Xi}(\cdot,\zeta)(v_{\zeta}+\chi_{E_{\zeta}}v)=(v_{\zeta}+ \chi_{E_{\zeta}}v)(\zeta+\mu)-\alpha\zeta-\sigma=\chi_{E_{\zeta}} v(\zeta+\mu)=0\quad\forall\zeta\in A_{0},\ v\in\mathcal{L}^{4}.$$ Since $\zeta\in A_{0}$ we have $v_{\zeta}=\alpha$ and $\sigma=\alpha\mu$ on $E_{\zeta}$. 
Taking into account (\[def-Xi\]), (\[pdsc2\]) and using that outside $E_{\zeta}$ we have $v_{\zeta}^{2} (\zeta+\mu)=(\sigma+\alpha\zeta)v_{\zeta}=\frac{(\sigma+\alpha\zeta)^{2}}{\zeta+\mu}$, we get$$\begin{aligned} \widehat{\Xi}(v_{\zeta}+\chi_{E_{\zeta}}v,\zeta)= & -\tfrac{1}{2}\int_{[0,1] \setminus E_{\zeta}}\left(\frac{(\sigma+\alpha\zeta)^{2}}{\mu+\zeta}+\nu^{-1}\zeta^{2}\right)dx\\ & +\int_{E_{\zeta}}\left(\alpha\mu(\alpha+v)-\tfrac{1}{2}\nu^{-1}\mu^{2}- \sigma(\alpha+v)\right)dx=P_{s}^{d}(\zeta).\end{aligned}$$ In particular every $\zeta\in A_{0}$ is in the domain of $P_{s}^{d}$, that is, $A_{0}\subset A_{1}$ (which can be observed directly, too since $\beta\in\mathcal{L}^{\infty}$). The argument above shows that $\widehat{\Xi}(\cdot,\zeta)$ has no critical points if $\zeta\in\mathcal{L}^{2} \setminus A_{0}$ (due to the lack of regularity) and $\widehat{\Xi}(\cdot,\zeta)$ has an infinity of critical points of the form $v_{\zeta}+\chi_{E_{\zeta}}v$ with $v\in\mathcal{L}^{4}$, if $\zeta\in A_{0}$ and $\lambda(E_{\zeta})>0.$ Furthermore, for $\zeta\in A_{0}$, if $\zeta+\mu{\geqslant}0$ $(\zeta+\mu{\leqslant}0)$ then $v_{\zeta}$ is a global minimum (maximum) point of $\widehat{\Xi}(\cdot,\zeta)$ because $\widehat{\Xi}(\cdot,\zeta)$ is convex (concave) and $v_{\zeta}$ is a critical point of $\widehat{\Xi}(\cdot,\zeta)$. Hence$$P_{s}^{d}(\zeta)=\left\{ \begin{array}{ccc} \inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\zeta) & \text{if} & \zeta\in A_{0} \text{ and }\zeta{\geqslant}-\mu,\\ \sup_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\zeta) & \text{if} & \zeta\in A_{0}\text{ and }\zeta{\leqslant}-\mu.\end{array}\right.\label{pds}$$ \[analysis\] $\quad$\ [(i)]{} Let $(\overline{v},\overline{\zeta})\in\mathcal{L}^{4}\times\mathcal{L}^{2}$ be a critical point of $\widehat{\Xi}$. Then $\zeta_{\overline{v}}=\overline{\zeta}$, $v_{\overline{\zeta}}=(1-\chi_{E_{\overline{\zeta}}})\overline{v}+ \alpha\chi_{E_{\overline{\zeta}}}\in\mathcal{L}^{4}$, $\overline{v}$ is a critical point of $\widehat{P}_{s}$, $\overline{\zeta}\in A_{0}$, $\widehat{P}_{s}(\overline{v})=\widehat{\Xi}(\overline{v},\overline{\zeta})= P_{s}^{d}(\overline{\zeta})$, $\left(2\nu^{-1}\overline{\zeta}+\alpha^{2}\right)(\mu+\overline{\zeta})^{2}= (\sigma-\mu\alpha)^{2}$ (i.e. $\overline{\zeta}$ satisfies [@Gao/Ogden:08 (2.16)]), and $$d^{2}\widehat{P}_{s}(\overline{v})(h,k)=3\int_{0}^{1}\left(\overline{\zeta}- \rho\right)hkdx\label{d2ps-cr}$$ for $h,k\in\mathcal{L}^{4},$ where $$\rho:=-\tfrac{1}{3}\left(\mu+\nu\alpha^{2}\right).\label{rho}$$ If, in addition, $\overline{\zeta}{\geqslant}-\mu$ then $$\sup_{\zeta\in\mathcal{L}^{2}}\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\zeta) =\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\overline{\zeta})=\widehat{\Xi} (\overline{v},\overline{\zeta})=\widehat{P}_{s}(\overline{v})=\inf_{v\in \mathcal{L}^{4}}\widehat{P}_{s}(v)=P_{s}^{d}(\overline{\zeta})=\sup_{\zeta\in A_{0},\zeta{\geqslant}-\mu}P_{s}^{d}(\zeta).\label{equa}$$ In particular $\overline{v}$ is a global minimum of $\widehat{P}_{s}$ on $\mathcal{L}^{4}$. [(ii)]{} If $v\in\mathcal{L}^{4}$ is a critical point of $\widehat{P}_{s}$ then $(v,\zeta_{v})\in\mathcal{L}^{4}\times\mathcal{L}^{2}$ is a critical point of $\widehat{\Xi}$. [(iii)]{} Assume that $\zeta$ is a measurable solution of $\left(2\nu^{-1} \zeta+\alpha^{2}\right)(\mu+\zeta)^{2}=(\sigma-\mu\alpha)^{2}$ and $v\in\mathcal{L}^{4}$. Then: [(a)]{} $\zeta\in A_{0}$ and $(v_{\zeta},\zeta)\in\mathcal{L}^{\infty}\times\mathcal{L}^{\infty}\subset \mathcal{L}^{4}\times\mathcal{L}^{2}$. 
Moreover, $$\widehat{P}_{s}(v_{\zeta}+v\chi_{E_{\zeta}})=P_{s}^{d}(\zeta)+\tfrac{1}{8} \nu\int_{E_{\zeta}}(v^{2}-\alpha^{2}+2\nu^{-1}\mu)^{2}dx\label{pps}$$ and $(v_{\zeta}+v\chi_{E_{\zeta}},\zeta)$ is a critical point of $\widehat{\Xi}$ iff $\widehat{P}_{s}(v_{\zeta}+v\chi_{E_{\zeta}})=P_{s}^{d}(\zeta)$ iff $$v^{2}-\alpha^{2}+2\nu^{-1}\mu=0~\text{a.e.\ in }E_{\zeta}.\label{valfa}$$ In particular, $(v_{\zeta},\zeta)$ is a critical point of $\widehat{\Xi}$ iff $\lambda(E_{\zeta})=0$. [(b)]{} $v_{\zeta}+v\chi_{E_{\zeta}}$ is a critical point of $\widehat{P}_{s}$ iff $$v\left(v^{2}-\alpha^{2}+2\nu^{-1}\mu\right)=0~\text{a.e.\ in }E_{\zeta}.$$ \(i) Assume that $(\overline{v},\overline{\zeta})\in \mathcal{L}^{4}\times\mathcal{L}^{2}$ is a critical point of $\widehat{\Xi}$. From (\[cp-xi\]) we see that $\overline{\zeta}=\zeta_{\overline{v}}$, $v_{\overline{\zeta}}=(1-\chi_{E_{\zeta}})\overline{v}+\alpha \chi_{E_{\overline{\zeta}}}\in\mathcal{L}^{4}$ which provides $\overline{\zeta}\in A_{0}$, $\overline{v}$ is a critical point of $\widehat{P}_{s}$, and $\left(2\nu^{-1}\overline{\zeta}+\alpha^{2}\right)(\mu+\overline{\zeta})^{2} =(\sigma-\mu\alpha)^{2}$. Note that $v_{\overline{\zeta}}+\chi_{E_{\overline{\zeta}}}(\overline{v}-\alpha)= \overline{v}$. The equality $\widehat{P}_{s}(\overline{v})=\widehat{\Xi}(\overline{v}, \overline{\zeta})=P_{s}^{d}(\overline{\zeta})$ is a consequence of Lemmas \[zv\], \[vz\]. Taking into account (\[d2ps\]) and the second equation in (\[cp-xi\]) we obtain that for $h,k\in\mathcal{L}^{4},$ $$d^{2}\widehat{P}_{s}(\overline{v})(h,k)=\int_{0}^{1}\left[\mu+\nu\left(3\nu^{-1} \overline{\zeta}+\alpha^{2}\right)\right]hkdx=3\int_{0}^{1} \left(\overline{\zeta}-\rho\right)hkdx.$$ Assume, in addition, that $\overline{\zeta}{\geqslant}-\mu$. Therefore $\widehat{\Xi}(\cdot,\overline{\zeta})$ is convex and $P_{s}^{d}(\overline{\zeta})=\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v, \overline{\zeta})$ (see (\[pds\])). Since $\overline{v}$ is a critical point it yields that $\overline{v}$ is a global minimum point of $\widehat{\Xi}(\cdot,\overline{\zeta})$. Similarly, $\overline{\zeta}$ is a global maximum point for the concave function $\widehat{\Xi}(\overline{v},\cdot)$. We get $$\widehat{\Xi}(v,\overline{\zeta}){\geqslant}\widehat{\Xi}(\overline{v},\overline{\zeta}) {\geqslant}\widehat{\Xi}(\overline{v},\zeta)\quad\forall v\in\mathcal{L}^{4},\ \forall\zeta\in\mathcal{L}^{2}.$$ This implies that $$\sup_{\zeta\in\mathcal{L}^{2}}\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\zeta) {\geqslant}\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\overline{\zeta})=\widehat{\Xi} (\overline{v},\overline{\zeta})=\sup_{\zeta\in\mathcal{L}^{2}}\widehat{\Xi} (\overline{v},\zeta){\geqslant}\inf_{v\in\mathcal{L}^{4}}\sup_{\zeta\in\mathcal{L}^{2}} \widehat{\Xi}(v,\zeta).$$ Since $\sup_{\zeta\in\mathcal{L}^{2}}\inf_{v\in\mathcal{L}^{4}} \widehat{\Xi}(v,\zeta){\leqslant}\inf_{v\in\mathcal{L}^{4}}\sup_{\zeta\in\mathcal{L}^{2}} \widehat{\Xi}(v,\zeta)$ (this happens for every function $\widehat{\Xi}$), we obtain together with (\[max-xi-v\]) that$$\sup_{\zeta\in\mathcal{L}^{2}}\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\zeta)= \inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\overline{\zeta})=\widehat{\Xi} (\overline{v},\overline{\zeta})=\widehat{P}_{s}(\overline{v})=\inf_{v\in \mathcal{L}^{4}}\widehat{P}_{s}(v)=P_{s}^{d}(\overline{\zeta}).\label{minmax}$$ In particular, $\overline{v}$ is a global minimum of $\widehat{P}_{s}$ on $\mathcal{L}^{4}$. 
From (\[pds\]) and (\[minmax\]) we have $$P_{s}^{d}(\overline{\zeta})= \inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\overline{\zeta})\le\sup_{\zeta\in A_{0},\zeta{\geqslant}-\mu}\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi}(v,\zeta)\le \sup_{\zeta\in\mathcal{L}^{2}}\inf_{v\in\mathcal{L}^{4}}\widehat{\Xi} (v,\zeta)=P_{s}^{d}(\overline{\zeta}).$$ The assertion (ii) follows directly from (\[cp-p\]) and (\[cp-xi\]). \(iii) For given $\beta\in\mathcal{C}^{1}[0,1]$ relation $\left(2\nu^{-1}\zeta+ \alpha^{2}\right)(\mu+\zeta)^{2}=\beta^{2}(x)$ $(=(\sigma(x)-\mu\alpha)^{2})$ is a polynomial equation in $\zeta$. Let $\zeta:[0,1]\to\mathbb{R}$ be such that $\zeta(x)$ is a solution of the previous equation for every $x\in[0,1]$, that is, $\zeta$ is a solution of [@Gao/Ogden:08 (2.16)]. Because $\beta^{2}$ is bounded (being continuous) we have that $\zeta$ is bounded. If, in addition, $\zeta$ is measurable then $\zeta\in\mathcal{L}^{\infty}\subset\mathcal{L}^{2}$. \(a) Note that, due to [@Gao/Ogden:08 (2.16)], $E_{\zeta}\subset B_{0}$ and $v_{\zeta}=\alpha+\beta/(\mu+\zeta)$ outside $E_{\zeta}$ whence $(v_{\zeta}-\alpha)^{2}=2\nu^{-1}\zeta+\alpha^{2}\in\mathcal{L}^{\infty}([0,1] \setminus E_{\zeta})$. Therefore $v_{\zeta}\in\mathcal{L}^{\infty}\subset\mathcal{L}^{4}.$ This shows that $\zeta\in A_{0}$. Let $v\in\mathcal{L}^{4}$. Recall that $v_{\zeta}+v\chi_{E_{\zeta}}=\alpha+v$, $\sigma=\alpha\mu$, $\zeta=-\mu$ inside $E_{\zeta}$ and $v_{\zeta}+v\chi_{E_{\zeta}}=v_{\zeta}$ outside $E_{\zeta}$, and so $$\begin{aligned} \widehat{P}_{s}(v_{\zeta}+v\chi_{E_{\zeta}})= & \int_{[0,1]\setminus E_{\zeta}} \left[\tfrac{1}{2}\mu v_{\zeta}^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}v_{\zeta}^{2} -\alpha v_{\zeta}\right)^{2}-\sigma v_{\zeta}\right]dx\notag\\ & +\int_{E_{\zeta}}\left[\tfrac{1}{2}\mu(\alpha+v)^{2}+\tfrac{1}{2}\nu \left(\tfrac{1}{2}(\alpha+v)^{2}-\alpha(\alpha+v)\right)^{2}- \alpha\mu(\alpha+v)\right]dx.\label{r-phs}\end{aligned}$$ Taking into account that $\zeta(x)$ is a solution of the equation [@Gao/Ogden:08 (2.16)] and that for $x\in[0,1]\setminus E_{\zeta}$ one has $\zeta(x)+\mu\neq0$, one gets $$\tfrac{1}{2}\mu v_{\zeta}^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}v_{\zeta}^{2}- \alpha v_{\zeta}\right)^{2}-\sigma v_{\zeta}=-\tfrac{1}{2}\frac{(\sigma+\alpha\zeta)^{2}}{\zeta+\mu}-\tfrac{1}{2} \nu^{-1}\zeta^{2}\quad\text{on }[0,1]\setminus E_{\zeta}.$$ A simple verification shows that $$\tfrac{1}{2}\mu(\alpha+v)^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}(\alpha+v)^{2}- \alpha(\alpha+v)\right)^{2}-\alpha\mu(\alpha+v)=\tfrac{1}{8}\nu(v^{2}-\alpha^{2}+ 2\nu^{-1}\mu)^{2}-\tfrac{1}{2}\nu^{-1}\mu^{2}.$$ Using the preceding equalities, from (\[r-phs\]) and (\[pdsc2\]) we obtain that (\[pps\]) holds. A direct computation shows that $(v_{\zeta}+v\chi_{E_{\zeta}},\zeta)$ is a critical point of $\widehat{\Xi}$ if and only if $v^{2}-\alpha^{2}+2\nu^{-1}\mu=0$ a.e. in $E_{\zeta}$. Therefore the mentioned equivalencies are true. Moreover, because $\nu\alpha^{2}>2\mu$ the last equivalence holds, too. \(b) Similarly, $v_{\zeta}+v\chi_{E_{\zeta}}$ is a critical point of $\widehat{P}_{s}$ if and only if $v(v^{2}-\alpha^{2}+2\nu^{-1}\mu)=0$ a.e. in $E_{\zeta}$. 
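The pointwise identity invoked above through a "simple verification" can also be checked symbolically. The following short sketch is our own convenience check (it assumes the SymPy library is available and is not part of the argument); it confirms the identity used in the proof of (iii)(a).

```python
# Convenience check (ours) of the pointwise identity used in the proof of (iii)(a):
#   (1/2) mu (alpha+v)^2 + (1/2) nu ((1/2)(alpha+v)^2 - alpha(alpha+v))^2 - alpha mu (alpha+v)
#     = (1/8) nu (v^2 - alpha^2 + 2 mu/nu)^2 - (1/2) mu^2/nu
import sympy as sp

v, alpha, mu = sp.symbols('v alpha mu', real=True)
nu = sp.symbols('nu', positive=True)
w = alpha + v
lhs = sp.Rational(1, 2) * mu * w**2 \
    + sp.Rational(1, 2) * nu * (sp.Rational(1, 2) * w**2 - alpha * w)**2 \
    - alpha * mu * w
rhs = sp.Rational(1, 8) * nu * (v**2 - alpha**2 + 2 * mu / nu)**2 \
    - sp.Rational(1, 2) * mu**2 / nu
assert sp.simplify(lhs - rhs) == 0   # the identity holds identically in v, alpha, mu, nu
```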
Note the following direct consequences of the previous theorem: - if $v\in\mathcal{L}^{4}$ is a critical point of $\widehat{P}_{s}$, then $(v,\zeta_{v})$ is a critical point of $\widehat{\Xi}$, $\zeta_{v}\in\mathcal{L}^{2}$ is a solution of [@Gao/Ogden:08 (2.16)], and $\widehat{P}_{s}(v)= \widehat{\Xi}(v,\zeta_{v})=P_{s}^{d}(\zeta_{v})$; - if $\zeta$ is a measurable solution of [@Gao/Ogden:08 (2.16)] and $v\in\mathcal{L}^{4}$ satisfies (\[valfa\]) then $\zeta=\zeta_{(v_{\zeta}+ v\chi_{E_{\zeta}})}$ and $v_{\zeta}+v\chi_{E_{\zeta}}$ is a global minimum of $\widehat{P}_{s}$ on $\mathcal{L}^{4}$; - it is possible $v_{\zeta}+v\chi_{E_{\zeta}}$ to be a critical point of $\widehat{P}_{s}$ without $(v_{\zeta}+v\chi_{E_{\zeta}},\zeta)$ being a critical point of $\widehat{\Xi}$; such a situation happens when $v=0$ and $\lambda(E_{\zeta})>0.$ Discussion of [@Gao/Ogden:08 Th. 3] =================================== Based on the above considerations we discuss the result in [@Gao/Ogden:08 Th. 3]; for completeness we also quote its proof. Recall that “$\beta(x)=\sigma(x)-\alpha\mu,\quad\eta=(\nu\alpha^{2}-2\mu)^{3}/27\nu.$ $\quad(2.21)$” “<span style="font-variant:small-caps;">Theorem 3</span>. (Global minimizer and local extrema) Suppose that the body force $f(x)$ and dead load $\sigma_{1}$ are given and that $\sigma(x)$ is defined by (2.12). Then, if $\beta^{2}(x)>\eta$, $\forall x\in(0,1)$, the DAE (2.16) has a unique solution $\overline{\zeta}(x)>-\mu$, which is a global maximizer of $P_{s}^{d}$ over $\mathcal{S}_{a}$, and the corresponding solution $\overline{u}(x)$ is a global minimizer of $P_{s}(u)$ over $\mathcal{U}_{s}$, $P_{s}(\overline{u})=\min\limits _{u\in\mathcal{U}_{s}}P_{s}(u)=\max \limits _{\zeta\in\mathcal{S}_{a}}P_{s}^{d}(\zeta)=P_{s}^{d}(\overline{\zeta}).\quad(3.9)$ If $\beta^{2}(x){\leqslant}\eta$, $\forall x\in(0,1)$, then (2.16) has three real roots ordered as in (3.5). Moreover, $\overline{\zeta}_{1}(x)$ is a global maximizer of $P_{s}^{d}(\zeta)$ over the domain $\zeta>-\mu$, the corresponding solution $\overline{u}_{1}(x)$ is a global minimizer of $P_{s}(u)$ over $\mathcal{U}_{s}$ and $P_{s}(\overline{u}_{1})=\min\limits _{u\in\mathcal{U}_{s}}P_{s}(u)=\max \limits _{\zeta>-\mu}P_{s}^{d}(\zeta)=P_{s}^{d}(\overline{\zeta}_{1}).\quad(3.10)$ For $\overline{\zeta}_{2}(x)$ and $\overline{\zeta}_{3}(x)$, the corresponding solutions $\overline{u}_{2}(x)$ and $\overline{u}_{3}(x)$ are, respectively, a local minimizer and a local maximizer of $P_{s}(u)$, $P_{s}(\overline{u}_{2})=\min\limits _{u\in\mathcal{U}_{2}}P_{s}(u)= \min\limits _{\overline{\zeta}_{3}<\zeta<-\mu}P_{s}^{d}(\zeta)=P_{s}^{d} (\overline{\zeta}_{2})\quad(3.11)$ and $P_{s}(\overline{u}_{3})=\max\limits _{u\in\mathcal{U}_{3}}P_{s}(u)= \max\limits _{-\tfrac{1}{2}\nu\alpha^{2}<\zeta<\overline{\zeta}_{2}}P_{s}^{d}(\zeta) =P_{s}^{d}(\overline{\zeta}_{3}),\quad(3.12)$ where $\mathcal{U}_{j}$ is a neighborhood of $\overline{u}_{j}$, for $j=2,3$. *Proof.* This theorem is a particular application of the general analytic solution obtained in (**7**, **14**) following triality theory.” Note that (**7**, **14**) are our references [@Gao:98] and [@Gao:00]. Before discussing the previous result let us clarify the meaning of $\overline{\zeta}_{i}$ and $\overline{u}_{i}$ (as well as $\overline{\zeta}$ and $\overline{u}$) appearing in the statement above. Actually these functions are introduced in the statement of [@Gao/Ogden:08 Th. 2]: “<span style="font-variant:small-caps;">Theorem</span> 2. 
(Closed-form solutions) For a given body force $f(x)$ and dead load $\sigma_{1}$ such that $\sigma(x)$ is defined by (2.12), the DAE (2.16) has at most three real roots $\overline{\zeta}_{i}(x)$, $i=1,2,3$, given by (2.22)–(2.24) and ordered as $\overline{\zeta}_{1}(x){\geqslant}-\mu{\geqslant}\overline{\zeta}_{2}(x){\geqslant}\overline{\zeta}_{3}(x){\geqslant}-\tfrac{1}{2}\nu\alpha^{2}.\quad(3.5)$ For $i=1$, the function defined by $\overline{u}_{i}(x)={\displaystyle \int_{0}^{x}\frac{\sigma(s)+\alpha \overline{\zeta}_{i}(s)}{\overline{\zeta}_{i}(s)+\mu}ds}\quad$ (3.6) is a solution of (BVP1). For each of $i=2,3$, (3.6) is also a solution of (BVP1) provided $\overline{\zeta}_{i}$ is replaced by $\overline{\zeta}_{1}$ for values of $s\in\lbrack0,x)$ for which $\overline{\zeta}_{i}(s)$ is complex. For a given $t$ such that $\sigma_{1}$ is determined by $(3.3)_{3}$, one of $\overline{u}_{i}(x)$, $i=1,2,3$, satisfies $(3.4)_{3}$ and hence solves (BVP2). Furthermore, $P_{s}(\overline{u}_{i})=P_{s}^{d}(\overline{\zeta}_{i}),\ \ i=1,2,3.\quad(3.7)$” Considering $g:\mathbb{R}\rightarrow\mathbb{R}$ defined by $g(\varsigma):=\left(2\nu^{-1} \varsigma+\alpha^{2}\right)(\mu+\varsigma)^{2}$, in fact, $\overline{\zeta}_{1}(x)$ is the unique solution of the equation $g(\varsigma)=\beta^{2}(x)$ on the interval $[-\mu,\infty)$, that is $g(\overline{\zeta}_{1}(x))=\beta^{2}(x)$ and $\overline{\zeta}_{1}(x)\ge-\mu$, while $\overline{\zeta}_{2}(x)$ and $\overline{\zeta}_{3}(x)$ are the unique solutions of the equation $g(\varsigma)=\beta^{2}(x){\leqslant}\eta$ on $[\rho,-\mu]$ and $[-\tfrac{1}{2}\nu\alpha^{2},\rho]$, respectively. We give this argument later on. Besides the fact that it is not explained how $\frac{\sigma(s)+\alpha \overline{\zeta}_{i}(s)}{\overline{\zeta}_{i}(s)+\mu}$ is defined in the case $\overline{\zeta}_{i}(s)+\mu=0$ (which is possible if $\beta(s)=0$) the only mention to $\overline{u}_{i}$ is in the following paragraph of the proof of [@Gao/Ogden:08 Th. 2]: “For each solution $\overline{\zeta}_{i}$, $i=1,2,3$, the corresponding solution $\overline{u}_{i}$ is obtained by rearranging (2.10) in the form $u_{x}=(\sigma+\alpha\zeta)/(\zeta+\mu)$ and integrating. For a given $t$, the dead load $\sigma_{1}$ is uniquely determined by $(3.3)_{3}$. Therefore, there is one $\overline{u}_{i}(x)$, $i=1,2$ or $3$, satisfying the boundary condition $\overline{u}_{i}(1)=t$, and this solves (BVP2).” With our reformulation of the problem $(\mathcal{P}_{s})$, in the statements of [@Gao/Ogden:08 Th. 2, Th. 3] one must replace $\mathcal{U}_{s}$ by $\mathcal{L}^{4}$, $\overline{u}_{i}$ by $\overline{v}_{i}:=\frac{\sigma+\alpha\overline{\zeta}_{i}}{\overline{\zeta}_{i}+\mu},$ $\overline{u}$ by $\overline{v}$ and $P_{s}$ by $\widehat{P}_{s},$ $\mathcal{U}_{j}$ being a neighborhood of $\overline{v}_{j}$, for $j=2,3$ (this is possible since the operator $v\in\mathcal{L}^{4}\rightarrow u= \int_{0}^{x}v\in\mathcal{U}_{s}$ and its inverse $\mathcal{U}_{s}\ni u\rightarrow v=u_{x}\in\mathcal{L}^{4}$ are linear continuous under the $W^{1,4}$ topology on $\mathcal{U}_{s}$; whence $u\in\mathcal{U}_{s}$ is a local extrema for $P_{s}$ iff the corresponding $v\in\mathcal{L}^{4}$ is a local extrema for $\widehat{P}_{s}$). 
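To make the objects $\overline{\zeta}_{i}$ and $\overline{v}_{i}$ concrete, the following numerical sketch (ours, for illustration only; it assumes constant data, i.e. a constant $\beta$ and hence a constant $\sigma=\beta+\alpha\mu$, and uses NumPy) computes the real roots of [@Gao/Ogden:08 (2.16)] and the corresponding $\overline{v}_{i}$ for the sample values $\nu=\mu=1$, $\alpha=3$, $\beta=\sqrt{5}$ that reappear in the counterexample further below.

```python
# Illustration (ours, not from the paper under discussion): real roots of
# (2*z/nu + alpha**2)*(mu + z)**2 = beta**2 and the corresponding v_i.
import numpy as np

mu, nu, alpha = 1.0, 1.0, 3.0
beta = np.sqrt(5.0)
sigma = beta + alpha * mu
eta = (nu * alpha**2 - 2 * mu)**3 / (27 * nu)     # here beta**2 < eta, so three real roots

# expanded cubic: (2/nu) z^3 + (4 mu/nu + alpha^2) z^2 + (2 mu^2/nu + 2 alpha^2 mu) z + alpha^2 mu^2 - beta^2 = 0
coeffs = [2 / nu,
          4 * mu / nu + alpha**2,
          2 * mu**2 / nu + 2 * alpha**2 * mu,
          alpha**2 * mu**2 - beta**2]
roots = np.roots(coeffs)
z1, z2, z3 = np.sort(roots[np.abs(roots.imag) < 1e-9].real)[::-1]

rho = -(mu + nu * alpha**2) / 3
print(z3, rho, z2, -mu, z1)   # ordering: -nu*alpha^2/2 <= z3 <= rho <= z2 <= -mu <= z1
print([(sigma + alpha * z) / (z + mu) for z in (z1, z2, z3)])   # the v_i = (sigma + alpha*z_i)/(z_i + mu)
```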
We agree that for $\tau^{2}>\eta$ the equation $\left(2\nu^{-1}\varsigma+\alpha^{2}\right)(\mu+\varsigma)^{2}=\tau^{2}$ has a unique real solution $\varsigma_{1}>-\mu$, while for $0{\leqslant}\tau^{2}{\leqslant}\eta$ the preceding equation has three real solutions $\varsigma_{1},\varsigma_{2},\varsigma_{3}$ with $$-\tfrac{1}{2}\nu\alpha^{2}{\leqslant}\varsigma_{3}{\leqslant}\rho{\leqslant}\varsigma_{2}{\leqslant}- \mu{\leqslant}\varsigma_{1},$$ where $\rho$ is given in Eq. (\[rho\]). Indeed, let $g:\mathbb{R}\rightarrow\mathbb{R}$ be defined by $g(\varsigma):=\left(2\nu^{-1}\varsigma+\alpha^{2}\right)(\mu+\varsigma)^{2}$. Then $g(\rho)=\eta$ and $$g^{\prime}(\varsigma)=2\nu^{-1}(\varsigma+\mu)^{2}+2\left(2\nu^{-1}\varsigma+ \alpha^{2}\right)(\mu+\varsigma)=6\nu^{-1}(\varsigma+\mu)(\varsigma-\rho).$$ The behavior and graph of $g$ are shown in Tables \[tab1\] and \[tab1-1\].

| $\varsigma$             | $-\infty$ |            | $-\tfrac{1}{2}\nu\alpha^{2}$ |            | $\rho$ |            | $-\mu$ |            | $+\infty$ |
|-------------------------|-----------|------------|------------------------------|------------|--------|------------|--------|------------|-----------|
| $g^{\prime}(\varsigma)$ |           | $+$        | $+$                          | $+$        | $0$    | $-$        | $0$    | $+$        |           |
| $g(\varsigma)$          | $-\infty$ | $\nearrow$ | $0$                          | $\nearrow$ | $\eta$ | $\searrow$ | $0$    | $\nearrow$ | $+\infty$ |

: The behavior of $g$.[]{data-label="tab1"}

![image](graph-g)

Note that for $\tau=0$ we have $\varsigma_{1}=\varsigma_{2}=-\mu$, $\varsigma_{3}=-\tfrac{1}{2}\nu\alpha^{2}$. For $\tau\in\mathbb{R}$ consider also the function $$h_{\tau}:\mathbb{R}\setminus\{-\mu\}\rightarrow\mathbb{R},\quad h_{\tau} (\varsigma):=-\frac{1}{2}\left[\frac{\tau^{2}}{\varsigma+\mu}+2\alpha\tau +\alpha^{2}(\varsigma+\mu)+\nu^{-1}\varsigma^{2}\right].$$ Note that $h_{0}$ is the restriction to $\mathbb{R}\setminus\{-\mu\}$ of the continuous function $\hat{h}_{0}:\mathbb{R}\rightarrow\mathbb{R}$ defined by $\hat{h}_{0}(\varsigma):=-\frac{1}{2}\left[\alpha^{2}(\varsigma+\mu) +\nu^{-1}\varsigma^{2}\right]$; clearly $\hat{h}_{0}(-\mu)=-\frac{1}{2}\nu^{-1}\mu^{2}$. Then $$h_{\tau}^{\prime}(\varsigma)=-\frac{1}{2}\left(-\frac{\tau^{2}}{(\varsigma+\mu)^{2}} +\alpha^{2}+2\nu^{-1}\varsigma\right)=-\frac{1}{2}\frac{g(\varsigma)-\tau^{2}} {(\varsigma+\mu)^{2}}\quad\forall\varsigma\in\mathbb{R}\setminus\{-\mu\}.$$ Taking into account the above discussion (note also the graph of $g$), the behavior of $h_{\tau}$ is presented in Table \[tab2\] for $\tau^{2}>\eta$ and in Table \[tab3\] for $0<\tau^{2}{\leqslant}\eta$.
$\varsigma$ $-\infty$ $-\mu$ $\varsigma_{1}$ $+\infty$ -------------------------------- ----------- ------------ ------------------------- ------------ --------------------------- ------------ ----------- $h_{\tau}^{\prime}(\varsigma)$ $+$ $|$ $+$ $0$ $-$ $0$ $h_{\tau}(\varsigma)$ $-\infty$ $\nearrow$ $^{+\infty}|_{-\infty}$ $\nearrow$ $h_{\tau}(\varsigma_{1})$ $\searrow$ $-\infty$ : The behavior of $h$ for $\tau^{2}>\eta$.[]{data-label="tab2"} $\varsigma$ $-\infty$ $\varsigma_{3}$ $\varsigma_{2}$ $-\mu$ $\varsigma_{1}$ $+\infty$ -------------------------------- ----------- ------------ --------------------------- ------------ --------------------------- ------------ ------------------------- ------------ --------------------------- ------------ ----------- $h_{\tau}^{\prime}(\varsigma)$ $+$ $+$ $0$ $-$ $0$ $+$ $|$ $+$ $0$ $-$ $h_{\tau}(\varsigma)$ $-\infty$ $\nearrow$ $h_{\tau}(\varsigma_{3})$ $\searrow$ $h_{\tau}(\varsigma_{2})$ $\nearrow$ $^{+\infty}|_{-\infty}$ $\nearrow$ $h_{\tau}(\varsigma_{1})$ $\searrow$ $-\infty$ : The behavior of $h$ for $0<\tau^{2}\le\eta$.[]{data-label="tab3"} For $\tau=0$ we have that $\hat{h}_{0}$ is increasing on $(-\infty,- \tfrac{1}{2}\nu\alpha^{2}]$ and decreasing on $[-\tfrac{1}{2}\nu\alpha^{2},+\infty)$. So, when $\beta^{2}>\eta$ on $(0,1)$ by taking $\tau=\beta(x)$ we obtain a unique (continuous) solution $\overline{\zeta}$ of [@Gao/Ogden:08 (2.16)] (with $\overline{\zeta}(x)>-\mu$ for every $x\in(0,1)$), while for $\beta^{2}{\leqslant}\eta$ on $(0,1)$ one obtains three continuous solutions $\overline{\zeta}_{1},\overline{\zeta}_{2},\overline{\zeta}_{3}$ of [@Gao/Ogden:08 (2.16)] satisfying $$-\tfrac{1}{2}\nu\alpha^{2}{\leqslant}\overline{\zeta}_{3}{\leqslant}\rho{\leqslant}\overline{\zeta}_{2} {\leqslant}-\mu{\leqslant}\overline{\zeta}_{1}\text{ on }[0,1].$$ \[rem3\]In the case $\beta^{2}{\leqslant}\eta,$ $\overline{\zeta}_{1}, \overline{\zeta}_{2},\overline{\zeta}_{3}$ are not the only possible solutions of [@Gao/Ogden:08 (2.16)] with $\zeta\in\mathcal{L}^{\infty}\subset\mathcal{L}^{2}.$ More precisely, the general measurable solution $\zeta:[0,1]\rightarrow\mathbb{R}$ of [@Gao/Ogden:08 (2.16)] has the form $\zeta(x)=\overline{\zeta}_{j}(x)$ for $x\in B_{j},$ $j=1,2,3$, where $B_{1},B_{2},B_{3}$ are measurable pairwise disjoint subsets of $[0,1]$ such that $[0,1]=B_{1}\cup B_{2}\cup B_{3}$. This shows that none of the $\mathcal{L}^{2}$-solutions of [@Gao/Ogden:08 (2.16)] is isolated in $\mathcal{L}^{2}$ because all measurable solutions of [@Gao/Ogden:08 (2.16)] are in $\mathcal{L}^{\infty}$ and given a measurable solution of [@Gao/Ogden:08 (2.16)] one can modify it on a sufficiently small subset (by interchanging the values $\overline{\zeta}_{j}$) so that it stays still a solution and close enough. In the sequel we assume that $\beta\neq0$, and so $\lambda(B_{0}^{c})>0$; the case $\beta=0$ is completely uninteresting. *Discussion of [@Gao/Ogden:08 (3.9)].* Assume that $\beta^{2}>\eta$ on $(0,1)$. As we have seen above, $P_{s}^{d}(\zeta)\in\mathbb{R}$ only for $\zeta\in A_{1}\supset A_{0}$, so considering $\sup_{\zeta\in \mathcal{S}_{a}}P_{s}^{d}(\zeta)$ in [@Gao/Ogden:08 (3.9)] has no sense. In the sequel we find sets on which [@Gao/Ogden:08 (3.9)] holds and then try to further enlarge them. In this case the unique solution $\overline{\zeta}$ of [@Gao/Ogden:08 (2.16)] described above has $\overline{\zeta}+\mu>0$, and so $E_{\overline{\zeta}}=\emptyset$. 
According to Theorem \[analysis\] (i), (iii) (b) we have relation (\[equa\]) with $\overline{v}=v_{\overline{\zeta}}=\alpha+\beta/(\mu+\overline{\zeta})$. This shows that [@Gao/Ogden:08 (3.9)] holds if one replaces $\max_{\zeta\in\mathcal{S}_{a}}P_{s}^{d}(\zeta)$ by $\max_{\zeta\in A_{0}, \zeta\ge-\mu}P_{s}^{d}(\zeta)$ (note that $\{\zeta\in A_{0}\mid\zeta\ge-\mu\}\subset\mathcal{S}_{a}$ because $\nu\alpha^{2}>2\mu$). In fact we have that [@Gao/Ogden:08 (3.9)] holds if one replaces $\max_{\zeta\in\mathcal{S}_{a}}P_{s}^{d}(\zeta)$ by $\max_{\zeta\in A_{1},\zeta\ge-\mu}P_{s}^{d}(\zeta)$. Indeed, consider $\zeta\in A_{1}$ with $\zeta\ge-\mu$. Hence $E_{\zeta}$ is negligible since $B_{0}=\emptyset$; so we may (and do) suppose that $\zeta$ is finite-valued and $E_{\zeta}=\emptyset$. For $x\in[0,1]$, from the behavior of $h_{\tau}$ with $\tau=\beta(x)$ (see Table \[tab2\]), we obtain that $h_{\beta(x)}(\zeta(x)){\leqslant}h_{\beta(x)}(\overline{\zeta}(x))$, whence $$P_{s}^{d}(\zeta)=\int_{0}^{1}h_{\beta(x)}(\zeta(x))dx{\leqslant}\int_{0}^{1} h_{\beta(x)}(\overline{\zeta}(x))dx=P_{s}^{d}(\overline{\zeta}).$$ Next we study whether the last equality in [@Gao/Ogden:08 (3.9)] holds when one replaces $\max_{\zeta\in\mathcal{S}_{a}}P_{s}^{d}(\zeta)$ by $\max_{\zeta\in A_{1}^{0}}P_{s}^{d}(\zeta)$, where $A_{1}^{0}:=\{\zeta\in A_{1}\mid\zeta{\geqslant}-\tfrac{1}{2}\nu\alpha^{2}\}$. Unfortunately, that is not true. Indeed, consider $\zeta_{n}(x)=-\mu-\gamma x$ for $x\in[n^{-1},1]$ and $\zeta_{n}(x)=-\mu-\gamma n^{-1}$ for $x\in[0,n^{-1})$, where $n\ge1$ and $0<\gamma<\tfrac{1}{2}\nu\alpha^{2}-\mu$. Clearly $-\mu-\gamma/n{\geqslant}\zeta_{n}{\geqslant}-\mu-\gamma>-\tfrac{1}{2}\nu\alpha^{2}$ on $[0,1]$, and so $\zeta_{n}\in A_{1}^{0}$ for every $n\ge1$. Moreover $$-\int_{0}^{1}\frac{\beta^{2}}{\zeta_{n}+\mu}dx{\geqslant}\int_{1/n}^{1} \frac{\beta^{2}(x)}{\gamma x}dx{\geqslant}\frac{\eta}{\gamma}\ln n\rightarrow\infty,$$ which proves that $\sup_{\zeta\in A_{1}^{0}}P_{s}^{d}(\zeta)=+\infty.$ In conclusion [@Gao/Ogden:08 (3.9)] holds if $\mathcal{S}_{a}$ is replaced by any one of the sets $\left\{ \zeta\in A_{0}\mid\zeta\ge-\mu\right\} $, $\left\{ \zeta\in A_{1}\mid\zeta\ge-\mu\right\} $. Actually the argument above shows that [@Gao/Ogden:08 (3.9)] holds for $\beta^{2}>0$ on $(0,1)$ if $\mathcal{S}_{a}$ is replaced by any one of the sets $\left\{ \zeta\in A_{0}\mid\zeta\ge-\mu\right\} $, $\left\{ \zeta\in A_{1}\mid\zeta\ge-\mu\right\} $, $\left\{ \zeta\in A_{0}\mid\zeta>-\mu\right\} $, $\left\{ \zeta\in A_{1}\mid\zeta>-\mu\right\} $ with $\overline{\zeta}=\overline{\zeta}_{1}>-\mu$ (when $\beta^{2}\le\eta$). The fact that $\overline{v}$ is a minimum point of $\widehat{P}_{s}$ is confirmed by $d^{2}\widehat{P}_{s} (\overline{v})(h,h)=3\int_{0}^{1}(\overline{\zeta}-\rho)h^{2}dx>0$ for every $h\in\mathcal{L}^{4}\setminus\{0\}$ [\[]{}see (\[d2ps-cr\])[\]]{}. *Discussion of [@Gao/Ogden:08 (3.10)].* Assume that $\beta^{2}{\leqslant}\eta$ on $(0,1)$. As above, if $0<\beta^{2}$ on $(0,1)$ then [@Gao/Ogden:08 (3.10)] holds if $\{\zeta\mid\zeta>-\mu\}$ is replaced by any one of the sets $\left\{ \zeta\in A_{0}\mid\zeta>-\mu\right\} $, $\left\{ \zeta \in A_{1}\mid\zeta>-\mu\right\} $. However, $P_{s}^{d}(\zeta)$ is not defined for every $\zeta\in\mathcal{L}^{2}$ with $\zeta>-\mu$, so the previous choices are the only natural ones.
Indeed, take $\zeta(x):=-\mu+x\beta^{2}(x)$ for $x\in(0,1)$; then $\zeta\in\mathcal{L}^{2}\setminus A_{1}$ and $\zeta>-\mu$ on $(0,1).$ Assume now that $\lambda(B_{0})>0$ (which happens if $\beta$ is zero on a nontrivial interval). In this case $E_{\overline{\zeta}_{1}}=B_{0}$. Consider $\zeta\in A_{1}$ with $\zeta>-\mu$; hence $E_{\zeta}=\emptyset\subset B_{0}$. For $x\in B_{0}^{c}$ we have that $h_{\beta(x)}(\zeta(x)){\leqslant}h_{\beta(x)}(\overline{\zeta}_{1}(x))$ (see Table \[tab3\]), while for $x\in B_{0}$, because $h_{0}$ is decreasing on $[-\tfrac{1}{2}\nu a^{2},+\infty)\setminus\{-\mu\}$, we have that $$h_{\beta(x)}(\zeta(x))=-\tfrac{1}{2}\left[\alpha^{2}(\zeta(x)+\mu)+\nu^{-1} \zeta(x)^{2}\right]{\leqslant}-\tfrac{1}{2}\nu^{-1}\mu^{2}.$$ Together with relation (\[pdsc2\]) applied for $\overline{\zeta}_{1}$, it follows that $P_{s}^{d}(\zeta){\leqslant}P_{s}^{d}(\overline{\zeta}_{1})$. Taking $\varepsilon\in(0,1)$ and $\zeta_{\varepsilon}(x):=\overline{\zeta}_{1}(x)$ for $x\in B_{0}^{c}$ and $\zeta_{\varepsilon}(x):=-\mu+\varepsilon$ for $x\in B_{0}$ we see that $\zeta_{\varepsilon}\in A_{0}\subset A_{1}$ (since $\overline{\zeta}_{1}\in A_{0}$), $\zeta_{\varepsilon}>-\mu$, and $$\begin{aligned} P_{s}^{d}(\zeta_{\varepsilon}) & =\int_{0}^{1}h_{\beta(x)} (\zeta_{\varepsilon}(x))dx=\int_{B_{0}^{c}}h_{\beta(x)}(\overline{\zeta}_{1} (x))dx+\int_{B_{0}}h_{0}(-\mu+\varepsilon)dx\\ & =P_{s}^{d}(\overline{\zeta}_{1})-\tfrac{1}{2}[\nu^{-1}\varepsilon^{2} +\nu^{-1}(\nu\alpha^{2}-2\mu)\varepsilon]\lambda(B_{0}).\end{aligned}$$ This implies that $\sup_{\zeta\in A_{1},\zeta>-\mu}P_{s}^{d}(\zeta)= \sup_{\zeta\in A_{0},\zeta>-\mu}P_{s}^{d}(\zeta)=P_{s}^{d}(\overline{\zeta}_{1}).$ In the present case [\[]{}that is, $\lambda(B_{0})>0$[\]]{} $\overline{v}_{1}$ is not uniquely determined on $B_{0}$. Taking $\overline{v}_{1}= v_{\overline{\zeta}_{1}}$, i.e., $\overline{v}_{1}(s):=\frac{\sigma+\alpha\overline{\zeta}_{1}}{\overline{\zeta}_{1}+\mu}$ for $s\in B_{0}^{c}$ and $\overline{v}_{1}(s):=\alpha$ for $s\in B_{0}$ (the natural choice due to the convention $0/0:=0$), we see from (\[pps\]) applied for $\zeta=\overline{\zeta}_{1}$ and $v=0$ that $\widehat{P}_{s}(\overline{v}_{1})\neq P_{s}^{d}(\overline{\zeta}_{1})$, and so [@Gao/Ogden:08 (3.10)] does not hold. Again from (\[pps\]), we see that in order to have that $\widehat{P}_{s}(\overline{v}_{1})=P_{s}^{d}(\overline{\zeta}_{1})$ we need to have $\overline{v}_{1}:=v_{\overline{\zeta}_{1}}+\chi_{B_{0}}v$ with $v\in\mathcal{L}^{4}$ and $v^{2}-\alpha^{2}+2\nu^{-1}\mu=0$ a.e. in $E_{\overline{\zeta}_{1}}=B_{0}$ . In this case, according to Theorem \[analysis\] (iii)(a), $(\overline{v}_{1},\overline{\zeta}_{1})$ is a critical point of $\widehat{\Xi}$, and so [@Gao/Ogden:08 (3.10)] holds using (\[equa\]) if we replace $\max_{\zeta>-\mu}P_{s}^{d}(\zeta)$ by $\sup_{\zeta\in A_{0},\zeta>-\mu}P_{s}^{d}(\zeta)$ or $\sup_{\zeta\in A_{1}, \zeta>-\mu}P_{s}^{d}(\zeta).$ Again, in this case $d^{2}\widehat{P}_{s}(\overline{v}_{1})(h,h)=3\int_{0}^{1} (\overline{\zeta}_{1}-\rho)h^{2}dx>0$ for every $h\in\mathcal{L}^{4}\setminus\{0\}$ [\[]{}see (\[d2ps-cr\])[\]]{} as a confirmation of $\widehat{P}_{s}(\overline{v}_{1})=\min_{v\in\mathcal{L}^{4}}\widehat{P}_{s}(v)$. *Discussion of [@Gao/Ogden:08 (3.11)].* Assume that $\beta^{2}{\leqslant}\eta$ on $(0,1)$. 
It is easy to show that $\{\zeta\in\mathcal{L}^{2}\mid\rho <\zeta<-\mu\}\not\subset A_{1}$, which proves that $\{\zeta\in\mathcal{L}^{2}\mid\overline{\zeta}_{3}<\zeta<-\mu\}\not\subset A_{1}$; take for example $\beta^{2}>0$ and $\zeta(x)=-\mu+\frac{\rho+\mu}{\eta}x\beta^{2}(x)$, $x\in(0,1)$. This shows that $\min_{\overline{\zeta}_{3}<\zeta<-\mu}P_{s}^{d}(\zeta)$ in [@Gao/Ogden:08 (3.11)] does not make sense. Therefore in [@Gao/Ogden:08 (3.11)] we replace the set $\{\zeta\in\mathcal{L}^{2} \mid\overline{\zeta}_{3}<\zeta<-\mu\}$ by $A_{1}^{2}:=\{\zeta\in A_{1}\mid\overline{\zeta}_{3}<\zeta<-\mu\}$. Here again $B_{0}=E_{\overline{\zeta}_{2}}$. Since $\overline{\zeta}_{2}(x)$ is the unique minimum point of $h_{\beta(x)}$ on $[\overline{\zeta}_{3}(x),-\mu)$ for $x\in B_{0}^{c}$ and $h_{0}$ is decreasing on $[\overline{\zeta}_{3}(x),-\mu)=[-\tfrac{1}{2}\nu\alpha^{2},-\mu)$ for $x\in B_{0}$, we obtain that for every $\zeta\in A_{1}^{2}$ we have $$P_{s}^{d}(\zeta)=\int_{0}^{1}h_{\beta(x)}(\zeta(x))dx=\int_{B_{0}^{c}} \ldots+\int_{B_{0}}\ldots\ge\int_{B_{0}^{c}}h_{\beta(x)} (\overline{\zeta}_{2}(x))dx+\int_{B_{0}}\hat{h}_{0}(-\mu)dx=P_{s}^{d}(\overline{\zeta}_{2}).$$ As above we obtain that $\inf_{\zeta\in A_{1}^{2}}P_{s}^{d} (\zeta)=P_{s}^{d}(\overline{\zeta}_{2})$ after taking $0<\varepsilon<-\mu-\rho$ and considering $\zeta_{\varepsilon}\in A_{1}^{2}$ given by $\zeta_{\varepsilon}(x):=\overline{\zeta}_{2}(x)$ for $x\in B_{0}^{c}$ and $\zeta_{\varepsilon}(x):=-\mu-\varepsilon$ for $x\in B_{0}$. As seen in the previous discussion (recall also (\[pps\])), in order to have $\widehat{P}_{s}(\overline{v}_{2})=P_{s}^{d}(\overline{\zeta}_{2})$ in [@Gao/Ogden:08 (3.10)] we must take $\overline{v}_{2}:=v_{\overline{\zeta}_{2}}+\chi_{B_{0}}v$ with $v\in\mathcal{L}^{4}$ and $v^{2}-\alpha^{2}+2\nu^{-1}\mu=0$ a.e. in $E_{\overline{\zeta}_{2}}=B_{0}.$ With $\overline{v}_{2}$ chosen this way we have $$d^{2}\widehat{P}_{s}(\overline{v}_{2})(h,h)=3\int_{0}^{1}(\overline{\zeta}_{2}- \rho)h^{2}dx{\geqslant}0\quad\forall h\in\mathcal{L}^{4}.$$ However, in general, this $\overline{v}_{2}$ is not a local minimum point of $\widehat{P}_{s}$. First this is due to the fact that for $\beta^{2}=\eta$, we have $\overline{\zeta}_{2}=\rho$, $E_{\overline{\zeta}_{2}} =B_{0}=\emptyset$, and by direct computation the polynomial that governs $\widehat{P}_{s}$ (i.e. $\widehat{P}_{s}(v)=\int_{0}^{1}p(v(x))dx$), namely $$p(y):=\tfrac{1}{2}\mu y^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}y^{2}-\alpha y\right)^{2}-(\alpha\mu+\beta)y\label{pol-p}$$ has $v_{0}:=\alpha+\beta/(\rho+\mu)$ a critical point which is not a local extremum since $p'(v_{0})=p''(v_{0})=0$, $p'''(v_{0})=3\nu\beta/(\rho+\mu)\neq0$ and these facts imply that $v_{0}$ is not a local extremum point for $p$. This implies that whenever $\beta^{2}=\eta$, $v_{\rho}(x)=v_{0}$, $x\in(0,1)$ is a critical point but not a local extremum point of $\widehat{P}_{s}$. Based on the previous facts it is easy to build a counterexample by taking $\beta$ such that $\beta^{2}=\eta$ on a nonempty open sub-interval of $[0,1]$. Hence [@Gao/Ogden:08 (3.11)] is not true even with the correct choice of $\overline{v}_{2}$ and with $\{\zeta\in\mathcal{L}^{2}\mid\overline{\zeta}_{3}<\zeta<-\mu\}$ replaced by $A_{1}^{2}$ due to the failure of its first equality. The next natural question is whether $\overline{v}_{2}=v_{\overline{\zeta}_{2}}+\chi_{B_{0}}v$ with $v\in\mathcal{L}^{4}$ and $v^{2}-\alpha^{2}+2\nu^{-1}\mu=0$ a.e. 
in $E_{\overline{\zeta}_{2}}=B_{0}$ is a local minimum point of $\widehat{P}_{s}$ when $0<\beta^{2}<\eta$ on $(0,1)$ because in this case $d^{2}\widehat{P}_{s}(\overline{v}_{2})(h,h)>0$ for every $h\in\mathcal{L}^{4}\setminus\{0\}.$ The answer is still negative as the next example shows. Take $\nu:=\mu:=1,$ $\alpha:=3$ and $\beta:=\sqrt{5}$ (a constant function). Note that $\eta=343/27\simeq12.7>\beta^{2}$. Then the equation $g(\varsigma)=\beta^{2}$ has the solutions $\varsigma_{1}=(\sqrt{65}-9)/4,$ $\varsigma_{2}=-2$ and $\varsigma_{3}=-(\sqrt{65}+9)/4.$ Hence $\overline{\zeta}_{2}$ is the constant function $-2$ and so $E_{\overline{\zeta}_{2}}=B_{0}=\emptyset$. It follows that $P_{s}^{d}(\overline{\zeta}_{2})=h_{\sqrt{5}}(-2)=-3\sqrt{5}$ and $\overline{v}_{2}(x)=v_{\overline{\zeta}_{2}}(x)=y_{0}:=3-\sqrt{5}.$ Moreover, $$p\left(y_{0}+h\right)-p(y_{0})=\tfrac{1}{8}h^{2}\big(h-2\sqrt{5}+2\big) \big(h-2\sqrt{5}-2\big)\quad(h\in\mathbb{R}),$$ where $p$ is the polynomial in (\[pol-p\]). Consider $\varepsilon\in(0,1)$ and $v:[0,1]\rightarrow\mathbb{R}$ defined by $v(x):=y_{0}+2\sqrt{5}$ for $x\in[0,\varepsilon]$ and $v(x):=y_{0}$ for $x\in(\varepsilon,1].$ Then $\left\Vert v-\overline{v}_{2}\right\Vert _{\mathcal{L}^{4}}=2\sqrt{5}\varepsilon^{1/4}$ and $\widehat{P}_{s}(v)-\widehat{P}_{s}(\overline{v}_{2})=-10\varepsilon<0,$ which proves that $\overline{v}_{2}$ is not a local minimum of $\widehat{P}_{s}.$ *Discussion of [@Gao/Ogden:08 (3.12)].* Assume that $\beta^{2}{\leqslant}\eta$ on $(0,1)$. First note that $A_{1}^{3}:=\{\zeta\in\mathcal{L}^{2}\mid-\tfrac{1}{2}\nu\alpha^{2}< \zeta<\overline{\zeta}_{2}\}\subset A_{0}\subset A_{1}$ since $\overline{\zeta}_{2}\in A_{0}$, and so $P_{s}^{d}(\zeta)$ makes sense on $A_{1}^{3}$. More precisely, for $\zeta\in A_{1}^{3}$ we have that $$\left(\frac{\beta(x)}{\zeta(x)+\mu}\right)^{2}<\left(\frac{\beta(x)}{\overline{\zeta}_{2}(x) +\mu}\right)^{2}=2\nu^{-1}\overline{\zeta}_{2}(x)+\alpha^{2}\quad\forall x\in B_{0}^{c}$$ and $\frac{\beta(x)}{\zeta(x)+\mu}=0$ for $x\in B_{0}$; so $\frac{\beta}{\zeta +\mu}\in\mathcal{L}^{4}$, whence $\zeta\in A_{0}$. Since $\overline{\zeta}_{3}(x)$ is the maximum point of $h_{\beta(x)}$ on $[-\tfrac{1}{2}\nu\alpha^{2},\overline{\zeta}_{2}(x)]$ for $x\in B_{0}^{c}$ and $h_{0}$ is decreasing on $[-\tfrac{1}{2}\nu\alpha^{2},-\mu)$ and $\overline{\zeta}_{3}(x)=-\tfrac{1}{2}\nu\alpha^{2}$ for $x\in B_{0}$, we obtain similarly that $P_{s}^{d}(\zeta){\leqslant}P_{s}^{d}(\overline{\zeta}_{3})$ for every $\zeta\in A_{1}^{3}$ or equivalently $\sup_{\zeta\in A_{1}^{3}}P_{s}^{d}(\zeta)\le P_{s}^{d}(\overline{\zeta}_{3})$. In a similar manner one can prove $\sup_{\zeta\in A_{1}^{3}}P_{s}^{d}(\zeta) =P_{s}^{d}(\overline{\zeta}_{3})$ (see previous discussions). Since $\overline{\zeta}_{3}$ is not in $A_{1}^{3}$ for those $\beta$ with $\beta^{2}(x)=0$ or $\beta^{2}(x)=\eta$ at some $x\in(0,1)$, one must replace $\max_{\zeta\in A_{1}^{3}}P_{s}^{d}(\zeta)$ by $\sup_{\zeta\in A_{1}^{3}} P_{s}^{d}(\zeta)$. This time $\widehat{P}_{s}(\overline{v}_{3})=P_{s}^{d}(\overline{\zeta}_{3})$ because $E_{\overline{\zeta}_{3}}=\emptyset$. However, as previously seen for [@Gao/Ogden:08 (3.11)], in general $\overline{v}_{3}$ is not a local maximum point of $\widehat{P}_{s}$. So [@Gao/Ogden:08 (3.12)] is not true under the hypotheses of [@Gao/Ogden:08 Th. 3] again because its first equality does not hold. Conclusions =========== - The statement of [@Gao/Ogden:08 Th. 
3] is ambiguous because $P_{s}^{d}(\zeta)$ is not defined for all $\zeta$ to which it is referred and $\overline{u}_{1}$ and $\overline{u}_{2}$ are not clearly and properly defined. - The left equalities in [@Gao/Ogden:08 (3.11)] and [@Gao/Ogden:08 (3.12)] are not true in general even when proper choices are considered for the sets where the maximization or minimization of $P_{s}$ happens and correct choices of $\overline{u}_{i}$ are taken. - For proper choices of the sets where the maximization or minimization of $P_{s}^{d}$ is considered, the right equalities in relations (3.9)–(3.12) of [@Gao/Ogden:08 Th. 3] follow by very elementary arguments. - Note that in Gao’s book [@Gaoo-book page 140] it is said: “For any given critical point $(\overline{u},\overline{\varsigma})\in\mathcal{L}_{c}$, we let $\mathcal{U}_{r}\times\mathcal{T}_{r}$ be its neighborhood such that, on $\mathcal{U}_{r}\times\mathcal{T}_{r},$ $(\overline{u},\overline{\varsigma})$ is the only critical point of $L$. The following result is of fundamental importance in nonconvex analysis.****\ **Theorem 3.5.2 (Triality Theorem)** Suppose that $(\overline{u}, \overline{\varsigma})\in\mathcal{L}_{c}$, and $\mathcal{U}_{r}\times\mathcal{T}_{r}$ is a neighborhood of $(\overline{u}, \overline{\varsigma})...$”****\ We think that such a result was used for proving [@Gao/Ogden:08 Th. 3]. Taking into account Remark \[rem3\], we see that, for $\beta^{2}{\leqslant}\eta$, $\widehat{\Xi}$ has no isolated critical points; hence the previous theorem cannot be used as an argument for [@Gao/Ogden:08 Th. 3]. Having in view this situation, it would be interesting to know the precise result the authors used to derive [@Gao/Ogden:08 Th. 3]. The paper was submitted to “The Quarterly Journal of Mechanics and Applied Mathematics” in February 2010 under the title ‘On a result about global minimizers and local extrema in phase transition’. Besides the title, the only difference is that in the Introduction instead of $(\mathcal{P}_{s})$ : $\min\limits _{u\in\mathcal{U}_{s}}{\displaystyle \left\{ P_{s}(u)=\int_{0}^{1}\Bigl[\tfrac{1}{2}\mu u_{x}^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}u_{x}^{2}-\alpha u_{x}\right)^{2}\Bigr]dx-F(u)\right\} }$, $\quad$(3.2)” there was $(\mathcal{P}_{s})$ : $\min\limits _{u\in\mathcal{U}_{s}}{\displaystyle \left\{ P_{s}(u)=\int_{0}^{1}\Bigl[\tfrac{1}{2}\mu u_{x}^{2}+\tfrac{1}{2}\nu\left(\tfrac{1}{2}u_{x}^{2}-\alpha u_{x}\right)^{2}dx-F(u)\Bigr]\right\} }$, $\quad$(3.2)”. [7]{} D. Y. Gao, Duality, triality and complementary extremum principles in non-convex parametric variational problems with applications, *IMA J. Appl. Math.* **61** (1998) 199–235. D. Y. Gao, General analytic solutions and complementary variational principles for large deformation nonsmooth mechanics, *Meccanica* **34** (1999) 169–198. D. Y. Gao, Analytic solutions and triality theory for non-convex and nonsmooth variational problems with applications, *Nonlinear Anal.* **42** (2000) 1161–1193. D. Y. Gao, *Duality Principles in Nonconvex Systems: Theory, Methods and Applications* (Kluwer, Dordrecht 2000). D. Y. Gao, R. W. Ogden, Multiple solutions to non-convex variational problems with implications for phase transitions and numerical computation, *Quart. J. Mech. Appl. Math.* **61** (2008) 497–522. D. Y. Gao, G. Strang, Geometric nonlinearity: Potential energy, complementary energy, and the gap function, *Quart. Appl. Math.* **47** (1989) 487–504. H. L. Royden, *Real analysis (3rd edition)* (Macmillan Publishing Company, New York 1988).
--- author: - Carlo Graziani title: 'A “Spiffy” Trigger for Gamma-Ray Bursts' --- Introduction ============ The search for untriggered GRBs has been an active field of research at least since the public release of BATSE data. Most recently, Kommers et al. [@kommers01] and Stern et al. [@stern01] have described searches of BATSE data directed at revealing GRBs that occurred without being detected by the BATSE on-board triggers, either for operational reasons or because their spectral or temporal morphologies were poor fits to the on-board trigger criteria. Naturally, the same considerations apply to GRB detection by HETE. The HETE mission deploys an unprecedentedly varied set of trigger criteria — the FREGATE DSP trigger [@atteia02] uses four timescales and operates in two energy bands, while the WXM XG trigger [@fenimore01; @tavenner02] is typically configured to apply thirty or so criteria, some on WXM data and others on FREGATE data. Nevertheless, the variety of GRB morphologies, and operational considerations, can result in GRBs that are not detected in flight. It is important to develop a strategy to mine HETE survey data for such untriggered GRBs. Typical ground searches for untriggered bursts use detection methods that largely mirror on-board trigger algorithms [@kommers01; @stern01]. Background and burst samples are specified in terms of acquisition times that are of fixed duration and of fixed elapsed time from each other. Each GRB timescale — risetime or duration — is probed by a different fixed choice of these parameters. Each such fixed set of time windows is then swept through the time series being probed for transient events, searching for samples that maximize the signal-to-noise of the background-subtracted burst sample. This scheme has the disadvantage of being rather inflexible about the timescales that are probed. This inflexibility is especially troublesome when seeking weak signals, for which inaptly chosen burst or background samples may lead to a signal dilution that prevents detection. In this work we describe an alternative approach that has been quite successful in identifying extremely weak events. In this approach, the background and burst samples are treated as free parameters, which are varied using the downhill simplex method of Nelder & Mead [@nelder65] to maximize the signal-to-noise ratio of the background-subtracted burst sample. Implementation ============== The operation of the code, [spiffy-trigger]{}, is illustrated in Figure \[timefig\]. The trigger operates as a simplified “bracket trigger” [@fenimore01; @tavenner02], in that the background is estimated using samples before and after the burst sample. The background is assumed constant, so no interpolation (linear or otherwise) is performed to obtain the background rate during the burst sample. This restriction is not an essential feature of the method, but merely a simplification. The two background intervals are restricted to remain equidistant from the burst sample, so as to prevent the maximization procedure from exploiting a monotonic increase or decrease in the background rate to estimate an erroneously low background, by driving one of the background samples to a region of lower background without driving the other to a region of higher background. The code operates on a time-series of integer counts. It advances a trigger window of fixed duration through the time series by steps of size [$\tau_{\mbox{skip\ }} $]{}.
It sets up a burst sample interval, of duration [$\tau_{\mbox{bu\ }} $]{}, bracketed at an elapsed time [[$\tau_{\mbox{el\ }} $]{}]{} by two background sample intervals of duration [$\tau_{\mbox{bk\ }} $]{}, the second of which ends at time [$t_{\mbox{end\ }} $]{}. It calculates the SNR for the burst sample, assuming a background rate calculated by a weighted average of the count rates in the two background intervals. The SNR is computed as follows: Assume for the sake of generality that the two background accumulation times may differ, so that we accumulate [$n_{\mbox{bk1\ }} $]{}counts in the first background during an accumulation time [$\tau_{\mbox{bk1\ }} $]{}, and [$n_{\mbox{bk2\ }} $]{}counts in the later background accumulation time [$\tau_{\mbox{bk2\ }} $]{}. Denoting the estimated background counts during the burst sample by [$\mu_{\mbox{bu\ }} $]{}, and assuming the Gaussian approximation to the Poisson distribution, it is a straightforward exercise in Gaussian estimation to show that $$\begin{aligned} {\ensuremath{\mu_{\mbox{bu\ }} }}&=&\frac{{\ensuremath{\tau_{\mbox{bk1\ }} }}+{\ensuremath{\tau_{\mbox{bk2\ }} }}}{{\ensuremath{\tau_{\mbox{bu\ }} }}}\,{\ensuremath{\Sigma^2\ }},\\ {\ensuremath{\Sigma^2\ }}&=&{\ensuremath{\tau_{\mbox{bu\ }} }}^2 \left(\frac{{\ensuremath{\tau_{\mbox{bk1\ }} }}^2}{{\ensuremath{n_{\mbox{bk1\ }} }}}+\frac{{\ensuremath{\tau_{\mbox{bk2\ }} }}^2}{{\ensuremath{n_{\mbox{bk2\ }} }}}\right)^{-1},\end{aligned}$$ where [$\Sigma^2\ $]{}is the variance in the estimate [$\mu_{\mbox{bu\ }} $]{}. Denote by [$n_{\mbox{bu\ }} $]{}the counts that we accumulate during the burst sample. Then the net signal in the burst sample is $s={\ensuremath{n_{\mbox{bu\ }} }}-{\ensuremath{\mu_{\mbox{bu\ }} }}$. The variance in $s$ is the sum of [$\Sigma^2\ $]{}and the variance in [$n_{\mbox{bu\ }} $]{}. Triggering is essentially hypothesis testing, with the null hypothesis consisting of the assumption that the count rate in the burst sample is the same as what is estimated using the background samples. Thus the appropriate choice for the variance of [$n_{\mbox{bu\ }} $]{}is “model variance”, that is ${\ensuremath{\sigma_{\mbox{bu\ }}^2 }}={\ensuremath{\mu_{\mbox{bu\ }} }}$. Thus the SNR of the burst sample is $$\mbox{SNR}=\frac{{\ensuremath{n_{\mbox{bu\ }} }}-{\ensuremath{\mu_{\mbox{bu\ }} }}}{\left({\ensuremath{\mu_{\mbox{bu\ }} }}+{\ensuremath{\Sigma^2\ }}\right)^{1/2}}. \label{snreq}$$ This is the quantity that [spiffy-trigger]{} endeavors to maximize. The code uses the simplex method to vary the four parameters [$t_{\mbox{end\ }} $]{}, [$\tau_{\mbox{bk\ }} $]{}, [$\tau_{\mbox{el\ }} $]{}, and [$\tau_{\mbox{bu\ }} $]{}, which are viewed by the simplex minimization routine as continuous parameters. A very lax convergence criterion is imposed — the absolute variation of the SNR must be less than 0.1 across the simplex — because in triggering there is no point in determining the SNR to great accuracy, and because we don’t want to spend many CPU cycles chasing noise. The parameter [$t_{\mbox{end\ }} $]{}is constrained to be later than the end of the trigger window in the previous invocation. Consequently, the arrangement of burst and background samples “accordions out” backwards in time from the current time, without repeating choices of intervals made during previous iterations. When there is no transient event in the data, the simplex will typically not wander very far from its initial configuration. 
On the other hand if there is a transient event, and the initial simplex includes a vertex corresponding to a configuration in which the burst sample even partially includes the event, the simplex will rapidly climb the SNR slope, dynamically adjusting its timescales until the event is well-bracketed. Since the simplex does not wander far if it doesn’t find much at the outset, it is important to ensure that [$\tau_{\mbox{skip\ }} $]{}is not so large that a short event may “fall between the cracks” — that is, fail to have any of its constituent time samples included in a burst sample probed by the initial simplex. It is therefore a good idea to ensure that at simplex initialization, ${\ensuremath{\tau_{\mbox{bu\ }} }}>{\ensuremath{\tau_{\mbox{skip\ }} }}$ for at least one of the simplex vertices. This ensures that every data sample passes through the burst sample of at least one initial simplex parameter vertex. Constraints on the time parameters are imposed by making the SNR function return a large negative value when the constraints are violated. The previously-discussed constraint on the parameter [$t_{\mbox{end\ }} $]{}is enforced in this way. The code also uses this parameter-constraint mechanism to prevent intervals from encroaching upon each other, to ensure that [$\tau_{\mbox{bk\ }} $]{}, [$\tau_{\mbox{el\ }} $]{}, and [$\tau_{\mbox{bu\ }} $]{}remain positive-valued, and to keep all intervals inside the current trigger window. Other useful constraints that it is good practice to enforce are a minimum value for [$\tau_{\mbox{el\ }} $]{}(so that burst and background samples are well-separated), a minimum duration for [$\tau_{\mbox{bk\ }} $]{}(so as to minimize the risk of the background nestling into a low fluctuation), and a maximum duration [$\tau_{\mbox{bu\ }} $]{}(so as to minimize the risk of triggering on very long duration trends in the background). Deployment ========== [spiffy-trigger]{} is currently used in three different contexts within the HETE project: - The Chicago ground location pipeline [@graziani02] uses [spiffy-trigger]{} to identify the burst sample time with maximal signal-to-noise in the WXM data. This sample is used throughout the subsequent location analysis. - A robot script that runs after every downlink uses [spiffy-trigger]{} to search for untriggered bursts in FREGATE band C (40-300 keV) 1.3s resolution survey data. During normal HETE operation, it tends to see about 1 possible GRB per week, above and beyond detecting all triggers picked up in flight that are sufficiently hard, and long (or short but bright) to register at this timescale and in this energy band. Figure \[robotgrb\] shows an example of such an event, which was confirmed by BeppoSax. - The general untriggered burst search described by Butler & Doty [@butler02] uses [spiffy-trigger]{} in parallel to Butler & Doty’s wavelet trigger, and runs on all survey data products. GRB011212 was in fact detected on the ground in this pipeline, by both the wavelet algorithm and by [spiffy-trigger]{}. Conclusions =========== The [spiffy-trigger]{} algorithm can probe a wide spectrum of burst timescales. It is still possible that initialization with a very short [$\tau_{\mbox{bu\ }} $]{}might miss a very long, slow-rising event, or that a very long initial [$\tau_{\mbox{bu\ }} $]{}might cause the SNR of a weak, short event to be too diluted to register before convergence is reached. 
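The procedure described in the Implementation section can be condensed into a short sketch. The following is our own simplified illustration (synthetic data, illustrative parameter values, and standard NumPy/SciPy routines; it is not the HETE flight code): it evaluates the SNR of Eq. (\[snreq\]) for a parameter vector ([$t_{\mbox{end\ }} $]{}, [$\tau_{\mbox{bk\ }} $]{}, [$\tau_{\mbox{el\ }} $]{}, [$\tau_{\mbox{bu\ }} $]{}) and maximizes it with the Nelder-Mead downhill simplex method, with constraint violations handled by returning a large negative value as described above.

```python
# Simplified sketch (ours) of the spiffy-trigger SNR maximization; not the flight code.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dt = 1.3                                       # s, e.g. FREGATE band-C survey resolution
counts = rng.poisson(100.0, size=2000)         # synthetic background time series
counts[1000:1010] += 30                        # inject a weak ~13 s transient near t = 1300 s

def snr(params, counts, dt, window):
    t_end, tau_bk, tau_el, tau_bu = params
    t0 = t_end - tau_bk - tau_el - tau_bu      # start of the burst sample
    bk1_lo = t0 - tau_el - tau_bk              # start of the first background sample
    if min(tau_bk, tau_el, tau_bu) <= 0 or bk1_lo < window[0] or t_end > window[1]:
        return -1e9                            # constraint violated
    def acc(lo, hi):                           # counts and live time accumulated in [lo, hi)
        i, j = int(lo / dt), int(hi / dt)
        return counts[i:j].sum(), (j - i) * dt
    n1, T1 = acc(bk1_lo, t0 - tau_el)
    nb, Tb = acc(t0, t0 + tau_bu)
    n2, T2 = acc(t_end - tau_bk, t_end)
    if min(T1, T2, Tb) == 0 or n1 == 0 or n2 == 0:
        return -1e9
    Sigma2 = Tb**2 / (T1**2 / n1 + T2**2 / n2)     # variance of the background estimate
    mu_bu = (T1 + T2) / Tb * Sigma2                # estimated background counts in burst sample
    return (nb - mu_bu) / np.sqrt(mu_bu + Sigma2)

window = (1000.0, 2000.0)                          # current trigger window (s)
p0 = np.array([1400.0, 30.0, 10.0, 60.0])          # initial (t_end, tau_bk, tau_el, tau_bu)
res = minimize(lambda p: -snr(p, counts, dt, window), p0,
               method='Nelder-Mead', options={'fatol': 0.1})   # lax convergence, as in the text
print(res.x, -res.fun)
```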
However, careful choice of the range of [$\tau_{\mbox{bu\ }} $]{}spanned by the initial simplex can address this issue to a large extent. In any event, the algorithm may be re-run with radically different initial values of [$\tau_{\mbox{bu\ }} $]{}. For example, re-running the algorithm three times, with [$\tau_{\mbox{bu\ }} $]{}set initially to 0.1s, 3s, and 100s — with suitably chosen initial simplices — one may probe a range of timescales that would probably require hundreds of criteria for a traditional trigger algorithm to examine. In principle, there is no reason the [spiffy-trigger]{} algorithm could not be deployed in flight in a future mission. The floating-point operations that it performs are not particularly expensive, particularly for modern space computing hardware. While more complex than a traditional trigger, it is not vastly more so, and its complexity is offset by its great flexibility, configurability, and dynamic range of burst timescales to which it is sensitive. [99]{} Kommers, J. M., Lewin, W. H. G., Kouveliotou, C., van Paradijs, J., Pendleton, G. N., Meegan, C. A., and Fishman, G. J., *ApJ* **134**, 385 (2001) Stern, B. E., Tikhomirova, Y., Kompaneets, D., Svensson, R., & Poutanen, J., *ApJ* **563**, 80 (2001) Atteia, J-L., Boer, M., Cotin, F., Couteret, J., Dezalay, J-P., Ehanno, M., Evrard, J., Lagrange, D., Niel, M., Olive, J-F., Rouaix, G., Souleille, P., Vedrenne, G., Hurley, K., Ricker, G., Vanderspek, R., Crew, G., Doty, J., and Butler, N. “In-Flight Performance and First Results of FREGATE”, these proceedings, (2002) Fenimore, E. E., and Galassi, M. “The HETE Triggering Algorithm”, in *Gamma-Ray Bursts in the Afterglow Era*, edited by E. Costa, F. Frontera, and J. Hjorth, Springer, Berlin, 2001, pp. 393-395. Tavenner, T., Fenimore, E., Galassi, M., Vanderspek, R., Preger, B., Graziani, C., Lamb, D., Kawai, N., Yoshida, A., Shirasaki, Y., and Tamagawa, T. “The Effectiveness of the HETE-2 Triggering Algorithm”, these proceedings, (2002) Nelder, J. A. and Mead, R. “A Simplex Method for Function Minimization”, *Comput. J.* **7**, 308-313 (1965) Graziani, C., and Lamb, D.Q., “Localization of GRBs by Bayesian Analysis of Data from the HETE WXM”, these proceedings (2002) Butler, N. and Doty, J. “Using Wavelets to Detect HETE Untriggered Bursts”, these proceedings, (2002)
--- abstract: 'We study theoretically the optical response of the surface states of a topological insulator, especially the generation of helicity-dependent direct current by circularly polarized light. Interestingly, the dominant current, due to an interband transition, is controlled by the Berry curvature of the surface bands. This extends the connection between photocurrents and Berry curvature beyond the quasiclassical approximation where it has been shown to hold. Explicit expressions are derived for the (111) surface of the topological insulator Bi$_{2}$Se$_{3}$ where we find significant helicity dependent photocurrents when the rotational symmetry of the surface is broken by an in-plane magnetic field or a strain. Moreover, the dominant current grows linearly with time until a scattering occurs, which provides a means for determining the scattering time.' author: - Pavan Hosur title: 'Optical characterization of topological insulator surface states: Berry curvature-dependent response' --- introduction ============ Topological insulators (TIs) are characterized by topologically protected surface states (SSs). In their simplest incarnation, these correspond to the dispersion of a single Dirac particle, which cannot be realized in a purely two dimensional band structure with time reversal invariance. This dispersion is endowed with the property of spin-momentum locking, i.e., for each momentum there is a unique spin direction of the electron. Most of the experimental focus on TIs so far has been towards trying to directly observe these exotic SSs in real or momentum space, in tunneling[@BiSbSTM] and photoemission[@Chen; @Hsieh; @Xia] experiments, respectively, and establish their special topological nature. However, there has so far been a dearth of experiments which study the response of these materials to external perturbations, such as an external electromagnetic field. In order to fill this gap, we study here the response of TI surfaces to circularly polarized (CP) light. Since photons in CP light have a well-defined angular momentum, CP light can couple to the spin of the surface electrons. Then, because of the spin-momentum-locking feature of the SSs, this coupling can result in dc transport which is sensitive to the helicity (right- vs left-circular polarization) of the incident light. This phenomenon is known as the circular photogalvanic effect (CPGE). In this work, we derive general expressions for the direct current on a TI surface as a result of the CPGE at normal incidence within a two-band model and estimate its size for the (111) surface of Bi$_{2}$Se$_{3}$, an established TI, and find it to be well within measurable limits. Since bulk Bi$_{2}$Se$_{3}$ has inversion symmetry and the CPGE, which is a second-order non-linear effect, is forbidden for inversion symmetric systems, this current can only come from the surface. We find, remarkably, that the dominant contribution to the current is controlled by the *Berry curvature* of the electron bands and *grows linearly with time*. In practice this growth is cut-off by a scattering event which resets the current to zero. At the microscopic level, this part of the current involves the absorption of a photon to promote an electron from the valence to the conduction band. The total current contains two other terms - both time-independent - one again involving an interband transition and the other resulting from intraband dynamics of electrons. 
However, for clean samples at low temperatures, the scattering or relaxation time is expected to be large, and these contributions will be eclipsed by the linear-in-time one. Hence, this experiment can also be used to measure the relaxation time for TI SSs. Historically, the Berry curvature has been associated with fascinating phenomena such as the anomalous Hall effect[@HaldaneAHE] and the integer quantum Hall effect[@TKNN] and therefore, it is exciting that it appears in the response here. Its main implication here is that is gives us a simple rule, in addition to the requirement of the right symmetries, for identifying the perturbations that can give a linear-in-time CPGE at normal incidence: we look for perturbations that result in a non-zero Berry curvature. Put another way, we can identify perturbations that have the right symmetries but still do not give this current because the Berry curvature vanishes for these perturbations. Importantly, for TI SSs, the requirement of a non-zero Berry curvature amounts to the simple physical condition that the spin-direction of the electrons have all three components non-zero. In other words, if the electron spin in the SSs is completely in-plane, the Berry curvature is zero and no linear-in-time CPGE is expected. The spins must somehow be tipped slightly out of the plane, as shown in Fig\[fig:absorption-imbalance\]a, in order to get such a response. Thus, a pure Dirac (linear) dispersion, for which the spins are planar, cannot give this response; deviations from linearity, such as the hexagonal warping on the (111) surface of Bi$_{2}$Te$_{3}$[@Fu3fold], are essential for tilting the spins out of the plane. ![(Color online) (a) Schematic illustration of preferential absorption at one out of two points related by the reflection symmetry about the $yz$-plane. The short arrows denote the spin direction of electrons in various states. At low energies, the spins are completely in-plane. They acquire a small out-of-plane component at higher energies. The dotted lines represent incoming photons of helicity $-1$ (left-CP photons). These photons can only *raise* the $\langle S_{z}\rangle$ of an electron, and thus are preferentially absorbed by electrons whose $\langle S_{z}\rangle<0$ in the valence band. The chemical potential $\mu$ must be between the initial and final states for any absorption to occur. \[fig:absorption-imbalance\] (b) Constant energy contours for the surface conduction band of Bi$_{2}$Se$_{3}$. Dark lines denote lower energy. (a) is drawn at $p_{y}=0$. (c) Geometry of the experiment. Light is incident normally on (111) surface of Bi$_{2}$Se$_{3}$. The dotted lines represent the mirror plane $m$ about which the lattice has a reflection symmetry. The current $j_{a2}(t)$ (see text) is along $\hat{x}$.](coloredmainfig) Finally, we estimate the current on the (111) surface of Bi$_{2}$Se$_{3}$ using an effective model for the SSs[@Fu3fold; @STImodel]. This model captures the deviations from linearity of the SS dispersion due to the threefold rotational symmetry of the (111) surface of Bi$_{2}$Se$_{3}$. These deviations have been observed in photoemission experiments on Bi$_{2}$Te$_{3}$[@Chen]. Similar deviations are expected for Bi$_{2}$Se$_{3}$[@STImodel], though they cannot be seen in the slightly smaller momentum range compared to Bi$_{2}$Te$_{3}$ over which data is currently available[@BiSefermisurface]. In order to get a direct current with CP light at normal incidence, rotational symmetry about the surface normal needs to be broken. 
Based on the requirement of non-zero Berry curvature, we propose to do this in two ways: 1. by applying an in-plane magnetic field and including deviations from linearity of the dispersion, or 2. by applying a strain. With a magnetic field of $10T$ (with a 1% strain) and assuming a scattering time of we find that a current density of $\sim100nA/mm$ ($\sim10nA/mm$) can be obtained due to the CPGE with a 1 Watt laser. This value can be easily measured by current experimental techniques. Conversely, the scattering time, crucial for transport processes, for Bi$_{2}$Se$_{3}$ SSs can be determined by measuring the current. In comparison, circular photogalvanic currents of a few nanoamperes per Watt of laser power have been measured in GaAs and SiGe quantum wells. A connection between the optical response of a system and the Berry curvature of its bands has been previously noted at low frequencies, where a semiclassical mechanism involving the anomalous velocity of electrons in a single band explains it[@DeyoPGE]. Here, we show it for interband transitions, where no quasiclassical approximation is applicable. Instead, we calculate the quadratic response function directly. A connection is still present, which points to a deeper relation between the response functions and the Berry curvature. This paper is organized as follows. In Sec. \[sec:Symmetry-considerations\], we state the symmetry conditions under which a CPGE may occur. We present our results, both general as well as for Bi$_{2}$Se$_{3}$ in particular, in Sec. \[sub:Results\] and describe the microscopic mechanism in Sec. \[sub:Physical-process\]. The calculation is described briefly in Sec. \[sub:Calculation\] and in detail in Appendix \[sec:current calc\]. In Sec. \[sec:spin generation\], we give our results for dc spin generation. symmetry considerations for the CPGE\[sec:Symmetry-considerations\] =================================================================== In this section, we specify the symmetry conditions under which one can get a CPGE on the surface of a TI. But first, let us briefly review the concept of the CPGE in general. The dominant dc response of matter to an oscillating electric field is, in general, quadratic in the electric field. When the response of interest is a current, the effect is known as the photogalvanic effect. This current can be written as$$j_{\alpha}=\eta_{\alpha\beta\gamma}\mathcal{E}_{\beta}(\omega)\mathcal{E}_{\gamma}(-\omega)$$ where $\mathcal{E}_{\alpha}(t)=\mathcal{E}_{\alpha}(\omega)e^{i\omega t}+\mathcal{E}_{\alpha}^{*}(\omega)e^{-i\omega t}$ is the incident electric field, $\mathcal{E}_{\alpha}^{*}(\omega)=\mathcal{E}_{\alpha}(-\omega)$ and $\eta_{\alpha\beta\gamma}$ is a third-rank tensor, which has non-zero components only for systems that break inversion symmetry, such as the surface of a crystal. For $j_{\alpha}$ to be real, one has $\eta_{\alpha\beta\gamma}=\eta_{\alpha\gamma\beta}^{*}$. Thus, the real (imaginary) part of $\eta_{\alpha\beta\gamma}$ is symmetric (anti-symmetric) under interchange of $\beta$ and $\gamma$, and therefore describes a current that is even (odd) under the transformation $\omega\to-\omega$.
Consequently, $j_{\alpha}$ can be conveniently separated according to$$j_{\alpha}=\mathtt{S}_{\alpha\beta\gamma}\left(\frac{\mathcal{E}_{\beta}(\omega)\mathcal{E}_{\gamma}^{*}(\omega)+\mathcal{E}_{\beta}^{*}(\omega)\mathcal{E}_{\gamma}(\omega)}{2}\right)+i\mathtt{A}_{\alpha\mu}(\boldsymbol{\mathcal{E}}\times\boldsymbol{\mathcal{E}}^{*})_{\mu}\label{eq:intro current}$$ where $\mathtt{S}_{\alpha\beta\gamma}$ is the symmetric part of $\eta_{\alpha\beta\gamma}$ and $\mathtt{A}_{\alpha\mu}$ is a second-rank pseudo-tensor composed of the anti-symmetric part of $\eta_{\alpha\beta\gamma}$. For CP light, $\boldsymbol{\mathcal{E}}\propto\hat{x}\pm i\hat{y}$ if $\hat{z}$ is the propagation direction, and only the second term in Eq. (\[eq:intro current\]) survives; it hence represents the CPGE. This effect is odd in $\omega$. On the other hand, the first term, which is even in $\omega$, represents the linear photogalvanic effect, as it is the only contribution for linearly polarized light. Since the transformation $\omega\to-\omega$, or equivalently $\boldsymbol{\mathcal{E}}\to\boldsymbol{\mathcal{E}}^{*}$, reverses the helicity of CP light, i.e., changes right-CP light to left-CP light and vice versa, the CPGE is the helicity-dependent part of the photogalvanic effect. The helicity of CP light is odd (i.e., right- and left-CP light get interchanged) under time-reversal. It is also odd under mirror reflection about a plane that contains the incident beam, but invariant under arbitrary rotation about the direction of propagation. Let us consider normal incidence of CP light on a TI surface normal to the $z$ axis. Let us further assume that there is a mirror plane which is the $y$-$z$ plane (see Fig. \[fig:absorption-imbalance\]c). Then the symmetries above imply that the only component of direct current that reverses direction on switching the helicity is a current along the $x$ axis. If there is also a rotation symmetry $R_{z}$ about the $z$-axis (such as the threefold rotation symmetry on the (111) surface of Bi$_{2}$Se$_{3}$), then no surface helicity-dependent direct photocurrent is permitted. One needs to break this rotation symmetry completely by applying, for example, an in-plane magnetic field or a strain, to obtain a nonvanishing current.

helicity-dependent direct photocurrent
======================================

We now present our main results for the photocurrent and estimate it for Bi$_{2}$Se$_{3}$. After painting a simple microscopic picture for the mechanism, we give a brief outline of the full quantum mechanical treatment of the phenomenon.

Results\[sub:Results\]
----------------------

A general two-band Hamiltonian (in the absence of the incident light) can be written as $$\mathbb{H}=\sum_{\mathbf{p}}H_{\mathbf{p}}=\sum_{\mathbf{p}}|E_{\mathbf{p}}|\mathbf{\hat{n}}(\mathbf{p}).\boldsymbol{\sigma}\label{eq:Hspecial}$$ up to a term proportional to the identity matrix, which is not important for our main result, which involves only inter-band transitions. Here $\hat{\mathbf{n}}(\mathbf{p})$ is a unit vector and $\boldsymbol{\sigma}$ are the spin Pauli matrices. Clearly, this can capture a Dirac dispersion, e.g., with $E(\mathbf{p})=\pm v_{F}p$ and $\hat{\mathbf{n}}(\mathbf{p})=\hat{\mathbf{z}}\times\hat{\mathbf{p}}$. It can also capture the SSs of Bi$_{2}$Se$_{3}$ in the vicinity of the Dirac point, including deviations beyond the Dirac limit. We also assume the Hamiltonian has a reflection symmetry $m$ about the $y$-axis, where $\hat{\mathbf{z}}$ is the surface normal.
Using the zero temperature quadratic response theory described in Sec. \[sub:Calculation\], we calculate the current due to the CPGE and find that$$\vec{j}_{CPGE}(t)=\left(j_{na}+j_{a1}+j_{a2}(t)\right)\hat{\mathbf{x}}\label{eq:total current}$$ where the subscripts $a$ ($na$) stand for “absorptive” and “non-absorptive”, respectively. The absorptive part of the response involves a zero momentum interband transition between a pair of levels separated by energy $\hbar\omega$. These terms are only non-zero when there is one occupied and one empty level. In this part of the response, we find a term that is time-dependent, $j_{a2}(t)$. In particular, this term grows linearly with the time over which the electromagnetic perturbation is present, which is allowed for a dc response. In reality, this linear growth is cut off by a decay process which equilibrates populations, and is characterized by a time constant $\tau$. In clean samples at sufficiently low temperatures, characterized by large $\tau$, this contribution is expected to dominate the response, and hence is the focus of our work. The other contributions are discussed in Appendix \[sec:current calc\]. Conversely, because of the linear growth with time, one can determine the lifetime of the excited states by measuring the photocurrent. This term is $$j_{a2}(t)=-\frac{\pi e^{3}\hbar\mathcal{E}_{0}^{2}t\textrm{sgn}(\omega)}{4}\sum_{\mathbf{p}}\delta(\hbar|\omega|-2|E_{\mathbf{p}}|)v_{x}(\mathbf{p})F(\mathbf{p})\label{eq:current general}$$ where we have assumed that the chemical potential lies between the two energy levels $\pm|E_{\mathbf{p}}|$ connected by the optical frequency $\hbar\omega$, and that temperature can be neglected compared to this energy scale. Here, $v_{x}(\mathbf{p})=\frac{\partial|E_{\mathbf{p}}|}{\partial p_{x}}$ is the conventional velocity and $F(\mathbf{p})=i\langle\partial_{p_{x}}u(\mathbf{p})|\partial_{p_{y}}u(\mathbf{p})\rangle+c.c.$, where $|u(\mathbf{p})\rangle$ is the conduction band Bloch state at momentum $\mathbf{p}$, is the [*Berry curvature*]{} of the conduction band at momentum $\mathbf{p}$. For the class of Hamiltonians (\[eq:Hspecial\]) that we are concerned with, the Berry curvature is given by (see Appendix \[sec:Berry expression proof\]): $$F(\mathbf{p})=\hat{\mathbf{n}}.\left(\frac{\partial\hat{\mathbf{n}}}{\partial p_{x}}\times\frac{\partial\hat{\mathbf{n}}}{\partial p_{y}}\right)$$ which is the skyrmion density of the unit vector $\hat{\mathbf{n}}$ in momentum space. Since $\partial_{p_{i}}\hat{\mathbf{n}}\perp\hat{\mathbf{n}}$ for $i=x,y$, $F(\mathbf{p})\neq0$ only if all three components of $\hat{\mathbf{n}}$ are nonvanishing. For linearly dispersing bands, $\hat{\mathbf{n}}$ has only two non-zero components (e.g. $H_{\mathbf{p}}=p_{y}\sigma_{x}-p_{x}\sigma_{y}$, $\hat{\mathbf{n}}\propto(p_{y},-p_{x},0)$). Hence, corrections beyond the pure Dirac dispersion are essential. Also, due to $m$, the Berry curvature satisfies $F(p_{x},p_{y})=-F(-p_{x},p_{y})$. Since in Eq. (\[eq:current general\]) we have the $x$-velocity multiplying the Berry curvature, which also transforms the same way, a finite contribution is obtained on doing the momentum sum.
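The skyrmion-density criterion can be illustrated with a minimal numerical sketch (not part of the original analysis; $v_{F}$, $\lambda$ and the sample momentum are arbitrary values in dimensionless units). It evaluates $F(\mathbf{p})=\hat{\mathbf{n}}.(\partial_{p_{x}}\hat{\mathbf{n}}\times\partial_{p_{y}}\hat{\mathbf{n}})$ by finite differences for a warped Dirac cone and checks that $F$ vanishes when the warping is switched off and is odd under $p_{x}\to-p_{x}$ otherwise.

```python
import numpy as np

def n_hat(px, py, vF=1.0, lam=0.3):
    """Unit vector n(p) for H = vF(px*sy - py*sx) + lam*(px**3 - 3*px*py**2)*sz."""
    d = np.array([-vF * py, vF * px, lam * (px**3 - 3.0 * px * py**2)])
    return d / np.linalg.norm(d)

def berry_curvature(px, py, h=1e-5, **kw):
    """F(p) = n . (dn/dpx x dn/dpy), evaluated with central differences."""
    dndx = (n_hat(px + h, py, **kw) - n_hat(px - h, py, **kw)) / (2.0 * h)
    dndy = (n_hat(px, py + h, **kw) - n_hat(px, py - h, **kw)) / (2.0 * h)
    return float(np.dot(n_hat(px, py, **kw), np.cross(dndx, dndy)))

px, py = 0.8, 0.2
print(berry_curvature(px, py, lam=0.0))    # ~0: pure Dirac cone, spins fully in-plane
print(berry_curvature(px, py, lam=0.3))    # non-zero once the warping tips the spins
print(berry_curvature(-px, py, lam=0.3))   # opposite sign: F(px,py) = -F(-px,py)
```

Only these qualitative features are used below; the quantitative estimate for Bi$_{2}$Se$_{3}$ is given next.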
We now calculate $j_{a2}(t)$ for the threefold-symmetric (111) surface of Bi$_{2}$Se$_{3}$ starting from the effective Hamiltonian[@Fu3fold; @STImodel]$$H=v_{F}(p_{x}\sigma_{y}-p_{y}\sigma_{x})+\frac{\lambda}{2}\left(p_{+}^{3}+p_{-}^{3}\right)\sigma_{z}\label{eq:surface ham}$$ where $v_{F}\sim5\times10^{5}m/s$[@TIprediction] and $\lambda=50.1eV\cdot$Å$^{3}$[@STImodel]. A spin independent quadratic term has been dropped since it does not modify the answers for interband transitions, which only involve the energy difference between the bands. To get a non-zero $j_{CPGE}$, the threefold rotational symmetry must be broken, which we first propose to do by applying a magnetic field $B$ in the $x$-direction. This field has no orbital effect, and can be treated by adding a Zeeman term $-g_{x}\mu_{B}B\sigma_{x}$, where $g_{x}$ is the appropriate g-factor and $\mu_{B}$ is the Bohr magneton, to the Hamiltonian (\[eq:surface ham\]). To lowest order in $\lambda$ and $B^{2}$, we get$$j_{a2}(t)=\frac{3e^{3}v_{F}\mathcal{E}_{0}^{2}\lambda(g_{x}\mu_{B}B)^{2}t}{16\hbar^{2}\omega}\mathcal{A}\label{eq:current with B}$$ where $\mathcal{A}$ is the laser spot-size. For $g_{x}=0.5$[@STImodel], and assuming the experiment is done in a $10T$ field with a continuous wave laser with $\hbar\omega=0.1eV$, which is less than the bulk band gap of $0.35eV$[@Xia], $\mathcal{A}\sim1mm^{2}$, a laser power of $1W$, and a spin relaxation time of the order of tens of picoseconds, we get a current density of $\sim100nA/mm$, which is easily measurable by current experimental techniques. Note that the expression (\[eq:current with B\]) for $j_{a2}(t)$ contains the parameter $\lambda$ which measures the coupling to $\sigma_{z}$ in Eq. (\[eq:surface ham\]). Since $\vec{B}=B\hat{\mathbf{x}}$ breaks the rotation symmetry of the surface completely, a naive symmetry analysis suggests, wrongly, that deviations from linearity, measured by $\lambda$, are not needed to get $j_{a2}(t)$. The rotation symmetry can also be broken by applying a strain along $x$, which can be modeled by adding a term $\delta\lambda p_{x}^{3}\sigma_{z}$ to $H$ in Eq. (\[eq:surface ham\]). This gives $$j_{a2}(t)=\frac{3e^{3}v_{F}(\delta\lambda)\mathcal{E}_{0}^{2}\omega t}{2^{7}}\mathcal{A}\label{eq:current with strain}$$ to lowest order in $\lambda$ and $\delta\lambda$. For a 1% strain, $\delta\lambda/\lambda=0.01$, and the same values for the other parameters as in Eq. (\[eq:current with B\]), we get a current density of $\sim10nA/mm$. Eq. (\[eq:current with strain\]) does not contain $\lambda$; this is because $\delta\lambda$ alone both breaks the rotation symmetry and tips the spins out of the $xy$-plane.

Physical process\[sub:Physical-process\]
----------------------------------------

The appearance of the Berry curvature suggests a role of the anomalous velocity in generating the current. Such mechanisms have been discussed in the literature in the context of the CPGE[@DeyoPGE; @Moore_Orenstein_Berry_curvature]. However, those mechanisms only work when the electric field changes slowly compared to the typical scattering time. The SSs of Bi$_{2}$Se$_{3}$ probably have lifetimes of tens of picoseconds, and thus we are in the opposite limit when $\hbar\omega=0.1eV$, which corresponds to a time scale several orders of magnitude shorter. In this limit, the dc responses are a result of a preferential absorption of the photon at one of the two momentum points for each pair of points $(\pm p_{x},p_{y})$ related by $m$, as shown in Fig. \[fig:absorption-imbalance\]a for $p_{y}=0$.
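This preferential absorption can be checked directly in a minimal numerical sketch (again not from the original text; dimensionless illustration values are used, and the circular combination $j_{x}+ij_{y}$ of the current operators is taken to represent one helicity — the opposite helicity corresponds to $j_{x}-ij_{y}$). The interband matrix element controlling absorption is evaluated at the two mirror-related momenta $(\pm p_{x},0)$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(px, py, vF=1.0, lam=0.3):
    """Warped surface Hamiltonian of Eq. (surface ham), dimensionless units."""
    return vF * (px * sy - py * sx) + lam * (px**3 - 3.0 * px * py**2) * sz

def absorption_weight(px, py, vF=1.0, lam=0.3):
    """|<c| (j_x + i j_y) |v>|^2 with j_a = dH/dp_a, up to constant prefactors."""
    jx = vF * sy + 3.0 * lam * (px**2 - py**2) * sz
    jy = -vF * sx - 6.0 * lam * px * py * sz
    _, vecs = np.linalg.eigh(H(px, py, vF, lam))
    v, c = vecs[:, 0], vecs[:, 1]          # valence (lower) and conduction (upper) states
    return abs(np.vdot(c, (jx + 1j * jy) @ v))**2

px = 0.8
print(absorption_weight( px, 0.0, lam=0.3))   # the two mirror partners absorb...
print(absorption_weight(-px, 0.0, lam=0.3))   # ...unequally once lam != 0
print(absorption_weight( px, 0.0, lam=0.0),
      absorption_weight(-px, 0.0, lam=0.0))   # equal for the pure Dirac cone
```

The two weights coincide in the linear limit and differ once the warping is present; the spin-based explanation of this imbalance follows.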
According to the surface Hamiltonian (\[eq:surface ham\]), the spin vector $\mathbf{S}=\frac{\boldsymbol{\sigma}}{2}\hbar$ gets tipped out of the $xy$-plane for states that lie beyond the linear dispersion regime, but the direction of the tipping is opposite for $(p_{x},p_{y})$ and $(-p_{x},p_{y})$. Thus, photons of helicity $-1$, which can only *raise* $\langle S_{z}\rangle$ of an electron, are preferentially absorbed by the electrons that have $\langle S_{z}\rangle<0$ in the ground state. The response, then, is determined by the properties of these electrons. Clearly, the process is helicity-dependent as reversing the helicity would cause electrons with $\langle S_{z}\rangle>0$ to absorb the light preferentially. This is consistent with the requirement of a non-zero Berry curvature, which essentially amounts to the spin direction $\hat{\mathbf{n}}$ having to be a three-dimensional vector. In the linear limit, where $H=v_{F}(p_{x}\sigma_{y}-p_{y}\sigma_{x})$, the spin is entirely in-plane, and all the electrons absorb the incident light equally. Calculation in brief\[sub:Calculation\] --------------------------------------- We now briefly outline the calculation of the helicity-dependent photocurrent. The detailed calculation can be found in Appendix \[sec:current calc\]. Readers only interested in our results may wish to skip this section. **The Model:** The Hamiltonian and relevant electric field (vector potential) perturbations for getting a direct current to second order in the electric field of the incident photon are$$\begin{aligned} H & = & |E_{\mathbf{p}}|\hat{\mathbf{n}}(p).\boldsymbol{\sigma}\label{eq:hamiltonian}\\ H^{\prime} & = & j_{x}A_{x}(t)+j_{y}A_{y}(t)\label{eq:perturbation}\\ j_{\alpha} & = & \frac{\partial H}{\partial p_{\alpha}}\label{eq:current operator}\\ A_{x}(t)+iA_{y}(t) & = & A_{0}e^{i(\omega-i\epsilon)t}\label{eq:vector potential}\end{aligned}$$ where $\mathbf{A}$ is the vector potential, $\hat{z}$ is assumed to be the surface normal, and $\epsilon$ is a small positive number which ensures slow switch-on of the light. **Quadratic response Theory:** In general, the current along $x$ to all orders in the perturbation $H^{\prime}$ is $$\langle j_{x}\rangle(t)=\left\langle T^{*}\left(e^{i\int_{-\infty}^{t}dt^{\prime}H^{\prime}(t^{\prime})}\right)j_{x}(t)T\left(e^{-i\int_{-\infty}^{t}dt^{\prime}H^{\prime}(t^{\prime})}\right)\right\rangle \label{eq:all orders of perturbation}$$ where $T\,(T^{*})$ denotes time-ordering (anti-time-ordering) and $O(t)=e^{iHt}Oe^{-iHt}$. Terms first order in $H^{\prime}$ cannot give a direct current. 
The contribution to the current from the second order terms can be written as $$\begin{gathered} \langle j_{x}\rangle(t)=\intop_{-\infty}^{t}dt^{\prime}\intop_{-\infty}^{t_{1}}dt^{\prime\prime}\left\langle \left[\left[j_{x}(t),H^{\prime}(t^{\prime})\right],H^{\prime}(t^{\prime\prime})\right]\right\rangle \\ =\intop_{-\infty}^{t}dt^{\prime}\intop_{-\infty}^{t_{1}}dt^{\prime\prime}\chi_{x\alpha\beta}(t,t^{\prime},t^{\prime\prime})A_{\alpha}(t^{\prime})A_{\beta}(t^{\prime\prime})\end{gathered}$$ where $\alpha,\beta\in\{x,y\}$, $\chi_{x\alpha\beta}(t,t^{\prime},t^{\prime\prime})=\chi_{x\alpha\beta}(0,t^{\prime}-t,t^{\prime\prime}-t)=\left\langle \left[\left[j_{x},j_{\alpha}(t^{\prime}-t)\right],j_{\beta}(t^{\prime\prime}-t)\right]\right\rangle \equiv\chi_{x\alpha\beta}(t^{\prime}-t,t^{\prime\prime}-t)$ due to time translational invariance, and the expectation value is over the ground state which has all states with $E_{\mathbf{p}}<(>)\,0$ filled (empty). For Hamiltonians of the form of Eq. (\[eq:hamiltonian\]), the expectation value of any traceless operator $O$ in the Fermi sea ground state can be written as a trace:$$\langle O\rangle=\sum_{\mathbf{p}}\frac{1}{2}\mathrm{Tr}\left\{ \left(1-\frac{H}{|E_{\mathbf{p}}|}\right)O\right\} =-\sum_{\mathbf{p}}\frac{\mathrm{Tr}\left(HO\right)}{2|E_{\mathbf{p}}|}\label{eq:avg_to_trace}$$ This gives, $$\begin{aligned} \chi_{x\alpha\beta}(t_{1},t_{2}) & =-\sum_{p}\frac{\mathrm{Tr}\left(H\left[\left[j_{x},j_{\alpha}(t_{1})\right],j_{\beta}(t_{2})\right]\right)}{2|E_{\mathbf{p}}|}\label{eq:chi_definition}\end{aligned}$$ Eq. (\[eq:chi\_definition\]) is the zero temperature limit of the finite temperature expression for the quadratic susceptibility proven in Ref. [@NLO_Butcher]. Because of the mirror symmetry $m$, $\chi_{x\alpha\beta}(t_{1},t_{2})$ is non-vanishing only for $\alpha\neq\beta$. To get a direct current, we retain only the non-oscillating part of $A_{x}(t+t_{i})A_{y}(t+t_{j})=\frac{A_{0}^{2}}{2}e^{2\epsilon t}\left[\sin\left(2\omega t+\omega(t_{i}+t_{j})\right)-\sin\left(\omega(t_{i}-t_{j})\right)\right]$. Thus,$$\begin{gathered} j_{x}^{dc}(t)=\frac{A_{0}^{2}e^{2\epsilon t}}{4}\intop_{-\infty}^{0}dt_{1}\intop_{-\infty}^{t_{1}}dt_{2}\bigg\{\left(\chi_{xxy}-\chi_{xyx}\right)(t_{1},t_{2})\times\\ e^{\epsilon(t_{1}+t_{2})}\sin\left(\omega(t_{2}-t_{1})\right)\bigg\}\label{eq:jxdc}\end{gathered}$$ **The Result:** After carrying out the two time-integrals, we get the three currents mentioned in Eq. (\[eq:total current\]). For clean samples at low temperatures, $j_{a2}(t)$, which grows linearly with time, is expected to dominate. A general expression for this term is (in the units $e=\hbar=v_{F}=1$ where $v_{F}$ is the Fermi velocity)$$\begin{aligned} j_{a2}(t) & =\nonumber \\ & \frac{iA_{0}^{2}\pi t\text{sgn}(\omega)}{2\omega^{2}}\sum_{\mathbf{p}}\delta(|\omega|-2|E_{\mathbf{p}}|)\mathrm{Tr}(Hj_{x})\mathrm{Tr}(H[j_{x},j_{y}])\label{eq:current as trace}\end{aligned}$$ Using Eqs. 
(\[eq:hamiltonian\]) and (\[eq:current operator\]) and the Lie algebra of the Pauli matrices, $[\sigma_{i},\sigma_{j}]=2i\epsilon_{ijk}\sigma_{k}$ where $\epsilon_{ijk}$ is the anti-symmetric tensor, the above traces can be written as$$\begin{aligned} \mathrm{Tr}(Hj_{x}) & = & 2|E_{\mathbf{p}}|v_{x}(\mathbf{p})\label{eq:velocity as trace}\\ \mathrm{Tr}(H\left[j_{x},j_{y}\right]) & = & 4i|E_{\mathbf{p}}|^{3}\hat{\mathbf{n}}.\left(\frac{\partial\hat{\mathbf{n}}}{\partial p_{x}}\times\frac{\partial\hat{\mathbf{n}}}{\partial p_{y}}\right)\nonumber \\ & = & 4i|E_{\mathbf{p}}|^{3}F(\mathbf{p})\label{eq: curvature}\end{aligned}$$ Eqs. (\[eq:current as trace\]), (\[eq:velocity as trace\]) and (\[eq: curvature\]) give our main result Eq. (\[eq:current general\]). spin generation\[sec:spin generation\] ====================================== Having understood the microscopic mechanism underlying the generation of the photocurrent $j_{a2}(t)$ , we wonder, next, whether such a population imbalance can lead to any other helicity-dependent macroscopic responses. Since each absorbed photon flips the $z$-component of the spin of an electron, a net $\langle S_{z}\rangle$ is expected to be generated on the surface. The calculation of $\langle S_{z}\rangle$ is identical to that of $j_{CPGE}$. The total $\langle S_{z}\rangle$ generated consists of the same three parts as $j_{CPGE}$, and the dominant part is$$S_{a2}^{z}(t)=-\frac{\pi e^{2}\mathcal{E}_{0}^{2}\hbar t\mbox{sgn}(\omega)}{8}\sum_{\mathbf{p}}\delta(\hbar|\omega|-2|E_{p}|)n_{z}(\mathbf{p})F(\mathbf{p})\label{eq:Sz general}$$ $S_{z}$ does not break the rotational symmetry of the surface, so we calculate $S_{a2}^{z}(t)$ directly for the threefold symmetric Hamiltonian (\[eq:surface ham\]) and obtain$$S_{a2}^{z}(t)=\frac{e^{2}\mathcal{E}_{0}^{2}(\hbar\omega)^{3}\lambda^{2}t}{2^{10}}\mathcal{A}\label{eq:spin result}$$ For the same values of all the parameters as for $j_{a2}(t)$, we get , which means only ten electron spins are flipped over an area of $\sim1mm^{2}$. If, instead, we ignore the cubic corrections but assume magnetic ordering on the surface, so that $H=v_{F}(p_{x}\sigma_{y}-p_{y}\sigma_{x})+M\sigma_{z}$, we get $$S_{a2}^{z}(t)=-\frac{e^{2}\mathcal{E}_{0}^{2}M^{2}t}{16(\hbar\omega)^{3}}\mathcal{A}$$ which again gives a rather small value of for $M\sim10K$, a typical magnetic ordering transition temperature. However, the spin generated could be measurable if one uses a pulsed laser of, say, MegaWatt power, and performs a time-resolved experiment. Conclusions =========== In summary, we studied the CPGE on the surface of a TI at normal incidence, and applied the results to the (111) surface of Bi$_{2}$Se$_{3}$. If the rotational symmetry of the TI surface is broken by applying an in-plane magnetic field or a strain, we predict an experimentally measurable direct photocurrent. A striking feature of this current is that it depends on the Berry curvature of the electron bands. Such a dependence can be understood intuitively as a result of the incident photons getting absorbed unequally by electrons of different momenta and hence, different average spins. The current grows linearly with time until a decay process equilibrates populations, which provides a way of determining the excited states lifetime. We also calculated the amount of dc helicity-dependent out-of-plane component of the electron spin generated. This does not require any rotational symmetry breaking; however, the numerical value is rather small with typical values of parameters. 
In the future, we hope to find a generalization of our results for oblique incidence. Experimentally, this is a very attractive way of breaking the rotational symmetry of the surface; indeed, such experiments have already been performed successfully on graphene[@graphene; @photocurrent]. In graphene, helicity-dependent direct photocurrents have also been predicted by applying a dc bias[@PhotoHallEffect]. However, with a dc bias across a TI surface and ordinary continuous lasers, we find the current to be too low to be measurable. Finally, we also wonder whether the Berry curvature dependence of the helicity-dependent response to CP light survives for three- and higher-band models. If it does, it would be interesting to write such a model for semiconductor quantum wells such as GaAs and SiGe. It could also enable one to treat oblique incidence, by considering transitions to higher bands of different parities, because they are driven by the normal component of the electric field, $E_{z}$.\ We would like to thank Ashvin Vishwanath for enlightening discussions, Joseph Orenstein for useful experimental inputs, and Ashvin Vishwanath and Yi Zhang for invaluable feedback on the draft. This work was supported by LBNL DOE-504108. Proof of Berry curvature expression\[sec:Berry expression proof\] ================================================================= Here we show that the Berry curvature defined for Bloch electrons as $$F(\mathbf{p})=i\left(\langle\partial_{p_{x}}u|\partial_{p_{y}}u\rangle-\langle\partial_{p_{y}}u|\partial_{p_{x}}u\rangle\right){\color{blue}}\label{eq:curvature bloch}$$ can be written as $$F(\mathbf{p})=\hat{\mathbf{n}}.\left(\partial_{p_{x}}\hat{\mathbf{n}}\times\partial_{p_{y}}\hat{\mathbf{n}}\right)\label{eq:curvature unit vector}$$ for the band with energy $|E_{\mathbf{p}}|$ for Hamiltonians of the form $H_{\mathbf{p}}=|E_{\mathbf{p}}|\hat{\mathbf{n}}(\mathbf{p}).\boldsymbol{\sigma}$. At momentum $\mathbf{p}$, the Bloch state $|u_{\mathbf{p}}\rangle$ with energy $|E_{\mathbf{p}}|$ is defined as the state whose spin is along $\hat{\mathbf{n}}(\mathbf{p})$. Defining $|\uparrow\rangle$ as the state whose spin is along $+\hat{\mathbf{z}}$, $|u_{\mathbf{p}}\rangle$ is obtained by performing the appropriate rotations,$$|u_{\mathbf{p}}\rangle=e^{-i\frac{\sigma_{z}}{2}\phi(\mathbf{p})}e^{i\frac{\sigma_{y}}{2}\theta(\mathbf{p})}|\uparrow\rangle\label{eq:state at p}$$ where $\theta(\mathbf{p})$ and $\phi(\mathbf{p})$ are the polar angles that define $\hat{\mathbf{n}}(\mathbf{p})$:$$\hat{\mathbf{n}}(\mathbf{p})=\sin\theta(\mathbf{p})\cos\phi(\mathbf{p})\hat{x}+\sin\theta(\mathbf{p})\sin\phi(\mathbf{p})\hat{y}+\cos\theta(\mathbf{p})\hat{z}\label{eq:n polar}$$ Substituting Eq. (\[eq:state at p\]) in Eq. (\[eq:curvature bloch\]), one gets$$F(\mathbf{p})=\sin\theta(\mathbf{p})\left(\partial_{p_{x}}\theta(\mathbf{p})\partial_{p_{y}}\phi(\mathbf{p})-\partial_{p_{x}}\phi(\mathbf{p})\partial_{p_{y}}\theta(\mathbf{p})\right)$$ which, on using Eq. (\[eq:n polar\]) and some algebra, reduces to the required expression Eq. (\[eq:curvature unit vector\]). current calculation for the cpge\[sec:current calc\] ==================================================== Here we explain the current-calculation of Sec. \[sub:Results\] in more detail and also state results for the parts of the current that we chose not to focus on there. As shown in Sec. 
\[sub:Calculation\], the relevant susceptibility is$$\begin{aligned} \chi^{x\alpha\beta}(t,t^{\prime},t^{\prime\prime}) & = & -\frac{1}{2}\sum_{\mathbf{p}}\mathrm{Tr}\left(\frac{H}{|E_{\mathbf{p}}|}\left[\left[j^{x}(t),j^{\alpha}(t^{\prime})\right],j^{\beta}(t^{\prime\prime})\right]\right)\nonumber \\ & = & -\sum_{\mathbf{p}}\frac{1}{2|E_{\mathbf{p}}|}\mathrm{Tr}\left(H\left[\left[j^{x},j^{\alpha}(t_{1})\right],j^{\beta}(t_{2})\right]\right)\nonumber \\ & \equiv & \chi^{x\alpha\beta}(t_{1},t_{2})\label{eq:chi_defintion}\end{aligned}$$ where $t_{1}=t^{\prime}-t,\, t_{2}=t^{\prime\prime}-t$, and the non-vanishing components of $\chi^{x\alpha\beta}$ are those for which $\alpha\neq\beta$. The non-oscillating part of the current, hence, is$$\begin{gathered} \langle j_{x}^{dc}\rangle(t)=j_{CPGE}(t)=\frac{A_{0}^{2}e^{2\epsilon t}}{4}\intop_{-\infty}^{0}dt_{1}\intop_{-\infty}^{t_{1}}dt_{2}\\ \left(\chi^{xxy}(t_{1},t_{2})-\chi^{xyx}(t_{1},t_{2})\right)e^{\epsilon(t_{1}+t_{2})}\sin\left(\omega(t_{2}-t_{1})\right)\end{gathered}$$ Since $j_{CPGE}(t)$ is an odd function of $\omega$, it reverses on reversing the polarization, as expected. The traces in the susceptibility expressions are calculated by introducing a complete set of states in place of the identity several times. Thus,$$\begin{aligned} & \chi^{xxy}(t_{1},t_{2})\label{eq:chi 1}\\ = & -\sum_{\mathbf{p}}\frac{1}{2|E_{\mathbf{p}}|}\mathrm{Tr}\left(H\left[\left[j^{x},j^{x}(t_{1})\right],j^{y}(t_{2})\right]\right)\nonumber \\ = & -\frac{1}{2}\sum_{\mathbf{p}}\sum_{nml}\textrm{sgn}(E_{n})\biggl\{ e^{i(E_{m}-E_{n})t_{2}}\times\nonumber \\ & \left(e^{i(E_{l}-E_{m})t_{1}}-e^{-i(E_{l}-E_{n})t_{1}}\right)X_{nl}X_{lm}Y_{mn}+\textrm{c.c.}\biggl\}\nonumber \end{aligned}$$ where $X_{nl}=\langle n\left|j_{x}\right|m\rangle$ etc. and the subscript $\mathbf{p}$ on $E_{\mathbf{p}}$ has been dropped to enhance the readability. Similarly,$$\begin{aligned} & \chi^{xyx}(t_{1},t_{2})\label{eq:chi 2}\\ & =-\sum_{\mathbf{p}}\frac{1}{2E_{\mathbf{p}}}\mathrm{Tr}\left(H\left[\left[j^{x},j^{y}(t_{1})\right],j^{x}(t_{2})\right]\right)\nonumber \\ & =-\frac{1}{2}\sum_{\mathbf{p}}\sum_{nml}\textrm{sgn}(E_{n})\biggl\{ e^{i(E_{m}-E_{n})t_{2}}X_{mn}\times\nonumber \\ & \left(e^{i(E_{l}-E_{m})t_{1}}X_{nl}Y_{lm}-e^{-i(E_{l}-E_{n})t_{1}}Y_{nl}X_{lm}\right)+\textrm{c.c.}\biggl\}\nonumber \end{aligned}$$ Substituting (\[eq:chi 1\]) and (\[eq:chi 2\]) in (\[eq:DC current expression\]), we get $$\begin{aligned} & j_{CPGE}(t)=\frac{A_{0}^{2}e^{2\epsilon t}}{4}\mathfrak{Re}\intop_{-\infty}^{0}dt_{1}\intop_{-\infty}^{t_{1}}dt_{2}e^{\epsilon(t_{1}+t_{2})}\times\\ & \sin\left(\omega(t_{1}-t_{2})\right)\sum_{\mathbf{p},nml}\textrm{sgn}(E_{n})e^{i(E_{m}-E_{n})t_{2}}\times\nonumber \\ & \biggl\{\left(e^{i(E_{l}-E_{m})t_{1}}-e^{-i(E_{l}-E_{n})t_{1}}\right)X_{nl}X_{lm}Y_{mn}-\nonumber \\ & X_{mn}\left(e^{i(E_{l}-E_{m})t_{1}}X_{nl}Y_{lm}-e^{-i(E_{l}-E_{n})t_{1}}Y_{nl}X_{lm}\right)\biggl\}\nonumber \end{aligned}$$ where $\mathfrak{Re}$ stands for ‘the real part of’. 
Carrying out the the two time integrations gives$$\begin{aligned} & j_{CPGE}(t)=\frac{A_{0}^{2}e^{2\epsilon t}}{8}\mathfrak{Im}\sum_{\mathbf{p}}\sum_{nml}\textrm{sgn}(E_{n})\times\\ & \left[\frac{1}{E_{m}-E_{n}+\omega-i\epsilon}-\frac{1}{E_{m}-E_{n}-\omega-i\epsilon}\right]\times\nonumber \\ & \left\{ \frac{X_{nl}\left(X_{lm}Y_{mn}-Y_{lm}X_{mn}\right)}{E_{l}-E_{n}-2i\epsilon}+\frac{X_{lm}\left(Y_{mn}X_{nl}-X_{mn}Y_{nl}\right)}{E_{l}-E_{m}+2i\epsilon}\right\} \nonumber \end{aligned}$$ where $\mathfrak{Im}$ stands for ‘the imaginary part of’. Using $\mathfrak{Im}\left(\frac{1}{\Omega-i\epsilon}\right)=\pi\delta(\Omega)$ and $\mathfrak{Re}\left(\frac{1}{\Omega-i\epsilon}\right)=\frac{1}{\Omega}$ in the limit $\epsilon\to0$, we get after some algebra, $j_{CPGE}(t)=j_{na}+j_{a1}+j_{a2}(t)$, where ($\mathrm{Tr}$ denotes the trace)$$\begin{gathered} j_{na}=\frac{A_{0}^{2}}{16}\sum_{\mathbf{p}}\frac{\omega(\omega^{2}-12E_{\mathbf{p}}^{2})}{i|E_{\mathbf{p}}|^{3}(\omega^{2}-4E_{\mathbf{p}}^{2})^{2}}\times\nonumber \\ \mathrm{Tr}(Hj_{x})\mathrm{Tr}(H\left[j_{x},j_{y}\right])\end{gathered}$$ comes from intraband processes and is constant in time,$$\begin{gathered} j_{a1}=-\frac{\pi A_{0}^{2}\textrm{sgn}(\omega)}{32}\sum_{\mathbf{p}}\frac{\delta(|\omega|-2|E_{\mathbf{p}}|)}{E_{\mathbf{p}}^{2}}\times\nonumber \\ \mathrm{Tr}(H\left[j_{x},\left[j_{x},j_{y}\right]\right])\end{gathered}$$ is a result of an interband transition absorption as indicated by the $\delta$-function in energy and is also constant in time, and $$\begin{gathered} j_{a2}(t)=i\frac{A_{0}^{2}\pi t\,\textrm{sgn}(\omega)}{8}\sum_{p}\delta(|\omega|-2|E_{\mathbf{p}}|)\times\nonumber \\ \frac{\mathrm{Tr}(Hj_{x})\mathrm{Tr}(H\left[j_{x},j_{y}\right])}{E_{\mathbf{p}}^{2}}\end{gathered}$$ which also results from interband absorption and increases linearly in time. The last term was the main focus of our work. [19]{} P. Roushan et al., Nature 460, 1106-1109 (2009). D. Hsieh, D. Qian, L. Wray, Y. Xia, Y. Hor, R. J. Cava, and M. Z. Hasan, Nature **452**, 970 (2008). Y. Xia, L. Wray, D. Qian, D. Hsieh, A. Pal, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava, M. Z. Hasan, Nature Physics Vol. 5, No. 6, pp398 (2009) F. D. M. Haldane, Phys. Rev. Lett. 93, 206602 (2004). Thouless, Kohmoto, Nightingale, den Nijs, Phys. Rev. Lett. 49, 405, (1982). Ganichev et al., PRL 86, 4358 (2001). Ganichev et. al., Mat. Res. Soc. Symp. Proc. Vol. 690, F3.11.1 (2002). Chao-Xing Liu, Xiao-Liang Qi, HaiJun Zhang, Xi Dai, Zhong Fang, Shou-Cheng Zhang, arXiv:1005.1682 Y. L. Chen, J. G. Analytis, J. H. Chu, Z. K. Liu, S. K. Mo, X. L. Qi, H. J. Zhang, D. H. Lu, X. Dai, Z. Fang, S. C. Zhang, I. R. Fisher, Z. Hussain, Z. X. Shen, Science Vol. 325 no. 5937, pp178 (2009). L. Fu, Phys. Rev. Lett. 103, 266801 (2009). D. Hsieh, Y. Xia, D. Qian, L. Wray, J. H. Dil, F. Meier, J. Osterwalder, L. Patthey, J. G. Checkelsky, N. P. Ong, A. V. Fedorov, H. Lin, A. Bansil, D. Grauer, Y. S. Hor, R. J. Cava & M. Z. Hasan, Nature 460, 1101 (2009). Haijun Zhang, Chao-Xing Liu, Xiao-Liang Qi, Xi Dai, Zhong Fang & Shou-Cheng Zhang, Nature Physics 5, 438-442 (2009). Munoz, Perez, Vina, Ploog, Phys. Rev. B 51, 4247 (1995). E. Deyo et al., arXiv:0904.1917v1 Ong, Lee, Foundations of Quantum Mechanics, ed. Sachio Ishioka and Kazuo Fujikawa (World Scientific, 2006), p. 121. J. E. Moore, J. Orenstein, arXiv:0911.3630v1 Ch. 7, ‘Nonlinear Optical Phenomena’, Paul N. Butcher, Eq. 7.25 and preceding discussion. Oka, Aoki, Phys. Rev. B 79, 081406(R), 2009. Karch et. al., arXiv:1002.1047v1
--- abstract: 'We classify all finite groups with five relative commutativity degrees. Also, we give a partial answer to our previous conjecture on a lower bound of the number of relative commutativity degrees of finite groups.' address: 'Department of Mathematics, Institute for Advanced Studies in Basic Sciences (IASBS), and the Center for Research in Basic Sciences and Contemporary Technologies, IASBS, Zanjan 66731-45137, Iran' author: - 'M. Farrokhi D. G.' title: Finite groups with five relative commutativity degrees --- Introduction ============ A finite group is either abelian or non-abelian, but not all non-abelian groups share the same commutativity relation among their elements. Roughly speaking, nilpotent groups seem to be more commutative than solvable groups, and solvable groups seem even more commutative than non-solvable groups. To compare groups via the commutativity of their elements, one can count all commuting pairs in a finite group $G$ and normalize it by dividing this number by the number of all pairs. This quantity, defined explicitly as $$d(G):=\frac{\#\{(x,y)\in G\times G\ :\ xy=yx\}}{|G|^2}$$ is known as the *commutativity degree* of $G$. Erdös and Turan [@pe-pt] defined the commutativity degree of groups in their study of symmetric groups and showed that it satisfies the identity $$d(G)=\frac{k(G)}{|G|},$$ where $k(G)$ denotes the number of conjugacy classes of $G$. This formula has been used to give various lower and upper bounds for $d(G)$ in the literature. The simplest upper bound for $d(G)$ was given by Gustafson [@whg] in 1973, who showed that $d(G){\leqslant}5/8$ for all non-abelian groups with equality if and only if $G/Z(G)$ is the Klein four group. The next remarkable and significant result is due to Rusin [@djr] in 1979, who classified all finite groups with commutativity degrees greater than $11/32$. Since then, the commutativity degree of finite groups has been studied actively, and we refer the interested reader to [@fm-dm-ans; @se; @ive-bs; @rmg-grr; @eh-dm-ans; @ph; @pl; @pl-hnn-yy] for some major contributions to the field. While the commutativity degree can be applied to distinguish between groups, it is not strong enough to reveal the internal structure of groups in general. For instance, $$d(A_4)=d(D_{18})=\frac{1}{3}$$ while $A_4$ and $D_{18}$ have quite different structures. One way to overcome this problem is to work with the local structure of groups by looking at subgroups of a group and how their elements commute with the other elements of the group under consideration. Accordingly, one can define the *relative commutativity degree* of a subgroup $H$ of a finite group $G$ as $$d(H,G):=\frac{\#\{(h,g)\in H\times G\ :\ hg=gh\}}{|H||G|}.$$ The above quantity was introduced by Erfanian, Rezaei, and Lescot [@ae-rr-pl] in 2007, where the authors applied it to show, among other results, the following monotonic property of (relative) commutativity degrees $$d(G){\leqslant}d(K,G){\leqslant}d(H,G){\leqslant}d(H,K){\leqslant}d(H),$$ when $G$ is a finite group and $H$ and $K$ are subgroups of $G$ with $H{\leqslant}K$. Barzegar, Erfanian, and Farrokhi [@rb-ae-mfdg] consider the set ${\mathcal{D}}(G)$ of all relative commutativity degrees of a finite group $G$, namely $${\mathcal{D}}(G)=\{d(H,G)\ :\ H{\leqslant}G\},$$ and study the groups $G$ when ${\mathcal{D}}(G)$ is small. It is evident that ${\mathcal{D}}(G)$ is a singleton if and only if $G$ is abelian.
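Before recalling the known classifications, the definitions above can be tried out on small groups. The following minimal Python sketch (illustrative only; it enumerates subgroups by brute force and relies on the fact that every subgroup of the two groups used here is generated by at most two elements) computes ${\mathcal{D}}(G)$ for the dihedral group of order $8$ and for $S_3$:

```python
from fractions import Fraction

def compose(a, b):
    """Permutation composition: (a*b)(i) = a(b(i)); permutations are tuples."""
    return tuple(a[i] for i in b)

def closure(gens, identity):
    """Subgroup generated by gens inside a finite permutation group."""
    elems = {identity, *gens}
    while True:
        new = {compose(x, y) for x in elems for y in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

def relative_degree(H, G):
    """d(H,G) = #{(h,g) in H x G : hg = gh} / (|H||G|)."""
    commuting = sum(1 for h in H for g in G if compose(h, g) == compose(g, h))
    return Fraction(commuting, len(H) * len(G))

def degrees(gens):
    identity = tuple(range(len(gens[0])))
    G = closure(gens, identity)
    # every subgroup of these particular small groups is at most 2-generated
    subgroups = {closure((x, y), identity) for x in G for y in G}
    return sorted({relative_degree(H, G) for H in subgroups}, reverse=True)

r, s = (1, 2, 3, 0), (0, 3, 2, 1)                 # dihedral group of order 8 on a square
print([str(d) for d in degrees([r, s])])          # ['1', '3/4', '5/8']
a, b = (1, 2, 0), (1, 0, 2)                       # S_3 on three points
print([str(d) for d in degrees([a, b])])          # ['1', '2/3', '1/2']
```

The outputs, $\{1,3/4,5/8\}$ and $\{1,2/3,1/2\}$, are instances (with $p=2$ and with $(p,q)=(3,2)$, respectively) of the three-degree classification recalled below.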
The authors of [@rb-ae-mfdg] show that there is no finite group $G$ with ${\mathcal{D}}(G)$ possessing only two elements, and obtain the following classification of finite groups $G$ for which ${\mathcal{D}}(G)$ has three elements. \[|D(G)|=3\] Let $G$ be a finite group. Then $|{\mathcal{D}}(G)|=3$ if and only if $G$ satisfies one of the following cases: - $G/Z(G)\cong C_p\times C_p$ for some prime $p$ (nilpotent case). Then $${\mathcal{D}}(G)={\left\{1,\ \frac{2p-1}{p^2},\ \frac{p^2+p-1}{p^3}\right\}}.$$ - $G/Z(G)\cong C_p\rtimes C_q$ is a non-abelian group of order $pq$ for some distinct primes $p$ and $q$ (non-nilpotent case). Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p+q-1}{pq},\ \frac{p+q^2-1}{pq^2}\right\}}.$$ The above results were extended by Erfanian and Farrokhi [@ae-mfdg], who classified all finite groups $G$ with ${\mathcal{D}}(G)$ containing four elements. \[|D(G)|=4\] Let $G$ be a finite group. Then $|{\mathcal{D}}(G)|=4$ if and only if $G$ satisfies one of the following cases: - $G/Z(G)$ is a $p$-group of order $p^3$ and $G$ has no abelian maximal subgroups (nilpotent case). Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p^2+p-1}{p^3},\ \frac{2p^2-1}{p^4},\ \frac{p^2+p^3-1}{p^5}\right\}}.$$ - $G/Z(G)\cong(C_p\times C_p)\rtimes C_q$ is a minimal Frobenius group and the Sylow $p$-subgroup of $G$ is abelian, where $p$ and $q$ are distinct primes (non-nilpotent case). Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p+q-1}{pq},\ \frac{p^2+q-1}{p^2q},\ \frac{p^2+q^2-1}{p^2q^2}\right\}}.$$ Here by a minimal Frobenius group we mean a Frobenius group none of whose proper subgroups is Frobenius. Notice that a Frobenius group $G$ is minimal if and only if its kernel is elementary abelian, its complements are cyclic groups of prime orders, and both the kernel and complements are maximal subgroups of $G$. For a finite group $G$, we can write the set ${\mathcal{D}}(G)$ as $${\mathcal{D}}(G)=\{d_0,\ d_1,\ \ldots,\ d_n\}$$ where $1=d_0>d_1>\cdots>d_n=d(G)$. Recently, the above results on finite groups $G$ satisfying $|{\mathcal{D}}(G)|{\leqslant}4$ were generalized by Farrokhi and Safa [@mfdg-hs], who described those subgroups $H$ of a finite group $G$ satisfying $d(H,G){\geqslant}d_3$. They show that $|H/(H\cap Z(G))|$ is a product of at most $i$ primes when $d(H,G)=d_i{\geqslant}d_3$ and pose the following conjecture: \[|D(G)|&gt;=Omega(G/Z(G))+1?\] Let $G$ be a finite group and $H$ be a subgroup of $G$ with $d(H,G)=d_k$. Then $|H/Z(H,G)|$ is a product of at most $k$ primes. As a result, $|G/Z(G)|$ is a product of at most $|{\mathcal{D}}(G)|-1$ primes. Also, they state a weaker version of the above conjecture as \[|D(G)|&gt;=l\_M(G/Z(G))+1?\] Every finite group $G$ satisfies $$|{\mathcal{D}}(G)|{\geqslant}l_M\left(\frac{G}{Z(G)}\right)+1,$$ where $l_M(X)$ denotes the maximum length of chains of subgroups of the group $X$. In this paper, we shall classify all finite groups with five relative commutativity degrees. We divide these groups into two families. For nilpotent groups, we have \[nilpotent\] Let $G$ be a finite nilpotent group. Then $|{\mathcal{D}}(G)|=5$ if and only if one of the following holds: - $|G/Z(G)|=p^3$ and $G$ has an abelian maximal subgroup. Then $${\mathcal{D}}(G)={\left\{1,\frac{2p-1}{p^2},\frac{p^2+p-1}{p^3},\frac{3p-2}{p^3},\frac{2p^2-1}{p^4}\right\}}.$$ - $|G/Z(G)|=p^4$ and $G$ has two conjugacy class sizes $1$ and $p^m$ for some fixed $m\in\{1,2,3\}$.
Then $${\mathcal{D}}(G)={\left\{1,\frac{p^m+p-1}{p^{m+1}},\frac{p^m+p^2-1}{p^{m+2}},\frac{p^m+p^3-1}{p^{m+3}},\frac{p^m+p^4-1}{p^{m+4}}\right\}}.$$ The second family contains non-nilpotent groups and described as follows: \[non-nilpotent\] Let $G$ be a finite non-nilpotent group. Then $|{\mathcal{D}}(G)|=5$ if and only if one of the following holds: - $G/Z(G)\cong C_p\rtimes C_{q^2}$ is a Frobenius group. Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p+q-1}{pq},\ \frac{p+q^2-1}{pq^2},\ \frac{p+q^3-1}{pq^3},\ \frac{p+q^4-1}{pq^4}\right\}}.$$ - $G/Z(G)\cong C_{p^2}\rtimes C_q$ is a Frobenius group. Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p+q-1}{pq},\ \frac{p^2+q-1}{p^2q},\ \frac{p^2+q^2+pq-p-q}{p^2q^2},\ \frac{p^2+q^2-1}{p^2q^2}\right\}}.$$ - $G/Z(G)\cong(C_p\times C_p)\rtimes C_q$ is a Frobenius group with a normal subgroup of order $p$ and the Sylow $p$-subgroup of $G$ is abelian. Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p+q-1}{pq},\ \frac{p^2+q-1}{p^2q},\ \frac{p^2+q^2+pq-p-q}{p^2q^2},\ \frac{p^2+q^2-1}{p^2q^2}\right\}}.$$ - $G/Z(G)\cong (C_p\times C_p)\rtimes C_q$ is a minimal Frobenius group and the Sylow $p$-subgroup of $G$ is non-abelian. Then, either $G/Z(G)\cong A_4$ for which $${\mathcal{D}}(G)={\left\{1,\ \frac{7}{12},\ \frac{1}{2},\ \frac{3}{8},\ \frac{7}{24}\right\}},$$ or $p>q$ and $${\mathcal{D}}(G)={\left\{1,\ \frac{p^2+q-1}{p^2q},\ \frac{pq+p-1}{p^2q},\ \frac{pq+p^2-1}{p^3q},\ \frac{p^2q+p^2-1}{p^4q}\right\}}.$$ - $G/Z(G)\cong (C_p\times C_p\times C_p)\rtimes C_q$ is a minimal Frobenius group and the Sylow $p$-subgroup of $G$ is abelian. Then $${\mathcal{D}}(G)={\left\{1,\ \frac{p+q-1}{pq},\ \frac{p^2+q-1}{p^2q},\ \frac{p^3+q-1}{p^3q},\ \frac{p^3+q^2-1}{p^3q^2}\right\}}.$$ Here, $p$ and $q$ denote distinct primes. One observe that all groups $G$ with at most five relative commutativity degrees satisfy Conjecture \[|D(G)|&gt;=Omega(G/Z(G))+1?\]. In [@mfdg-hs] the authors show that all supersolvable groups admit Conjecture \[|D(G)|&gt;=Omega(G/Z(G))+1?\] too. Hence all these groups satisfy Conjecture \[|D(G)|&gt;=l\_M(G/Z(G))+1?\]. As a partial result, in Lemma \[|D(G)|&gt;=Omega(G/Z(G))+1\], we also show that all finite groups whose nontrivial elements of their central factor groups have prime power orders satisfy Conjecture \[|D(G)|&gt;=l\_M(G/Z(G))+1?\]. Preliminary results =================== In this section, we recall/prove a set of useful tools which we shall use frequently in our proofs. We note that $d(H,G)=d(HZ,G)$ for every subgroup $H$ and every central subgroup $Z$ of a finite group $G$. We shall use this fact without further citing. \[d(K,G)&lt;=d(H,G)\] Let $G$ be a finite group and $H,K$ be subgroups of $G$ such that $H{\leqslant}K$. Then $d(K,G){\leqslant}d(H,G)$ with equality if and only if $K=HC_K(g)$ for all $g\in G$ \[d(&lt;x&gt;,G)&lt;d(&lt;x\^p&gt;,G)\] Let $G$ be a finite non-abelian group and $x\in G$. If the order of $xZ(G)$ in $G/Z(G)$ is divisible by a prime $p$, then $d({\langlex\rangle},G)<d({\langlex^p\rangle},G)$. \[HnonabelianKabelian\] Let $G$ be a finite non-abelian group. If $K{\leqslant}G$ is non-abelian and $H{\leqslant}K$ is abelian, then $d(K,G)<d(H,G)$. \[CHgABH\] Let $G$ be a finite non-abelian group, $H$ be a subgroup of $G$ and $g\in G\setminus C_G(H)$. If $A,B$ are subgroups of $H$ such that $C_H(g){\leqslant}A<B{\leqslant}H$, then $d(B,G)<d(A,G)$. \[xHCGx\] Let $G$ be a finite group and $x\in G$. 
If $C_G(x)$ is non-abelian and $H$ is an abelian subgroup of $G$ such that ${\langlex\rangle}\subset H\subset C_G(x)$ and $H\not\subseteq Z(C_G(x))$, then $$d(C_G(x),G)<d(H,G)<d({\langlex\rangle},G).$$ By Lemma \[HnonabelianKabelian\], $d(C_G(x),G)<d(H,G)$. If $d(H,G)=d({\langlex\rangle},G)$, then $H={\langlex\rangle}C_H(g)$ for all $g\in G$. If $g\in C_G(x)$, then $x\in C_H(g)$ and $H=C_H(g)$, from which it follows that $H\subseteq Z(C_G(x))$, a contradiction. Thus $d(H,G)<d({\langlex\rangle},G)$. We shall use the following result to reduce the nilpotent groups under consideration to the class of $p$-groups. \[D(HxK)\] Let $H$ and $K$ be two finite groups with coprime orders. Then - ${\mathcal{D}}(H\times K)={\mathcal{D}}(H){\mathcal{D}}(K)$ is the set of products of elements of ${\mathcal{D}}(H)$ by elements of ${\mathcal{D}}(K)$; - ${\mathcal{D}}(H)\cap{\mathcal{D}}(K)=\{1\}$. While the above lemma gives the simple lower bound $$|{\mathcal{D}}(H\times K)|{\geqslant}|{\mathcal{D}}(H)|+|{\mathcal{D}}(K)|$$ for $|{\mathcal{D}}(H\times K)|$ when $H$ and $K$ are finite groups with coprime orders, we believe that a stronger result should hold for any two such groups. For any two finite groups $H$ and $K$ of coprime orders, we have $$|{\mathcal{D}}(H\times K)|=|{\mathcal{D}}(H)|\times|{\mathcal{D}}(K)|.$$ Notice that the above conjecture is not valid in general. The smallest counter-example is $(A_4,S_4)$ for which $|{\mathcal{D}}(A_4\times S_4)|=|{\mathcal{D}}(A_4)||{\mathcal{D}}(S_4)|-3$. Two rather paradoxical examples are $$|{\mathcal{D}}(S_4\times S_4)|=|{\mathcal{D}}(S_4)|^2-17\ \text{and}\ |{\mathcal{D}}(S_5\times S_5)|=|{\mathcal{D}}(S_5)|^2+24$$ showing not only that the difference between $|{\mathcal{D}}(H\times K)|$ and $|{\mathcal{D}}(H)||{\mathcal{D}}(K)|$ can be large, but also that the two quantities cannot be compared in general. So, we may ask the following question: how does the difference $$|{\mathcal{D}}(H\times K)|-|{\mathcal{D}}(H)||{\mathcal{D}}(K)|$$ grow with respect to $|{\mathcal{D}}(H)|$ and $|{\mathcal{D}}(K)|$? Throughout this paper, $G$ denotes the group we are working on and ${\overline}{G}$ stands for the factor group $G/Z(G)$. Accordingly, for a subgroup $H$ of $G$ and element $g\in G$, ${\overline}{H}$ and ${\overline}{g}$ stand for $HZ(G)/Z(G)$ and $gZ(G)$.

Nilpotent groups
================

Our classification of finite nilpotent groups with five relative commutativity degrees relies on two classes of groups: $p$-groups with an abelian maximal subgroup and $p$-groups all of whose non-central elements have the same conjugacy class size. In both cases, we can simply compute all relative commutativity degrees and count them. \[abelian maximal subgroup\] Let $G$ be a non-abelian finite $p$-group with an abelian maximal subgroup. If $|G/Z(G)|=p^n$, then $|{\mathcal{D}}(G)|=2n-1$ and $${\mathcal{D}}(G)={\left\{\frac{p^i+p-1}{p^{i+1}},\ \frac{p^{j-1}+p-1}{p^{j+1}}+\frac{p-1}{p^n}\ :\ 0{\leqslant}i<n,\ 1<j{\leqslant}n\right\}}.$$ Let $M$ be the unique abelian maximal subgroup of $G$. Clearly, $C_G(g)=M$ for all $g\in M\setminus Z(G)$, and $C_G(g)={\langleZ(G),g\rangle}$ has order $p|Z(G)|$ for all $g\in G\setminus M$. Let $H$ be a subgroup of $G$ containing $Z(G)$ and $|{\overline}{H}|=p^i$. If $H\subseteq M$, then $0{\leqslant}i<n$ and $$d(H,G)=\frac{|Z(G)||G|+(|H|-|Z(G)|)|M|}{|H||G|}=\frac{p^i+p-1}{p^{i+1}}.$$ Also, if $H\not\subseteq M$, then $1{\leqslant}i{\leqslant}n$ and $|{\overline}{H\cap M}|=p^{i-1}$.
Thus $$\begin{aligned} d(H,G)&=\frac{|Z(G)||G|+(|H\cap M|-|Z(G)|)|M|+(|H|-|H\cap M|)p|Z(G)|}{|H||G|}\\ &=\frac{p^{i-1}+p-1}{p^{i+1}}+\frac{p-1}{p^n}.\end{aligned}$$ On the other hand, if $$\frac{p^i+p-1}{p^{i+1}}=\frac{p^{j-1}+p-1}{p^{j+1}}+\frac{p-1}{p^n},$$ for some $0{\leqslant}i<n$ and $1{\leqslant}j{\leqslant}n$, then a simple verification shows that $i=n-1$ and $j=1$. Therefore $|{\mathcal{D}}(G)|=2n-1$, as required. \[two conjugacy class sizes\] Let $G$ be a finite $p$-group whose non-central elements have conjugacy classes of the same size $p^m$. If $|G/Z(G)|=p^n$, then $|{\mathcal{D}}(G)|=n+1$ and $${\mathcal{D}}(G)={\left\{\frac{p^m+p^i-1}{p^{m+i}}\ :\ 0{\leqslant}i{\leqslant}n\right\}}.$$ Let $H$ be a subgroup of $G$ containing $Z(G)$. If $|{\overline}{H}|=p^i$, then $$d(H,G)=\frac{|Z(G)||G|+(|H|-|Z(G)|)|G|/p^m}{|H||G|}=\frac{p^m+p^i-1}{p^{m+i}},$$ as required. In what follows, $Z$ and $Z_g$ stand for the subgroups $Z(G)$ and ${\langleg\rangle}\cap Z(G)$ of a group $G$ for all elements $g\in G$. Utilizing the above two lemmas, we can classify all nilpotent groups with five relative commutativity degrees. ***Proof of Theorem \[nilpotent\].*** Suppose $|{\mathcal{D}}(G)|=5$. Let $G=P_1\times\cdots\times P_k$ be the factorization of $G$ into the direct product of Sylow $p$-subgroups. If $P_i$ and $P_j$ are non-abelian for some $i\neq j$, then the paragraph after Lemma \[D(HxK)\] shows that $|{\mathcal{D}}(G)|{\geqslant}6$, which is a contradiction. Therefore $G/Z(G)$ is a $p$-group. By Theorem \[|D(G)|=3\](i) and [@mfdg-hs Theorem 2.3], we must have $|G/Z(G)|=p^3$ or $p^4$. If $|G/Z(G)|=p^3$, then $G/Z(G)$ has an abelian maximal subgroup by Theorem \[|D(G)|=4\](i) so that Lemma \[abelian maximal subgroup\] yields that $G$ is a group of type (i) with ${\mathcal{D}}(G)$ as given in the theorem. Now, suppose that $|G/Z(G)|=p^4$. If $G$ has two conjugacy class sizes, then we apply Lemma \[two conjugacy class sizes\] to show that $G$ is a group of type (ii) with ${\mathcal{D}}(G)$ as in the theorem. For the rest of the proof, we further assume that $G$ has at least three conjugacy class sizes. By Lemma \[abelian maximal subgroup\], $G$ has no abelian maximal subgroups. We have two cases to consider: Case 1. $\exp({\overline}{G})=p^2$. Let ${\overline}{x}\in{\overline}{G}$ be an element of order $p^2$. Clearly, $C_G(x)={\langleZ(G),x\rangle}$ as $G$ has no abelian maximal subgroups. We show that $C_G(x^p)$ is a maximal subgroup of $G$. Suppose on the contrary that $C_G(x^p)$ is not maximal; since $C_G(x)\subseteq C_G(x^p)\subsetneq G$ and $|{\overline}{C_G(x)}|=p^2$, this forces $C_G(x^p)=C_G(x)$. Let $M$ be a maximal subgroup of $G$ containing $x$. Since $d(C_G(x),G)=d({\langlex\rangle},G)$, by Corollary \[d(&lt;x&gt;,G)&lt;d(&lt;x\^p&gt;,G)\] and Lemma \[CHgABH\], $$d(G)<d(M,G)<d({\langlex\rangle},G)<d({\langlex^p\rangle},G)<1.$$ If $g\in G$ is such that $|{\overline}{C_G(g)}|=p$, and $C_G(g)\subset M_1\subset M_2\subset G$ for some subgroups $M_1$ and $M_2$ of $G$, then Lemma \[CHgABH\] along with the fact that $d(C_G(g),G)=d({\langleg\rangle},G)$ yields $$d(G)<d(M_2,G)<d(M_1,G)<d({\langleg\rangle},G)<1.$$ Thus $d({\langleg\rangle},G)=d({\langlex^p\rangle},G)$, which implies that $|{\overline}{C_G(g)}|=|{\overline}{C_G(x^p)}|{\geqslant}p^2$, a contradiction. Hence $|{\overline}{C_G(g)}|{\geqslant}p^2$ for all $g\in G$. As a result, $G$ contains an element $y$ such that $C_G(y)$ is a maximal subgroup of $G$. Notice that $C_G(y)$ is non-abelian and hence we should have $|{\overline}{y}|=p$.
By Lemma \[HnonabelianKabelian\], $$d(G)<d(C_G(y),G)<d({\langley\rangle},G)<1.$$ Since $d({\langley\rangle},G)\neq d({\langlex^p\rangle},G)$, it follows that $d({\langley\rangle},G)=d({\langlex\rangle},G)$. On the other hand, $$d({\langlex\rangle},G)=\frac{|Z_x||G|+(|x|-|Z_x|)p^2|Z|}{|x||G|}=\frac{2p^2-1}{p^4}$$ and $$d({\langley\rangle},G)=\frac{|Z_y||G|+(|y|-|Z_y|)p^3|Z|}{|y||G|}=\frac{2p-1}{p^3},$$ which imply that $d({\langley\rangle},G)\neq d({\langlex\rangle},G)$, a contradiction. Therefore $C_G(x^p)$ is a maximal subgroup of $G$, and by Corollary \[d(&lt;x&gt;,G)&lt;d(&lt;x\^p&gt;,G)\] and Lemma \[HnonabelianKabelian\], $$d(G)<d(C_G(x^p),G)<d({\langlex\rangle},G)<d({\langlex^p\rangle},G)<1.$$ Next we show that $C_G(g)$ is a maximal subgroup of $G$ for all $g\in G$ satisfying $|{\overline}{g}|=p$. To this end, let $g\in G$ be such that $|{\overline}{g}|=p$, $|{\overline}{C_G(g)}|=p^c\neq p^3$, and $M$ be a maximal subgroup of $G$ containing $C_G(g)$. Then Lemma \[CHgABH\] yields $$d(G)<d(M,G)<d(C_G(g),G){\leqslant}d({\langleg\rangle},G)<1.$$ Since $d({\langleg\rangle},G)\neq d({\langlex^p\rangle},G)$, it follows that $d({\langleg\rangle},G)=d({\langlex\rangle},G)$. On the other hand, $$d({\langlex\rangle},G)=\frac{|Z_x||G|+(|x^p|-|Z_x|)p^3|Z|+(|x|-|x^p|)p^2|Z|}{|x||G|}=\frac{3p-2}{p^3}$$ and $$d({\langleg\rangle},G)=\frac{|Z_g||G|+(|g|-|Z_g|)p^c|Z|}{|g||G|}=\frac{p^3+p-1}{p^4}\ \text{or}\ \frac{p^2+p-1}{p^3},$$ which imply that $d({\langleg\rangle},G)\neq d({\langlex\rangle},G)$, a contradiction. Therefore $|{\overline}{C_G(g)}|=p^3$ for all $g\in G$ such that $|{\overline}{g}|=p$. If ${\overline}{G}$ has a subgroup ${\overline}{H}\cong C_p\times C_p$, then $$d(H,G)=\frac{|Z||G|+(|H|-|Z|)p^3|Z|}{|H||G|}=\frac{p^2+p-1}{p^3}.$$ On the other hand, if $g\in G\setminus C_G(H)$, then $C_H(g)$ is a non-central subgroup of $G$ and hence $$d(H,G)<d(C_H(g),G)<1$$ by Lemma \[CHgABH\]. Let $a,b$ be the number of cyclic subgroups of order $p^2$ and $p$ in ${\overline}{M}$, respectively, where $M$ is a maximal subgroup of $G$ containing $H$. Then $$d(M,G)=\frac{|Z||G|+p(p-1)ap^2|Z|^2+(p-1)bp^3|Z|^2}{|M||G|}=\frac{p+(p-1)(a+b)}{p^4},$$ which implies that $d(H,G)\neq d(M,G)$. Note that if $d(H,G)=d(M,G)$, then we get $a+b=p(p+2)$. As $1+p(p-1)a+(p-1)b=p^3$ (the order of ${\overline}{M}$), we obtain $a=-1$, which is a contradiction. Similarly, we can show that $d(H,G)\neq d(G)$. Thus $d(H,G)=d({\langlex\rangle},G)$, which is impossible. Therefore ${\overline}{G}$ has a unique subgroup of order $p$, showing that ${\overline}{G}\cong Q_{16}$ (see [@djsr Theorem 5.3.6]). Accordingly, ${\overline}{G}$ contains an element of order $8$ so that $G$ has an abelian maximal subgroup, which is a contradiction. Case 2. $\exp({\overline}{G})=p$. Let $g\in G\setminus Z(G)$. We consider three possibilities for the order of $C_G(g)$. \(a) $|{\overline}{C_G(g)}|=p$. Let $A_g$ and $B_g$ be subgroups of $G$ such that $C_G(g)\subset A_g\subset B_g\subset G$. Since $d(C_G(g),G)=d({\langleg\rangle},G)$, Lemma \[CHgABH\] yields $$\label{centralizer1} d(G)<d(B_g,G)<d(A_g,G)<d({\langleg\rangle},G)<1.$$ \(b) $|{\overline}{C_G(g)}|=p^2$. Let $C_g$ be a maximal subgroup of $G$ containing $C_G(g)$. If $d(C_G(g),G)=d({\langleg\rangle},G)$, then $C_G(g)={\langleg\rangle}C_{C_G(g)}(g')$ for all $g'\in G$. Hence $g'\in C_G(g'')$ for some $g''\in C_G(g)\setminus{\langleZ(G),g\rangle}$ for all $g'\in G\setminus C_G(g)$.
Then $${\overline}{G}=\bigcup_{{\overline}{1}\neq{\langle{\overline}{g''}\rangle}{\leqslant}{\overline}{C_G(g)}}{\overline}{C_G(g'')},$$ from which it follows that $$p^4{\leqslant}p^2+p(p^3-p^2)=p^4-p^3+p^2<p^4,$$ a contradiction. Therefore, Lemmas \[d(K,G)&lt;=d(H,G)\] and \[CHgABH\] give us $$\label{centralizer2} d(G)<d(C_g,G)<d(C_G(g),G)<d({\langleg\rangle},G)<1.$$ \(c) $|{\overline}{C_G(g)}|=p^3$. Let $g'\in C_G(g)\setminus{\langleZ(G),g\rangle}$ and $g''\in C_G(g)\setminus{\langleZ(G),g,g'\rangle}$. If $D_g:={\langleZ(G),g,g'\rangle}$, then $C_{C_G(g)}(g')=D_g$ and $C_{D_g}(g'')={\langleZ(G),g\rangle}$ so that $$d(C_G(g),G)\neq d(D_g,G)\neq d({\langleg\rangle},G)$$ by Lemma \[d(K,G)&lt;=d(H,G)\]. Thus $$\label{centralizer3} d(G)<d(C_G(g),G)<d(D_g,G)<d({\langleg\rangle},G)<1.$$ Now, from (\[centralizer1\]), (\[centralizer2\]), and (\[centralizer3\]), it follows that $d({\langlex\rangle},G)=d({\langley\rangle},G)$ and hence $|C_G(x)|=|C_G(y)|$ for all non-central elements $x,y$ of $G$. This shows that $G$ has only two conjugacy class sizes, which contradicts our assumption. The proof is complete. $\Box$

Non-nilpotent groups
====================

This section is devoted to non-nilpotent finite groups with five relative commutativity degrees. To classify these groups, we first show that non-trivial elements in the central factor group of any such group have prime power orders. \[G/Z(G) has prime power order elements\] Let $G$ be a finite non-nilpotent group. If $|{\mathcal{D}}(G)|=5$, then the order of every element of $G/Z(G)$ is equal to $p$ or $p^2$ for some prime $p$. First assume that ${\overline}{G}$ has an element ${\overline}{x}$ of order $pqr$ for some primes $p,q,r$. By Corollary \[d(&lt;x&gt;,G)&lt;d(&lt;x\^p&gt;,G)\], $$\label{|x|=pqr} d({\langlex\rangle},G)<d({\langlex^p\rangle},G)<d({\langlex^{pq}\rangle},G)<1.$$ If $M$ is a maximal subgroup of $G$ containing $C_G(x)$, then $$d(G)<d(M,G){\leqslant}d(C_G(x),G){\leqslant}d({\langlex\rangle},G),$$ from which, in conjunction with (\[|x|=pqr\]), it follows that $d(M,G)=d(C_G(x),G)=d({\langlex\rangle},G)$. Hence, $C_G(x)=M$ is an abelian maximal subgroup of $G$ by Lemmas \[HnonabelianKabelian\] and \[CHgABH\]. Now, the properties of $C_G(x)$ yield $$d({\langlex\rangle},G)=\frac{1}{pqr}+\left(1-\frac{1}{pqr}\right)\frac{1}{[G:C_G(x)]}$$ and $$d(C_G(x),G)=\frac{1}{[C_G(x):Z(G)]}+\left(1-\frac{1}{[C_G(x):Z(G)]}\right)\frac{1}{[G:C_G(x)]},$$ so that $[C_G(x):Z(G)]=pqr$ and consequently $C_G(x)={\langleZ(G),x\rangle}$. We have two cases to consider: Case 1. $C_G(x){\trianglelefteq}G$. Then $|G/Z(G)|=pqrs$ for some prime $s$. Let $y\in G\setminus C_G(x)$ be an $s$-element and $H={\langleZ(G),x^p,y\rangle}$. Since $H$ is non-abelian, Lemma \[HnonabelianKabelian\] shows that $d(H,G)<d({\langley\rangle},G)$. On the other hand, $$d({\langley\rangle},G)=\frac{1}{s}+\left(1-\frac{1}{s}\right)\frac{1}{pqr}=d({\langlex\rangle},G),$$ which implies that $d(H,G)<d({\langlex\rangle},G)$. Thus $d(H,G)=d(G)$ so that $G=HC_G(y)=H$ by Lemma \[d(K,G)&lt;=d(H,G)\], a contradiction. Case 2. $C_G(x)\not{\trianglelefteq}G$. Then $N_G(C_G(x))=C_G(x)$ and $C_G(x)\cap C_G(x)^g=Z(G)$ for all $g\in G\setminus C_G(x)$. It follows that ${\overline}{G}$ is a Frobenius group with complement ${\overline}{C_G(x)}$. Let ${\overline}{K}$ be the kernel of ${\overline}{G}$. Then ${\overline}{K}$ is an elementary abelian $s$-group for some prime $s$.
If $y\in K\setminus Z(G)$, then $${\langley\rangle}{\leqslant}C_G(y){\leqslant}K<K{\langlex^{pq}\rangle}<K{\langlex^p\rangle}<G.$$ Hence, Lemma \[CHgABH\] yields $$d(G)<d(K{\langlex^p\rangle},G)<d(K{\langlex^{pq}\rangle},G)<d(K,G){\leqslant}d(C_G(y),G){\leqslant}d({\langley\rangle},G)<1,$$ which implies that $d(K,G)=d({\langley\rangle},G)$. Thus $K={\langley\rangle}C_K(x)={\langleZ(G),y\rangle}$ by Lemma \[d(K,G)&lt;=d(H,G)\], which implies that $|{\overline}{K}|=s$ is prime. Now, proceeding with the same arguments as in Case 1 leads us to a contradiction. Therefore ${\overline}{G}$ has no elements of order $pqr$. For the rest of the proof, we suppose that ${\overline}{G}$ has an element ${\overline}{x}$ of order $pq$, where $p$ and $q$ are distinct primes. Then ${\overline}{x}={\overline}{a}{\overline}{b}$, where $|{\overline}{a}|=p$, $|{\overline}{b}|=q$, and $ab=ba$. If $d({\langlea\rangle},G)\neq d({\langleb\rangle},G)$, then since $d({\langlex\rangle},G)<d({\langlea\rangle},G),d({\langleb\rangle},G)$, by replacing $x^{pq}$ by $x^q$ and noting that $d(K{\langlex^p\rangle},G)\neq d(K{\langlex^q\rangle},G)$ via direct computations, we reach a contradiction by the same arguments as above. Thus, we suppose in addition that $d({\langlea\rangle},G)=d({\langleb\rangle},G)$. It follows that $|C_G(a)|\neq|C_G(b)|$ and hence $C_G(x)$ is not a maximal subgroup of $G$. If $C_G(x)$ is non-abelian and $H$ is an abelian non-central subgroup of $C_G(x)$ containing ${\langlex\rangle}$ properly, then we obtain $$d(C_G(x),G)<d(H,G)<d({\langlex\rangle},G)$$ by Lemma \[xHCGx\], which implies that $|{\mathcal{D}}(G)|>5$, a contradiction. Thus $C_G(x)$ is abelian so that ${\overline}{C_G(x)}\cong C_p^m\times C_q^n$ for some $m,n{\geqslant}1$, as ${\overline}{G}$ has no elements whose order is a product of three primes. Let $M$ be a maximal subgroup of $G$ containing $C_G(x)$, and let ${\overline}{H}$ be a Sylow subgroup of ${\overline}{C_G(x)}$. It is evident that all non-central elements of $H$ have the same centralizer size, since $C_G(x)$ is abelian. Then, by Lemmas \[d(K,G)&lt;=d(H,G)\] and \[CHgABH\], we get $$d(G)<d(M,G)<d(C_G(x),G)<d(H,G){\leqslant}d({\langleg\rangle},G)<1$$ for every $g\in H\setminus Z(G)$. Hence $d(H,G)=d({\langleg\rangle},G)$, from which it follows that $H={\langleZ(G),g\rangle}$. Therefore $C_G(x)={\langleZ(G),x\rangle}$. Now, we show that ${\overline}{C_G(a)}$ is a $\{p,q\}$-group. Suppose on the contrary that $\pi({\overline}{C_G(a)})\neq\{p,q\}$ and ${\overline}{c}\in{\overline}{C_G(a)}$ is an element of prime order $r\neq p,q$. Then ${\overline}{ac}$ is an element of order $pr$ in ${\overline}{G}$ and, as above, we should have $d({\langlec\rangle},G)=d({\langlea\rangle},G)$. Also, we must have $d({\langleac\rangle},G)=d({\langlex\rangle},G)$, from which it follows that $|{\overline}{C_G(a)}|=pqr$. By the Schur-Zassenhaus theorem (see [@djsr Theorem 9.1.2]), we get ${\overline}{C_G(a)}={\langle{\overline}{a}\rangle}\times{\langle{\overline}{b},{\overline}{c}\rangle}$ so that ${\langle{\overline}{b},{\overline}{c}\rangle}$ is a non-abelian group of order $qr$. Without loss of generality, we assume that ${\langle{\overline}{b},{\overline}{c}\rangle}={\langle{\overline}{b}\rangle}\rtimes{\langle{\overline}{c}\rangle}$, hence $r\mid q-1$. Now, by invoking Lemma \[d(K,G)&lt;=d(H,G)\], we can show that $$d(C_G(a),G)<d({\langleb,c\rangle},G)<d({\langlea\rangle},G),$$ which yields $d({\langlex\rangle},G)=d({\langleb,c\rangle},G)$.
On the other hand, $$\begin{aligned} d({\langlex\rangle},G)&=\frac{|G|+(p-1)|C_G(a)|+(q-1)|C_G(b)|+(p-1)(q-1)|C_G(x)|}{pq|G|}\\ &=\frac{|G|+(p-1)pqr|Z|+(q-1)|C_G(b)|+(p-1)(q-1)pq|Z|}{pq|G|}\end{aligned}$$ and $$\begin{aligned} d({\langleb,c\rangle},G)&=\frac{|G|+(q-1)|C_G(b)|+q(r-1)|C_G(c)|}{qr|G|},\end{aligned}$$ from which, in conjunction with the fact that ${\langlea\rangle}$, ${\langleb\rangle}$, and ${\langlec\rangle}$ have the same relative commutativity degrees, we obtain $$|{\overline}{G}|=pr\cdot\frac{pqr+p-pr-qr}{p-r}.$$ Since $q$ divides $|{\overline}{G}|$, it follows that $q\mid r-1$, which contradicts our earlier result that $r\mid q-1$. Therefore $\pi({\overline}{C_G(a)})=\{p,q\}$ and $|{\overline}{C_G(a)}|=p^mq^n$ for some $m,n{\geqslant}1$. Let ${\overline}{P}$ and ${\overline}{Q}$ be a Sylow $p$-subgroup and a Sylow $q$-subgroup of ${\overline}{C_G(a)}$, respectively. We show that $|{\overline}{Q}|=q$. Suppose on the contrary that $|{\overline}{Q}|>q$ and ${\overline}{Q}_0$ is a subgroup of ${\overline}{Q}$ of order $q^2$ containing ${\overline}{b}$. Notice that all non-central elements of $Q_0$ have the same centralizer sizes. If $d({\langlex\rangle},G)=d(Q_0,G)$, then as $d({\langlea\rangle},G)=d({\langleb\rangle},G)$, and $$d({\langlex\rangle},G)=\frac{|G|+(p-1)|C_G(a)|+(q-1)|C_G(b)|+(p-1)(q-1)|C_G(x)|}{pq|G|}$$ and $$d(Q_0,G)=\frac{|G|+(q^2-1)|C_G(b)|}{q^2|G|},$$ we obtain $$|{\overline}{G}|=pq\cdot\frac{pq^2-pq-q^2+p}{p-q}.$$ Hence $|{\overline}{G}|$ is not divisible by $q^2$, contradicting our assumption. Thus $d({\langlex\rangle},G)\neq d(Q_0,G)$, and by Lemma \[HnonabelianKabelian\], $$d(G)<d(C_G(a),G)<d({\langlex\rangle},G),\ d(Q_0,G)<d({\langleb\rangle},G)<1,$$ which is a contradiction. Therefore $|{\overline}{Q}|=q$ and subsequently $|{\overline}{P}|>p$ by Theorem \[|D(G)|=3\](ii). Let ${\overline}{H}$ be a subgroup of ${\overline}{P}$ containing ${\overline}{a}$ properly. Since ${\langlea\rangle}{\leqslant}H{\leqslant}P{\leqslant}C_G(a)$, we have $$d(C_G(a),G){\leqslant}d(P,G){\leqslant}d(H,G){\leqslant}d({\langlea\rangle},G).$$ If $d(H,G)=d({\langlea\rangle},G)$, then $$H={\langlea\rangle}C_H(x)={\langlea\rangle}{\langleZ(G),a\rangle}={\langleZ(G),a\rangle}$$ by Lemma \[d(K,G)&lt;=d(H,G)\], which is a contradiction. Also, if $d(P,G)=d(C_G(a),G)$, then, by applying Lemma \[d(K,G)&lt;=d(H,G)\] once more, it follows that $C_G(a)=PC_{C_G(a)}(g)=P$ for any ${\overline}{g}\in{\overline}{P}\setminus{\langle{\overline}{a}\rangle}$, a contradiction. Notice that $C_G(x)={\langleZ(G),x\rangle}$ and hence $C_P(b)={\langleZ(G),a\rangle}$. Thus $$d(G)<d(C_G(a),G)<d(P,G){\leqslant}d(H,G)<d({\langlea\rangle},G)<1,$$ from which we obtain $d(P,G)=d(H,G)=d({\langlex\rangle},G)$. As a result, $P=HC_P(b)=H{\langleZ(G),a\rangle}=H$, which implies that $|{\overline}{P}|=p^2$ and hence $|{\overline}{C_G(a)}|=p^2q$. Furthermore, ${\overline}{P}$ is non-cyclic; otherwise the equalities $d(P,G)=d({\langlex\rangle},G)$ and $d({\langlea\rangle},G)=d({\langleb\rangle},G)$ would result in a contradiction. If ${\overline}{a'}\in {\overline}{P}\setminus\{{\overline}{1}\}$ is such that $|C_G(a')|\neq|C_G(a)|$, then we must have $d({\langlea'\rangle},G)=d({\langlex\rangle},G)$, since $$d(G)<d(C_G(a),G)<d({\langlex\rangle},G)<d({\langlea\rangle},G)<1$$ and $d(C_G(a),G)<d({\langlea'\rangle},G)\neq d({\langlea\rangle},G)$. It follows that $|{\overline}{G}|{\leqslant}p^2(2q-1)$, which contradicts the fact that $|{\overline}{G}|{\geqslant}2|{\overline}{C_G(a)}|=2p^2q$.
Thus $|C_G(a')|=|C_G(a)|$ for all ${\overline}{a'}\in {\overline}{P}\setminus\{{\overline}{1}\}$. Now, the equalities $d(P,G)=d({\langlex\rangle},G)$ and $d({\langlea\rangle},G)=d({\langleb\rangle},G)$ leads us to the final contradiction. The proof is complete. Having proved the above major lemma, we need yet to state two rather easy related results before proving our main classification theorem. \[d(&lt;x&gt;,G)&lt;&gt;d(&lt;y&gt;,G)\] Let $G$ be a finite group and ${\overline}{x},{\overline}{y}\in{\overline}{G}$ be elements of distinct prime orders $p$ and $q$ such that ${\overline}{C_G(x)}$ and ${\overline}{C_G(y)}$ have prime power orders. Then either ${\overline}{G}$ is a group of order $pq$ or $d({\langlex\rangle},G)\neq d({\langley\rangle},G)$. If the equality $d({\langlex\rangle},G)=d({\langley\rangle},G)$ holds, then $$q(|{\overline}{G}|+(p-1)|{\overline}{C_G(x)}|)=p(|{\overline}{G}|+(q-1)|{\overline}{C_G(y)}|).$$ Assume $p>q$. As $|{\overline}{C_G(x)}|$ divides the right hand side of the above equality, it follows that $|{\overline}{C_G(x)}|$ divides $p$, hence $|{\overline}{C_G(x)}|=p$. Then $$|{\overline}{G}|=pq\left(1-\frac{(q-1)(|{\overline}{C_G(y)}|/q-1)}{p-q}\right)<pq,$$ if $|{\overline}{C_G(y)}|>q$. Thus $|{\overline}{C_G(y)}|=q$, which implies that $|{\overline}{G}|=pq$. The following lemma gives a partial answer to Conjecture \[|D(G)|&gt;=l\_M(G/Z(G))+1?\]. Note that the groups $G/Z(G)$ in the lemma are classified in [@wb-gt]. \[|D(G)|&gt;=Omega(G/Z(G))+1\] Let $G$ be a finite non-nilpotent group such that nontrivial elements of $G/Z(G)$ have prime power orders. Then $|{\mathcal{D}}(G)|{\geqslant}l_M(G/Z(G))+1$. Let $H$ and $K$ be subgroups of $G$ such that $Z(G){\leqslant}H<K$. If $d(H,G)=d(K,G)$, then $K=HC_K(g)$ for all $g\in G$. Let $p$ be a prime divisor of ${\overline}{G}$ such that either $p\nmid [K:H]$ or $[K:H]$ is divisible by $pq$ for some prime $q\neq p$. If ${\overline}{g}\in{\overline}{G}$ is a $p$-element, then ${\overline}{C_K(g)}$ is a $p$-subgroup of ${\overline}{K}$ so that $|HC_K(g)|=|H|[C_K(g):C_H(g)]\neq |K|$. Thus $d(K,G)<d(H,G)$ by Lemma \[d(K,G)&lt;=d(H,G)\], from which the result follows. Now, we are able to classify all non-nilpotent finite groups with five relative commutativity degrees. ***Proof of Theorem \[non-nilpotent\].*** By Lemma \[G/Z(G) has prime power order elements\], we know that non-trivial elements of ${\overline}{G}$ have non-cubic prime power orders. First we show that ${\overline}{G}$ is a $\{p,q\}$-group. Suppose on the contrary that $|{\overline}{G}|$ is divisible by three distinct primes $p,q,r$ and ${\overline}{a},{\overline}{b},{\overline}{c}$ are elements of ${\overline}{G}$ of orders $p,q,r$, respectively. By Lemma \[d(&lt;x&gt;,G)&lt;&gt;d(&lt;y&gt;,G)\], we can assume that $$d(G)<d({\langlea\rangle},G)<d({\langleb\rangle},G)<d({\langlec\rangle},G)<1.$$ By Lemmas \[d(K,G)&lt;=d(H,G)\] and \[CHgABH\], ${\langle{\overline}{a}\rangle}$ is a maximal subgroup of ${\overline}{G}$, which implies that ${\overline}{G}$ is a Frobenius group whose kernel and complements are both prime power groups, a contradiction. Thus ${\overline}{G}$ is a $\{p,q\}$-group so that $|{\overline}{G}|=p^mq^n$ for some $m,n{\geqslant}1$. Furthermore, as $|{\overline}{G}|\neq pq$, Lemma \[|D(G)|&gt;=Omega(G/Z(G))+1\] yields $m+n=3$ or $4$. Let $P$ and $Q$ be a Sylow $p$-subgroup and a Sylow $q$-subgroup of $G$, respectively. First assume that $|{\overline}{G}|=p^2q^2$. Then all non-central $p$-elements (resp. 
$q$-elements) of $G$ have the same centralizer sizes, say $p^c|Z|$ for some $c\in\{1,2\}$ (resp. $q^d|Z|$ for some $d\in\{1,2\}$). If ${\overline}{P^*}$ and ${\overline}{Q^*}$ denote maximal subgroups of ${\overline}{P}$ and ${\overline}{Q}$, respectively, then at least two numbers among $$d(P,G),\ d(P^*,G),\ d(Q,G),\ d(Q^*,G)$$ must be equal. Examining all possible cases, it yields $c=d=2$ and $d(P,G)=d(Q,G)$. Hence $P$ and $Q$ are abelian. Then $P\cap P^g=Q\cap Q^{g'}=Z(G)$ for all $g\in G\setminus N_G(P)$ and $g'\in G\setminus N_G(Q)$. If $q^j=[G:N_G(P)]$ and $p^i=[G:N_G(Q)]$, then as conjugates of ${\overline}{P}\setminus\{{\overline}{1}\}$ and ${\overline}{Q}\setminus\{{\overline}{1}\}$ partition ${\overline}{G}\setminus\{{\overline}{1}\}$, we must have $$p^2q^2-1=q^j(p^2-1)+p^i(q^2-1),$$ which has no solutions for primes $p$ and $q$ assuming that $i,j\in\{1,2\}$. Therefore, $|{\overline}{G}|\neq p^2q^2$ and we can assume that $|{\overline}{Q}|=q$. In what follows, ${\overline}{P_i}$ stands for any subgroup of ${\overline}{P}$ of order $p^i$ for $i=1,\ldots,m$. From the proof Lemma \[|D(G)|&gt;=Omega(G/Z(G))+1\], it follows that all non-central $p$-elements of $G$ have the same centralizer size, say $p^c|Z|$. Clearly, $P$ is abelian if $c{\geqslant}2$ so that either $p^c=p$ or $p^c=p^m$. A simple verification shows that for a subgroup $P^*$ of $P$ we have $d(P^*,G)=d(Q,G)$ if and only if $P=P^*$ is abelian. Hence $P$ is abelian when $|{\overline}{P}|=p^3$ otherwise the following elements $$d(G),\ d(P_3,G),\ d(P_2,G),\ d(P_1,G),\ d(Q,G),\ 1$$ of ${\mathcal{D}}(G)$ are pairwise distinct, which is a contradiction. Suppose $N_G(Q)\neq Q$. If $H$ denotes a subgroup of $N_G(Q)$ such that $|{\overline}{H}|=pq$, then simple computations show that $d(H,G)\neq d(P_i,G)$ for $i=1,\ldots,m$. Since $d(G)<d(P_m,G)<\cdots<d(P_1,G)<1$, it follows that $m=2$. In particular, we must have $d(P,G)=d(Q,G)$, which implies that $P$ is abelian as mentioned in the previous paragraph. Since ${\overline}{H}\cong C_q\rtimes C_p$ is non-abelian, we have $p<q$ so that $H{\trianglelefteq}G$ as $[{\overline}{G}:{\overline}{H}]$ is the smallest prime dividing $|{\overline}{G}|$. Hence ${\overline}{Q}{\trianglelefteq}{\overline}{G}$. The fact that ${\mathrm{Aut}}({\overline}{Q})$ is cyclic and $C_{{\overline}{G}}({\overline}{Q})={\overline}{Q}$ implies that ${\overline}{P}$ is cyclic. Therefore $G$ is a group as in part (i). Now, assume that $N_G(Q)=Q$. Then ${\overline}{G}={\overline}{P}\rtimes{\overline}{Q}$ is a Frobenius group. Assume ${\overline}{G}$ has a normal subgroup ${\overline}{P^*}$ of order $p^k$ for some $1{\leqslant}k<m$. Then $$d(P^*Q,G)=\frac{p^mq+(p^k-1)p^c+p^k(q-1)q}{p^{k+m}q^2}\neq\frac{p^mq+(p^i-1)p^c}{p^{i+m}q}=d(P_i,G)$$ for $i=1,\ldots,m$. Since $d(G)<d(P_m,G)<\cdots<d(P_1,G)<1$, we must have $m=2$ and $k=1$. Furthermore, as the elements $$d(G),\ d(P,G),\ d(P^*Q,G),\ d(P^*,G),\ 1$$ of ${\mathcal{D}}(G)$ are pairwise distinct and $d(Q,G)\neq d(P^*,G),d(P^*Q,G)$ by Lemmas \[d(&lt;x&gt;,G)&lt;&gt;d(&lt;y&gt;,G)\] and \[|D(G)|&gt;=Omega(G/Z(G))+1\], we must have $d(Q,G)=d(P,G)$, which yields $P$ is abelian as mentioned above. Hence $G$ is a group as in parts (ii) or (iii). Finally, assume that ${\overline}{P}$ is a minimal normal subgroup of ${\overline}{G}$. Then ${\overline}{P}$ is an elementary abelian $p$-group. If $|{\overline}{P}|=p^2$, then part (iii) and Theorem \[|D(G)|=4\](ii) yield $c=1$ and $P$ is non-abelian, which implies that $G$ is a group as in part (iv). 
Now, assume that $|{\overline}{P}|=p^3$. Then $$d(P_i,G)=\frac{|Z||G|+(p^i-1)p^c|Z|^2}{p^i|Z||G|}=\frac{p^{3-c}q+p^i-1}{p^{3+i-c}q}$$ for $i=1,2,3$. On the other hand, we must have $d(Q,G)=d(P_i,G)$ for some $i\in\{2,3\}$ as $d(Q,G)\neq d(P_1,G)$ by Lemma \[d(&lt;x&gt;,G)&lt;&gt;d(&lt;y&gt;,G)\]. A simple verification shows that $i=c=3$, hence $P$ is abelian and $G$ is a group as in part (v). The proof is complete. $\Box$

W. Bannuscher and G. Tiedt, On a theorem of Deaconescu, *Rostock. Math. Kolloq.* **47** (1994), 23–26.

F. Barry, D. MacHale, and Á. Ní Shé, Some supersolvability conditions for finite groups, *Math. Proc. R. Ir. Acad.* **106**A(2) (2006), 163–177.

R. Barzegar, A. Erfanian, and M. Farrokhi D. G., Finite groups with three relative commutativity degrees, *Bull. Iranian Math. Soc.* **39**(2) (2013), 271–280.

S. Eberhard, Commuting probabilities of finite groups, *Bull. Lond. Math. Soc.* **47**(5) (2015), 796–808.

P. Erdős and P. Turán, On some problems of a statistical group-theory, IV, *Acta Math. Acad. Sci. Hungar.* **19** (1968), 413–435.

A. Erfanian and M. Farrokhi D. G., Finite groups with four relative commutativity degrees, *Algebra Colloq.* **22**(3) (2015), 449–458.

A. Erfanian, R. Rezaei, and P. Lescot, On the relative commutativity degree of a subgroup of a finite group, *Comm. Algebra* **35**(12) (2007), 4183–4197.

I. V. Erovenko and B. Sury, Commutativity degrees of wreath products of finite abelian groups, *Bull. Aust. Math. Soc.* **77**(1) (2008), 31–36.

M. Farrokhi D. G. and H. Safa, Subgroups with large relative commutativity degree, *Quaest. Math.* **40**(7) (2017), 973–979.

R. M. Guralnick and G. R. Robinson, On the commuting probability in finite groups, *J. Algebra* **300** (2006), 509–528.

W. H. Gustafson, What is the probability that two group elements commute?, *Amer. Math. Monthly* **80** (1973), 1031–1034.

R. Heffernan, D. MacHale, and Á. Ní Shé, Restrictions on commutativity ratios in finite groups, *Int. J. Group Theory* **3**(4) (2014), 1–12.

P. Hegarty, Limit points in the range of the commuting probability function on finite groups, *J. Group Theory* **16**(2) (2013), 235–247.

P. Lescot, Isoclinism classes and commutativity degrees of finite groups, *J. Algebra* **177** (1995), 847–869.

P. Lescot, H. N. Nguyen, and Y. Yang, On the commuting probability and supersolvability of finite groups, *Monatsh. Math.* **174**(4) (2014), 567–576.

D. J. S. Robinson, *A Course in the Theory of Groups*, Second Edition, Springer-Verlag, New York, 1996.

D. J. Rusin, What is the probability that two elements of a finite group commute?, *Pacific J. Math.* **82** (1979), 237–247.
---
abstract: 'The Metropolis–Hastings algorithm allows one to sample asymptotically from any probability distribution $\pi$ admitting a density with respect to a reference measure, also denoted $\pi$ here, which can be evaluated pointwise up to a normalising constant. There has recently been much work devoted to the development of variants of the Metropolis–Hastings update which can handle scenarios where such an evaluation is impossible, and yet are guaranteed to sample from $\pi$ asymptotically. The most popular approach to have emerged is arguably the pseudo-marginal Metropolis–Hastings algorithm which substitutes an unbiased estimate of an unnormalised version of $\pi$ for $\pi$ [@Lin_2000; @Beaumont_2003; @Andrieu_and_Roberts_2009]. Alternative pseudo-marginal algorithms relying instead on unbiased estimates of the Metropolis–Hastings acceptance ratio have also been proposed [@Neal_2004; @Murray_et_al_2006; @Nicholls_et_al_2012]. These algorithms can have better properties than standard pseudo-marginal algorithms. Convergence properties of both classes of algorithms are known to depend on the variability (in the sense of the convex order) of the estimators involved [@Andrieu_and_Vihola_2014], and reduced variability is guaranteed to decrease the asymptotic variance of ergodic averages and will shorten the “burn-in” period, or convergence to equilibrium, in most scenarios of interest. A simple approach to reduce variability, amenable to parallel computations, consists of averaging independent estimators. However, while averaging estimators of $\pi$ in a pseudo-marginal algorithm retains the guarantee of sampling from $\pi$ asymptotically, naive averaging of acceptance ratio estimates breaks detailed balance, leading to incorrect results. We propose an original methodology which allows for a correct implementation of this idea. We establish theoretical properties which parallel those available for the standard pseudo-marginal algorithms discussed above. We demonstrate the interest of the approach on various inference problems involving doubly intractable distributions, latent variable models, model selection, and state-space models. In particular we show that convergence to equilibrium can be significantly shortened, therefore offering the possibility to reduce a user’s waiting time in a generic fashion when a parallel computing architecture is available.'
author:
- 'Christophe Andrieu$^{*}$, Arnaud Doucet$^{\dagger}$, Sinan Yildirim$^{+}$ and Nicolas Chopin$^{\blacklozenge}$'
bibliography:
- 'myrefs\_thesis.bib'
title: 'On the utility of Metropolis-Hastings with asymmetric acceptance ratio'
---

$^{*}$School of Mathematics, University of Bristol, U.K.\
$^{\dagger}$Department of Statistics, University of Oxford, U.K.\
$^{+}$Faculty of Engineering and Natural Sciences, Sabanci University, Turkey.\
$^{\blacklozenge}$ENSAE, France.

Keywords: Annealed Importance Sampling; Doubly intractable distributions; Intractable likelihood; Markov chain Monte Carlo; Reversible jump Monte Carlo; Sequential Monte Carlo; State-space models.

Introduction\[sec: Introduction\]
==================================

Suppose we are interested in sampling from a given probability distribution $\pi$ on some measurable space $(\mathsf{X},\mathcal{X})$. When it is impossible or too difficult to generate perfect samples from $\pi$, one practical recourse is to use a Markov chain Monte Carlo (MCMC) algorithm which generates an ergodic Markov chain $\{X_{n},n\geq0\}$ whose invariant distribution is $\pi$.
Among MCMC methods, the Metropolis–Hastings (MH) algorithm plays a central rôle. The MH update proceeds as follows: given $X_{n}=x$ and a Markov transition kernel $q\big(x,\cdot\big)$ on $(\mathsf{X},\mathcal{X})$, we propose $y\sim q(x,\cdot)$ and set $X_{n+1}=y$ with probability $\alpha(x,y):=\min\left\{ 1,r(x,y)\right\} $, where $$r(x,y):=\frac{\pi({\rm d}y)q(y,{\rm d}x)}{\pi({\rm d}x)q(x,{\rm d}y)}\label{eq:genericMHacceptratio}$$ for $(x,y)\in\mathsf{S}\subset\mathsf{X}^{2}$ (see Appendix \[sec: A general framework for MPR and MHAAR algorithms\] for a definition of $\mathsf{S}$) is a well-defined Radon–Nikodym derivative, and $r(x,y)=0$ otherwise. When the proposed value $y$ is rejected, we set $X_{n+1}=x$. We will refer to $r(x,y)$ as the acceptance ratio. The transition kernel of the Markov chain $\{X_{n},n\geq0\}$ generated with the MH algorithm with proposal kernel $q(\cdot,\cdot)$ is $$P(x,{\rm d}y)=q(x,{\rm d}y)\alpha(x,y)+\rho(x)\delta_{x}({\rm d}y),\quad x\in\mathsf{X},\label{eq: MH transition kernel}$$ where $\rho(x)$ is the probability of rejecting a proposed sample when $X_{n}=x$, $$\rho(x):=1-\int_{\mathsf{X}}\alpha(x,y)q(x,{\rm d}y),$$ and $\delta_{x}(\cdot)$ is the Dirac measure centred at $x$. Expectations of functions, say $f$, with respect to $\pi$ can be estimated with $S_{M}:=M^{-1}\sum_{n=1}^{M}f(X_{n})$ for $M\in\mathbb{N}$, which is consistent under mild assumptions. Being able to evaluate the acceptance ratio $r(x,y)$ is obviously central to implementing the MH algorithm in practice. Recently, there has been much interest in expanding the scope of the MH algorithm to situations where this acceptance ratio is intractable, that is, impossible or very expensive to compute. A canonical example of intractability is when $\pi$ can be written as the marginal of a given joint probability distribution for $x$ and some latent variable $z$. A classical way of addressing this problem consists of running an MCMC algorithm targeting the joint distribution, which may however become very inefficient in situations where the size of the latent variable is high; this is for example the case for general state-space models. In what follows, we will briefly review some more effective ways of tackling this problem. To that purpose we will use the following simple running example to illustrate various methods. This example has the advantage that its setup is relatively simple and of clear practical relevance. We postpone developments for much more complicated setups to Sections \[sec: Pseudo-marginal ratio algorithms for latent variable models\] and \[sec: State-space models: SMC and conditional SMC within MHAAR\]. \[ex:doublyintractable\] In this scenario the likelihood function of the unknown parameter $\theta\in\Theta$ for the dataset $\mathfrak{y}\in\mathsf{Y}$, $\ell_{\theta}(\mathfrak{y})$, is only known up to a normalising constant, that is $$\ell_{\theta}(\mathfrak{y})=\frac{g_{\theta}(\mathfrak{y})}{C_{\theta}},$$ where $C_{\theta}$ is unknown, while $g_{\theta}(\mathfrak{y})$ can be evaluated pointwise for any value of $\theta\in\Theta$.
In a Bayesian framework, for a prior density $\eta(\theta)$, we are interested in the posterior density $\pi(\theta)$, with respect to some measure, given by $$\pi(\theta)\propto\eta(\theta)\ell_{\theta}(\mathfrak{y}).$$ With $x=\theta,y=\theta'$ in , the resulting acceptance ratio of the MH algorithm associated to a proposal density $q(\theta,\theta')$ is $$\begin{aligned} r(\theta,\theta')=\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{g_{\theta'}(\mathfrak{y})}{g_{\theta}(\mathfrak{y})}\frac{C_{\theta}}{C_{\theta'}},\label{eq: MCMC acceptance probability with intractable likelihood-1}\end{aligned}$$ which cannot be calculated because of the unknown ratio $C_{\theta}/C_{\theta'}$. While the likelihood function may be intractable, sampling artificial datasets $\mathfrak{z}\sim\ell_{\theta}(\cdot)$ may be possible for any $\theta\in\Theta$, and sometimes computationally cheap. We will describe two known approaches which exploit and expand this property in order to design Markov kernels preserving $\pi(\theta)$ as invariant density. Estimating the target density\[subsec: Estimating the target density\] ---------------------------------------------------------------------- Assume for simplicity that $\pi$ has a probability density with respect to some $\sigma$-finite measure. We will abuse notation slightly by using $\pi$ for both the probability distribution and its density. A powerful, yet simple, method to tackle intractability which has recently attracted substantial interest consists of replacing the value of $\pi(x)$ with a non-negative random estimator $\hat{\pi}(x)$ whenever it is required in the implementation of the MH algorithm above. If $\mathbb{E}[\hat{\pi}(x)]=C\pi(x)$ for all $x\in\mathsf{X}$ and a constant $C>0$, a property we refer somewhat abusively as unbiasedness, this strategy turns out to lead to exact algorithms, that is sampling from $\pi$ is guaranteed at equilibrium under very mild assumptions on $\hat{\pi}(x)$. This approach leads to so called pseudo-marginal algorithms [@Andrieu_and_Roberts_2009]. However, for reasons which will become clearer later, we refer from now on to these techniques as Pseudo-Marginal Target (PMT) algorithms. \[ex: pseudo-marginal for doubly intractable models\] Let $h_{\mathfrak{y}}:\mathsf{Y}\rightarrow[0,\infty)$ be an integrable non-negative function of integral equal to $1$. For a given $\theta$, an unbiased estimate of $\pi(\theta)$ can be obtained via importance sampling whenever the support of $g_{\theta}$ includes that of $h_{\mathfrak{y}}$: $$\begin{aligned} \hat{\pi}^{N}(\theta)\propto\eta(\theta)g_{\theta}(\mathfrak{y})\left\{ \frac{1}{N}\sum_{i=1}^{N}\frac{h_{\mathfrak{y}}(\mathfrak{z}^{(i)})}{g_{\theta}(\mathfrak{z}^{(i)})}\right\} ,\quad\mathfrak{z}^{(i)}\overset{{\rm iid}}{\sim}\ell_{\theta}(\cdot),\quad i=1,\ldots,N,\end{aligned}$$ since the normalised sum is an unbiased estimator of $1/C_{\theta}$. The auxiliary variable method of @Muller_et_al_2006 corresponds to $N=1$. An interesting feature of this approach is that $N$ is a free parameter of the algorithm which reduces the variability of this estimator. It is shown in @Andrieu_and_Vihola_2014 that increasing $N$ in a PMT algorithm always reduces the asymptotic variance of averages using this chain. This is particularly interesting in a parallel computing environment, but also serial for some models. 
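To make the mechanics of a PMT update concrete, the following is a minimal sketch, not the authors' implementation; the callables `eta`, `g`, `sample_ell`, `h`, `q_sample`, `q_dens` and the variable `y_obs` are our own naming and stand for $\eta(\cdot)$, $g_{\theta}(\cdot)$, sampling from $\ell_{\theta}(\cdot)$, $h_{\mathfrak{y}}(\cdot)$, the proposal sampler and density, and the observed dataset $\mathfrak{y}$. It runs a pseudo-marginal MH chain in which $\pi(\theta)$ is replaced by the importance-sampling estimate $\hat{\pi}^{N}(\theta)$ above, and, as in any PMT algorithm, the estimate attached to the current state is recycled until a proposal is accepted.

```python
import random

def pmt_chain(theta0, n_iter, N, q_sample, q_dens, eta, g, sample_ell, h, y_obs,
              rng=random):
    """Pseudo-marginal (PMT) MH sketch for a doubly intractable posterior
    pi(theta) proportional to eta(theta) g_theta(y_obs) / C_theta."""
    def pi_hat(theta):
        # (1/N) sum h(z)/g_theta(z) with z ~ ell_theta is unbiased for 1/C_theta
        zs = [sample_ell(theta) for _ in range(N)]
        inv_c_hat = sum(h(z) / g(theta, z) for z in zs) / N
        return eta(theta) * g(theta, y_obs) * inv_c_hat

    theta, cur_est = theta0, pi_hat(theta0)   # cur_est assumed positive
    chain = [theta]
    for _ in range(n_iter):
        prop = q_sample(theta)
        prop_est = pi_hat(prop)
        ratio = (prop_est * q_dens(prop, theta)) / (cur_est * q_dens(theta, prop))
        if rng.random() < min(1.0, ratio):
            theta, cur_est = prop, prop_est   # estimate kept until next acceptance
        chain.append(theta)
    return chain
```

Increasing `N` here only reduces the variability of $\hat{\pi}^{N}(\theta)$; the guarantee of sampling from $\pi$ at equilibrium holds for any $N\geq1$.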
We illustrate this numerically on a simple Ising model (see details in Section \[subsec: Numerical example: the Ising model\]) for a $20\times20$ lattice and $h_{\mathfrak{y}}(\mathfrak{z})=g_{\hat{\theta}}\big(\mathfrak{z}\big)$, where $\hat{\theta}$ is an approximation of the maximum likelihood estimator of $\theta$ for the data $\mathfrak{y}$. In Figure \[fig:IACIsingPseudoMarginal\] we report the estimated integrated auto-covariance (IAC) for the identity, that is $\lim_{M\rightarrow\infty}M\mathsf{\mathbb{\mathsf{var}}}\big(S_{M}\big)/\mathbb{\mathsf{var}}_{\pi}(f)$ for the function $f(\theta)=\theta$, as a function of $N$ and values of $\hat{\theta}$. The results are highly dependent on the value of $\hat{\theta}$, but adjusting $N$ allows one to compensate for a wrong choice of this parameter. This is important in practice since for more complicated scenarios obtaining a good approximation of the maximum likelihood estimator of $\theta$ may be difficult. ![IAC of the algorithm as a function of $N$.\[fig:IACIsingPseudoMarginal\]](Potts_IAC_vs_N_in_Pseudo_marginal_algorithm) Estimating the acceptance ratio \[subsec: Estimating the acceptance ratio\] --------------------------------------------------------------------------- One can in fact push the idea of replacing algebraic expressions with estimators further. Instead of approximating the numerator and denominator of the acceptance ratio $r(x,y)$ independently, it is indeed possible to use directly estimators of the acceptance ratio $r(x,y)$ and still obtain algorithms guaranteed to sample from $\pi$ at equilibrium. We will refer to these algorithms as Pseudo-Marginal Ratio (PMR) algorithms. A general framework is described in @Andrieu_and_Vihola_2014 as well as in Section \[subsec: Pseudo-marginal ratio algorithms\], but this idea has appeared earlier in various forms in the literature, see e.g. @Nicholls_et_al_2012 and the references therein. An interesting feature of PMR algorithms is that we estimate the ratio $r(x,y)$ afresh whenever it is required. On the contrary, in a PMT framework, if the estimate $\hat{\pi}(x)/C$ of the current state significantly overestimates $\pi(x)$, this results in poor performance as the algorithm will typically reject many transitions away from $x$ as the same estimate of $\pi(x)$ is used until a proposal is accepted. In the following continuation of Example \[ex:doublyintractable\], we present a particular case of this PMR idea proposed by @Murray_et_al_2006. \[ex: exchange algorithm\] The exchange algorithm of @Murray_et_al_2006 is motivated by the realisation that while for $\mathfrak{z}\sim\ell_{\theta'}(\cdot)$ and any $\mathfrak{y}\in\mathsf{Y}$, $h_{\mathfrak{y}}(\mathfrak{z})/g_{\theta'}(\mathfrak{z})$ is an unbiased estimator of $1/C_{\theta'}$, the particular choice $h_{\mathfrak{y}}(\mathfrak{z})=g_{\theta}(\mathfrak{z})$ leads to an unbiased estimator $g_{\theta}(\mathfrak{z})/g_{\theta'}(\mathfrak{z})$ of $C_{\theta}/C_{\theta'}$. We can expect this estimator to have a reasonable variance when $\theta$ and $\theta'$ are close if $\theta\mapsto g_{\theta}(\mathfrak{z})$ satisfies some form of continuity. This suggests the following algorithm. 
Given $\theta\in\Theta$, sample $\theta'\sim q(\theta,\cdot)$, then $\mathfrak{z}\sim\ell_{\theta'}(\cdot)$ and use the acceptance ratio $$\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{g_{\theta'}(\mathfrak{y})}{g_{\theta}(\mathfrak{y})}\frac{g_{\theta}(\mathfrak{z})}{g_{\theta'}(\mathfrak{z})},\label{eq: Murray's acceptance ratio-1}$$ which is an unbiased estimator of the acceptance ratio in . The remarkable property of this algorithm is that it admits $\pi(\theta)$ as an invariant distribution and hence, under mild exploration related assumptions, it is guaranteed to produce samples asymptotically distributed according to $\pi$. Contribution\[subsec: Contribution\] ------------------------------------ As for PMT algorithms, it is natural to ask whether it is possible to further improve the performance of PMR algorithms by reducing the variability of the acceptance ratio estimator by averaging a number of such estimators, while preserving the target distribution $\pi$ invariant. We shall see that, unfortunately, such naïve averaging approach does not work for the PMR methods currently available as it breaks the reversibility of the kernels with respect to $\pi$. A contribution of the present paper is the introduction of a novel class of PMR algorithms which can exploit acceptance ratio estimators obtained through averaging and sample from $\pi$ at equilibrium. These algorithms described in Section \[sec: Pseudo-marginal ratio algorithms using averaged acceptance ratio estimators\] naturally lend themselves to parallel computations as independent ratio estimators can be computed in parallel at each iteration. In this respect, our methodology contributes to the emerging literature on the use of parallel processing units, such as Graphics Processing Units (GPUs) or multicore chips for scientific computation @lee2010utility [@suchard2010understanding]. We show that this generic procedure is guaranteed to decrease the asymptotic variance of ergodic averages as the number of independent ratios $N$ one averages increases and that the burn-in period will be reduced in most scenarios. The latter is particularly relevant since exact and generic methods to achieve this are scarce [@sohn1995parallel], in contrast with variance reduction techniques for which better embarrassingly parallel solutions [@doi:10.1093/biomet/asx031; @bornn2017use] and/or post-processing methods are available [@delmas2009does; @dellaportas2012control]. We demonstrate experimentally its performance gain for the exchange algorithm. This new class of PMR algorithms can be understood as being a particular instance of a more general principle which we exploit further in this paper, beyond the above example. Let $Q_{1},Q_{2}\colon\mathsf{X}\times\mathcal{X}\rightarrow[0,1]$ be a pair of kernels such that the following Radon-Nikodym derivative $$r_{1}(x,y):=\frac{\pi({\rm d}y)Q_{2}(y,{\rm d}x)}{\pi({\rm d}x)Q_{1}(x,{\rm d}y)}$$ is well defined for $(x,y)$ on some symmetric set $\mathsf{S}$ and set $r_{1}(x,y)=0$ otherwise. This can be thought of as an asymmetric version of the standard MH acceptance ratio and naturally leads to two questions. 1. Assuming that sampling from $Q_{1}(x,\cdot)$ and $Q_{2}(x,\cdot)$ for any $x\in\mathsf{X}$ is feasible and that $r_{1}(x,y)$ is tractable, can one design a correct MCMC algorithm for $\pi$ that involves simulating from $Q_{1}(x,\cdot)$ and $Q_{2}(x,\cdot)$ and evaluating $r_{1}(x,y)$ ? 2. 
Assuming the answer to the above is positive, can this additional degree of freedom be beneficial in order to design correct MCMC algorithms with practically appealing features e.g. accelerated convergence? The answer to the first question is unsurprisingly yes, and we will refer to the corresponding class of algorithms as MH with Asymmetric Acceptance Ratio (MHAAR). MHAAR has already been exploited in some specific contexts [@tjelmeland-eidsvik-2004; @andrieu2008tutorial], but its best known application certainly remains the reversible jump MCMC methodology of @Green_1995. However the way we take advantage of this additional flexibility seems completely novel. We also note, as detailed in our discussion in Section \[sec: Discussion\], that such asymmetric acceptance ratios are also at the heart of non-reversible MCMC algorithms which have recently attracted renewed interest in the Physics and Statistical communities [@gustafson1998guided; @turitsyn2011irreversible]. In Appendix \[sec: A general framework for MPR and MHAAR algorithms\], we describe and justify a slightly more general framework to the above which ensures reversibility with respect to $\pi$. The answer to the second question is the object of this paper, and averaging acceptance ratios as suggested earlier is one such application. In Section \[sec: Improving pseudo-marginal ratio algorithms for doubly intractable models\] we further investigate the doubly intractable scenario by incorporating the Annealed Importance Sampling (AIS) mechanism [@Neal_2001; @Murray_et_al_2006] in MHAAR, and explore numerically the performance of MHAAR with AIS on an Ising model. In Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\] we expand the class of problems our methodology can address by considering latent variable models. This leads to important extensions of the original AIS within MH algorithm proposed in @Neal_2004. We demonstrate the efficiency of our MHAAR-based approach by recasting the popular reversible jump MCMC (RJ-MCMC) methodology as a particular case of our framework and illustrate the computational benefits of our novel algorithm in this context on the Poisson change-point model in @Green_1995. In Section \[sec: State-space models: SMC and conditional SMC within MHAAR\], we show how MHAAR can be advantageous in the context of inference in state-space models when it is utilised with sequential Monte Carlo (SMC) algorithms. In particular, we expand the scope of particle MCMC algorithms [@Andrieu_et_al_2010] and show novel ways of using multiple or all possible paths from backward sampling of conditional SMC (cSMC) to estimate the marginal acceptance ratio. In Section \[sec: Discussion\], we provide some discussion and two interesting extensions of MHAAR. Specifically, in Section \[sec: Using SMC based estimators for the acceptance ratio\] we discuss an SMC-based generalisation of our algorithms involving AIS. Furthermore, in Section \[sec: Links to non-reversible algorithms\] we provide a new insight to non-reversible versions of MH algorithms that is relevant to our setting. We briefly demonstrate how non-reversible versions of our algorithms can be obtained with a small modification so that one can benefit both from non-reversibility and the ability to average acceptance ratio estimators. Some of the proofs of the validity of our algorithms as well as additional discussion on the generalisation of the methods can found in the Appendices. 
PMR algorithms using averaged acceptance ratio estimators\[sec: Pseudo-marginal ratio algorithms using averaged acceptance ratio estimators\]
=============================================================================================================================================

PMR algorithms\[subsec: Pseudo-marginal ratio algorithms\]
----------------------------------------------------------

We introduce here generic PMR algorithms, that is MH algorithms relying on an estimator of the acceptance ratio. We then show that in their standard form these algorithms cannot use an estimator of this ratio obtained through averaging independent estimators. A slightly more general framework is provided in @Nicholls_et_al_2012, while a more abstract description is provided in @Andrieu_and_Vihola_2014. To that purpose we introduce a $(\mathsf{U},\mathcal{U})$-valued auxiliary variable $u$ (we use lower-case letters for random variables and realisations throughout) and let $\varphi:\mathsf{U}\rightarrow\mathsf{U}$ be a measurable involution, that is $\varphi=\varphi^{-1}$. Then we introduce a pair of families of proposal distributions $\{Q_{1}(x,\cdot),x\in\mathsf{X}\}$, $\{Q_{2}(x,\cdot),x\in\mathsf{X}\}$ on $(\mathsf{X}\times\mathsf{U},\mathcal{X}\times\mathcal{U})$, where $$Q_{1}\big(x,{\rm d}(y,u)\big):=q(x,{\rm d}y)Q_{x,y}({\rm d}u)\label{eq: PMR Q1}$$ with $Q_{x,y}(\cdot)$ denoting the conditional distribution of $u$ given $x,y\in\mathsf{X}$, and $$Q_{2}\big(x,{\rm d}(y,u)\big):=q(x,{\rm d}y)\bar{Q}_{x,y}({\rm d}u),\label{eq: PMR Q2}$$ where, for any $A\in\mathcal{U}$, we have $$\bar{Q}_{x,y}\big(A\big):=Q_{x,y}\big(\varphi(A)\big),\label{eq: Q_bar}$$ which means that in order to sample $u\sim\bar{Q}_{x,y}(\cdot)$, one can sample $\bar{u}\sim Q_{x,y}(\cdot)$ and set $u=\varphi(\bar{u})$. PMR algorithms are defined by the following transition kernel $$\mathring{P}(x,{\rm d}y)=\int_{\mathsf{U}}Q_{1}\big(x,{\rm d}(y,u)\big)\min\{1,\mathring{r}_{u}(x,y)\}+\mathring{\rho}(x)\delta_{x}({\rm d}y),\label{eq:defPring}$$ where the acceptance ratio is equal, for $(x,y,u)\in\mathring{\mathsf{S}}$ and $\mathring{\mathsf{S}}$ defined similarly to , to $$\begin{aligned} \mathring{r}_{u}(x,y) & :=\frac{\pi({\rm d}y)Q_{2}\big(y,{\rm d}(x,u)\big)}{\pi({\rm d}x)Q_{1}\big(x,{\rm d}(y,u)\big)}\label{eq:accept-ratio-circle}\\ & =r(x,y)\frac{\bar{Q}_{y,x}({\rm d}u)}{Q_{x,y}({\rm d}u)},\nonumber \end{aligned}$$ and to $1$ otherwise. It is clear from that the acceptance ratio $\mathring{r}_{u}(x,y)$ is an unbiased estimator of the standard MH acceptance ratio $r(x,y)$, i.e. $$\int_{\mathsf{U}}\mathring{r}_{u}(x,y)Q_{x,y}({\rm d}u)=r(x,y).\label{eq:unbiasednessratioMH}$$ Due to the particular form of symmetry between $Q_{1}$ and $Q_{2}$ imposed by , $\mathring{P}$ is reversible with respect to $\pi$, as can be seen by considering detailed balance for fixed $u\in\mathsf{U}$; see Theorem \[thm: pseudo-marginal ratio algorithms\] in Appendix \[sec: A general framework for MPR and MHAAR algorithms\]. As far as PMR algorithms are concerned, we call $Q_{1}$ the proposal kernel of PMR and $Q_{2}$ its complementary kernel, owing to the way is constructed. The motivation for this numbering will become clear in Section \[subsec: Pseudo-marginal algorithm with averaged acceptance ratio estimator\], in particular in Remark \[rem: MHAAR with N = 1\]. \[remark:ratiounbiasednotsufficient\]A cautionary remark is in order. When we substitute a non-negative unbiased estimator of $\pi$ for $\pi$ in the MH algorithm, the resulting PMT algorithm is $\pi-$invariant.
However, if we substitute a positive unbiased estimator of $r(x,y)$ for $r(x,y)$ in the MH algorithm then the resulting transition kernel is not necessarily $\pi-$invariant. To establish that $\mathring{P}$ is $\pi-$invariant, we require our estimator to have the specific structure given in . A particular instance of this algorithm was given earlier in the set-up of Example \[ex:doublyintractable\], where $x=\theta$, the random variable $u=\mathfrak{z}$ corresponds to a fictitious dataset used to estimate the ratio of normalising constants, and $\varphi(u)=u$. The need to consider more general transformations $\varphi$ will become apparent in Section \[sec: Improving pseudo-marginal ratio algorithms for doubly intractable models\]. This type of algorithm is motivated by the fact that while in some situations $r(x,y)$ cannot be computed, the introduction of the auxiliary variable $u$ makes the computation of $\mathring{r}_{u}(x,y)$ possible. However, this computational tractability comes at a price. Applying Jensen’s inequality to shows that $$\int_{\mathsf{U}}Q_{x,y}({\rm d}u)\min\{1,\mathring{r}_{u}(x,y)\}\leq\min\{1,r(x,y)\}.$$ Peskun’s result [@Tierney_1998] thus implies that the MCMC algorithm relying on $\mathring{P}$ is always inferior to that using $P$ for various performance measures (see Theorem \[thm:theoreticaljustification\] for details). As pointed out in @Andrieu_and_Vihola_2014, reducing the variability of $\mathring{r}_{u}(x,y)$, for example in the sense of the convex order, for all $(x,y)\in\mathsf{X}^{2}$ will reduce the gap in the inequality above, resulting in improved performance. From the rightmost expression in , a possibility to reduce variability might be to change $Q_{x,y}\big(\cdot\big)$ (and possibly $u$) in such a way that $Q_{x,y}\simeq\bar{Q}_{y,x}$ for all $x,y\in\mathsf{X}$, but this is impossible in most practical scenarios. In contrast, a natural idea consists of averaging ratios $\mathring{r}_{u^{(i)}}(x,y)$’s for, say, realisations $u^{(1)},\ldots,u^{(N)}\overset{{\rm iid}}{\sim}Q_{x,y}(\cdot)$ and using the acceptance ratio $$\mathring{r}_{\mathfrak{u}}^{N}(x,y):=\frac{1}{N}\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(x,y),\label{eq: average acceptance ratio}$$ where $\mathfrak{u}:=u^{(1:N)}=\big(u^{(1)},\ldots,u^{(N)}\big)\in\mathfrak{U}:=\mathsf{U}^{N}$; we drop the dependence on $N$ in order to alleviate notation whenever no ambiguity is possible. While this reduces the variance of the estimator of $r\big(x,y\big)$, this naïve modification of the acceptance rule of $\mathring{P}$ breaks detailed balance with respect to $\pi$. Indeed one can check that with $Q_{1}^{N}(x,{\rm d}(y,\mathfrak{u})):=q(x,{\rm d}y)\prod_{i=1}^{N}Q_{x,y}({\rm d}u^{(i)})$, $h:\mathsf{X}^{2}\rightarrow\mathbb{R}$ a bounded measurable function and using Fubini’s theorem, $$\begin{aligned} \int_{\mathsf{X}\times\mathfrak{U}\times\mathsf{X}}\pi\big({\rm d}x\big)Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u})\big)\min\{1,\mathring{r}_{\mathfrak{u}}^{N}(x,y)\}h\big(x,y\big)\\ \neq\int_{\mathsf{X}\times\mathfrak{U}\times\mathsf{X}}\pi\big({\rm d}y\big)Q_{1}^{N}\big(y,{\rm d}(x,\mathfrak{u})\big) & \min\{1,\mathring{r}_{\mathfrak{u}}^{N}(y,x)\}h\big(x,y\big)\end{aligned}$$ in general. This is best seen in a scenario where $\mathsf{X}$ and $\mathsf{U}$ are finite and $h(x,y)=\mathbb{I}\{x=a\}\mathbb{I}\{y=b\}$ for some $a,b\in\mathsf{X}$, and it can be shown that $\pi$ is not left invariant by the corresponding Markov transition probability.
MHAAR for averaging PMR estimators \[subsec: Pseudo-marginal algorithm with averaged acceptance ratio estimator\]
-----------------------------------------------------------------------------------------------------------------

We show here how MHAAR updates can be used to exploit the averaged acceptance ratio in , while preserving $\pi-$reversibility. Our novel scheme is described in Algorithm \[alg: MHAAR for Pseudo-marginal ratio\]. For $m\in\mathbb{N}$ and $w_{1},\ldots,w_{m}\in\mathbb{R}_{+}$ we let $\mathcal{P}\big(w_{1},\ldots,w_{m}\big)$ denote the probability distribution of the random variable $\omega$ on $[m]:=\{1,\ldots,m\}$ such that $\mathbb{P}(\omega=k)\propto w_{k}$.

Sample $y\sim q(x,\cdot)$ and $v\sim\mathcal{U}(0,1)$.\
If $v\leq1/2$, sample $u^{(1)},\ldots,u^{(N)}\overset{{\rm iid}}{\sim}Q_{x,y}(\cdot)$ and $k\sim\mathcal{P}\big(\mathring{r}_{u^{(1)}}(x,y),\ldots,\mathring{r}_{u^{(N)}}(x,y)\big)$; then, with probability $\min\big\{1,\mathring{r}_{\mathfrak{u}}^{N}(x,y)\big\}$, accept $y$, and otherwise remain at $x$.\
If $v>1/2$, sample $k\sim\mathcal{U}\{1,\ldots,N\}$, $u^{(k)}\sim\bar{Q}_{x,y}(\cdot)$ and $u^{(i)}\overset{{\rm iid}}{\sim}Q_{y,x}(\cdot)$ for $i\neq k$; then, with probability $\min\big\{1,1/\mathring{r}_{\mathfrak{u}}^{N}(y,x)\big\}$, accept $y$, and otherwise remain at $x$.

The unusual step in this update is the random choice between two sampling mechanisms for the auxiliary variables $u^{(1)},\ldots,u^{(N)}$ and the fact that depending on this choice either $\mathring{r}_{\mathfrak{u}}^{N}(x,y)$ or $1/\mathring{r}_{\mathfrak{u}}^{N}(y,x)$ is used. Apart from the reversible jump MCMC context [@Green_1995] and the specific uses in @tjelmeland-eidsvik-2004 [@andrieu2008tutorial], this type of asymmetric update has rarely been used; see Appendix \[subsec: Generalisation and suboptimality\] for an extensive discussion and Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\] onwards for other applications. The probability distributions corresponding to the two proposal mechanisms in Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] are given by $$\begin{aligned} Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big) & :=q(x,{\rm d}y)\prod_{i=1}^{N}Q_{x,y}({\rm d}u^{(i)})\frac{\mathring{r}_{u^{(k)}}(x,y)}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(x,y)},\\ Q_{2}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big) & :=q(x,{\rm d}y)\frac{1}{N}\bar{Q}_{x,y}({\rm d}u^{(k)})\prod_{i=1,i\neq k}^{N}Q_{y,x}(\mathrm{d}u^{(i)}),\end{aligned}$$ and the corresponding Markov transition kernel by $$\begin{aligned} & \mathring{P}^{N}(x,{\rm d}y):=\frac{1}{2}\left[\int_{\mathfrak{U}\times[N]}Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)\min\left\{ 1,\mathring{r}_{\mathfrak{u}}^{N}(x,y)\right\} +\mathring{\rho}_{1}(x)\delta_{x}({\rm d}y)\right]\nonumber \\ & \quad\quad\quad\quad\quad\quad\quad+\frac{1}{2}\left[\int_{\mathfrak{U}\times[N]}Q_{2}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)\min\left\{ 1,1/\mathring{r}_{\mathfrak{u}}^{N}(y,x)\right\} +\mathring{\rho}_{2}(x)\delta_{x}({\rm d}y)\right],\label{eq:PcircleN}\end{aligned}$$ where $\mathring{\rho}_{1}(x)$ and $\mathring{\rho}_{2}(x)$ are the rejection probabilities for each sampling mechanism. We establish the $\pi-$reversibility of $\mathring{P}^{N}$ in Theorem \[thm:theoreticaljustification\]. \[rem:samplingkcanbeomitted\]It is necessary to include the variable $k$ in $Q_{1}^{N}$ and $Q_{2}^{N}$ to obtain tractable acceptance ratios validating the algorithm but, practically, its value is clearly redundant in Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] and sampling $k$ is therefore not required. \[rem: MHAAR with N = 1\]$Q_{1}^{N}$ and $Q_{2}^{N}$ reduce to $Q_{1}$ and $Q_{2}$ in and when $N=1$, in which case $k$ becomes redundant. This implies generality over PMR algorithms even for $N=1$ (although probably not a useful one), in the sense that in MHAAR one can also propose from $Q_{2}$.
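As a complement to the displays above, the following is a minimal sketch of one MHAAR transition under our own naming conventions (none of these identifiers come from the paper): `q_sample(x)` draws from $q(x,\cdot)$, `sample_Q(x, y)` draws $u\sim Q_{x,y}(\cdot)$, `sample_Qbar(x, y)` draws $u\sim\bar{Q}_{x,y}(\cdot)$, and `r_hat(x, y, u)` returns the single-sample estimate $\mathring{r}_{u}(x,y)$. Following Remark \[rem:samplingkcanbeomitted\], the index $k$ is not sampled.

```python
import random

def mhaar_step(x, N, q_sample, sample_Q, sample_Qbar, r_hat, rng=random):
    """One transition of the MHAAR kernel with an averaged acceptance ratio."""
    y = q_sample(x)
    if rng.random() < 0.5:
        # first mechanism: u^(1..N) iid from Q_{x,y}, accept with min{1, rbar(x,y)}
        us = [sample_Q(x, y) for _ in range(N)]
        r_bar = sum(r_hat(x, y, u) for u in us) / N
        accept = rng.random() < min(1.0, r_bar)
    else:
        # second mechanism: one draw from Qbar_{x,y}, the others iid from Q_{y,x},
        # accept with min{1, 1 / rbar(y,x)}
        us = [sample_Qbar(x, y)] + [sample_Q(y, x) for _ in range(N - 1)]
        r_bar = sum(r_hat(y, x, u) for u in us) / N
        inv_r_bar = float("inf") if r_bar == 0 else 1.0 / r_bar
        accept = rng.random() < min(1.0, inv_r_bar)
    return y if accept else x
```

In the exchange-type example below, `sample_Q(x, y)` would simulate a synthetic dataset from $\ell_{y}(\cdot)$ and `r_hat` would evaluate a ratio of unnormalised densities, so all $N$ draws and ratio evaluations can be carried out in parallel.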
\[ex:doublyintractaveraging\] As noticed in @Nicholls_et_al_2012, the exchange algorithm [@Murray_et_al_2006] can be recast as a PMR algorithm of the form $\mathring{P}$ given in where $x=\theta,y=\theta',u=\mathfrak{z}$, $\varphi(u)=u$ and $Q_{x,y}$ corresponds to $\ell_{\theta'}$. Hence an extension of this algorithm using an averaged acceptance ratio estimator is given by Algorithm \[alg: MHAAR for Pseudo-marginal ratio\]. Taking into account Remark \[rem:samplingkcanbeomitted\], this takes the following form. Sample $\theta'\sim q(\theta,\cdot)$, then with probability $1/2$ sample $u^{(1)},\ldots,u^{(N)}\overset{{\rm iid}}{\sim}\ell_{\theta'}(\cdot)$ and compute $$\mathring{r}_{\mathfrak{u}}^{N}(\theta,\theta')=\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{g_{\theta'}(\mathfrak{y})}{g_{\theta}(\mathfrak{y})}\frac{1}{N}\sum_{i=1}^{N}\frac{g_{\theta}(u^{(i)})}{g_{\theta'}(u^{(i)})},$$ or (i.e. with probability 1/2) sample $u^{(1)}\sim\ell_{\theta'}(\cdot)$ and $u^{(2)},\ldots,u^{(N)}\overset{{\rm iid}}{\sim}\ell_{\theta}(\cdot)$, and compute $\mathring{r}_{\mathfrak{u}}^{N}(\theta',\theta)$. This algorithm was implemented for an Ising model (see details in Section \[subsec: Numerical example: the Ising model\]) and numerical simulations are presented in Figure \[fig: Potts exchange with bridging vs asymmetric MCMC\] where the IAC of $f(\theta)=\theta$ is reported as a function of $N$ (red/grey colour). As anticipated, increasing $N$ improves performance. ### Theoretical results on validity and performance of MHAAR The following theorem justifies the theoretical usefulness of Algorithm \[alg: MHAAR for Pseudo-marginal ratio\]. The result follows from Hilbert space techniques and the recent realisation that the convex order plays an important rôle in the characterisation of MH updates based on estimated acceptance ratios [@Andrieu_and_Vihola_2014]. We consider standard performance measures associated to a Markov transition probability $\Pi$ of invariant distribution $\mu$ defined on some measurable space $\big(\mathsf{E},\mathcal{E}\big)$. Let $L^{2}(\mathsf{E},\mu):=\big\{ f\colon\mathsf{E}\rightarrow\mathbb{R},\mathsf{var}_{\mu}(f)<\infty\big\}$ and $L_{0}^{2}(\mathsf{E},\mu):=L^{2}(\mathsf{E},\mu)\cap\{f\colon\mathsf{E}\rightarrow\mathbb{R},\mathbb{E}_{\mu}(f)=0\}$. For any $f\in L^{2}(\mathsf{E},\mu)$ the asymptotic variance is defined as $$\mathsf{var}(f,\Pi):=\lim_{M\rightarrow\infty}\mathsf{var}_{\mu}\left(M^{-1/2}{\textstyle \sum}_{i=1}^{M}f(X_{i})\right),$$ which is guaranteed to exist for reversible Markov chains (although it may be infinite) and for a $\mu-$reversible kernel $\Pi$ its right spectral gap $${\rm Gap}_{R}\left(\Pi\right):=\inf\{\mathcal{E}_{\Pi}(f)\,:\,f\in L_{0}^{2}(\mathsf{E},\mu),\,{\rm var}_{\mu}(f)=1\},$$ where for any $f\in L^{2}\big(\mathsf{E},\mu\big)$ $\mathcal{E}_{\Pi}(f):=\frac{1}{2}\int_{\mathsf{E}}\mu\big({\rm d}x\big)\Pi\big(x,{\rm d}y\big)\big[f(x)-f(y)\big]^{2}$ is the so-called Dirichlet form. The right spectral gap is particularly useful in the situation where $\Pi$ is a positive operator, in which case ${\rm Gap}_{R}\left(\Pi\right)$ is related to the geometric rate of convergence of the Markov chain. \[thm:theoreticaljustification\]With $P$ and $\mathring{P}^{N}$ as defined in and, respectively, 1. For any $N\geq1$ $\mathring{P}^{N}$ is $\pi-$reversible, 2. For all $N$, ${\rm Gap}_{R}(\mathring{P}^{N})\leq{\rm Gap}_{R}(P)$ and $N\mapsto{\rm Gap}_{R}(\mathring{P}^{N})$ is non decreasing, 3. 
For any $f\in L^{2}(\mathsf{X},\pi)$, 1. $N\mapsto\mathcal{E}_{\mathring{P}^{N}}(f)$ (or equivalently first order auto-covariance coefficient) is non decreasing (non increasing), 2. $N\mapsto\mathsf{var}(f,\mathring{P}^{N})$ is non increasing, 3. for all $N$, $\mathsf{var}(f,\mathring{P}^{N})\geq\mathsf{var}(f,P)$. The reversibility follows from the fact that this Markov transition kernel fits in the framework of asymmetric MH updates described in Theorem \[thm:asymmetricMH-1\] in Appendix \[sec: A general framework for MPR and MHAAR algorithms\] after checking that for any $x,y,\mathfrak{u}\in\mathring{\mathsf{S}}^{N}$, $$\begin{aligned} \frac{\pi({\rm d}y)Q_{2}^{N}\big(y;{\rm d}(x,\mathfrak{u},k)\big)}{\pi({\rm d}x)Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)} & =\frac{\pi({\rm d}y)q(y,{\rm d}x)\frac{1}{N}\bar{Q}_{y,x}({\rm d}u^{(k)})\prod_{i\neq k}Q_{x,y}({\rm d}u^{(i)})}{\pi({\rm d}x)q(x,{\rm d}y)Q_{x,y}({\rm d}u^{(k)})\prod_{i\neq k}Q_{x,y}({\rm d}u^{(i)})\times\frac{\mathring{r}_{u^{(k)}}(x,y)}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(x,y)}}\nonumber \\ & =\mathring{r}_{u^{(k)}}(x,y)\frac{1/N}{\frac{\mathring{r}_{u^{(k)}}(x,y)}{N\times\mathring{r}_{\mathfrak{u}}^{N}(x,y)}}=\mathring{r}_{\mathfrak{u}}^{N}(x,y).\label{eq:eq:simplifiedacceptPcircN}\end{aligned}$$ For the other statements we first start by noticing that the expression for the Dirichlet form associated with $\mathring{P}^{N}$ can be rewritten in either of the following simplified forms $$\begin{aligned} \mathcal{E}_{\mathring{P}^{N}} & =\frac{1}{2}\int\pi({\rm d}x)\int_{\mathsf{\mathfrak{U}\times[N]}}Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)\min\{1,\mathring{r}_{\mathfrak{u}}^{N}(x,y)\}\left(f(x)-f(y)\right)^{2}\\ & =\frac{1}{2}\int\pi({\rm d}x)\int_{\mathfrak{U}\times[N]}Q_{2}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)\min\{1,1/\mathring{r}_{\mathfrak{u}}^{N}(y,x)\}\left(f(x)-f(y)\right)^{2}.\end{aligned}$$ This follows from the identities established in and . The expression on the first line turns out to be particularly convenient. A well known result from the convex order literature states that for any $n\geq2$ exchangeable random variables $Z_{1},\ldots,Z_{n}$ and any convex function $\phi$ we have $\mathbb{E}\left[\phi\left(n^{-1}\sum_{i=1}^{n}Z_{i}\right)\right]\leq\mathbb{E}\left[\phi\left((n-1)^{-1}\sum_{i=1}^{n-1}Z_{i}\right)\right]$ whenever the expectations exist [@mullercomparison Corollary 1.5.24]. The two sums are said to be convex ordered. Now since $a\mapsto-\min\{1,a\}$ is convex we deduce that for any $N\geq1$, $x,y\in\mathsf{X}$, $$\int_{\mathsf{U}^{N}}Q_{x,y}^{N}({\rm d}\mathfrak{u})\min\{1,\mathring{r}_{\mathfrak{u}}^{N}(x,y)\}\leq\int_{\mathsf{U}^{N+1}}Q_{x,y}^{N+1}({\rm d}\mathfrak{u})\min\{1,\mathring{r}_{\mathfrak{\mathfrak{u}}}^{N+1}(x,y)\}\label{eq:convexorderingratio}$$ where $Q_{x,y}^{N}(\mathrm{d}\mathfrak{u}):=\prod_{i=1}^{N}Q_{x,y}(\mathrm{d}u^{(i)})$, and consequently for any $f\in L^{2}(\mathsf{X},\pi)$ and $N\geq1$ $$\mathcal{E}_{\mathring{P}^{N+1}}(f)\leq\mathcal{E}_{\mathring{P}^{N}}(f).$$ All the monotonicity properties follow from @Tierney_1998 since $\mathring{P}^{N}$ and $\mathring{P}^{N+1}$ are $\pi-$reversible. The comparisons to $P$ follow from the application of Jensen’s inequality to $a\mapsto\min\{1,a\}$, which leads for any $x,y\in\mathsf{X}$ to $$\int_{\mathfrak{U}}Q_{x,y}^{N}({\rm d}\mathfrak{u})\min\{1,\mathring{r}_{\mathfrak{u}}^{N}(x,y)\}\leq\min\{1,r\big(x,y\big)\},$$ and again using the results of @Tierney_1998. 
This result motivates the practical usefulness of the algorithm, in particular in a parallel computing environment. Indeed, one crucial property of Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] is that in both moves $Q_{1}^{N}(\cdot)$ and $Q_{2}^{N}(\cdot)$, sampling of $u^{(1)},\ldots,u^{(N)}$ and computation of $\mathring{r}_{u^{(1)}}(x,y),\ldots,\mathring{r}_{u^{(N)}}(x,y)$ can be performed in a parallel fashion and offers the possibility to reduce the variance $\mathsf{var}(f,\mathring{P})$ of estimators, but more importantly the burn-in period of algorithms. Indeed one could object that running $M\in\mathbb{N}^{+}$ independent chains in parallel with $N=1$ and combining their averages, instead of using the output from a single chain with $N=M$ would achieve variance reduction. However our point is that the former does not speed up convergence to equilibrium, while the latter will, in general. Unfortunately, while estimating the asymptotic variance $\mathsf{var}(f,\mathring{P}^{N})$ from simulations is achievable, estimating time to convergence to equilibrium is far from standard in general. The following toy example is an exception and illustrates our point. Here we let $\pi$ be the uniform distribution on $\mathsf{X}=\{-1,1\}$, $\mathsf{U}=\{a,a^{-1}\}$ for $a>0$, $Q_{x,-x}(u=a)=1/(1+a)$, $Q_{x,-x}(u=1/a)=a/(1+a)$ and $\varphi(u)=1/u$. In other words $\mathring{P}$ can be reparametrized in terms of $a$ and with the choice $q(x,-x)=1-\theta$ for $(\theta,x)\in[0,1)\times\mathsf{X}$ we obtain $$\begin{aligned} \mathring{P}(x,-x) & =(1-\theta)\left[\frac{1}{1+a}\min\big\{1,a\big\}+\frac{a}{1+a}\min\big\{1,a^{-1}\big\}\right].\end{aligned}$$ Note that there is no need to be more specific than say $Q_{x,x}(u)>0$ for $x,u\in\mathsf{X}\times\mathsf{U}$ as then a proposed “stay” is always accepted. This suggests that we are in fact drawing the acceptance ratio, and corresponds to [Example 8 in @Andrieu_and_Vihola_2014] of their abstract parametrisation of PMR algorithms. Now for $N\geq2$ and $x\in\mathsf{X}$ we have $$\begin{aligned} \mathring{P}^{N}(x,-x) & =\frac{1-\theta}{2}\left[\sum_{k=0}^{N}\beta^{N}(k)\min\big\{1,w_{k}(N)\big\}\right.\\ & \hspace{1.5cm}+\left.\sum_{k=0}^{N}\left(\frac{a}{1+a}\beta^{N-1}(k-1)+\frac{1}{1+a}\beta^{N-1}(k)\right)\min\big\{1,w_{k}^{-1}(N)\big\}\right],\end{aligned}$$ where $\beta^{N}(k)$ is the probability mass function of the binomial distribution of parameters $N$ and $1/(1+a)$ and $w_{k}(N):=ka/N+\big(1-k/N\big)a^{-1}.$The second largest eigenvalue of the corresponding Markov transition matrix is $\lambda_{2}(N)=1-2\mathring{P}^{N}(x,-x)$ from which we find the relaxation time $T_{{\rm relax}}(N):=1/\big(2\mathring{P}^{N}(x,-x)\big)$, and bounds on the mixing time $T_{{\rm mix}}(\epsilon,N)$, that is the number of iterations required for the Markov chain to have marginal distribution within $\epsilon$ of $\pi$, in the total variation distance, @levin2017markov [Theorem 12.3 and Theorem 12.4] $$-(T_{{\rm relax}}(N)-1)\log(2\epsilon)\leq T_{{\rm mix}}(\epsilon,N)\leq-T_{{\rm relax}}(N)\log(\epsilon/2).$$ We define the time reduction, $\gamma(N):=T_{{\rm relax}}(N)/T_{{\rm relax}}(1)$, which is independent of $\theta$ and captures the benefit of MHAAR in terms of convergence to equilibrium. In Fig. \[fig:toy-example-relaxation\] we present the evolution of $N\mapsto\gamma(N)$ for $a=2,5,10$ and $\gamma(1000)$ as a function of $a$. 
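The quantities plotted in Fig. \[fig:toy-example-relaxation\] can be reproduced directly from the expression for $\mathring{P}^{N}(x,-x)$ above; the following is a small sketch of that calculation (our own code, not the authors'; since the factor $1-\theta$ cancels in $\gamma(N)$, we set $\theta=0$).

```python
import math

def binom_pmf(k, n, p):
    # pmf of Binomial(n, p), with value 0 outside {0, ..., n}
    if k < 0 or k > n:
        return 0.0
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def p_flip(N, a):
    """P_ring^N(x, -x) for the two-state toy example, taking theta = 0."""
    p = 1.0 / (1.0 + a)                       # success probability of beta^N

    def w(k):                                 # w_k(N) = k a / N + (1 - k/N) / a
        return k * a / N + (1.0 - k / N) / a

    s1 = sum(binom_pmf(k, N, p) * min(1.0, w(k)) for k in range(N + 1))
    s2 = sum((a * p * binom_pmf(k - 1, N - 1, p) + p * binom_pmf(k, N - 1, p))
             * min(1.0, 1.0 / w(k)) for k in range(N + 1))
    return 0.5 * (s1 + s2)

def gamma(N, a):
    """Time reduction T_relax(N) / T_relax(1) = p_flip(1, a) / p_flip(N, a)."""
    return p_flip(1, a) / p_flip(N, a)

# compare with the reductions of roughly 35%, 65% and 80% reported in the text
print([round(gamma(1000, a), 2) for a in (2, 5, 10)])
```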
As expected, the worse the algorithm corresponding to $\mathring{P}$ is, the more beneficial averaging is: for $a=2,5,10$ we observe running time reductions of approximately $35\%$, $65\%$ and $80\%$, respectively. This suggests that computationally cheap, but possibly highly variable, estimators of the acceptance ratio may be preferable to reduce burn-in when a parallel machine is available and communication costs are negligible.

![\[fig:toy-example-relaxation\]Top left: $a=2$, Top right: $a=5$, Bottom left: $a=10$, Bottom right: evolution of $\gamma(1000)$ as a function of $a$](gain-function-N-a-is-2 "fig:"){width="0.3\textheight"} ![](gain-function-N-a-is-5 "fig:"){width="0.3\textheight"} ![](gain-function-N-a-is-10 "fig:"){width="0.3\textheight"} ![](gain-function-a "fig:"){width="0.3\textheight"}

### Introducing dependence \[subsec: Introducing dependence\]

The following discussion on possible extensions can be omitted on a first reading. There are numerous possible variations around the basic algorithm presented above. A practically important extension related to the order in which variables are drawn is discussed in Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\] in the general context of latent variable models. There is another possible extension worth mentioning here. Close inspection of the proof of $\pi-$reversibility of $\mathring{P}^{N}$ in Theorem \[thm:theoreticaljustification\] suggests that conditional independence of $u^{(1)},\ldots,u^{(N)}$ is not a requirement. Define $\mathfrak{u}^{(-k)}:=\left(u^{(1)},\ldots,u^{(k-1)},u^{(k+1)},\ldots,u^{(N)}\right)$. \[thm:generalisationexchangeable\]Let $N\geq1$ and for any $x,y\in\mathsf{X}$ let $Q_{x,y}^{N}({\rm d}\mathfrak{u})$ be a probability distribution on $\big(\mathfrak{U},\mathcal{U}^{\otimes N}\big)$ such that all its marginals are identical and equal to $Q_{x,y}(\cdot)$, and define $$\begin{aligned} Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big) & :=q(x,{\rm d}y)Q_{x,y}^{N}({\rm d}\mathfrak{u})\frac{\mathring{r}_{u^{(k)}}(x,y)}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(x,y)},\\ Q_{2}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big) & :=q(x,{\rm d}y)\frac{1}{N}\bar{Q}_{x,y}({\rm d}u^{(k)})Q_{y,x}^{N}({\rm d}\mathfrak{u}^{(-k)}\mid u^{(k)}).\end{aligned}$$ Then $\mathring{P}^{N}$ with acceptance ratio $\mathring{r}_{\mathfrak{u}}^{N}(x,y)$ as in is $\pi-$reversible. Further, if $u^{(1)},\ldots,u^{(N)}$ are exchangeable with respect to $Q_{x,y}^{N}({\rm d}\mathfrak{u})$ then all the comparison results in Theorem \[thm:theoreticaljustification\] still hold.
One can check that $$\begin{aligned} \frac{\pi({\rm d}y)Q_{2}^{N}\big(y,{\rm d}(x,\mathfrak{u},k)\big)}{\pi({\rm d}x)Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)} & =\frac{\pi({\rm d}y)q(y,{\rm d}x)\frac{1}{N}\bar{Q}_{y,x}({\rm d}u^{(k)})Q_{x,y}^{N}({\rm d}\mathfrak{u}^{(-k)}\mid u^{(k)})}{\pi({\rm d}x)q(x,{\rm d}y)Q_{x,y}({\rm d}u^{(k)})Q_{x,y}^{N}({\rm d}\mathfrak{u}^{(-k)}\mid u^{(k)})\frac{\mathring{r}_{u^{(k)}}(x,y)}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(x,y)}}\\ & =\mathring{r}_{u^{(k)}}(x,y)\frac{1/N}{\frac{\mathring{r}_{u^{(k)}}(x,y)}{N\times\mathring{r}_{\mathfrak{u}}^{N}(x,y)}}=\mathring{r}_{\mathfrak{u}}^{N}(x,y),\end{aligned}$$ which remains the same as in . The exchangeability assumption ensures that holds. The following is a short discussion of a scenario which may be relevant in practice. Assume that it is possible to sample $u^{\left(1\right)}$ from $Q_{x,y}(\cdot)$ but that this is computationally expensive, as is the case for sampling exactly from Markov random fields such as the Ising model. One could suggest sampling the remaining samples $\mathfrak{u}^{(-1)}$ as defined in $Q_{1}^{N}(\cdot,\cdot)$ using a $Q_{x,y}-$reversible Markov transition probability $K_{x,y}$ (and similarly for $Q_{y,x}(\cdot)$ in $Q_{2}^{N}(\cdot,\cdot)$ using $K_{y,x}$), which will in general be far less expensive. Here $Q_{1}^{N}(\cdot,\cdot)$ corresponds to sampling $$u^{(1:N)}\sim Q_{x,y}({\rm d}u^{(1)})K_{x,y}(u^{(1)},{\rm d}u^{(2)})\ldots K_{x,y}(u^{(N-1)},{\rm d}u^{(N)}).$$ In order to describe sampling in $Q_{2}^{N}(\cdot,\cdot)$, we first establish a convenient expression for $Q_{x,y}^{N}({\rm d}\mathfrak{u}^{(-k)}\mid u^{(k)})$ for $(x,y)\in\mathsf{X}^{2}$ and $k=1,\ldots,N$. By reversibility of $K_{x,y}$, we have for $k=1,\ldots,N$ (with the obvious conventions for the empty products when $k=1$ or $k=N$) $$\begin{aligned} Q_{x,y}({\rm d}u^{(1)})\prod_{i=2}^{N}K_{x,y}(u^{(i-1)},{\rm d}u^{(i)})=Q_{x,y}({\rm d}u^{(k)})\prod_{i=2}^{k}K_{x,y}(u^{(i)},{\rm d}u^{(i-1)})\prod_{i=k+1}^{N}K_{x,y}(u^{(i-1)},{\rm d}u^{(i)})\end{aligned}$$ from which one obtains the desired conditional, and deduces that sampling the auxiliary variables in $Q_{2}^{N}(\cdot,\cdot)$ consists of sampling $k\sim\mathcal{U}\{1,2,\ldots,N\}$, $u^{(k)}\sim\bar{Q}_{x,y}(\cdot)$, and then simulating the rest of the chain “forward” and “backward” as follows $$(u^{(k-1)},\ldots,u^{(1)})\sim\prod_{i=2}^{k}K_{y,x}(u^{(i)},{\rm d}u^{(i-1)}),\quad(u^{(k+1)},\ldots,u^{(N)})\sim\prod_{i=k+1}^{N}K_{y,x}(u^{(i-1)},{\rm d}u^{(i)}).$$ Note that in this case, Remark \[rem:samplingkcanbeomitted\] does not hold. While sampling $k$ is still not necessary in $Q_{1}^{N}(\cdot,\cdot)$, sampling $k$ in $Q_{2}^{N}(\cdot,\cdot)$ is required. The last part of the theorem is applicable by averaging over the set of permutations of $[N]$, $$Q_{x,y}^{N}\big({\rm d}\mathfrak{u}\big)=\frac{1}{N!}\sum_{\sigma\in\mathfrak{S}}Q_{x,y}({\rm d}u^{(\sigma(1))})K_{x,y}(u^{(\sigma(1))},{\rm d}u^{(\sigma(2))})\ldots K_{x,y}(u^{(\sigma(N-1))},{\rm d}u^{(\sigma(N))}),$$ and noting that, for $k\in[N]$, using the reversibility as above for each $\sigma\in\mathfrak{S}$ leads to $$Q_{x,y}^{N}\big({\rm d}\mathfrak{u}\big)=Q_{x,y}({\rm d}u^{(k)})\frac{1}{N!}\sum_{\sigma\in\mathfrak{S}}\prod_{i=2}^{\sigma^{-1}(k)}K_{x,y}(u^{(\sigma(i))},{\rm d}u^{(\sigma(i-1))})\prod_{i=\sigma^{-1}(k)+1}^{N}K_{x,y}(u^{(\sigma(i-1))},{\rm d}u^{(\sigma(i))}).$$ We do not investigate this algorithm further here.
Improving PMR algorithms with AIS \[sec: Improving pseudo-marginal ratio algorithms for doubly intractable models\] =================================================================================================================== Before moving on to more complex scenarios in Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\], we focus in this section on the averaging of acceptance ratios in the specific context of our running Example \[ex:doublyintractaveraging\]. The exchange algorithm [@Murray_et_al_2006], described in Example \[ex: exchange algorithm\], exploits the fact that for $\theta,\theta'\in\Theta$ and $u\sim\ell_{\theta'}(\cdot)$, the ratio $g_{\theta}\big(u\big)/g_{\theta'}\big(u\big)$ is an estimator of $C_{\theta}/C_{\theta'}$. Another possible estimator of $C_{\theta}/C_{\theta'}$, based on AIS [@Crooks1998; @Neal_2001], was also used in @Murray_et_al_2006. It has the advantage that it involves a tuning parameter which can be used to reduce the variability of the estimator, and hence improve the theoretical performance of exchange type algorithms. It has recently been established theoretically that this approach can beat the curse of dimensionality by reducing complexity from exponential to polynomial in the problem dimension @2016arXiv161207583A [@beskos2014stability]. This is however at the expense of an additional computational cost. In this section, we show that the AIS based exchange algorithm can be reinterpreted as a PMR algorithm of the form . It is thus straightforward to extend this methodology through Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] so as to use acceptance ratio estimators obtained through averaging. AIS based exchange algorithm and its average acceptance ratio form \[subsec: AIS based exchange algorithm and its average acceptance ratio form\] ------------------------------------------------------------------------------------------------------------------------------------------------- The estimator $g_{\theta}\big(u\big)/g_{\theta'}\big(u\big)$ for $u\sim\ell_{\theta'}(\cdot)$ of the ratio of $C_{\theta}/C_{\theta'}$ may be very variable when the functions $g_{\theta}(\cdot)$ and $g_{\theta'}(\cdot)$ differ too much. The basic idea behind AIS consists of rewriting the ratio of interest as a telescoping product of ratios of normalising constants corresponding to a sequence of artificial probability densities $$\mathscr{P}_{\theta,\theta',T}:=\big\{\pi_{\theta,\theta',t}(\cdot),t=0,\ldots,T+1\big\}$$ for some $T\geq1$ evolving from $\pi_{\theta,\theta',0}(u)=\ell_{\theta'}(u)$ to $\pi_{\theta,\theta',T+1}(u)=\ell_{\theta}(u)$; i.e. $\pi_{\theta,\theta',t}(u)=f_{\theta,\theta',t}(u)/C_{\theta,\theta',t}$ where $f_{\theta,\theta',t}(u)$ can be computed pointwise but $C_{\theta,\theta',t}$ is intractable. More precisely one rewrites $C_{\theta}/C_{\theta'}=\prod_{t=0}^{T}C_{\theta,\theta',t+1}/C_{\theta,\theta',t}$ (with $C_{\theta,\theta',0}=C_{\theta'}$ and $C_{\theta,\theta',T+1}=C_{\theta}$) where the densities $\big\{ f_{\theta,\theta',t}(\cdot),t=1,\ldots,T\big\}$ are such that estimating each term $C_{\theta,\theta',t+1}/C_{\theta,\theta',t}$ can be performed efficiently using the technique above for example. Good performance therefore necessitates that successive unnormalised densities are close (and become ever closer as $T$ increases). 
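To make the telescoping identity concrete, the following is a minimal sketch of the naive estimator it suggests, in which each ratio $C_{\theta,\theta',t+1}/C_{\theta,\theta',t}$ is estimated with a single draw from $\pi_{\theta,\theta',t}$. The routine names are placeholders of our own and, crucially, exact sampling from every intermediate distribution is assumed here for illustration only.

```python
import numpy as np

def telescoping_ratio_estimate(f_list, sample_from, rng):
    """Estimate C_{T+1}/C_0 = prod_{t=0}^{T} C_{t+1}/C_t with one draw per bridge.

    f_list[t](u)        : evaluates the unnormalised density f_{theta,theta',t}(u)
    sample_from(t, rng) : placeholder for an (idealised) exact sampler from the
                          normalised intermediate density pi_{theta,theta',t}
    """
    log_estimate = 0.0
    for t in range(len(f_list) - 1):          # t = 0,...,T
        u_t = sample_from(t, rng)             # u_t ~ pi_{theta,theta',t}
        # E[f_{t+1}(u_t)/f_t(u_t)] = C_{t+1}/C_t when u_t ~ pi_t
        log_estimate += np.log(f_list[t + 1](u_t)) - np.log(f_list[t](u_t))
    return np.exp(log_estimate)
```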
A naive implementation would require exact sampling from each of the intermediate probability distributions but the remarkable fact noticed independently in @Crooks1998 and @Neal_2001 is that the estimators involved in the product may arise from an inhomogeneous Markov chain, therefore rendering the algorithm highly practical. The following proposition establishes that this algorithm is of the same form as $\mathring{P}$ given in . \[prop: AIS MCMC for doubly intractable models\]Assume the set-up of Example \[ex:doublyintractable\] and for all $\theta,\theta'\in\Theta$, let 1. $\mathscr{F}_{\theta,\theta',T}=\big\{ f_{\theta,\theta',t}(\cdot),t=0,\ldots,T+1\big\}$ be a family of tractable unnormalised densities of $\mathscr{P}_{\theta,\theta',T}$ such that for $t=0,\ldots,T$ 1. $f_{\theta,\theta',t}(\cdot)$ and $f_{\theta,\theta',t+1}(\cdot)$ have the same support, 2. for any $u\in\mathsf{Y}$ $$f_{\theta,\theta',0}(u)=g_{\theta'}(u),\quad f_{\theta,\theta',T+1}(u)=g_{\theta}(u),\quad f_{\theta,\theta',t}(u)=f_{\theta',\theta,T+1-t}(u),$$ 2. $\mathscr{R}_{\theta,\theta',T}=\big\{ R_{\theta,\theta',t}(\cdot,\cdot)\colon\mathsf{Y}\times\mathcal{Y}\to[0,1],t=1,\ldots,T\big\}$ be a family of Markov transition kernels such that for any $t=1,\ldots,T$ 1. $R_{\theta,\theta',t}(\cdot,\cdot)$ is $\pi_{\theta,\theta',t}-$reversible, 2. $R_{\theta,\theta',t}(\cdot,\cdot)=R_{\theta',\theta,T+1-t}(\cdot,\cdot)$, 3. $Q_{\theta,\theta'}(\cdot)$ be the probability distributions $\big(\mathsf{U},\mathcal{U}\big)$, where $\mathsf{U}:=\mathcal{\mathsf{Y}}^{T+1}$, defined for $u:=(u_{0},\ldots,u_{T})\in\mathsf{U}$ as $$\begin{aligned} \;Q_{\theta,\theta'}({\rm d}u):= & \ell_{\theta'}({\rm d}u_{0})\prod_{t=1}^{T}R_{\theta,\theta',t}(u_{t-1},{\rm d}u_{t}),\label{eq: Q for doubly intractable with annealing}\end{aligned}$$ and $\varphi$ the involution reversing the order of the components of $u$; i.e. $\varphi(u_{0},u_{1},\ldots,u_{T}):=(u_{T},u_{T-1},\ldots,u_{0})$ for all $u\in\mathsf{U}$. Then for any $\theta,\theta'\in\Theta$ and any $u\in\mathsf{U}$ $$\bar{Q}_{\theta,\theta'}({\rm d}u)=\ell_{\theta'}({\rm d}u_{T})\prod_{t=1}^{T}R_{\theta',\theta,T-t+1}(u_{T-t+1},{\rm d}u_{T-t}),$$ and $$\frac{\bar{Q}_{\theta',\theta}({\rm d}u)}{Q_{\theta,\theta'}({\rm d}u)}=\frac{C_{\theta'}}{C_{\theta}}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t})}{f_{\theta,\theta',t}(u_{t})}.$$ The AIS based exchange algorithm of @Murray_et_al_2006 corresponds to $\mathring{P}$ in with proposal distribution $$Q_{1}(\theta,{\rm d}(\theta',u))=q(\theta,{\rm d}\theta')Q_{\theta,\theta'}({\rm d}u)$$ and its complementary kernel $$Q_{2}(\theta,{\rm d}(\theta',u))=q(\theta,{\rm d}\theta')\bar{Q}_{\theta,\theta'}({\rm d}u).$$ Its acceptance ratio on $\mathring{\mathsf{S}}$ is $$\mathring{r}_{u}(\theta,\theta')=\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{g_{\theta'}(\mathfrak{y})}{g_{\theta}(\mathfrak{y})}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t})}{f_{\theta,\theta',t}(u_{t})}.\label{eq: ratio for doubly intractable with annealing}$$ Since $Q_{\theta,\theta'}(\varphi(A))=\bar{Q}_{\theta,\theta'}(A)$, we can check that the pair $Q_{1}(x,\cdot)$, $Q_{2}(x,\cdot)$ satisfy the assumption of Theorem \[thm: pseudo-marginal ratio algorithms\] in Appendix \[sec: A general framework for MPR and MHAAR algorithms\]. 
Moreover, by the symmetry assumption on $\mathscr{R}_{\theta,\theta',T}$, we obtain $$\begin{aligned} \bar{Q}_{\theta,\theta'}({\rm d}u) & =\ell_{\theta'}({\rm d}u_{T})\prod_{t=1}^{T}R_{\theta,\theta',t}(u_{T-t+1},{\rm d}u_{T-t})\\ & =\ell_{\theta'}({\rm d}u_{T})\prod_{t=1}^{T}R_{\theta',\theta,T-t+1}(u_{T-t+1},{\rm d}u_{T-t}),\end{aligned}$$ so we can apply Theorem \[thm:AIS\] in Appendix \[sec: A short justification of AIS and an extension\] with $\mu_{0}=\ell_{\theta'}$, $\mu_{\tau+1}=\ell_{\theta}$, $\tau=T$ and $\mu_{t}=\pi_{\theta,\theta',t}$ and $\Pi_{t}=R_{\theta,\theta',t}$ for $t=1,\ldots,T$ to show that $\bar{Q}_{\theta',\theta}(\cdot)$ is absolutely continuous with respect to $Q_{\theta,\theta'}(\cdot)$ and the expression for the corresponding Radon-Nikodym derivative ensures that is indeed equal to . By selecting an appropriate sequence of intermediate distributions $\mathscr{P}_{\theta,\theta',T}$ as detailed in Section \[subsec: Numerical example: the Ising model\], the variability of this noisy acceptance ratio can be reduced by increasing $T$. Another approach to reducing variability is given in Algorithm \[alg: MHAAR-AIS exchange algorithm\], which consists of averaging acceptance ratios as described in Algorithm \[alg: MHAAR for Pseudo-marginal ratio\]. For $T=0$ and $N>1$, Algorithm \[alg: MHAAR-AIS exchange algorithm\] reduces to that in Example \[ex:doublyintractaveraging\]; for $N=1$ and $T>0$, we recover the exchange algorithm with bridging of @Murray_et_al_2006; and for $T=0$ and $N=1$, it reduces to the exchange algorithm. Our generalisation presents a clear computational interest: while sampling a realisation of the Markov chain defined by $Q_{\theta,\theta'}(\cdot)$ is fundamentally a serial operation, sampling $N$ independent such realisations is trivially parallelisable. On an ideal parallel computer, running the algorithm for any $N>1$ or $N=1$ would take the same amount of the user’s time. We explore numerically combinations of the parameters $T$ and $N$ in Section \[subsec: Numerical example: the Ising model\]. Using a single sample from $\ell_{\theta'}(\cdot)$ per iteration\[subsec: Using a single sample from ell\_theta\_prime per iteration\] --------------------------------------------------------------------------------------------------------------------------------------- This section can be omitted on a first reading. In Algorithm \[alg: MHAAR-AIS exchange algorithm\], each of the $N$ chains has a different initial point, which is a sample from an intractable distribution. Obtaining such a sample can be computationally expensive. Algorithm \[alg: MHAAR-AIS exchange algorithm - reduced computation\] is an alternative that only requires one such sample at each iteration. The proof that the associated Markov kernel is $\pi$-reversible can be derived from Theorem \[thm:generalisationexchangeable\] in Section \[subsec: Introducing dependence\], hence we omit it. Although it is computationally more expensive on a serial machine, we expect Algorithm \[alg: MHAAR-AIS exchange algorithm\] to have better statistical properties than Algorithm \[alg: MHAAR-AIS exchange algorithm - reduced computation\] as it uses independent chains to estimate the acceptance ratio. This is demonstrated experimentally in Section \[subsec: Numerical example: the Ising model\].
Moreover, the computational advantage of Algorithm \[alg: MHAAR-AIS exchange algorithm - reduced computation\] is questionable on a parallel architecture, where one can in principle run all the chains in $Q_{1}^{N}(\cdot,\cdot)$ and $Q_{2}^{N}(\cdot,\cdot)$ of Algorithm \[alg: MHAAR-AIS exchange algorithm\] at the same time. In fact, Algorithm \[alg: MHAAR-AIS exchange algorithm\] may be even faster since all the chains in the backward move can be produced in parallel, whereas this cannot be done in Algorithm \[alg: MHAAR-AIS exchange algorithm - reduced computation\]. Numerical example: the Ising model\[subsec: Numerical example: the Ising model\] -------------------------------------------------------------------------------- We illustrate the performance of Algorithms \[alg: MHAAR-AIS exchange algorithm\] and \[alg: MHAAR-AIS exchange algorithm - reduced computation\] on the Ising model used in statistical mechanics to model ferromagnetism. For $m,n\in\mathbb{N}$ we consider an $m\times n$ lattice $\Lambda$. Associated with each site $k\in\Lambda$ is a binary variable $\mathfrak{z}[k]\in\{-1,1\}$ representing the spin configuration of the site. The probability of a given configuration $\mathfrak{z}=\{\mathfrak{z}[k],k\in\Lambda\}$ depends on an energy function, or Hamiltonian, which may depend on some parameter $\theta$. A standard choice used in the absence of an external magnetic field is $$H_{\theta}(\mathfrak{z})=-\theta\sum_{i\sim j}\mathfrak{z}[i]\mathfrak{z}[j],$$ where $i\sim j$ denotes a pair of adjacent sites and $\theta\in\Theta=\mathbb{R}_{+}$ is referred to as the inverse temperature parameter. The probability of configuration $\mathfrak{z}$ for temperature $\theta^{-1}$ is given by $\ell_{\theta}(\mathfrak{z})=g_{\theta}(\mathfrak{z})/C_{\theta}$ where $g_{\theta}(\mathfrak{z})=\exp(-H_{\theta}(\mathfrak{z}))$ and $C_{\theta}=\sum_{\mathfrak{z}\in\{-1,1\}^{|\Lambda|}}g_{\theta}(\mathfrak{z})$ is the intractable and $\theta$-dependent normalising constant. In the following experiment, we perform Bayesian estimation of $\theta$ given a $20\times30$ configuration $\mathfrak{y}$ drawn from $\ell_{\theta^{*}}(\cdot)$ for $\theta^{\ast}=0.35$, which is slightly above the critical (inverse) temperature $\log(1+\sqrt{2})/2$, resulting in strongly correlated neighbouring sites. The prior distribution for $\theta$ is taken to be the uniform distribution on $(0,10)$. The difficulty here is that computing $C_{\theta}$ requires the summation of $2^{600}$ terms, which is computationally infeasible. The sequence of intermediate distributions used within AIS relies on a geometric annealing schedule for the unnormalised densities of the annealing distributions, that is, $$f_{\theta,\theta',t}(\mathfrak{z})=g_{\theta}(\mathfrak{z})^{1-\beta_{t}}g_{\theta'}(\mathfrak{z})^{\beta_{t}}=g_{\theta(1-\beta_{t})+\theta'\beta_{t}}(\mathfrak{z}),\quad\beta_{t}=1-\frac{t}{T+1},\quad t=0,1,\ldots,T+1.$$ Sampling from the intractable distribution is performed approximately by running Wolff’s algorithm, essentially an MCMC kernel iterated for $100$ iterations. For $\theta,\theta'\in\Theta$ and $t=1,\ldots,T$ we chose $R_{\theta,\theta',t}$ to be a single iteration of the MCMC kernel of Wolff’s algorithm targeting $\ell_{\theta(1-\beta_{t})+\theta'\beta_{t}}(\cdot)$.
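For concreteness, the sketch below spells out the ingredients just described: the unnormalised Ising density, the geometric bridge, one AIS estimate of $C_{\theta}/C_{\theta'}$ and the averaging of $N$ such estimates. It is only a schematic illustration under our own conventions; in particular `wolff_step(z, theta, rng)` is a hypothetical placeholder for one Wolff update targeting $\ell_{\theta}(\cdot)$, and the remaining factors of the acceptance ratio (proposal, prior and $g_{\theta'}(\mathfrak{y})/g_{\theta}(\mathfrak{y})$ terms) are collected in a caller-supplied quantity.

```python
import numpy as np

def log_g(z, theta):
    """log g_theta(z) = theta * sum of products over nearest-neighbour pairs
    (free boundary conditions on an m x n array of +-1 spins)."""
    return theta * (np.sum(z[:-1, :] * z[1:, :]) + np.sum(z[:, :-1] * z[:, 1:]))

def log_f(z, theta, theta_p, t, T):
    """Geometric bridge log f_{theta,theta',t}(z) with beta_t = 1 - t/(T+1)."""
    beta_t = 1.0 - t / (T + 1.0)
    return log_g(z, theta * (1.0 - beta_t) + theta_p * beta_t)

def log_ais_weight(z0, theta, theta_p, T, wolff_step, rng):
    """One AIS estimate of log(C_theta / C_theta'), started from z0,
    an (approximate) draw from l_theta'(.)."""
    z, logw = z0, 0.0
    for t in range(T + 1):                               # t = 0,...,T
        logw += log_f(z, theta, theta_p, t + 1, T) - log_f(z, theta, theta_p, t, T)
        if t < T:
            beta_next = 1.0 - (t + 1) / (T + 1.0)
            # one transition leaving pi_{theta,theta',t+1} invariant
            z = wolff_step(z, theta * (1.0 - beta_next) + theta_p * beta_next, rng)
    return logw

def averaged_ais_ratio(z0s, theta, theta_p, T, wolff_step, other_factors, rng):
    """MHAAR-AIS: average N independent AIS estimators (one per start z0 in z0s)
    and multiply by the remaining factors of the exchange acceptance ratio."""
    ws = [np.exp(log_ais_weight(z0, theta, theta_p, T, wolff_step, rng)) for z0 in z0s]
    return other_factors * float(np.mean(ws))
```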
We ran both Algorithms \[alg: MHAAR-AIS exchange algorithm\] and \[alg: MHAAR-AIS exchange algorithm - reduced computation\] for all of the combinations of $N=1,10,20,\ldots,100$ and $T=1,2,\ldots,10,20,\ldots,100$. For each run, $K=10^{6}$ samples were generated and the last $3K/4$ of them were used to compute the IAC of the sequence $\{\theta_{i},i\geq1\}$. Figure \[fig: Potts exchange with bridging vs asymmetric MCMC\] concentrates on the two extreme scenarios where $N=1$ and where $T=0$, which correspond to the exchange algorithm with bridging as in @Murray_et_al_2006 and to our novel averaging algorithm applied to Example \[ex:doublyintractaveraging\], respectively. The figure suggests that our algorithm is computationally superior on an ideal parallel machine, at least for the present example. The rest of the results are shown in Figure \[fig: IAC vs T vs N for the Ising model\]. The results are organised in order to contrast Algorithms \[alg: MHAAR-AIS exchange algorithm\] and \[alg: MHAAR-AIS exchange algorithm - reduced computation\]. The figure suggests that Algorithm \[alg: MHAAR-AIS exchange algorithm\], which uses multiple samples from the intractable distribution per iteration, is uniformly better, as expected. Finally, although for large $T$ the performances of the two algorithms get closer, for small $T$ the advantage of using more samples from the intractable distribution, i.e. using Algorithm \[alg: MHAAR-AIS exchange algorithm\], is more significant. ![IAC for $\theta$ in the Ising model vs (a) the number of averaged ratios $N=1,10,20,\ldots,100$ for $T=0$ (red/grey) and (b) the number of annealing steps $T=0,1,2,\ldots,10,20,\ldots,100$ for $N=1$ (black).[]{data-label="fig: Potts exchange with bridging vs asymmetric MCMC"}](Potts_AIS_MCMCvsT_and_asymMCMCvsN) ![IAC for $\theta$ for the combinations of $N=1,10,20,\ldots,100$ and $T=1,2,\ldots,10,20,\ldots,100$. Each plot shows IAC vs $N$ for a fixed $T$.[]{data-label="fig: IAC vs T vs N for the Ising model"}](Potts_asymMCMCvsNvsT) PMR algorithms for latent variable models \[sec: Pseudo-marginal ratio algorithms for latent variable models\] ============================================================================================================== Latent variable models \[subsec: Latent variable models\] --------------------------------------------------------- We consider here sampling from a distribution that is the marginal of a given joint distribution. More precisely, let $(\Theta,\mathcal{E})$ and $(\mathsf{Z},\mathcal{Z})$ be two measurable spaces, and define the product space $\mathsf{X}=\Theta\times\mathsf{Z}$ together with the corresponding product $\sigma$-algebra $\mathcal{X}=\mathcal{E}\otimes\mathcal{Z}$. Let $\pi({\rm d}x):=\pi({\rm d}(\theta,z))$ be a probability distribution on $(\mathsf{X},\mathcal{X})$ which is assumed known up to a normalising constant. Our primary interest is to sample from the marginal distribution of $\theta$, $$\pi({\rm d}\theta)=\int_{\mathsf{Z}}\pi\big({\rm d}(\theta,z)\big),$$ assumed to be intractable, i.e. no useful density is available, even up to a normalising constant. The doubly intractable scenario covered so far falls into this category.
The exchange algorithm exploits the fact that $$\pi\big(\theta,z\big)\propto\eta(\theta)g_{\theta}(\mathfrak{y})\frac{h_{\mathfrak{y}}(z)}{g_{\theta}(z)}\ell_{\theta}\big(z\big)$$ has $\pi\big(\theta\big)\propto\eta(\theta)\ell_{\theta}\big(\mathfrak{y}\big)$ as its $\theta$-marginal, but also the additional property that sampling from the intractable distribution $\ell_{\theta}\big(z\big)$ is possible. This latter property is fundamental in order to by-pass the intractability of the normalising constant, but also allows one to refresh $z$ at each iteration of the MCMC algorithm, in contrast with the pseudo-marginal approach. As a result, the exchange algorithm directly targets $\pi(\theta)$ with a Markov chain defined on $(\Theta,\mathcal{E})$. This however turns out to be too specific and restrictive for numerous applications, such as state-space models. \[ex: state-space example for latent variable section\]We consider the well-known non-linear state-space model, often used to assess the performance of inference methods for non-linear state-space models, $$\begin{aligned} Z_{t} & =Z_{t-1}/2+25Z_{t-1}/(1+Z_{t-1}^{2})+8\cos(1.2t)+V_{t},\quad t\geq2\\ Y_{t} & =Z_{t}^{2}/20+W_{t},\quad t\geq1,\end{aligned}$$ where $Z_{1}\sim\mathcal{N}(0,10)$, $V_{t}\overset{\mathrm{iid}}{\sim}\mathcal{N}(0,\sigma_{v}^{2})$, $W_{t}\overset{\mathrm{iid}}{\sim}\mathcal{N}(0,\sigma_{w}^{2})$. The parameter of primary interest is $\theta=(\sigma_{v}^{2},\sigma_{w}^{2})$ and is ascribed the prior $(\sigma_{v}^{2},\sigma_{w}^{2})\overset{\mathrm{iid}}{\sim}\mathcal{IG}(0.01,0.01)$ where $\mathcal{IG}(a,b)$ is the inverse gamma distribution with shape and scale parameters $a$ and $b$. The aim is to infer $x=(\theta,z)$, where the latent variable is $z=z_{1:P}$ for some $P>1$, from a particular data set $Y_{1:P}=y_{1:P}$. Ideally, we would like to use the following “marginal” algorithm. Let $q(\theta,\cdot)$ be a Markov kernel on $(\Theta,\mathcal{E})$ such that for each $\theta\in\Theta$, $q(\theta,\cdot)$ admits a density $q(\theta,\cdot)$ with respect to ${\rm d}\theta'$. The acceptance ratio of the MH algorithm with proposal kernel $q(\cdot,\cdot)$ targeting $\pi(\theta)$ is $$r(\theta,\theta')=\frac{q(\theta',\theta)\pi(\theta')}{q(\theta,\theta')\pi(\theta)}.\label{eq: marginal MCMC acceptance probability}$$ The latter cannot be evaluated in numerous scenarios of interest, and the aim of this section is to extend the framework developed for the doubly intractable scenario to the more general situation where sampling of the latent variable must be included in the MCMC scheme itself and cannot be performed exactly. This results in an algorithm targeting the distribution $\pi(\mathrm{d}(\theta,z))$. It turns out that the framework developed in Section \[sec: Pseudo-marginal ratio algorithms using averaged acceptance ratio estimators\] can also be easily adapted to this scenario. More precisely, here we have $x=(\theta,z)$ and $y=(\theta',z')$, and the only difference with the developments of Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] concerns the order in which the variables are sampled. In Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] we have assumed a specific sampling order for the variables involved, that is, the auxiliary variable copies are sampled after the proposed value $y$.
Here we are going to consider the scenario where $\theta'$ is sampled first, then the auxiliary variables $u^{(1)},\ldots,u^{(N)}$ are sampled from a kernel $Q_{\theta,\theta',z}({\rm d}u)$ and $z'$ is proposed last, conditional upon the auxiliary variables $\theta,\theta',z$ and $u^{(1)},\ldots,u^{(N)}$. The resulting expression for the acceptance ratio remains the same as that used in Algorithm \[alg: MHAAR for Pseudo-marginal ratio\] since it is not affected by the order in which the variables are sampled. AIS within MH for latent variable models \[subsec: AIS within MH algorithms\] ----------------------------------------------------------------------------- @Neal_2004 suggested to use AIS, as described in Section \[sec: Pseudo-marginal ratio algorithms using averaged acceptance ratio estimators\] and Theorem \[thm:AIS\] in Appendix \[sec: A short justification of AIS and an extension\], in order to achieve sampling from $\pi$. The idea should be clear upon noticing that for $\theta\in\Theta$ fixed, $\pi(\theta)$ is the normalising constant of the conditional distribution for $z$ that is proportional to $\pi(\theta,z)$, that is $\pi_{\theta}(z)\propto\pi(\theta,z)$. To estimate the ratio $\pi(\theta')/\pi(\theta)$ one therefore defines a sequence of artificial probability densities $$\mathscr{P}_{\theta,\theta',T}:=\big\{\pi_{\theta,\theta',t},t=0,\ldots,T+1\big\}$$ for some $T\geq1$ evolving from $\pi_{\theta,\theta',0}(z)=\pi_{\theta}(z)$ to $\pi_{\theta,\theta',T+1}(z)=\pi_{\theta'}(z)$, through a sequence of unnormalised intermediate probability densities $\mathscr{F}_{\theta,\theta',T}=\{f_{\theta,\theta',t},t=0,\ldots,T+1\}$. The following proposition establishes that this algorithm is conceptually of the same form as $\mathring{P}$ given in and this allows us to extend this methodology through Algorithm \[alg: MHAAR for Pseudo-marginal ratio\]. \[prop:MHwithAISinside\]Consider the latent variable model given in the introduction of Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\] and for any $\theta,\theta'\in\Theta$ let 1. \[enu:firstassumptionMHwithAISinside\]$\mathscr{F}_{\theta,\theta',T}=\big\{ f_{\theta,\theta',t},t=0,\ldots,T+1\big\}$ be a family of tractable unnormalised densities of $\mathscr{P}_{\theta,\theta',T}$ defined on $\big(\mathsf{Z},\mathcal{Z}\big)$ such that 1. \[enu:a\]for $t=0,\ldots,T$, $f_{\theta,\theta',t}$ and $f_{\theta,\theta',t+1}$ have the same support, 2. \[enu:b\]for any $z\in\mathsf{Z}$ and $t=1,\ldots,T$ $f_{\theta,\theta',t}(z)=f_{\theta',\theta,T+1-t}(z)$, 3. \[enu:c\]$f_{\theta,\theta',0}(z)=\pi(\theta,z)$ and $f_{\theta,\theta',T+1}(z)=\pi(\theta',z)$, 2. \[enu:secondassumptionMHwithAISinside\]$\mathscr{R}_{\theta,\theta',T}=\big\{ R_{\theta,\theta',t}(\cdot,\cdot)\colon\mathsf{Z}\times\mathcal{Z}\to[0,1],t=1,\ldots,T\big\}$ be a family of Markov transition kernels such that for any $t=1,\ldots,T$ 1. $R_{\theta,\theta',t}(\cdot,\cdot)$ is $\pi_{\theta,\theta',t}-$reversible, 2. $R_{\theta,\theta',t}(\cdot,\cdot)=R_{\theta',\theta,T-t+1}(\cdot,\cdot)$, 3. \[enu:Ruzispithetareversibleforlatents\]$R_{\theta}(\cdot,\cdot)\colon\mathsf{Z}\times\mathcal{Z}\to[0,1]$ be a $\pi_{\theta}-$reversible Markov transition kernel, 4. 
\[enu:fourhassumption MHwithAISinside\]$Q_{\theta,\theta',z}(\cdot)$ be probability distributions on $\big(\mathsf{U},\mathcal{U}\big)$ where $\mathsf{U}:=\mathcal{\mathsf{Z}}^{T+1}$ defined for $$\begin{aligned} Q_{\theta,\theta',z}({\rm d}u) & =R_{\theta}(z,{\rm d}u_{0})\prod_{t=1}^{T}R_{\theta,\theta',t}(u_{t-1},{\rm d}u_{t}),\label{eq: M}\end{aligned}$$ and let $\varphi$ be the involution which reverses the order of the components of $u$; i.e. $\varphi(u_{0},u_{1},\ldots,u_{T}):=(u_{T},u_{T-1},\ldots,u_{0})$ for all $u\in\mathsf{U}$. Then for any $(\theta,z),(\theta',z'),u\in\big(\Theta\times\mathsf{\mathsf{Z}}\big)^{2}\times\mathsf{U}$ $$\bar{Q}_{\theta,\theta',z}({\rm d}u)=R_{\theta}(z,{\rm d}u_{T})\prod_{t=1}^{T}R_{\theta',\theta,t}(u_{t},{\rm d}u_{t-\text{1}}),\label{eq:L}$$ and $$\frac{\pi_{\theta'}({\rm d}z')\bar{Q}_{\theta',\theta,z'}({\rm d}u)R_{\theta}(u_{0},{\rm d}z)}{\pi_{\theta}({\rm d}z)Q_{\theta,\theta',z}({\rm d}u)R_{\theta'}(u_{T},{\rm d}z')}=\frac{\pi(\theta)}{\pi(\theta')}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t})}{f_{\theta,\theta',t}(u_{t})}.\label{eq:AISbasedAcceptRatio}$$ The AIS MCMC algorithm of @Neal_2004 for latent variable models corresponds to $\mathring{P}$ in Theorem \[thm: pseudo-marginal ratio algorithms\] with $x=(\theta,z)$ and $y=(\theta',z')$, the proposal kernel $$Q_{1}(x,{\rm d}(y,u)):=q(\theta,{\rm d}\theta')Q_{\theta,\theta',z}({\rm d}u)R_{\theta'}(u_{T},{\rm d}z')$$ and its complementary kernel $$Q_{2}(x,{\rm d}(y,u)):=q(\theta,{\rm d}\theta')\bar{Q}_{\theta,\theta',z}({\rm d}u)R_{\theta'}(u_{0},{\rm d}z').$$ Its acceptance ratio on $\mathring{\mathsf{S}}$ is $$\mathring{r}_{u}(\theta,z;\theta',z')=\frac{\pi({\rm d}x')Q_{2}(y,{\rm d}(x,u))}{\pi({\rm d}x)Q_{1}(x,{\rm d}(y,u))}=\frac{q(\theta',\theta)}{q(\theta,\theta')}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t})}{f_{\theta,\theta',t}(u_{t})}.$$ Since $\bar{Q}_{\theta,\theta',z}(A)=Q_{\theta,\theta',z}(\varphi(A))$, we can check that the pair $Q_{1}(x,\cdot)$, $Q_{2}(x,\cdot)$ satisfy the assumption of Theorem \[thm: pseudo-marginal ratio algorithms\] in Appendix \[sec: A general framework for MPR and MHAAR algorithms\]. Next, using the symmetry assumption on $\mathscr{R}_{\theta,\theta',T}$, we obtain $$\begin{aligned} \bar{Q}_{\theta,\theta',z}({\rm d}u) & =R_{\theta}(z,{\rm d}u_{T})\prod_{t=1}^{T}R_{\theta',\theta,T-t+1}(u_{T-t+1},{\rm d}u_{T-t})\end{aligned}$$ and we can thus apply Theorem \[thm:AIS\] in Appendix \[sec: A short justification of AIS and an extension\] (with $\tau=T+2$ intermediate distributions, two repeats $\mu_{0}=\mu_{1}$ and $\mu_{\tau}=$ $\mu_{\tau+1}$, $\mu_{t}=\pi_{\theta,\theta',t-1}$ for $t=2,\ldots,\tau-1$ and kernels $\Pi_{1}=R_{\theta}$, $\Pi_{\tau}=R_{\theta'}$ and $\Pi_{t}=R_{\theta,\theta',t-1}$ for $t=2,\ldots,\tau-1$) to show that $\pi_{\theta'}\times\bar{Q}_{\theta',\theta,\cdot}\times R_{\theta}$ is absolutely continuous with respect to $\pi_{\theta}\times Q_{\theta,\theta',\cdot}\times R_{\theta'}$ and that the expression for the corresponding Radon-Nikodym derivative ensures that the acceptance ratio defined in is indeed equal to . The standard choice made in @Neal_2004 corresponds to $R_{\theta}(z,{\rm d}u_{0})=\delta_{z}\big({\rm d}u_{0}\big)$, but more general choices are possible. As we shall see in the next section, a choice different from $\delta_{z}\big({\rm d}u_{0}\big)$ can improve performance significantly when averaging acceptance ratios. 
The variance of this unbiased estimator $\mathring{r}_{u}(\theta,z;\theta',z')$ of $r\big(\theta,\theta'\big)$ can usually be tuned by increasing $T$, under natural smoothness conditions on the sequences $\mathscr{F}_{\theta,\theta',T}$ for $T\geq1$. An important point here is that although the approximated acceptance ratio is reminiscent of that of a MH algorithm targeting $\pi({\rm d}\theta)$, the present algorithm targets the joint distribution $\pi\big({\rm d}(\theta,z)\big)$: the simplification occurs only because the random variable corresponding to $u_{T}$ in will be approximately distributed according to $\pi_{\theta'}(\cdot)$ when $T$ is large enough, under proper mixing conditions. We note that the expression for $\mathring{r}_{u}(\theta,z;\theta',z')$ does not depend on either $z$ or $z'$, and can in particular be calculated before sampling $z'$. This is of importance in what follows and justifies the use of the simplified piece of notation $\mathring{r}_{u}(\theta,\theta')$ below. Averaging AIS based pseudo-marginal ratios \[subsec: Averaging AIS based acceptance ratios\] -------------------------------------------------------------------------------------------- We show here how the algorithm of the previous section (Proposition \[prop:MHwithAISinside\]) can be modified in order to average multiple ($N>1$) estimators $\mathring{r}_{u}(\theta,\theta')$ of $r(\theta,\theta')$ while preserving reversibility of the algorithm of interest. Let $u=(u_{0},\ldots,u_{T})\in\mathsf{U}=\mathsf{Z}^{T+1}$ and $k\in\{1,\ldots,N\}$. \[prop: asymmetric MCMC with latent variables\]Assume that the conditions of Proposition \[prop:MHwithAISinside\] hold. For $N\geq1$ define the proposal kernels $Q_{1}^{N}(\cdot,\cdot)$ and $Q_{2}^{N}(\cdot,\cdot)$ on $\big(\mathsf{X}\times\mathsf{\mathfrak{U}}\times[k],\mathcal{X}\otimes\mathscr{U}\otimes\mathscr{P}[k]\big)$ $$\begin{aligned} Q_{1}^{N}\big(x;{\rm d}(y,\mathfrak{u},k)\big) & =q(\theta,{\rm d}\theta')\prod_{i=1}^{N}Q_{\theta,\theta',z}({\rm d}u^{(i)})\frac{\mathring{r}_{u^{(k)}}(\theta,\theta')}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(\theta,\theta')}R_{\theta'}(u_{T}^{(k)},{\rm d}z'),\label{eq: asymmetric MCMC combining AIS and pseudo MCMC Q1}\\ Q_{2}^{N}\big(x;{\rm d}(y,\mathfrak{u},k)\big) & =q(\theta,{\rm d}\theta')\frac{1}{N}\bar{Q}_{\theta,\theta',z}({\rm d}u^{(k)})R_{\theta'}(u_{0}^{(k)},{\rm d}z')\prod_{i=1,i\neq k}^{N}Q_{\theta',\theta,z'}({\rm d}u^{(i)}).\label{eq: asymmetric MCMC combining AIS and pseudo MCMC Q2}\end{aligned}$$ Then one can implement $\mathring{P}^{N}$ corresponding to $\mathring{P}$ defined in Proposition \[prop:MHwithAISinside\], with $Q_{1}^{N}(\cdot,\cdot)$ and $Q_{2}^{N}(\cdot,\cdot)$ above and $$\mathring{r}_{\mathfrak{u}}^{N}(\theta,\theta')=\frac{1}{N}\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(\theta,\theta').$$ One can check directly that $\mathring{r}^{N}(\theta,\theta')$ is of the expected form despite the sampling order change $$\begin{aligned} \frac{\pi({\rm d}y)Q_{2}^{N}\big(y,{\rm d}(x,\mathfrak{u},k)\big)}{\pi({\rm d}x)Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big)} & =\frac{\pi({\rm d}y)q(\theta',{\rm d}\theta)\frac{1}{N}\bar{Q}_{\theta,\theta',z'}({\rm d}u^{(k)})R_{\theta}(u_{0}^{(k)},{\rm d}z)\prod_{i=1,i\neq k}^{N}Q_{\theta,\theta',z}({\rm d}u^{(i)})}{\pi({\rm d}x)q(\theta,{\rm d}\theta')\prod_{i=1}^{N}Q_{\theta,\theta',z}({\rm d}u^{(i)})\frac{\mathring{r}_{u^{(k)}}(\theta,\theta')}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(\theta,\theta')}R_{\theta'}(u_{T}^{(k)},{\rm d}z')}\\ & 
=\frac{q(\theta',\theta)\pi(\theta')}{q(\theta,\theta')\pi(\theta)}\frac{\pi_{\theta'}(\mathrm{d}z')\bar{Q}_{\theta',\theta,z'}({\rm d}u^{(k)})R_{\theta}(u_{0}^{(k)},{\rm d}z)}{\pi_{\theta}(\mathrm{d}z)Q_{\theta,\theta',z}({\rm d}u^{(k)})R_{\theta'}(u_{T}^{(k)},{\rm d}z')}\mathring{r}_{u^{(k)}}^{-1}(\theta,\theta')\frac{1}{N}\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(\theta,\theta').\end{aligned}$$ The implementation of the resulting asymmetric MCMC algorithm is described in Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\]. The interest of introducing a general form for $R_{\theta}$ should now be clear: the standard choice $R_{\theta}\big(z,\cdot\big)=\delta_{z}(\cdot)$ introduces dependence among $u^{(1)},u^{(2)},\ldots,u^{(N)}$ which can be alleviated by the introduction of a more general ergodic transition, which may consist of an iterated reversible Markov transition of invariant distribution $\pi_{\theta}$. We also notice that some computational savings are possible. For example, when $Q_{1}^{N}(\cdot,\cdot)$ is the distribution we sample from, the acceptance ratio does not depend on $k$, whose sampling can therefore be postponed until after a decision to accept has been made. The complementary update for which we sample from $Q_{2}^{N}(\cdot,\cdot)$ effectively does not require sampling $k$, which is set to $1$ in our implementation in Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\]. \[ex: ctd state-space example for latent variable section\]In order to illustrate the interest of our approach, we generated data from the model for $P=500$, $\sigma_{v}^{2}=10$ and $\sigma_{w}^{2}=0.1$. The set-up for Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\] was as follows. We let $T=1$ and for $\theta,\theta'\in\Theta$ the unnormalised density of the intermediate distribution was chosen to be $f_{\theta,\theta',1}(z)=\pi((\theta+\theta')/2,z)$. The MCMC kernel $R_{\theta,\theta',1}(\cdot,\cdot)$ was a conditional SMC (cSMC) @Andrieu_et_al_2010 kernel targeting the intermediate distribution, with $M=100$ particles and the model transitions as proposal distributions; for convenience the cSMC kernel is described in Section \[sec: State-space models: SMC and conditional SMC within MHAAR\]. We used a normal random walk proposal with diagonal covariance matrix as a parameter proposal, where the standard deviations for $\sigma_{v}$ and $\sigma_{w}$ were $0.15$ and $0.08$ respectively. Performance, measured in terms of convergence to equilibrium and asymptotic variance for $N=1$, $N=10$ and $N=100$, is presented in Figures \[fig: convergence results for the state-space model\] and \[fig: IAC vs N for the state-space model-1\]. For each set-up, $2000$ independent Monte Carlo runs of length 1000 each were used to assess convergence to the posterior mean, posterior second moment and median, via ensemble averages over the runs. We observe in Figure \[fig: convergence results for the state-space model\] that this simple approach improves performance and reduces time to convergence by approximately 50%. In addition to this faster convergence, we observe a reduction of the order of $30\%$ in terms of IAC in Figure \[fig: IAC vs N for the state-space model-1\]. The estimated IAC values were obtained after discarding the first 300 iterations and by averaging over 2000 Monte Carlo runs. We present further new developments for this application in Section \[subsec: An application: trans-dimensional distributions\].
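For completeness, a minimal sketch of the data-generation step used in this example is given below; the function name, the random seed and the use of NumPy are our own choices for illustration and are not part of the original experiment.

```python
import numpy as np

def simulate_ssm(P, sigma_v2, sigma_w2, seed=0):
    """Simulate the non-linear state-space model of the running example:
    Z_1 ~ N(0,10), Z_t = Z_{t-1}/2 + 25 Z_{t-1}/(1+Z_{t-1}^2) + 8 cos(1.2 t) + V_t,
    Y_t = Z_t^2/20 + W_t, with V_t ~ N(0, sigma_v2) and W_t ~ N(0, sigma_w2)."""
    rng = np.random.default_rng(seed)
    z = np.empty(P)
    z[0] = rng.normal(0.0, np.sqrt(10.0))
    for t in range(2, P + 1):                       # paper's time index t = 2,...,P
        z[t - 1] = (z[t - 2] / 2.0
                    + 25.0 * z[t - 2] / (1.0 + z[t - 2] ** 2)
                    + 8.0 * np.cos(1.2 * t)
                    + rng.normal(0.0, np.sqrt(sigma_v2)))
    y = z ** 2 / 20.0 + rng.normal(0.0, np.sqrt(sigma_w2), size=P)
    return z, y

# data of the size and with the parameter values used above
z_true, y_obs = simulate_ssm(P=500, sigma_v2=10.0, sigma_w2=0.1)
```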
![Convergence results for $\theta=(\sigma_{v}^{2},\sigma_{w}^{2})$ vs $N$ in Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\].[]{data-label="fig: convergence results for the state-space model"}](HMM_convergence_first_two_moments_and_median_multpl_SMC_and_med_N_100_150_2000_runs) ![IAC for $\sigma_{w}^{2}$ and $\sigma_{w}^{4}$ vs $N$ in Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\].[]{data-label="fig: IAC vs N for the state-space model-1"}](HMM_IAC_times_for_x_and_x_sq_N_100_multpl_cSMC_2000_runs) Generalisations of MHAAR algorithms for latent variable models \[subsec: Generalisations of pseudo-marginal asymmetric MCMC\] ----------------------------------------------------------------------------------------------------------------------------- We now discuss two generalisations of Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\] above which will prove crucial in Section \[subsec: An application: trans-dimensional distributions\], where we present our trans-dimensional example as an application of the methodology presented here, albeit in a scenario involving additional complications. ### Annealing in a different space \[subsec: Annealing in a different space\] The first generalisation is based on the main idea that condition \[enu:Ruzispithetareversibleforlatents\] in Proposition \[prop:MHwithAISinside\] can be relaxed in the light of Theorem \[thm:AIS\], and in particular allows the latent variable $z$ and auxiliary variables $u_{t}$ to live on different spaces. \[prop:extensionMHwithAISinside\]Suppose that assumptions \[enu:a\]-\[enu:b\] and \[enu:secondassumptionMHwithAISinside\] of Proposition \[prop:MHwithAISinside\] are satisfied with $\mathscr{F}_{\theta,\theta',T},\mathscr{P}_{\theta,\theta',T}$ and $\mathscr{R}_{\theta,\theta',T}$ now defined on some space $\big(\mathsf{V},\mathcal{V}\big)$ (and $\mathsf{U:=\mathsf{V}}^{T+1}$), (therefore $\pi_{\theta,\theta',0}\neq$$\pi_{\theta}$ , and $\pi_{\theta,\theta',T+1}\neq\pi_{\theta'}$ in general), and assumptions \[enu:c\] and \[enu:Ruzispithetareversibleforlatents\] replaced, for $\theta,\theta'\in\Theta$ and $z,z'\in\mathsf{Z}$, with 1. the endpoint conditions for the unnormalised densities are of the form $$\begin{aligned} f_{\theta,\theta',0}(v) & =\pi_{\theta,\theta',0}(v)\pi(\theta),\\ f_{\theta,\theta',T+1}(v) & =\pi_{\theta,\theta',T+1}(v)\pi(\theta'),\end{aligned}$$ 2. the existence of Markov transition kernels $\overrightarrow{R}_{\theta,\theta',0}$,$\overleftarrow{R}_{\theta,\theta',T+1}:\mathsf{Z}\times\mathcal{V}\rightarrow[0,1]$ and $\overrightarrow{R}{}_{\theta,\theta',T+1}$,$\overleftarrow{R}_{\theta,\theta',0}:\mathsf{V}\times\mathcal{Z}\rightarrow[0,1]$ such that $$\begin{aligned} \pi_{\theta}({\rm d}z)\overrightarrow{R}_{\theta,\theta',0}(z,{\rm d}v) & =\pi_{\theta,\theta',0}({\rm d}v)\overleftarrow{R}_{\theta,\theta',0}(v,{\rm d}z),\\ \pi_{\theta,\theta',T+1}({\rm d}v)\overrightarrow{R}_{\theta,\theta',T+1}(v,{\rm d}z) & =\pi_{\theta'}({\rm d}z)\overleftarrow{R}{}_{\theta,\theta',T+1}(z,{\rm d}v),\end{aligned}$$ 3. Define the proposal probability distributions on $\big(\mathsf{U},\mathcal{U}\big)$ such that for any $u\in\mathsf{U}=\mathsf{V}^{T+1}$, $$\begin{aligned} Q_{\theta,\theta',z}\big({\rm d}u\big) & =\overrightarrow{R}_{\theta,\theta',0}(z,{\rm d}u_{0})\prod_{t=1}^{T}R_{\theta,\theta',t}(u_{t-1},{\rm d}u_{t}),\end{aligned}$$ and the involution $\varphi$ reversing the order of the components of $u$; i.e. 
$\varphi(u_{0},u_{1},\ldots,u_{T}):=(u_{T},u_{T-1},\ldots,u_{0})$ for all $u\in\mathsf{U}$. Then for any $\big((\theta,z),(\theta',z'),u\big)\in\big(\Theta\times\mathsf{\mathsf{Z}}\big)^{2}\times\mathsf{U}$ $$\bar{Q}_{\theta,\theta',z}\big({\rm d}u\big)=\overleftarrow{R}_{\theta',\theta,T+1}(z,{\rm d}u_{T})\prod_{t=1}^{T}R_{\theta',\theta,T-t+1}(u_{T-t+1},{\rm d}u_{T-t}),$$ and $$\frac{\pi_{\theta'}({\rm d}z')\bar{Q}_{\theta',\theta,z'}({\rm d}u)\overleftarrow{R}_{\theta,\theta',0}(u_{0},{\rm d}z)}{\pi_{\theta}({\rm d}z)Q_{\theta,\theta',z}({\rm d}u)\overrightarrow{R}_{\theta,\theta',T+1}(u_{T},{\rm d}z')}=\frac{\pi(\theta)}{\pi(\theta')}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t})}{f_{\theta,\theta',t}(u_{t})}.\label{eq:AISbasedAcceptRatio-1}$$ Furthermore, suppose the additional symmetry conditions $$\overrightarrow{R}_{\theta,\theta',T+1}(v,{\rm d}z)=\overleftarrow{R}_{\theta',\theta,0}(v,{\rm d}z),\quad\overrightarrow{R}_{\theta,\theta',0}(z,{\rm d}v)=\overleftarrow{R}_{\theta',\theta,T+1}(z,{\rm d}v).\label{eq: general AIS MCMC further symmetry conditions}$$ Then, a generalisation of the AIS MCMC algorithm in @Neal_2004 corresponds to $\mathring{P}$ in Theorem \[thm: pseudo-marginal ratio algorithms\] with $x=(\theta,z)$ and $y=(\theta',z')$, the proposal kernel $$Q_{1}\big(x,{\rm d}(y,u)\big):=q(\theta,{\rm d}\theta')Q_{\theta,\theta',z}({\rm d}u)\overrightarrow{R}_{\theta,\theta',T+1}(u_{T},{\rm d}z')$$ and its complementary kernel $$Q_{2}\big(x,{\rm d}(y,u)\big):=q(\theta,{\rm d}\theta')\bar{Q}_{\theta,\theta',z}({\rm d}u)\overleftarrow{R}_{\theta',\theta,0}(u_{0},{\rm d}z').$$ Its acceptance ratio on set $\mathring{\mathsf{S}}$ is $$\mathring{r}_{u}(\theta;\theta')=\frac{\pi\big({\rm d}y\big)Q_{2}\big(y,{\rm d}(x,u)\big)}{\pi\big({\rm d}x\big)Q_{1}\big(x,{\rm d}(y,u)\big)}=\frac{q(\theta',\theta)}{q(\theta,\theta')}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t})}{f_{\theta,\theta',t}(u_{t})}.\label{eq: AIS-MCMC-acceptance-ratio-general}$$ The first claim follows from Theorem \[thm:extensionAIS\], which can be exploited with similar steps to those in the proof of Proposition \[prop:MHwithAISinside\]. The second claim on the generalisation of AIS MCMC follows from the fact that the symmetry conditions in ensure that $Q_{1}$ and $Q_{2}$ defined in the proposition satisfy the assumption of Theorem \[thm: pseudo-marginal ratio algorithms\]. It may appear that the additional coupling conditions on the initial and terminal distributions is only satisfied for reversible kernels. However it should be clear that in the formulation above $z,z'$ and $u_{0},\ldots,u_{T}$ can be of a different nature i.e. defined on different spaces, which turns out to be relevant in some scenarios, including that considered in Section \[subsec: Numerical example: Poisson multiple changepoint model\]. In fact, the generalisation of AIS MCMC mentioned in Proposition \[prop:extensionMHwithAISinside\] corresponds to the AIS RJ-MCMC algorithm of @Karagiannis_and_Andrieu_2013 for trans-dimensional distributions. It also covers the standard version of the hybrid Monte Carlo algorithm, for example. One can build upon this generalisation and use the framework of asymmetric acceptance ratio MH algorithms corresponding to $\mathring{P}^{N}$ of Section \[subsec: Averaging AIS based acceptance ratios\] in order to define a $\pi-$reversible Markov transition probability. \[prop: asymmetric MCMC with latent variables - general spaces\]Assume that the conditions of Proposition \[prop:extensionMHwithAISinside\] hold. 
For $N\geq1$ define the proposal kernels $Q_{1}^{N}(\cdot)$ and $Q_{2}^{N}(\cdot)$ on $\big(\mathsf{X}\times\mathsf{\mathfrak{U}}\times[k],\mathcal{X}\otimes\mathscr{U}\otimes\mathscr{P}[k]\big)$ $$\begin{aligned} Q_{1}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big) & =q(\theta,{\rm d}\theta')\prod_{i=1}^{N}Q_{\theta,\theta',z}({\rm d}u^{(i)})\frac{\mathring{r}_{u^{(k)}}(\theta,\theta')}{\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(\theta,\theta')}\overrightarrow{R}_{\theta,\theta',T+1}(u_{T}^{(k)},{\rm d}z'),\label{eq: asymmetric MCMC combining AIS and pseudo MCMC Q1-1}\\ Q_{2}^{N}\big(x,{\rm d}(y,\mathfrak{u},k)\big) & =q(\theta,{\rm d}\theta')\frac{1}{N}\bar{Q}_{\theta,\theta',z}({\rm d}u^{(k)})\overleftarrow{R}_{\theta',\theta,0}(u_{0}^{(k)},{\rm d}z')\prod_{i=1,i\neq k}^{N}Q_{\theta',\theta,z'}({\rm d}u^{(i)}).\label{eq: asymmetric MCMC combining AIS and pseudo MCMC Q2-1}\end{aligned}$$ Then one can implement $\mathring{P}^{N}$ corresponding to $\mathring{P}$ defined in Proposition \[prop:MHwithAISinside\], with $Q_{1}^{N}(\cdot,\cdot)$ and $Q_{2}^{N}(\cdot,\cdot)$ as above and $$\mathring{r}_{\mathfrak{u}}^{N}(\theta,\theta')=\frac{1}{N}\sum_{i=1}^{N}\mathring{r}_{u^{(i)}}(\theta,\theta')$$ with $\mathring{r}_{u}(\theta;\theta')$ defined in . ### Choosing $Q_{1}^{N}(\cdot,\cdot)$ and $Q_{2}^{N}(\cdot,\cdot)$ with different probabilities \[subsec: Choosing Q1 and Q2 with different probabilities\] Notice from and that $Q_{1}^{N}(\cdot)$ and $Q_{2}^{N}(\cdot)$ share the same proposal distribution for $\theta'$ and start differing from each other when generating the auxiliary variables and proposing $z'$ thereafter. In some cases, depending on the values of $\theta$ and $\theta'$, $Q_{1}^{N}(\cdot)$ (or $Q_{2}^{N}(\cdot)$) may be preferable over $Q_{2}^{N}(\cdot)$ (or $Q_{1}^{N}(\cdot)$) for proposing $z'$. This is indeed the case in our trans-dimensional example in Section \[subsec: An application: trans-dimensional distributions\], where the $\theta$ component stands for the model number. 
One can exploit this degree of freedom by introducing a function $\beta:\Theta^{2}\rightarrow[0,1]$ which satisfies $$\int\left[\beta(\theta,\theta')Q_{1}^{N}(x,\mathrm{d}(y,\mathfrak{u},k))+\left(1-\beta(\theta,\theta')\right)Q_{2}^{N}(x,\mathrm{d}(y,\mathfrak{u},k))\right]=1.\label{eq: condition for alpha and Q1 and Q2}$$ Then, we can modify the overall transition kernel of the asymmetric MCMC as follows: $$\begin{aligned} \mathring{\bar{P}}^{N}(x,\mathrm{d}y)= & \left[\int\beta(\theta,\theta')Q_{1}^{N}(x,\mathrm{d}(y,\mathfrak{u},k))\min\left\{ 1,\mathring{\overline{r}}_{\mathfrak{u}}^{N}(\theta,\theta')\right\} +\delta_{x}(\mathrm{d}y)\mathring{\bar{\rho}}_{1}(x)\right]\nonumber \\ & +\left[\int\left(1-\beta(\theta,\theta')\right)Q_{2}^{N}(x,\mathrm{d}(y,\mathfrak{u},k))\min\left\{ 1,1/\mathring{\overline{r}}_{\mathfrak{u}}^{N}(\theta',\theta)\right\} +\delta_{x}(\mathrm{d}y)\mathring{\bar{\rho}}_{2}(x)\right],\label{eq: generalised asymmetric kernel with alpha}\end{aligned}$$ where the modified acceptance ratio is defined as $$\mathring{\overline{r}}_{\mathfrak{u}}^{N}(\theta,\theta'):=\mathring{r}_{\mathfrak{u}}^{N}(\theta,\theta')\frac{1-\beta(\theta',\theta)}{\beta(\theta,\theta')}.\label{eq: acceptance ratio modified by alpha}$$ Implementing the modification with respect to Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\] is straightforward: one needs to replace $v\leq1/2$ with $v\leq\beta(\theta,\theta')$ and use $\mathring{\overline{r}}_{\mathfrak{u}}^{N}(x,y)$ instead of $\mathring{r}_{\mathfrak{u}}^{N}(x,y)$. The proof of reversibility is very similar to that of Proposition \[prop: asymmetric MCMC with latent variables\] and we omit it. Note that the condition in ensures that is a valid transition kernel and it is satisfied whenever $\theta'$ is proposed by $Q_{1}^{N}$ and $Q_{2}^{N}$ in the same way, as in and where the same $q(\theta,\mathrm{d}\theta')$ is used. One can in principle write an even more general kernel than the one in by making $\beta$ a function of $x$ and $y$ and imposing a condition similar to ; however, we find this generalisation less interesting from a practical point of view. An application: trans-dimensional distributions\[subsec: An application: trans-dimensional distributions\] ---------------------------------------------------------------------------------------------------------- Consider a trans-dimensional distribution $\pi(m,{\rm d}z_{m})$ on $\mathsf{X}=\cup_{m\in\Theta}\{m\}\times\mathsf{Z}_{m}$ where $\Theta\subseteq\mathbb{N}$ and the dimension $d_{m}$ of $\mathsf{Z}_{m}$ depends on $m$. For each $m$, we assume that the distribution $\pi(m,{\rm d}z_{m})$ admits a density $\pi(m,z_{m})$ known up to a normalising constant not depending on $m$ or $z_{m}$. We let $\mathscr{Z}_{m}$ denote the $\sigma$-algebra associated with $\mathsf{Z}_{m}$, on which the conditional distribution $\pi_{m}({\rm d}z_{m})$ is defined. We are interested in efficient sampling from the marginal distribution $\pi(m)$. An approach for sampling from trans-dimensional distributions is the reversible jump MCMC (RJ-MCMC) algorithm of @Green_1995. Designing efficient RJ-MCMC algorithms is notoriously difficult and can lead to unreliable samplers. @Karagiannis_and_Andrieu_2013 develop what they call the AIS RJ-MCMC algorithm to improve on the performance of the standard RJ-MCMC algorithm. The AIS RJ-MCMC algorithm is a variant of the AIS MCMC algorithm of @Neal_2004 devised for trans-dimensional distributions.
Full details of the method are available in @Karagiannis_and_Andrieu_2013; however, we will need to go into some details here as well, in order to state our contribution, the reversible multiple jump MCMC (RmJ-MCMC), of which we present an instance in Algorithm \[alg: Reversible multiple jump MCMC\]. In what follows, for notational simplicity, we consider only algorithms consisting of a single “move” in Green’s terminology, between any pair of models $m,m'\in\Theta$; the generalisation to multiple pairs is straightforward but requires additional indexing. An RJ-MCMC update can be understood as being precisely the procedure proposed in Section \[subsec: Annealing in a different space\], but adapted to the present trans-dimensional set-up. In this scenario the nature of the target distributions comes with the additional complication that statistically interpretable parameters ($z_{m},z_{m'}$ for models $m,m'$ respectively) must be, following @Green_1995’s idea, embedded in a potentially larger common space, and that this expanded parametrisation is only unique up to an invertible transformation. We mainly deal with this issue in this section, as the details of the algorithm are then very similar to those of Section \[subsec: Annealing in a different space\]. ### Dimension matching and “forward” parametrisation Following @Green_1995 we couple models pairwise. More precisely, for any couple $m,m'\in\Theta$, consider the $d_{m,m'}$- and $d_{m',m}$-dimensional variables such that $d_{m}+d_{m,m'}=d_{m'}+d_{m',m}$, $$\mathfrak{z}_{m,m'}\in\mathfrak{Z}_{m,m'},\quad\mathfrak{z}_{m,m'}\sim\omega_{m,m'},\quad\mathfrak{z}_{m',m}\in\mathsf{\mathfrak{Z}}_{m',m},\quad\mathfrak{z}_{m',m}\sim\omega_{m',m},$$ which are called dimension matching variables, with the convention that these variables and associated quantities should be ignored when either $d_{m,m'}=0$ or $d_{m',m}=0$. Introducing the extended space $\mathsf{Z}_{m,m'}:=\mathsf{Z}_{m}\times\mathfrak{Z}_{m,m'}$, we consider a one-to-one measurable mapping $\phi_{m,m'}:\mathsf{Z}_{m,m'}\rightarrow\mathsf{Z}_{m',m}$ with its inverse $\phi_{m,m'}^{-1}=\phi_{m',m}$. Note that the nature of $z_{m}$ and $z_{m'}$ may differ, as may that of $\mathfrak{z}{}_{m,m'}$ and $\mathfrak{z}_{m',m}$, which explains the need for the (cumbersome) indexing. In order to ease the notation in the following presentation, for $z_{m,m'}:=(z_{m},\mathfrak{z}_{m,m'})\in\mathsf{Z}_{m,m'}$ and $A_{m,m'}\in\mathscr{Z}_{m,m'}$, we will use the following transformations with implicit reference to $z_{m,m'}$ and $A_{m,m'}$: $$\begin{split} & z_{m',m}:=(z_{m'},\mathfrak{z}_{m',m})=\phi_{m,m'}(z_{m,m'}),\quad A_{m',m}:=\phi_{m,m'}(A_{m,m'})\\ & z_{m,m'}^{[1]}:=z_{m},\quad z_{m,m'}^{[2]}:=\mathfrak{z}_{m,m'},\quad\phi_{m,m'}^{[1]}(z_{m,m'}):=z_{m'},\quad\phi_{m,m'}^{[2]}(z_{m,m'}):=\mathfrak{z}_{m',m}. \end{split} \label{eq: transdimensional model shorthand transformations}$$ This change of variables plays a crucial rôle in describing and establishing the correctness of the algorithms. In the following, we define the ingredients required for the AIS RJ-MCMC algorithm and its MHAAR extension, paralleling the conditions of Propositions \[prop:MHwithAISinside\] and \[prop:extensionMHwithAISinside\]. - For any $m,m'\in\Theta$, we first define below the sequence of bridging distributions $\mathscr{P}_{m,m',T}=\{\pi_{m,m',t},t=0,\ldots,T+1\}$ on the extended probability space $\big(\mathsf{Z}_{m,m'},\mathscr{Z}_{m,m'}\big)$.
First, we impose the end-point condition $$\begin{aligned} \pi_{m,m',0}(z_{m,m'})\propto\pi(m,z_{m})\omega_{m,m'}(\mathfrak{z}_{m,m'})=:f_{m,m',0}\big(z_{m,m'}\big),\end{aligned}$$ from which for any $m,m'\in\Theta$ we define $\pi_{m,m',T+1}(\cdot)$ and its unnormalised density $f_{m,m',T+1}(\cdot)$ via a change of variable, that is for any $A_{m,m'}\in\mathscr{Z}_{m,m'}$, $$\pi_{m,m',T+1}(A_{m,m'}):=\pi_{m',m,0}\big(A_{m',m}\big),$$ where we recall that $\pi_{m',m,0}(\cdot)$ has marginal $\pi_{m'}(\cdot)$. From the associated densities one can define $f_{m,m',t}(\cdot)$ for $t=1,\ldots,T$, as discussed in earlier sections for non-trans-dimensional setups (see also @Karagiannis_and_Andrieu_2013 for a detailed discussion). In order to satisfy \[enu:b\] of Proposition \[prop:MHwithAISinside\] we further impose, noting the bijective nature of $z_{m',m}=\phi_{m,m'}\big(z_{m,m'}\big)$, that for any $A_{m,m'}\in\mathscr{Z}_{m,m'}$ $$\begin{aligned} \pi_{m,m',t}(A_{m,m'})=\pi_{m',m,T+1-t}\big(A_{m',m}\big),\quad t=1,\ldots,T.\end{aligned}$$ It is this set of constraints which requires care, and an arbitrary choice of parametrisation in the calculation of the Radon-Nikodym derivative of our algorithm. The normalising constants for $f_{m,m',0}(z_{m,m'})$ and $f_{m,m',T+1}(z_{m,m'})$ are $\pi(m)$ and $\pi(m')$ respectively, so the AIS stage of the algorithm will produce an estimate of the ratio $\pi(m')/\pi(m)$. - Next, we define the AIS kernels used in the proposal mechanism. For any $m,m'\in\Theta$ and $t=1,\ldots,T$ we let $R_{m,m',t}(\cdot,\cdot)$ be a $\pi_{m,m',t}-$reversible Markov kernels and impose the symmetry conditions for any $(z_{m,m'},A_{m,m'})\in\mathsf{Z}_{m,m'}\times\mathscr{Z}_{m,m'}$, $$R_{m,m',t}(z_{m,m'},A_{m,m'})=R_{m',m,T-t+1}(z_{m',m},A_{m',m}).\label{eq:RJ-symmetry-cond}$$ The space bridging transition kernels $\overrightarrow{R}_{m,m',0}\colon\mathsf{Z}_{m}\times\mathscr{Z}_{m,m'}\rightarrow[0,1]$ and $\overrightarrow{R}_{m,m',T+1}\colon\mathsf{Z}_{m,m'}\times\mathscr{Z}_{m'}\rightarrow[0,1]$ are defined as $$\begin{split}\overrightarrow{R}_{m,m',0}\big(z_{m},{\rm d}z_{m,m'}\big) & :=\delta_{z_{m}}\big({\rm d}z_{m,m'}^{[1]}\big)\omega_{m,m'}({\rm d}z_{m,m'}^{[2]})\\ \overrightarrow{R}_{m,m',T+1}\big(z_{m,m'},{\rm d}z_{m'}\big) & :=\delta_{\phi_{m,m'}^{[1]}(z_{m,m'})}\big({\rm d}z_{m'}\big) \end{split} \label{eq: trandimensional first two space bridging kernels}$$ The other space bridging kernels $\overleftarrow{R}_{m,m',0}\colon\mathsf{Z}_{m,m'}\times\mathscr{Z}_{m}\rightarrow[0,1]$ and $\overleftarrow{R}_{m,m',T+1}\colon\mathsf{Z}_{m'}\times\mathscr{Z}_{m,m'}\rightarrow[0,1]$ will be defined from the first two above. 
Specifically, for any $m,m'\in\Theta$ and $\big(z_{m},z_{m'},z_{m,m'},A_{m,m'}\big)\in\mathsf{Z}_{m}\times\mathsf{Z}_{m'}\times\mathsf{Z}_{m,m'}\times\mathscr{Z}_{m,m'}$ $$\begin{split}\overleftarrow{R}_{m,m',0}\big(z_{m,m'},{\rm d}z{}_{m}\big) & :=\overrightarrow{R}_{m',m,T+1}\big(z_{m',m},{\rm d}z{}_{m}\big)\\ \overleftarrow{R}_{m,m',T+1}\big(z_{m'},A_{m,m'}\big) & :=\overrightarrow{R}_{m',m,0}\big(z_{m'},A_{m',m}\big) \end{split} .\label{eq: trandimensional last two dim-matching kernels-1}$$ We notice the important properties, central to Green’s methodology, $$\begin{split}\pi_{m}({\rm d}z_{m})\overrightarrow{R}_{m,m',0}\big(z_{m},{\rm d}z_{m,m'}\big) & =\pi_{m,m',0}({\rm d}z_{m,m'})\overleftarrow{R}_{m,m',0}\big(z_{m,m'},{\rm d}z_{m}\big)\\ \pi_{m'}({\rm d}z_{m'})\overleftarrow{R}_{m,m',T+1}\big(z_{m'},{\rm d}z_{m,m'}\big) & =\pi_{m,m',T+1}({\rm d}z_{m,m'})\overrightarrow{R}_{m,m',T+1}\big(z_{m,m'},{\rm d}z_{m'}\big) \end{split} \label{eq: trans-dimensional model endpoint relations}$$ so that we are in the framework described in Theorem \[thm:extensionAIS\] and satisfy the corresponding conditions in Proposition \[prop:extensionMHwithAISinside\]. - Finally, we define the distribution for the auxiliary variables of AIS and the involution function. Given $m,m'\in\Theta$, define the auxiliary variables $$u_{m,m'}:=(u_{m,m',0},\ldots,u_{m,m',T})\in\mathsf{Z}_{m,m'}^{T+1}$$ and the mapping $\varphi_{m,m'}:\mathsf{Z}_{m,m'}^{T+1}\rightarrow\mathfrak{\mathsf{Z}}_{m',m}^{T+1}$ $$u_{m',m}=\varphi_{m,m'}(u_{m,m'}):=\big(\phi_{m,m'}(u_{m,m',T}),\phi_{m,m'}(u_{m,m',T-1}),\ldots,\phi_{m,m'}(u_{m,m',0})\big),\label{eq: trans-dimensional bijection+involution}$$ so that $\varphi_{m,m'}^{-1}=\varphi_{m',m}$. For $m,m'\in\Theta$ and $T\geq0$, we define the distribution for the auxiliary variables $$Q_{m,m',z_{m}}({\rm d}u_{m,m'}):=\overrightarrow{R}_{m,m',0}(z_{m},{\rm d}u_{m,m',0})\prod_{t=1}^{T}R_{m,m',t}(u_{m,m',t-1},{\rm d}u_{m,m',t}).$$ Now we are ready to define the AIS RJ-MCMC algorithm. From the symmetry conditions and , and our choice of $\varphi_{m,m'}$, one can establish that $$\bar{Q}_{m,m',z_{m}}({\rm d}u_{m,m'})=\overleftarrow{R}_{m',m,T+1}(z_{m},{\rm d}u_{m',m,T})\prod_{t=1}^{T}R_{m',m,T-t+1}(u_{m',m,T-t+1},{\rm d}u_{m',m,T-t}),$$ where $u_{m',m,t}=\phi_{m,m'}(u_{m,m',T-t+1})$ for $t=0,1,\ldots,T$ by . This implies in particular that $$\begin{aligned} \bar{Q}_{m',m,z_{m'}}({\rm d}u_{m',m}) & =\overleftarrow{R}_{m,m',T+1}(z_{m'},{\rm d}u_{m,m',T})\prod_{t=1}^{T}R_{m,m',T-t+1}(u_{m,m',T-t+1},{\rm d}u_{m,m',T-t}).\label{eq: AIS RJ-MCMC Q_bar}\end{aligned}$$ The AIS RJ-MCMC algorithm of @Karagiannis_and_Andrieu_2013 uses the proposal kernel $$Q_{1}((m,z_{m}),{\rm d}(m',z{}_{m'},u_{m,m'})=q(m,m')Q_{m,m',z_{m}}({\rm d}u_{m,m'})\overrightarrow{R}_{m,m',T+1}(u_{m,m',T},{\rm d}z{}_{m'})$$ and its complementary $$\begin{aligned} Q_{2}((m,z_{m}),{\rm d}(m',z{}_{m'},u_{m,m'}) & =q(m,m')\bar{Q}_{m,m',z_{m}}({\rm d}u_{m,m'})\overleftarrow{R}_{m',m,0}(u_{m',m,0},{\rm d}z{}_{m'}),\label{eq: AIS RJ-MCMC Q2 line1}\\ & =q(m,m')\bar{Q}_{m,m',z_{m}}({\rm d}u_{m,m'})\overrightarrow{R}_{m,m',T+1}(u_{m,m',0},{\rm d}z{}_{m'}),\label{eq: AIS RJ-MCMC Q2 line2}\end{aligned}$$ (We have kept to emphasise that one can write (and implement) both kernels using the same auxiliary variables $u_{m,m'}$. This will be more relevant in the MHAAR extension in Section \[subsec:Extension to MHAAR\] where one also samples from $Q_{2}^{N}$, which is based on $Q_{2}$.) 
Equation , combined with and , shows that we are in the framework of Theorem \[thm: pseudo-marginal ratio algorithms\] and Theorem \[thm:extensionAIS\]. The acceptance ratio of the AIS RJ-MCMC can be written in terms of $m,m',z_{m}$ and $u_{m,m'}$, leading to $$\mathring{r}_{u_{m,m'}}(m,m')=\frac{q(m',m)}{q(m,m')}\prod_{t=0}^{T}\frac{f_{m,m',t+1}(u_{m,m',t})}{f_{m,m',t}(u_{m,m',t})}.$$ When $T=0$, the AIS RJ-MCMC algorithm reduces to the original RJ-MCMC algorithm of @Green_1995. ### MHAAR extension of AIS RJ-MCMC \[subsec:Extension to MHAAR\] The MHAAR extension of AIS RJ-MCMC for averaging AIS based pseudo-marginal ratios, that is Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\] crafted for the trans-dimensional model, should be clear by now. By analogy with the case of general latent variable models, the proposal mechanisms of the MHAAR extension of AIS RJ-MCMC follow immediately from the kernels defined above as $$\begin{aligned} Q_{1}^{N}\big((m,z_{m}),\mathrm{d}(y,\mathfrak{u}_{m,m'},k)\big) & :=q(m,m')\prod_{i=1}^{N}Q_{m,m',z_{m}}({\rm d}u_{m,m'}^{(i)})\frac{\mathring{r}_{u_{m,m'}^{(k)}}(m,m')}{\sum_{i=1}^{N}\mathring{r}_{u_{m,m'}^{(i)}}(m,m')}\overrightarrow{R}_{m,m',T+1}(u_{m,m',T}^{(k)},{\rm d}z_{m'}),\\ Q_{2}^{N}\big((m,z_{m}),{\rm d}(y,\mathfrak{u}_{m,m'},k)\big) & :=q(m,m')\frac{1}{N}\bar{Q}_{m,m',z_{m}}({\rm d}u_{m,m'}^{(k)})\overrightarrow{R}_{m,m',T+1}(u_{m,m',0}^{(k)},{\rm d}z_{m'})\prod_{i=1,i\neq k}^{N}Q_{m',m,z_{m'}}({\rm d}u_{m',m}^{(i)}),\end{aligned}$$ which leads to the averaged acceptance ratio $\mathring{r}_{\mathfrak{u}_{m,m'}}^{N}(m,m')=(1/N)\sum_{i=1}^{N}\mathring{r}_{u_{m,m'}^{(i)}}(m,m')$ when sampling from $Q_{1}^{N}(\cdot)$ and $\mathring{r}_{\mathfrak{u}_{m,m'}}^{N}(m',m)^{-1}$ when sampling from $Q_{2}^{N}(\cdot)$. As discussed in Section \[subsec: Choosing Q1 and Q2 with different probabilities\], it is possible to choose between the two proposal mechanisms with a probability depending on the current state and on part of the proposed state, in contrast with the $1/2-1/2$ default choice above, leading to modified acceptance ratios of the form given in . We discuss here how this can be taken advantage of for computational purposes. Assume for simplicity of exposition that only moves from model $m$ to models $m-1$ and $m+1$ are allowed (for $m$ such that these moves are valid). As illustrated below, it may be sensible to use $Q_{1}^{N}(\cdot)$ rather than $Q_{2}^{N}(\cdot)$ to increase the model index and vice-versa to decrease the model index. This can be achieved for example by setting $\beta(m,m+1)=1$ and $\beta(m,m-1)=0$. A scenario where this is a potentially good idea is for example when $\mathfrak{z}_{m,m+1}$ can take values on a continuum while $\mathfrak{z}_{m+1,m}$ can only take a finite number of values, say $c_{m+1,m}$. Generating $N\gg c_{m+1,m}$ copies of $\mathfrak{z}_{m+1,m}$ and averaging may be wasteful in comparison to the generation of $N$ values of $\mathfrak{z}_{m,m+1}$. Using the strategy above, one can ensure that $Q_{1}^{N}(\cdot)$ is used when “going up” while $Q_{2}^{N}(\cdot)$ is only used to “go down”. This is the case for the Poisson change-point model example of the next section. In Algorithm \[alg: Reversible multiple jump MCMC\] we present the implementation of a particular version of this algorithm, for a general value $\beta(m,m')$, $T=0$ and $N\geq1$; a schematic sketch of the corresponding “upward” accept step is given below.
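The sketch below illustrates the $Q_{1}^{N}$-type (“upward”) move of this scheme for $T=0$; it is a schematic illustration only, and all of the routines it takes as arguments (`propose_model`, `sample_dm`, `green_ratio`, `push_forward`, `beta`) are hypothetical placeholders for the model proposal $q$, a draw from $\omega_{m,m'}$, the ratio $f_{m,m',1}/f_{m,m',0}$ evaluated at one dimension-matching draw (including the Jacobian of $\phi_{m,m'}$), the map $\phi_{m,m'}^{[1]}$ and the function $\beta$, respectively. The “downward” move, based on $Q_{2}^{N}$, mirrors it.

```python
import numpy as np

def rmj_upward_move(m, z_m, propose_model, sample_dm, green_ratio, push_forward,
                    beta, N, rng):
    """One Q_1^N-type RmJ-MCMC update with T = 0 (schematic sketch)."""
    mp, q_fwd, q_bwd = propose_model(m, rng)              # m', q(m,m'), q(m',m)
    vs = [sample_dm(m, mp, rng) for _ in range(N)]        # N dimension-matching draws
    rs = np.array([green_ratio(m, z_m, mp, v) for v in vs])
    r_bar = (q_bwd / q_fwd) * rs.mean()                   # averaged acceptance ratio
    r_mod = r_bar * (1.0 - beta(mp, m)) / beta(m, mp)     # modification by beta, cf. above
    if rng.uniform() < min(1.0, r_mod):
        # k is only needed upon acceptance; drawn with probability proportional to r^(k)
        k = rng.choice(N, p=rs / rs.sum())
        return mp, push_forward(m, z_m, mp, vs[k])
    return m, z_m
```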
Because of its similarity to the RJ-MCMC of @Green_1995 but with the difference of generating multiple auxiliary variables (hence multiple jumps) instead of one, we call this algorithm Reversible-multiple-jump MCMC (RmJ-MCMC). Sample $m'\sim q(m,\cdot)$ and $v\sim\mathcal{U}(0,1)$.\ ### Numerical example: the Poisson multiple change-point model \[subsec: Numerical example: Poisson multiple changepoint model\] The Poisson multiple change-point model was proposed for the analysis of the coal-mining disasters in @Green_1995. The model assumes that $n$ data points $y_{1},\ldots,y_{n}$, which are the times of occurrence of disasters with the choice $y_{0}=0$, arise from a non-homogeneous Poisson process on a time interval $[0,L]$ with intensity modelled as a step function with an unknown number of steps $m$ having unknown starting points $0=s_{0,m}<s_{1,m}<\ldots<s_{m,m}=L$ and unknown heights $h_{1,m},\ldots,h_{m,m}$. We will refer to the model involving $m$ steps as model $m$. Therefore, denoting $\omega_{m}=(\{s_{j,m}\}_{j=0}^{m},\{h_{j,m}\}_{j=1}^{m})$, the data log-likelihood under model $m$ is $$\log\mathcal{L}(y_{1:n};\omega_{m})=\sum_{j=1}^{m}\log\left(h_{j,m}\right)\sum_{i=1}^{n}\mathbb{I}_{[s_{j-1,m},s_{j,m})}(y_{i})-\sum_{j=1}^{m}h_{j,m}(s_{j,m}-s_{j-1,m}).$$ The prior distribution for $\omega_{m}$ is as follows: $\{s_{j,m}\}_{j=1}^{m-1}$ are distributed as the even-numbered order statistics from $2m-1$ points uniformly distributed on $[0,L]$; the heights $h_{j,m}$, $j=1,\ldots,m$, are independent and each follows a Gamma distribution $\mathcal{G}(\alpha_{m},\beta_{m})$, where $\alpha_{m}$ and $\beta_{m}$ themselves are independent random variables admitting distributions $\mathcal{G}(c,d)$ and $\mathcal{G}(e,f)$, respectively. Finally, the prior distribution for $m$ is a truncated Poisson distribution $\mathcal{P}_{m_{\max}}(\lambda)$ restricted to $1\leq m\leq m_{\max}$. The hyperparameters $(c,d,e,f,\lambda,m_{\max})$ are assumed known; we let $\Theta=\{1,\ldots,m_{\max}\}$ and $$z_{m}=(\omega_{m},\alpha_{m},\beta_{m})\in\mathsf{Z}_{m}=(0,L)^{m-1}\times(0,\infty)^{m}\times(0,\infty)\times(0,\infty)$$ be the within-model parameters of model $m$. This defines a trans-dimensional distribution $\pi(m,\mathrm{d}z_{m})$ on $\mathsf{X}=\cup_{m\in\Theta}\{m\}\times\mathsf{Z}_{m}$ where the dimension $d_{m}$ of $\mathsf{Z}_{m}$ depends on $m$. The distribution $\pi(m,\mathrm{d}z_{m})$ admits a density $\pi(m,z_{m})$ known up to a normalising constant; this unnormalised density can easily be derived from the description of the model above. Our experiment on the Poisson change-point model focuses on showing that improvement over standard RJ-MCMC can be obtained solely by using asymmetric MCMC with multiple dimension matching variables (as discussed in the paragraph above); hence we run RmJ-MCMC in Algorithm \[alg: Reversible multiple jump MCMC\] for several values of $N$ and $T=0$. Each run generates $K=10^{6}$ samples of which the last $3K/4$ are used to compute the IAC for $m$. Note that we also include an MCMC move for the within-model variables $z_{m}$ at every iteration in order to ensure irreducibility; the details of this move can be found in @Karagiannis_and_Andrieu_2013.
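For concreteness, the step-function Poisson log-likelihood above can be evaluated as follows; the change points, heights and simulated event times below are placeholders rather than the coal-mining disaster record.

```python
import numpy as np

# Log-likelihood of a step-function intensity for event times y_1..y_n on [0, L]:
#   sum_j log(h_j) * (#events in [s_{j-1}, s_j)) - sum_j h_j (s_j - s_{j-1}).
def log_lik(y, s, h):
    """s = (s_0=0, ..., s_m=L) change points, h = (h_1, ..., h_m) heights."""
    counts = np.histogram(y, bins=s)[0]          # events falling in each step
    return np.sum(counts * np.log(h)) - np.sum(h * np.diff(s))

L = 100.0
s = np.array([0.0, 40.0, 100.0])                 # m = 2 steps
h = np.array([0.8, 0.2])
rng = np.random.default_rng(2)
# simulate events from the step intensity, piece by piece
y = np.concatenate([
    rng.uniform(s[j], s[j + 1], rng.poisson(h[j] * (s[j + 1] - s[j])))
    for j in range(len(h))
])
print("log-likelihood:", log_lik(np.sort(y), s, h))
```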
In order to illustrate the gains in terms of convergence to equilibrium of our scheme, we ran $3000$ independent realisations of the algorithm started at the same point $x_{0}$ and estimated the expectations of $f_{m}(X_{t}):=\mathbb{I}\{M_{t}=m\}$, that is $\mathbb{E}_{x_{0}}^{N}\big[f_{m}(X_{t})\big]$, by an ensemble average; we report $\big|\hat{\pi}(m)-3000^{-1}\sum_{k=1}^{3000}f_{m}(X_{t}^{(k)})\big|$ for $m\in\{1,\ldots,8\}$ and $N=1,10,100$ in Figure \[fig:reversible-jump-burnin\], where $\hat{\pi}(m)$ was estimated by a realisation of length $10^{6}$ with $N=90$ and $T=50$, discarding the burn-in. We see that the approach reduces the time to convergence to equilibrium by the order of $50\%$, while variance reduction is automatic and of the order of $60\%$, as illustrated in Figure \[fig: reversiblejumpexample\]. We also provide results for the AIS scheme for illustration. ![Estimates of time to convergence of $\mathbb{E}_{x_{0}}^{N}\big[f_{m}(X_{t})\big]$ to $\pi(m)$ for $N=1,10,100$.[]{data-label="fig:reversible-jump-burnin"}](chp_convergence_model_probabilities_until_200_first8){width="100.00000%"} ![Left: IAC for $m$ vs number of particles $N=1,2,\ldots,10,20,\ldots,100$ with $T=0$. Right: IAC for $m$ vs number of annealing steps $T=0,1,2,\ldots,10,20,\ldots,100$ with $N=1$.[]{data-label="fig: reversiblejumpexample"}](changepoint_IAC_asymMCMCandAISMCMC) State-space models: SMC and cSMC within MHAAR \[sec: State-space models: SMC and conditional SMC within MHAAR\] =============================================================================================================== In Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\] we have shown how the generic MHAAR strategy, which consists of averaging independent estimates of the acceptance ratio, could be helpful in the context of inference in state-space models. Here we present an alternative where dependent estimates arising from a single conditional SMC algorithm can be averaged in order to improve performance. State-space models and cSMC \[subsec: State-space models and conditional SMC\] ------------------------------------------------------------------------------- In its simplest form, a state-space model consists of a latent Markov chain $\{Z_{t};t\geq1\}$ taking its values in some measurable space $(\mathsf{Z},\mathcal{Z})$ and observations $\{Y_{t};t\geq1\}$ taking values in $(\mathsf{Y},\mathcal{Y})$. The latent process has initial distribution with density $\mu_{\theta}(z_{1})$ and transition density $f_{\theta}(z_{t-1},z_{t})$, dependent on a parameter $\theta\in\Theta\subset\mathbb{R}^{d_{\theta}}$. An observation at time $t$ is assumed conditionally independent of all other random variables given $Z_{t}=z_{t}$, and its conditional observation density is $g_{\theta}(z_{t},y_{t})$. The corresponding joint density of the latent and observed variables up to time $T$ is $$p_{\theta}(z_{1:T},y_{1:T})=\mu_{\theta}(z_{1})\prod_{t=2}^{T}f_{\theta}(z_{t-1},z_{t})\prod_{t=1}^{T}g_{\theta}(z_{t},y_{t}),\label{eq: HMM joint density}$$ from which the likelihood function associated with the observations $y_{1:T}$ can be obtained $$\ell_{\theta}(y_{1:T}):=\int_{\mathsf{Z}^{T}}p_{\theta}(z_{1:T},y_{1:T}){\rm d}z_{1:T}.\label{eq:likelihoodSSM}$$ Note that the densities $f_{\theta}$ and $g_{\theta}$ could also depend on $t$, at the expense of notational complications, and that $T$ is here the time horizon of the time series and should not be confused with the number of intermediate steps in AIS in the previous sections.
We allow this abuse of notation since there are no intermediate steps involved in the methodology for HMMs developed in this paper. In order to go back to our generic notation, we let $z=z_{1:T}$ and $y=y_{1:T}$. With a prior distribution $\eta(\mathrm{d}\theta)$ on $\theta$ with density $\eta(\theta)$, the joint posterior $\pi(\mathrm{d}(\theta,z))$ has the density $$\pi(\theta,z)\propto\eta(\theta)p_{\theta}(z,y)$$ so that $\pi(\theta)\propto\eta(\theta)\ell_{\theta}(y)$ and $\pi_{\theta}(z):=p_{\theta}(z\mid y)=p_{\theta}(z,y)/\ell_{\theta}(y)$. The conditional sequential Monte Carlo (cSMC) algorithm for this state-space model is given in Algorithm \[alg: Conditional SMC\], where particles are initialised using distribution $h_{\theta}(\cdot)$ on $(\mathsf{Z},\mathcal{Z})$ at time $1$ and propagated at times $t>1$ using the transition kernel $H_{\theta}(\cdot,\cdot)$ on $(\mathsf{Z},\mathcal{Z})$. The cSMC algorithm is an MCMC transition probability, akin to particle filters, particularly well suited to sampling from $\pi_{\theta}(\mathrm{d}z)$ [@Andrieu_et_al_2010]. It was recently shown in @Lindsten_and_Schon_2012 that cSMC with backward sampling [@whiteley2010discussion] can be used efficiently as part of a more elaborate Metropolis-within-Particle Gibbs algorithm in order to sample from the posterior distribution $\pi(\mathrm{d}(\theta,z))$; see Algorithm \[alg: Metropolis within particle Gibbs\]. Set $\zeta_{1}^{(1)}=z_{1}$. Sample $k_{T}\sim\mathcal{P}\big(w_{T}^{(1)},\ldots,w_{T}^{(M)}\big)$ and set $z'_{T}=\zeta_{T}^{(k_{T})}$. \[line:beginBS\] \[line:BSend\] $\zeta=\zeta_{1:T}^{(1:M)}$ and $z'=z'_{1:T}$. Sample $z'\sim\mathrm{cSMC}\big(M,\theta,z\big)$.\ Sample $\theta'\sim q(\theta,\cdot)$.\ Return $X_{n+1}=(\theta',z')$ with probability $$\min\left\{ 1,\frac{\eta(\theta')p_{\theta'}(z,y)q(\theta',\theta)}{\eta(\theta)p_{\theta}(z,y)q(\theta,\theta')}\right\} ;\label{eq: noisy acceptance ratio}$$ otherwise return $X_{n+1}=(\theta,z')$. Retaining one path from the $T\times M$ samples in the cSMC algorithm involved in Algorithm \[alg: Metropolis within particle Gibbs\] may seem to be wasteful, and a natural question is whether it is possible to make use of multiple, or even all possible, trajectories and average out the corresponding acceptance ratios . We show that this is indeed possible with Algorithms \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] and \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] in the next section. We then show that these schemes improve performance at a cost which can be negligible, in particular when a parallel computing architecture is available. In order to avoid notational overload we postpone the justification of the algorithms to Appendix \[sec: Auxiliary results and proofs for cSMC based algorithms \].
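To fix ideas, here is a minimal sketch of a bootstrap cSMC sweep in the spirit of the construction above: multinomial resampling, bootstrap propagation, and the retained path frozen in one particle slot. The linear-Gaussian model is our own toy placeholder, and ancestor bookkeeping is omitted because only the particle values and log-weights are needed for the backward-sampling computations discussed in the next subsection.

```python
import numpy as np

# Minimal bootstrap conditional SMC sketch (retained path kept in slot 0).
def csmc(y, z_ret, M, sample_mu, sample_f, log_g, rng):
    T = len(y)
    zeta = np.empty((T, M))          # particle values
    logw = np.empty((T, M))          # log-weights
    zeta[0] = sample_mu(M, rng)
    zeta[0, 0] = z_ret[0]            # condition on the retained path
    logw[0] = log_g(zeta[0], y[0])
    for t in range(1, T):
        w = np.exp(logw[t - 1] - logw[t - 1].max())
        anc = rng.choice(M, size=M, p=w / w.sum())    # multinomial resampling
        zeta[t] = sample_f(zeta[t - 1][anc], rng)     # bootstrap propagation
        zeta[t, 0] = z_ret[t]                         # keep the retained path
        logw[t] = log_g(zeta[t], y[t])
    return zeta, logw

# Toy linear-Gaussian model: z_t = 0.9 z_{t-1} + N(0,1), y_t = z_t + N(0,1).
rng = np.random.default_rng(3)
T, M = 50, 100
z_true = np.zeros(T)
for t in range(1, T):
    z_true[t] = 0.9 * z_true[t - 1] + rng.normal()
y = z_true + rng.normal(size=T)

sample_mu = lambda M, rng: rng.normal(size=M)
sample_f  = lambda zprev, rng: 0.9 * zprev + rng.normal(size=zprev.shape)
log_g     = lambda z, yt: -0.5 * (yt - z) ** 2

zeta, logw = csmc(y, z_true, M, sample_mu, sample_f, log_g, rng)
print(zeta.shape, logw.shape)
```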
Algorithms \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] and \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] are alternatives to the recently developed Metropolis-within-Particle Gibbs approach described above. MHAAR with cSMC for state-space models \[subsec: MHAAR with cSMC for SSM\] -------------------------------------------------------------------------- The law of the indices $\mathbf{k}:=(k_{1},\ldots,k_{T})$ drawn in the backward sampling step in Algorithm \[alg: Conditional SMC\] (lines \[line:beginBS\]-\[line:BSend\]) conditional upon $\zeta=\zeta_{1:T}^{(1:M)}$ is given by $$\phi_{\theta}(\mathbf{k}\mid\zeta):=\frac{w_{T}(\zeta_{T}^{(k_{T})})}{\sum_{i=1}^{M}w_{T}(\zeta_{T}^{(i)})}\prod_{t=1}^{T-1}\frac{w_{t}(\zeta_{t}^{(k_{t})})f_{\theta}(\zeta_{t}^{(k_{t})},\zeta_{t+1}^{(k_{t+1})})}{\sum_{i=1}^{M}w_{t}(\zeta_{t}^{(i)})f_{\theta}(\zeta_{t}^{(i)},\zeta_{t+1}^{(k_{t+1})})}.$$ We introduce the Markov kernel which corresponds to the sampling of a trajectory $z$ with backward sampling, conditional upon $\zeta$, $$\check{\Phi}_{\theta}(\zeta,{\rm d}z)=\sum_{\mathbf{k}\in[M]^{T}}\phi_{\theta}(\mathbf{k}|\zeta)\delta_{\zeta^{(\mathbf{k})}}({\rm d}z),$$ where we define $[M]=\{1,\ldots,M\}$ and $\zeta^{(\mathbf{k})}:=(\zeta_{1}^{(k_{1})},\ldots,\zeta_{T}^{(k_{T})})$. Further, for any $\theta,\theta',\tilde{\theta}\in\Theta$, and $z,z'\in\mathsf{Z}^{T}$, define $$\mathring{r}_{z,z'}(\theta,\theta';\tilde{\theta})=\frac{q(\theta',\theta)\eta(\theta')p_{\theta'}(z',y)p_{\tilde{\theta}}(z,y)}{q(\theta,\theta')\eta(\theta)p_{\tilde{\theta}}(z',y)p_{\theta}(z,y)}.\label{eq: AIS acceptance ratio for SSM}$$ In the following, we show that it is possible to construct unbiased estimators of $r(\theta,\theta')$ using cSMC, provided we have a random sample $z\sim\pi_{\theta}(\cdot)$. Specifically, this is achieved as the expected value of $\mathring{r}_{z,\zeta^{(\mathbf{k})}}(\theta,\theta';\tilde{\theta})$ with respect to the backward sampling distribution on $\mathbf{k}$, $$\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta}):=\sum_{\mathbf{k}\in[M]^{T}}\phi_{\tilde{\theta}}(\mathbf{k}|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{k})}}(\theta,\theta';\tilde{\theta}).\label{eq: SMC acceptance ratio estimator all paths}$$ \[thm: SMC unbiased estimator of acceptance ratio\]For $\theta,\theta',\tilde{\theta}\in\Theta$ and any $M\geq1$, let $z\sim\pi_{\theta}(\cdot)$, $\zeta|z\sim\mathrm{cSMC}(M,\tilde{\theta},z)$ be the generated particles from the cSMC algorithm targeting $\pi_{\tilde{\theta}}(\cdot)$ with $M$ particles, conditioned on $z$. Then, is an unbiased estimator of $r(\theta,\theta')$. Theorem \[thm: SMC unbiased estimator of acceptance ratio\] is original to the best of our knowledge and we find it interesting in several respects. Firstly, unlike the estimator in Metropolis-within-Particle Gibbs (Algorithm \[alg: Metropolis within particle Gibbs\]), the estimators in Theorem \[thm: SMC unbiased estimator of acceptance ratio\] use all possible paths from the particles generated by the cSMC. Also, with a slight modification one can similarly obtain unbiased estimators for $\pi(\theta')/\pi(\theta)$, which is of primary interest in some applications. The theorem is derived from @del2010backward [Theorem 5.2] and the results in @Andrieu_et_al_2010 relating the laws of cSMC and SMC. The proof of the theorem is left to Appendix \[sec: Auxiliary results and proofs for cSMC based algorithms \].
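A naive implementation of the backward-sampling law $\phi_{\theta}(\cdot\mid\zeta)$, drawing one index vector $\mathbf{k}$ from stored particle values and log-weights, might look as follows; the AR(1) transition density and the synthetic particle arrays are placeholders standing in for the output of a cSMC sweep such as the one sketched earlier.

```python
import numpy as np

def backward_sample(zeta, logw, log_f, rng):
    """Draw indices k ~ phi_theta(. | zeta) from particle values and log-weights."""
    T, M = zeta.shape
    k = np.empty(T, dtype=int)
    p = np.exp(logw[-1] - logw[-1].max())
    k[-1] = rng.choice(M, p=p / p.sum())
    for t in range(T - 2, -1, -1):
        # proportional to w_t(zeta_t^{(i)}) f_theta(zeta_t^{(i)}, zeta_{t+1}^{(k_{t+1})})
        logp = logw[t] + log_f(zeta[t], zeta[t + 1, k[t + 1]])
        p = np.exp(logp - logp.max())
        k[t] = rng.choice(M, p=p / p.sum())
    return k

# Synthetic stand-ins for the particles and log-weights of a cSMC sweep,
# and a toy AR(1) transition density.
rng = np.random.default_rng(4)
T, M = 50, 100
zeta = rng.normal(size=(T, M))
logw = -0.5 * rng.normal(size=(T, M)) ** 2
log_f = lambda zprev, znext: -0.5 * (znext - 0.9 * zprev) ** 2

k = backward_sample(zeta, logw, log_f, rng)
path = zeta[np.arange(T), k]          # the selected trajectory zeta^{(k)}
print(path.shape)
```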
In particular, Theorem \[thm: SMC unbiased estimator of acceptance ratio\] motivates us to design an asymmetric MCMC algorithm which uses the unbiased estimator mentioned in the theorem in its acceptance ratios. We present Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] that is developed with this motivation. The algorithm requires a pair of functions $\tilde{\theta}_{1}:\Theta^{2}\rightarrow\Theta$ and $\tilde{\theta}_{2}:\Theta^{2}\rightarrow\Theta$ that satisfy $\tilde{\theta}_{1}(\theta,\theta')=\tilde{\theta}_{2}(\theta',\theta)$, in order to determine the intermediate parameter value at which cSMC is run. Sample $\theta'\sim q(\theta,\cdot)$ and $v\sim\mathcal{U}(0,1)$\ The proof that Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] is reversible is established in Appendix \[subsec: Proof of reversibility for Algorithms\]. The proof has two interesting by-products: (i) An alternative proof of Theorem \[thm: SMC unbiased estimator of acceptance ratio\], and (ii) another unbiased estimate of $r(\theta,\theta')$ that uses all possible paths that can be constructed from the particles generated by a cSMC, which is stated in the following corollary. \[cor: SMC unbiased estimator of acceptance ratio\]For $\theta,\theta',\tilde{\theta}\in\Theta$ and any $M\geq1$, let $z\sim\pi_{\theta}(\cdot)$, $\zeta|z\sim{\rm cSMC}(M,\tilde{\theta},z)$ be the generated particles from the cSMC algorithm with $M$ particles at $\tilde{\theta}$ conditioned on $z$, and $z'|\zeta\sim\check{\Phi}_{\tilde{\theta}}(\zeta,\cdot)$. Then, $1/\mathring{r}_{z',\zeta}(\theta',\theta,\tilde{\theta})$ is an unbiased estimator of $r(\theta,\theta').$ Reduced computational cost via subsampling \[subsec: Easing computational burden with subsampling: Multiple paths BS-SMC\] -------------------------------------------------------------------------------------------------------------------------- The computations needed to implement Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] can be performed with a complexity of $\mathcal{O}(M^{2}T)$ upon observing that the unnormalised probability can be written as $$\phi_{\tilde{\theta}}(\mathbf{k}|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{k})}}(\theta,\theta';\tilde{\theta})=:\kappa_{z,\zeta}(\mathbf{k})=\kappa_{z,\zeta,1}(k_{1})\prod_{t=2}^{T}\kappa_{z,\zeta,t}(k_{t-1},k_{t})$$ for an appropriate choice for the functions $\kappa_{z,\zeta,t}$. Indeed, the expression above implies that computation of $\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})=\sum_{\mathbf{k}\in[M]^{T}}\kappa_{z,\zeta}(\mathbf{k})$ can be performed by a sum-product algorithm and sampling $\mathbf{k}$ with probability proportional to $\kappa_{z,\zeta}(\mathbf{k})$ can be performed with a forward-filtering backward-sampling algorithm. However, $\mathcal{O}(M^{2}T)$ can still be overwhelming, especially when $M$ is large. In the following, we introduce a computationally less demanding version of Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] which uses a subsampled version of obtained from $N$ paths drawn using backward sampling and still preserves reversibility. 
Letting $\mathfrak{u}=(u^{(1)},\ldots,u^{(N)})\in\mathsf{Z}^{TN}$ , consider $$\mathring{r}_{z,\mathfrak{u}}^{N}(\theta,\theta';\tilde{\theta})=\frac{1}{N}\sum_{i=1}^{N}\mathring{r}_{z,u^{(i)}}(\theta,\theta';\tilde{\theta}),$$ which is an unbiased estimator of when $u^{(1)},\ldots,u^{(N)}\overset{{\rm iid}}{\sim}\check{\Phi}(\zeta,\cdot)$. In Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] we present the multiple paths BS-cSMC asymmetric MCMC algorithm which uses $\mathring{r}_{z,\mathfrak{u}}^{N}(\theta,\theta';\tilde{\theta})$, but still targets $\pi(\mathrm{d}(\theta,z))$, as desired. The computational complexity of this algorithm is $\mathcal{\mathcal{O}}(NMT)$ per iteration instead of $\mathcal{O}(M^{2}T)$; moreover, sampling $N$ paths can be parallelised. Reversibility of the algorithm with respect to $\pi(\mathrm{d}(\theta,z))$ is proved in Appendix \[subsec: Proof of reversibility for Algorithms\]. Sample $\theta'\sim q(\theta,\cdot)$ and $v\sim\mathcal{U}(0,1)$.\ We consider the non-linear state-space of Example \[ex: state-space example for latent variable section\] for the same set-up. We conducted experiments similar to those of Example \[ex: ctd state-space example for latent variable section\], but using this time Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] instead, for $N=1$, $N=10$, $N=100$ and $M=150$ particles. The intermediate distribution used was similar, as were the various proposal distributions. The results for convergence and IAC times are shown in Figures \[fig: convergence vs N for the state-space model-2\] and \[fig: IAC time vs N for the state-space model-2\] where the results from Example \[ex: state-space example for latent variable section\] are repeated in order to ease comparison. (Note that, assuming perfect parallelisation and that the computation time of cSMC is proportional to the number of particles, Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] with $M=150$ particles and Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\] with $M=100$ particles are equally costly. This is because of the non-parallelisable part of $Q_{2}$ of Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\].) ![Convergence results for $\theta=(\sigma_{v}^{2},\sigma_{w}^{2})$ vs $N$ in Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] in comparison with Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\].[]{data-label="fig: convergence vs N for the state-space model-2"}](HMM_convergence_first_two_moments_and_median_N_100_150_2000_runs) ![IAC times for $\theta=(\sigma_{v}^{2},\sigma_{w}^{2})$ vs $N$ in Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] in comparison with Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\].[]{data-label="fig: IAC time vs N for the state-space model-2"}](HMM_IAC_times_for_x_and_x_sq_N_100_multpl_and_single_cSMC_2000_runs) In this experiment, the true parameters are $\sigma_{v}^{2}=10$ and $\sigma_{w}^{2}=1$ and the data size is $T=500$. The prior and proposal parameters are the same as the previous example. We ran Metropolis-within-Particle Gibbs of @Lindsten_and_Schon_2012 in Algorithm \[alg: Metropolis within particle Gibbs\]. Number of particles used in the cSMC moves is $M=100$. 
For each configuration, $200$ Monte Carlo runs of $100000$ iterations each are performed, and a summary of the estimated IAC values from each run is reported in Figure \[fig: IAC vs N for the state-space model\]. One can see that increasing the number of paths improves the results. However, the amount of improvement (at least for this seemingly not very challenging model) vanishes quickly after $N=10$; this is the reason we did not find it necessary to look at the performance of Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] for this example. In addition, the results suggest that the scenario $N=1$ seems useful in that the algorithm can beat Metropolis-within-Particle Gibbs for the same order of computation. Note that the $N=1$ case is also a recent algorithm, first proposed and analysed in @Yildirim_et_al_2017, with detailed comparisons with Metropolis-within-Particle Gibbs. ![IAC for $\theta=(\sigma_{v}^{2},\sigma_{w}^{2})$ vs $N$ in Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] compared to Metropolis-within-Particle Gibbs (MwPG) in Algorithm \[alg: Metropolis within particle Gibbs\].[]{data-label="fig: IAC vs N for the state-space model"}](IAC_times_Asym_for_HMM) Discussion \[sec: Discussion\] ============================== In this paper, we exploit the ability to use more than one proposal scheme within an MH update. We derive several useful MHAAR algorithms that enable averaging multiple estimates of acceptance ratios, which would not be valid with a standard single-proposal MH update. The framework of MHAAR is rather general and provides a generic way of improving the performance of MH update based algorithms for a wide range of problems. This is illustrated with doubly intractable models, general latent variable models, trans-dimensional models, and general state-space models. Although relevant in specific scenarios involving computations on serial machines, MHAAR algorithms are particularly useful when implemented on a parallel architecture since the computation required to obtain an averaged acceptance ratio estimate can largely be parallelised. In particular, our experiments demonstrate a significant reduction of the burn-in period required to reach equilibrium, an issue for which very few generic approaches currently exist. Using SMC based estimators for the acceptance ratio \[sec: Using SMC based estimators for the acceptance ratio\] ---------------------------------------------------------------------------------------------------------------- More broadly, the framework of asymmetric acceptance ratios allows us to exploit even more general ratios of probabilities and plug them into MCMC algorithms. For example, a non-trivial and interesting generalisation of the algorithms presented earlier is possible by replacing AIS with SMC. The generalisation is relevant when annealing is used, i.e. $T>0$, and it is available both for the scenario $\pi(x)=\pi(\theta)$ and for $\pi(x)=\pi(\theta,z)$.
Notice that in Algorithms \[alg: MHAAR-AIS exchange algorithm\] to \[alg: MHAAR for pseudo-marginal ratio in latent variable models\], the acceptance ratios of the asymmetric MCMC algorithm contain the factor $$\frac{1}{N}\sum_{i=1}^{N}\prod_{t=0}^{T}\frac{f_{\theta,\theta',t+1}(u_{t}^{(i)})}{f_{\theta,\theta',t}(u_{t}^{(i)})}.$$ This average actually serves as an AIS estimator of the ratio of the normalising constants of the unnormalised initial and final densities $f_{\theta,\theta',0}$ and $f_{\theta,\theta',T+1}$ used in annealing. For doubly intractable models, this quantity is $C_{\theta}/C_{\theta'}$, whereas in latent variable models, it is $\pi(\theta')/\pi(\theta)$. Although SMC is a well-known alternative to AIS in estimating this ratio unbiasedly [@Del_Moral_et_al_2006], it is not obvious whether or how we can substitute SMC for AIS in proposal kernels $Q_{1}$ and $Q_{2}$ and still preserve the detailed balance of the overall MCMC kernel with respect to $\pi$. It turns out that this is possible by using SMC in $Q_{1}$ and a series of backward kernels followed by a cSMC in $Q_{2}$. For interested readers, we present $Q_{1}$ and $Q_{2}$ with the corresponding acceptance ratios, and the resulting algorithm in Appendix \[sec: Substituting SMC for AIS in the acceptance ratio in MHAAR\]. Links to non-reversible algorithms \[sec: Links to non-reversible algorithms\] ------------------------------------------------------------------------------ There has been recent interest in extending existing MCMC algorithms, especially those based on MH, to algorithms having non-reversible Markov chains preserving $\pi$ as their invariant distribution. The motivation behind such algorithms is the desire to design proposals based on the acceptance-rejection information of the previous iterations so that the space $\mathsf{X}$ is explored more efficiently. For example, it may be desirable to have an MH-based Markov chain that moves in a certain direction as long as the proposed values in that direction are accepted. In case of rejection, the direction of the proposal is altered and the Markov chain is made to choose a new direction until the next rejection. These non-reversible MH algorithms can be interpreted as using acceptance ratios involving two different proposal mechanisms (e.g. for different directions). Using two different proposals is inherent to our MHAAR algorithms, and we briefly show how MHAAR algorithms can be turned into non-reversible MCMC. Consider one pair of such proposal mechanisms $Q_{1}(x,\mathrm{d}(y,u))$ and $Q_{2}(x,\mathrm{d}(y,u))$ as considered throughout this paper. The acceptance ratios involved are denoted $r_{1,u}(x,y)$ and $r_{2,u}(x,y)=1/r_{1,u}(y,x)$, depending on whether $Q_{1}$ or $Q_{2}$ appears in the numerator or the denominator. The non-reversible algorithm described in Algorithm \[alg: Non-reversible MHAAR\] targets the extended distribution $\pi(\mathrm{d}(x,a)):=\pi(\mathrm{d}x)\frac{1}{2}$, where $a\in\{1,2\}$, and whose marginal is $\pi(\mathrm{d}x)$ as desired, and generates realisations $\{\big(X_{n},A_{n}\big)\in\mathsf{X}\times\{1,2\},n\geq1\}$ where $A_{n}$ indicates which of $Q_{1}$ or $Q_{2}$ is to be used at iteration $n+1$. Sample $(y,u)\sim Q_{a}(x,\cdot)$\ Set $(X_{n+1},A_{n+1})=(x',a)$ with probability $\min\{1,r_{a,u}(x,y)\}$; otherwise reject and set $(X_{n+1},A_{n+1})=(x,3-a)$.
One iteration of the algorithm is a composition of two reversible moves with respect to $\pi(\mathrm{d}(x,a))$: Given $(x,a)$, the first move consists of proposing $y,a'$ (and $u$) from $Q_{a}(x,\mathrm{d}(y,u))\mathbb{I}_{3-a}(a')$, accepting-rejecting with probability $\min\{1,r_{a,u}(x,y)\}$, which is the corresponding asymmetric acceptance probability for $\pi(\mathrm{d}(x,a))$. The second move simply switches the $a$-component: $a\rightarrow3-a$, which is reversible. We do not investigate this further here. Acknowledgements ================ CA and SY acknowledge support from EPSRC “Intractable Likelihood: New Challenges from Modern Applications (ILike)” (EP/K014463/1) and the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the programme “Scalable inference; statistical, algorithmic, computational aspects” where this manuscript was finalised (EPSRC grant EP/K032208/1). AD acknowledges support from EPSRC EP/K000276/1. NC is partially supported by a grant from the French National Research Agency (ANR) as part of program ANR-11-LABEX- 0047. The authors would also like to thank Nick Whiteley for useful discussions. A general framework for PMR and MHAAR algorithms \[sec: A general framework for MPR and MHAAR algorithms\] ========================================================================================================== Assume $\pi$ is a probability distribution defined on the measurable space $(\mathsf{X},\mathcal{X})$ and let $Q_{1}(\cdot,\cdot)$ and $Q_{2}(\cdot,\cdot)$ be a pair of proposal kernels $Q_{1}(\cdot,\cdot),Q_{2}(\cdot,\cdot)\colon\mathsf{X}\times(\mathcal{U}\otimes\mathcal{X})\rightarrow[0,1]$, where $\mathcal{U}$ is a sigma-algebra corresponding to an auxiliary random variable $u$ defined on a measurable space $(\mathsf{U},\mathcal{U})$. This variable may or may not be present, and is for example ignored in the introductory Section \[subsec: Contribution\]. We first follow @Tierney_1998 (in particular his treatment of @Green_1995’s framework) and introduce the measure $$\begin{aligned} \nu_{i}\big({\rm d}(x,y,u)\big): & =\pi({\rm d}x)Q_{i}(x,{\rm d}(y,u))+\pi({\rm d}y)Q_{3-i}(y,{\rm d}(x,u))\end{aligned}$$ and for $i\in\{1,2\}$ the densities $\eta_{i}(x,y,u):={\rm d}(\pi\otimes Q_{i})/{\rm d}\nu_{i}$ for $(x,y,u)\in\mathsf{X}^{2}\times\mathsf{U}$. Now define the measurable set $$\mathsf{S}:=\big\{(x,y,u)\in\mathsf{X}^{2}\times\mathsf{U}\colon\eta_{1}(x,y,u)>0\text{ and }\eta_{2}(y,x,u)>0\big\}\label{eq:def-ring-S}$$ and let, for $i\in\{1,2\}$ $$r_{i,u}(x,y):=\begin{cases} \eta_{3-i}(y,x,u)/\eta_{i}(x,y,u) & \text{ for }(x,y,u)\in\mathsf{S},\\ 0 & \text{otherwise}. \end{cases}$$ For ease of exposition, throughout the rest of the paper we may use the notation $$\frac{\pi({\rm d}y)Q_{3-i}(y,{\rm d}(x,u))}{\pi({\rm d}x)Q_{i}(x,{\rm d}(y,u))}=:r_{i,u}(x,y),$$ which should not lead to any confusion. Further, for $x\in\mathsf{X}$, we define the rejection probabilities $$\rho_{i}(x):=1-\int_{\mathsf{X}\times\mathsf{U}}Q_{i}(x,{\rm d}(y,u))\min\{1,r_{i,u}(x,y)\},\quad i=1,2.$$ In the theorem below we use the properties that for $i\in\{1,2\}$ and $(x,y)\in\mathsf{S}$ and $u\in\mathsf{U}$, then $r_{i}(x,y)r_{3-i}(y,x)=1$ and $\nu_{i}\big({\rm d}(x,y,u)\big)=\nu_{3-i}\big({\rm d}(y,x,u)\big)$. The following theorem serves as the basis for proving the reversibility of all of the MHAAR algorithms developed in this paper. 
\[thm:asymmetricMH-1\] Consider the Markov transition kernel $\breve{P}\colon\mathsf{X}\times\mathcal{X}\rightarrow[0,1]$ $$\breve{P}(x,{\rm d}y):=\sum_{i=1}^{2}\frac{1}{2}\left[\int_{\mathsf{U}}Q_{i}(x,{\rm d}(y,u)\min\left\{ 1,r_{i,u}(x,y)\right\} +\delta_{x}({\rm d}y)\rho_{i}(x)\right],\quad x\in\mathsf{X}\label{eq: asymmetric MCMC acceptance kernel-1}$$ then $\breve{P}$ satisfies the detailed balance for $\pi$. For any bounded measurable function $\phi$ on $\mathsf{X}^{2}$: $$\begin{aligned} \int_{\mathsf{X}^{2}\times\mathsf{U}}\min\left\{ 1,r_{1,u}(x,y)\right\} & \phi(x,y)\pi({\rm d}x)Q_{1}(x;{\rm d}(y,u))\\ = & \int_{\mathsf{X}^{2}\times\mathsf{U}}\phi(x,y)\min\left\{ 1,r_{1,u}(x,y)\right\} \eta_{1}(x,y,u)\nu_{1}\big({\rm d}(x,y,u)\big)\\ = & \int_{\mathsf{S}}\phi(x,y)\min\left\{ 1,r_{1,u}(x,y)\right\} r_{2,u}(y,x)\eta_{2}(y,x,u)\nu_{1}\big({\rm d}(x,y,u)\big)\\ = & \int_{\mathsf{X}^{2}\times\mathsf{U}}\phi(x,y)\min\left\{ 1,r_{1,u}(x,y)\right\} r_{2,u}(y,x)\eta_{2}(y,x,u)\nu_{2}\big({\rm d}(y,x,u)\big)\\ = & \int_{\mathsf{X}^{2}\times\mathsf{U}}\phi(x,y)\min\left\{ 1,r_{1,u}(x,y)\right\} r_{2,u}(y,x)\pi({\rm d}y)Q_{2}(y;{\rm d}(x,u))\\ = & \int_{\mathsf{X}^{2}\times\mathsf{U}}\phi(x,y)\min\left\{ r_{2,u}(y,x),1\right\} \pi({\rm d}y)Q_{2}(y;{\rm d}(x,u)).\end{aligned}$$ As a result for $\phi$ as above, $$\begin{gathered} \sum_{i=1}^{2}\frac{1}{2}\int_{\mathsf{X}^{2}\times\mathsf{U}}\phi(x,y)\min\left\{ 1,r_{i,u}(x,y)\right\} \pi({\rm d}x)Q_{i}(x;{\rm d}(y,u))\\ =\sum_{i=1}^{2}\frac{1}{2}\int_{\mathsf{X}^{2}\times\mathsf{U}}\phi(x,y)\min\left\{ 1,r_{i,u}(y,x)\right\} \pi({\rm d}y)Q_{i}(y;{\rm d}(x,u)),\end{gathered}$$ and detailed balance hence follows. The following theorem validates the use of all the PMR algorithms in this paper, specifically the algorithm corresponding to the kernel presented in and the algorithms described in Propositions \[prop: AIS MCMC for doubly intractable models\], \[prop:MHwithAISinside\] and \[prop:extensionMHwithAISinside\]. 
\[thm: pseudo-marginal ratio algorithms\] Let $\varphi:\mathsf{U}\rightarrow\mathsf{U}$ be a measurable involution, that is such that $\varphi\circ\varphi(u)=u$ for all $u\in\mathsf{U}$ and with the set-up of Theorem \[thm:asymmetricMH-1\] for a given kernel $Q_{1}(\cdot,\cdot)$, let $Q_{2}(\cdot,\cdot)$ be defined such that for any measurable $\psi\colon\mathsf{X}^{2}\times\mathsf{U}\rightarrow[-1,1]$ $$\begin{aligned} \int\psi(x,y,u)\pi({\rm d}x)Q_{2}\big(x,\mathrm{d}(y,u)\big) & =\int\psi(x,y,\varphi(u))\pi({\rm d}x)Q_{1}\big(x,\mathrm{d}(y,u)\big).\end{aligned}$$ Then the Markov transition kernel $$\mathring{P}(x,\mathrm{d}y)=\int_{\mathsf{U}}Q_{1}(x,\mathrm{d}(y,u))\min\left\{ 1,r_{1,u}(x,y)\right\} +\rho_{1}(x)\delta_{x}(\mathrm{d}y)$$ satisfies detailed balance with respect to $\pi$ $$\nu_{1}\big({\rm d}(x,y,u)\big)=\nu_{2}\big({\rm d}(y,x,u)\big)$$ First we show that $$\int\psi(x,y,u)\nu_{1}(\mathrm{d}(x,y,u))=\int\psi(x,y,\varphi(u))\nu_{1}(\mathrm{d}(y,x,u)).\label{eq:nu1_is_nu1_varphi}$$ This is because for $\psi$ bounded and measurable $$\begin{aligned} \int\psi(x,y,u)\nu_{1}\big({\rm d}(x,y,u)\big) & =\int\psi(x,y,u)(\pi\otimes Q_{1})\big({\rm d}(x,y,u)\big)+\int\psi(x,y,\varphi(u))(\pi\otimes Q_{1})\big({\rm d}(y,x,u)\big)\\ & =\int\psi(x,y,\varphi(u))(\pi\otimes Q_{2})\big({\rm d}(x,y,u)\big)+\int\psi(x,y,\varphi(u))(\pi\otimes Q_{1})\big({\rm d}(y,x,u)\big)\\ & =\int\psi(x,y,\varphi(u))\nu_{1}\big({\rm d}(y,x,u)\big),\end{aligned}$$ where we have used our assumption on $\pi\otimes Q_{2}$ on the first and second line, together with the fact that $\varphi$ is an involution, and the definition of $\nu_{1}$ on the last line. As a result one can establish that $$\eta_{2}(x,y,u)=\eta_{1}(x,y,\varphi(u)).$$ Indeed, for $\psi$ bounded and measurable, $$\begin{aligned} \int\psi(x,y,u)\eta_{2}(x,y,u)\nu_{2}\big({\rm d}(x,y,u)\big)= & \int\psi(x,y,u)\pi\otimes Q_{2}\big({\rm d}(x,y,u)\big)\\ = & \int\psi(x,y,\varphi(u))\pi\otimes Q_{1}\big({\rm d}(x,y,u)\big)\\ = & \int\psi(x,y,\varphi(u))\eta_{1}(x,y,u)\nu_{1}\big({\rm d}(x,y,u)\big)\\ = & \int\psi(x,y,u)\eta_{1}(x,y,\varphi(u))\nu_{1}\big({\rm d}(y,x,u)\big)\\ = & \int\psi(x,y,u)\eta_{1}(x,y,\varphi(u))\nu_{2}\left({\rm d}(x,y,u)\right),\end{aligned}$$ where we have used on the fourth line. Now for $\phi\colon\mathsf{X}^{2}\rightarrow[-1,1]$ measurable $$\begin{aligned} \int_{\mathsf{X}\times\mathsf{U}\times\mathsf{X}}\phi(x,y)\min & \left\{ 1,r_{1,u}(x,y)\right\} \pi({\rm d}x)Q_{1}(x,\mathrm{d}(y,u))\\ & =\int_{\mathsf{S}}\phi(x,y)\min\left\{ 1,r_{1,u}(x,y)\right\} \frac{\eta_{1}(x,y,u)}{\eta_{1}(y,x,\varphi(u))}\eta_{1}(y,x,\varphi(u))\nu_{1}(\mathrm{d}(x,y,u))\\ & =\int_{\mathsf{S}}\phi(x,y)\min\left\{ r_{1,\varphi(u)}(y,x),1\right\} \eta_{1}(y,x,\varphi(u))\nu_{1}(\mathrm{d}(x,y,u))\\ & =\int_{\mathsf{S}}\phi(x,y)\min\left\{ r_{1,u}(y,x),1\right\} \eta_{1}(y,x,u)\nu_{1}(\mathrm{d}(y,x,u)),\end{aligned}$$ and reversibility follows. Specialisation to two specific scenarios \[subsec: Construction of the algorithms in the paper\] ------------------------------------------------------------------------------------------------ Although the general framework described in Theorem \[thm:asymmetricMH-1\] is quite broad, our algorithms exploit it in specific ways. In this subsection, we aim to provide some insight into the ways we exploit these ideas in this paper. Recall that we either have $x=\theta$ in the single variable case or $x=(\theta,z)$ in the scenario where the model involves latent variables. 
Throughout the paper, we design algorithms where both $Q_{1}(\cdot,\cdot)$ and $Q_{2}(\cdot,\cdot)$ use the same proposal distribution for $\theta'$, that is $q(\theta,\cdot)$, and differ in the way they sample the auxiliary variables (and $z'$ in the latent variable scenario), in such a way that $Q_{1}(\cdot,\cdot)$ and $Q_{2}(\cdot,\cdot)$ complement each other to produce acceptance ratio estimators whose statistical properties improve with (parallelisable) computation. ### Single variable scenario \[subsec: Single variable\] Here we have $x=\theta$. Let $\{Q_{\theta,\theta'}^{(1)}(\cdot)\colon\theta,\theta'\in\Theta\}$ and $\{Q_{\theta,\theta'}^{(2)}(\cdot)\colon\theta,\theta'\in\Theta\}$ be two families of probability distributions defined on $(\mathsf{U},\mathcal{U})$, and let $\omega_{\theta,\theta'}:\mathsf{U}\rightarrow[0,\infty)$ satisfy the condition, for $\theta,\theta'\in\Theta$ and $u\in\mathsf{U}$, $$Q_{\theta',\theta}^{(2)}(\mathrm{d}u)=Q_{\theta,\theta'}^{(1)}(\mathrm{d}u)\omega_{\theta,\theta'}(u),\label{eq: relation between Q1 and Q2}$$ so that the expected value of $\omega_{\theta,\theta'}(\cdot)$ with respect to $Q_{\theta,\theta'}^{(1)}(\cdot)$ (as well as the expected value of $\omega_{\theta,\theta'}^{-1}(\cdot)$ with respect to $Q_{\theta',\theta}^{(2)}(\cdot)$ if $\omega_{\theta,\theta'}(\cdot)>0$) is $1$. Then the Radon-Nikodym derivative, evaluated for $(\theta,u,\theta')\in\mathsf{S}$ as defined above, is $$r_{u}(\theta,\theta')=\frac{\pi(\mathrm{d}\theta')q(\theta',\mathrm{d}\theta)Q_{\theta',\theta}^{(2)}(\mathrm{d}u)}{\pi(\mathrm{d}\theta)q(\theta,\mathrm{d}\theta')Q_{\theta,\theta'}^{(1)}(\mathrm{d}u)}=r(\theta,\theta')\omega_{\theta,\theta'}(u).\label{eq: RN derivative for single variable}$$ Note that this ratio is an unbiased estimator of the acceptance ratio of the marginal distribution, $r(\theta,\theta')$; therefore a useful algorithm can be constructed if (i) $r(\theta,\theta')\omega_{\theta,\theta'}(u)$ can be evaluated, and (ii) the variance of $\omega_{\theta,\theta'}$ can be controlled. It follows exactly from and Theorem \[thm:asymmetricMH-1\] that we can construct a reversible Markov kernel using acceptance ratios involving $r_{u}(\theta,\theta')$ as in Theorem \[thm:asymmetricMH-1\] with $$Q_{1}(\theta,\mathrm{d}(\theta',u))=q(\theta,\mathrm{d}\theta')Q_{\theta,\theta'}^{(1)}(\mathrm{d}u),\quad Q_{2}(\theta,\mathrm{d}(\theta',u))=q(\theta,\mathrm{d}\theta')Q_{\theta,\theta'}^{(2)}(\mathrm{d}u).$$ If, in addition, for any measurable and bounded function $\phi$ we have $\int\phi(u)Q_{\theta,\theta'}^{(2)}(\mathrm{d}u)=\int\phi\circ\varphi(u)Q_{\theta,\theta'}^{(1)}(\mathrm{d}u)$ for some involution $\varphi$, we are precisely in the framework of the pseudo-marginal ratio algorithms discussed in @Nicholls_et_al_2012, whose transition kernel is given in Theorem \[thm: pseudo-marginal ratio algorithms\]. ### Latent model scenario\[subsec: Latent components\] Here we have $x=(\theta,z)$ with $\pi(\mathrm{d}x)=\pi(\mathrm{d}\theta)\pi_{\theta}(\mathrm{d}z)$.
Let $\{Q_{\theta,\theta',z}^{(1)}(\cdot)\colon\theta,\theta'\in\Theta,z\in\mathsf{Z}\}$ and $\{Q_{\theta,\theta',z}^{(2)}(\cdot)\colon\theta,\theta'\in\Theta,z\in\mathsf{Z}\}$ be two families of probability distributions defined on $(\mathsf{Z}\times\mathsf{U},\mathcal{Z}\otimes\mathcal{U})$, and let $\omega_{\theta,\theta'}:\mathsf{Z}\times\mathsf{U}\rightarrow[0,\infty)$ satisfy the condition $$\pi_{\theta'}(\mathrm{d}z')Q_{\theta',\theta,z'}^{(2)}(\mathrm{d}(z,u))=\pi_{\theta}(\mathrm{d}z)Q_{\theta,\theta',z}^{(1)}(\mathrm{d}(z',u))\omega_{\theta,\theta'}(z,u),$$ so that the expected value of $\omega_{\theta,\theta'}(z,u)$ with respect to $\pi_{\theta}(\mathrm{d}z)Q_{\theta,\theta',z}^{(1)}(\mathrm{d}(z',u))$ is $1$. Just as in the single variable case, consider the Radon-Nikodym derivative again: $$r_{u}(x,x')=\frac{\pi(\mathrm{d}x')q(\theta',\mathrm{d}\theta)Q_{\theta',\theta,z'}^{(2)}(\mathrm{d}(z,u))}{\pi(\mathrm{d}x)q(\theta,\mathrm{d}\theta')Q_{\theta,\theta',z}^{(1)}(\mathrm{d}(z',u))}=r(\theta,\theta')\omega_{\theta,\theta'}(z,u).$$ Note that this ratio is an unbiased estimator of the acceptance ratio of the marginal distribution, $r(\theta,\theta')$; therefore a useful algorithm can be constructed if (i) $r(\theta,\theta')\omega_{\theta,\theta'}(z,u)$ can be evaluated and (ii) the variance of $\omega_{\theta,\theta'}$ can be controlled. We can construct a reversible Markov kernel using $Q_{1}(\cdot,\cdot)$ and $Q_{2}(\cdot,\cdot)$ as: $$Q_{1}(x,\mathrm{d}(x',u))=q(\theta,\mathrm{d}\theta')Q_{\theta,\theta',z}^{(1)}(\mathrm{d}(z',u)),\quad Q_{2}(x,\mathrm{d}(x',u))=q(\theta,\mathrm{d}\theta')Q_{\theta,\theta',z}^{(2)}(\mathrm{d}(z',u)).$$ Similarly, if, in addition, for any bounded measurable function $\phi$ we have $\int\phi(z,u)\pi_{\theta}({\rm d}z)Q_{\theta,\theta',z}^{(2)}(\mathrm{d}(z',u))=\int\phi(z,\varphi(u))\pi_{\theta}({\rm d}z)Q_{\theta,\theta',z}^{(1)}(\mathrm{d}(z',u))$ for some involution $\varphi\colon\mathsf{U}\rightarrow\mathsf{U}$, we can use the transition kernel given in Theorem \[thm: pseudo-marginal ratio algorithms\] and we end up precisely in the framework of the pseudo-marginal ratio algorithms for latent variable models discussed in Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\]. Generalisation and theoretical sub-optimality\[subsec: Generalisation and suboptimality\] ----------------------------------------------------------------------------------------- One can be more general than having a single pair of proposal distributions and sampling them with equal probabilities. In the following, we will consider multiple pairs and sampling among proposal distributions with state-dependent probabilities. Then we will investigate the statistical properties of this scheme by comparing it to an ideal but non-implementable algorithm in terms of Peskun order. For some $m\in\mathbb{N}$ let $\{Q_{ij}(\cdot,\cdot),i,j\in\{1,\ldots,m\}\}$ be a family of proposal kernels each from $(\mathsf{X},\mathcal{X})$ to $(\mathsf{X}\times\mathsf{U},\mathcal{X}\times\mathcal{U})$ and $\{\beta_{ij}:\mathsf{X}\rightarrow[0,1],i,j\in\{1,\ldots,m\}\}$ such that for any $x\in\mathsf{X}$, $\sum_{i,j=1}^{m}\beta_{ij}(x)=1$.
Define the Markov transition kernel $$\breve{P}(x,{\rm d}y):=\sum_{i=1}^{m}\sum_{j=1}^{m}\beta_{ij}(x)\left[\int_{\mathsf{U}}Q_{ij}(x,{\rm d}(y,u))\min\left\{ 1,r_{ij,u}(x,y)\right\} +\delta_{x}({\rm d}y)\rho_{ij}(x)\right],\quad x\in\mathsf{X}\label{eq: asymmetric MCMC acceptance kernel}$$ where the acceptance ratios are $$r_{ij,u}(x,y):=\frac{\pi({\rm d}y)Q_{ji}(y,{\rm d}(x,u))}{\pi({\rm d}x)Q_{ij}(x,{\rm d}(y,u))}\frac{\beta_{ji}(y)}{\beta_{ij}(x)},\quad i,j=1,\ldots,m,$$ on some set $\mathring{\mathsf{S}}_{ij}\subset\mathsf{X}\times\mathsf{U}\times\mathsf{X}$ where the measures $\pi({\rm d}y)Q_{ji}(y,{\rm d}(x,u))$ and $\pi({\rm d}x)Q_{ij}(x,{\rm d}(y,u))$ are equivalent (see the beginning of Appendix \[sec: A general framework for MPR and MHAAR algorithms\]) and $0<\beta_{ij}(x)\beta_{ji}(y)<\infty$ and set to zero otherwise, while the rejection probabilities at $x\in\mathsf{X}$ corresponding to all the updates are given by $$\rho_{ij}(x):=1-\int_{\mathsf{U}\times\mathsf{X}}Q_{ij}(x,{\rm d}(y,u))\min\{1,r_{ij,u}(x,y)\},\quad i,j=1,\ldots,m.$$ Reversibility of $\breve{P}$ can be proven very similarly to Theorem \[thm:asymmetricMH-1\], therefore it is only stated as a corollary below. \[cor: asymmetricMH generalisation\]The MHAAR algorithm with transition kernel $\breve{P}$ in satisfies detailed balance for $\pi$. The standard MH algorithm is recovered, for example, in the situation where $\beta_{11}(x)=1$. The single pair version is recovered with $\beta_{12}(x)=\beta_{21}(x)=1/2$ and $Q_{12}(\cdot,\cdot)=Q_{1}(\cdot,\cdot)$, $Q_{21}(\cdot,\cdot)=Q_{2}(\cdot,\cdot)$. Algorithm \[alg: Reversible multiple jump MCMC\] corresponds to the special case where $\beta_{12}(x)+\beta_{21}(x)=1$. The following interpretation of $\breve{P}$ points to a theoretical sub-optimality of asymmetric MCMC a careful reader may point to. Indeed, from , the auxiliary variable $u\in\mathsf{U}$ and the proposed value $y\in\mathsf{X}$ are sampled from $$\breve{Q}(x,\cdot):=\sum_{i=1}^{m}\sum_{j=1}^{m}\beta_{ij}(x)Q_{ij}(x,\cdot),$$ and the proposed value is accepted with probability $$\breve{\alpha}_{u}(x,y):=\sum_{i=1}^{m}\sum_{j=1}^{m}\frac{\beta_{ij}(x)Q_{ij}(x,{\rm d}(y,u))}{\breve{Q}(x,{\rm d}(y,u))}\min\{1,r_{ij,u}(x,y)\}.$$ Application of Jensen’s inequality shows that for $x,y\in\mathsf{X}$, $u\in\mathsf{U}$, we have $$\begin{aligned} \breve{\alpha}_{u}(x,y) & \leq\min\left\{ 1,\sum_{i=1}^{m}\sum_{j=1}^{m}\frac{\beta_{ij}(x)Q_{ij}(x,{\rm d}(y,u))}{\breve{Q}(x,{\rm d}(y,u))}r_{ij,u}(x,y)\right\} \nonumber \\ & =\min\left\{ 1,\sum_{i=1}^{m}\sum_{j=1}^{m}\frac{\beta_{ij}(x)Q_{ij}(x,{\rm d}(y,u))}{\breve{Q}(x,{\rm d}(y,u))}\frac{\beta_{ji}(y)\pi({\rm d}y)Q_{ji}(y,{\rm d}(x,u))}{\beta_{ij}(x)\pi({\rm d}x)Q_{ij}(x,{\rm d}(y,u))}\right\} \nonumber \\ & =\min\left\{ 1,\frac{\pi({\rm d}y)\breve{Q}(y,{\rm d}(x,u))}{\pi({\rm d}x)\breve{Q}(x,{\rm d}(y,u))}\right\} =:\alpha_{u}(x,y),\label{eq: acceptance prob of MH with asymmetric proposal}\end{aligned}$$ which is the acceptance probability of a pseudo-marginal ratio MH algorithm $\mathring{P}$ with $Q_{1}(\cdot,\cdot)=\breve{Q}(\cdot,\cdot)$ and $\varphi(u)=u$ (see Theorem \[thm: pseudo-marginal ratio algorithms\]). From Peskun’s result @Tierney_1998 we deduce that for this common proposal distribution $\breve{Q}$, the update $\breve{P}$ has worse performance properties in terms of both asymptotic variance and right spectral gap than $\mathring{P}$. It is therefore natural to question the interest of updates such as $\breve{P}$. 
An argument already noted by @tjelmeland-eidsvik-2004 [@andrieu2008tutorial], is that computing the acceptance ratio in is generally substantially more computationally expensive than computing $\breve{r}_{ij}(x,y)$, which may offset any theoretical advantage in practice. It may also be that defining a desirable acceptance ratio is theoretically impossible using the standard approach, or that practical evaluation of the acceptance ratio is impossible. This is the case for numerous examples, including Example \[ex:doublyintractaveraging\], for which $$\begin{aligned} \alpha_{u}(\theta,\theta') & =r(\theta,\theta')\frac{\big[\prod_{i=1}^{N}g_{\theta}(u^{(i)})/C_{\theta}\,\mathring{r}_{u^{(k)}}(\theta',\theta)/\mathring{r}_{\mathfrak{u}}^{N}(\theta',\theta)+N^{-1}g_{\theta}(u^{(k)})/C_{\theta}\prod_{i\neq k}g_{\theta'}(u^{(i)})/C_{\theta'}\big]}{\big[\prod_{i=1}^{N}g_{\theta'}(u^{(i)})/C_{\theta'}\,\mathring{r}_{u^{(k)}}(\theta,\theta')/\mathring{r}_{\mathfrak{u}}^{N}(\theta,\theta')+N^{-1}g_{\theta'}(u^{(k)})/C_{\theta'}\prod_{i\neq k}g_{\theta}(u^{(i)})/C_{\theta}\big]},\end{aligned}$$ for $N\geq1$ and where we note that the unknown normalising constants do not cancel. Justification of AIS and an extension\[sec: A short justification of AIS and an extension\] =========================================================================================== We provide here a short justification of the AIS of @Crooks1998 [@Neal_2001], as presented in @Karagiannis_and_Andrieu_2013, and an extension of it that is useful in this paper. Here $\tau$ represents the number of intermediate distributions introduced, while $\mu_{0}$ and $\mu_{\tau+1}$ are the distributions of which we want the normalising constants as we assume that we only know them up to normalising constants i.e. we know the unnormalised distributions $\nu_{0}=\mu_{0}Z_{0}$ and $\nu_{\tau+1}=\mu_{\tau+1}Z_{\tau+1}$. \[thm:AIS\]Let $\big\{\mu_{t},t=0,\ldots,\tau+1\big\}$ for some $\tau\in\mathbb{N}$ be a family of probability distributions on some measurable space $\big(\mathsf{E},\mathcal{E}\big)$ such that for $t=0,\ldots,\tau$ $\mu_{t}\gg\mu_{t+1}$. Let $\big\{\Pi_{t},t=1,\ldots,\tau\big\}$ be a family of Markov transition kernels $\Pi_{t}:\mathsf{E}\times\mathcal{E}\rightarrow[0,1]$ such that for any $t=1,\ldots,\tau$, $\Pi_{t}$ is $\mu_{t}-$reversible. Let us define the following probability distributions on $\big(\mathsf{E}^{\tau+1},\mathcal{E}^{\tau+1}\big)$, $\overleftarrow{\varPi}:=\mu_{\tau+1}\times\Pi_{\tau}\times\cdots\times\Pi_{1}$ and $\overrightarrow{\varPi}:=\mu_{0}\times\Pi_{1}\times\cdots\times\Pi_{\tau}$ for $\tau\geq1$ and $\overleftarrow{\varPi}:=\mu_{\tau+1}$ and $\overrightarrow{\varPi}:=\mu_{0}$ for $\tau=0$. Then for any $x_{0:\tau}\in\mathsf{E}^{\tau+1}$ $$\begin{aligned} \overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big) & =\prod_{t=0}^{\tau}\frac{\mu_{t+1}\big({\rm d}x_{t}\big)}{\mu_{t}\big({\rm d}x_{t}\big)}\overrightarrow{\varPi}\big({\rm d}(x_{0},\ldots,x_{\tau})\big).\end{aligned}$$ The case $\tau=0$ is direct. 
Assume $\tau\geq1$, we show by induction that for any $j=1,\ldots,\tau$ $$\begin{gathered} \overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big)\\ =\left[\prod_{t=1}^{j}\frac{\mu_{\tau-t+2}\big({\rm d}x_{\tau+1-t}\big)}{\mu_{\tau-t+1}\big({\rm d}x_{\tau+1-t}\big)}\Pi_{\tau+1-t}\big(x_{\tau-t},{\rm d}x_{\tau-t+1}\big)\right]\mu_{\tau-j+1}\big({\rm d}x_{\tau-j}\big)\prod_{t=j+1}^{\tau}\Pi_{\tau-t+1}\big(x_{\tau-t+1},{\rm d}x_{\tau-t}\big),\end{gathered}$$ with the convention $\prod_{t=\tau+1}^{\tau}=I$. First we check the result for $j=1$ $$\begin{aligned} \overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big) & =\mu_{\tau+1}\big({\rm d}x_{\tau}\big)\prod_{t=1}^{\tau}\Pi_{\tau-t+1}\big(x_{\tau-t+1},{\rm d}x_{\tau-t}\big)\\ & =\frac{\mu_{\tau+1}\big({\rm d}x_{\tau}\big)}{\mu_{\tau}\big({\rm d}x_{\tau}\big)}\mu_{\tau}\big({\rm d}x_{\tau}\big)\Pi_{\tau}\big(x_{\tau},{\rm d}x_{\tau-1}\big)\prod_{t=2}^{\tau}\Pi_{\tau-t+1}\big(x_{\tau-t+1},{\rm d}x_{\tau-t}\big)\\ & =\frac{\mu_{\tau+1}\big({\rm d}x_{\tau}\big)}{\mu_{\tau}\big({\rm d}x_{\tau}\big)}\mu_{\tau}\big({\rm d}x_{\tau-1}\big)\Pi_{\tau}\big(x_{\tau-1},{\rm d}x_{\tau}\big)\prod_{t=2}^{\tau}\Pi_{\tau-t+1}\big(x_{\tau-t+1},{\rm d}x_{\tau-t}\big),\end{aligned}$$ where we have used $\mu_{\tau}\gg\mu_{\tau+1}$ and the fact that $\Pi_{\tau}$ is $\mu_{\tau}-$reversible. Now assume the result true for some $1\leq j\leq\tau-1$ and $\tau\geq2$, then using similar arguments as above, $$\begin{aligned} \mu_{\tau-j+1}\big({\rm d}x_{\tau-j}\big)\Pi_{\tau-j}\big(x_{\tau-j}, & {\rm d}x_{\tau-j-1}\big)\\ & =\frac{\mu_{\tau-j+1}\big({\rm d}x_{\tau-j}\big)}{\mu_{\tau-j}\big({\rm d}x_{\tau-j}\big)}\mu_{\tau-j}\big({\rm d}x_{\tau-j}\big)\Pi_{\tau-j}\big(x_{\tau-j},{\rm d}x_{\tau-j-1}\big)\\ & =\frac{\mu_{\tau-j+1}\big({\rm d}x_{\tau-j}\big)}{\mu_{\tau-j}\big({\rm d}x_{\tau-j}\big)}\mu_{\tau-j}\big({\rm d}x_{\tau-j-1}\big)\Pi_{\tau-j}\big(x_{\tau-j-1},{\rm d}x_{\tau-j}\big),\end{aligned}$$ from which the intermediate claim follows for $j+1$. Now for $j=\tau$ we obtain the claimed result after a change of variables $t\leftarrow\tau+1-t$ in the product. Assume that we have access to unnormalised versions of the probability distributions, say $\nu_{t}=\mu_{t}Z_{t}$. Then $$\overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big)=\prod_{t=0}^{\tau}\frac{Z_{t}}{Z_{t+1}}\frac{\nu_{t+1}\big({\rm d}x_{t}\big)}{\nu_{t}\big({\rm d}x_{t}\big)}\overrightarrow{\varPi}\big({\rm d}(x_{0},\ldots,x_{\tau})\big),$$ and therefore $$\begin{aligned} \prod_{t=0}^{\tau}\frac{\nu_{t+1}\big({\rm d}x_{t}\big)}{\nu_{t}\big({\rm d}x_{t}\big)}\overrightarrow{\varPi}\big({\rm d}(x_{0},\ldots,x_{\tau})\big) & =\overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big)\prod_{t=0}^{\tau}\frac{Z_{t+1}}{Z_{t}}\\ & =\overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big)\frac{Z_{\tau+1}}{Z_{0}},\end{aligned}$$ which suggests and justifies the AIS estimator. The following extension of the result above turns out to be of practical interest. \[thm:extensionAIS\] Let $\tau\geq2$, $\big\{\mu_{t},t=1,\ldots,\tau\big\}$ and $\big\{\Pi_{t},t=2,\ldots,\tau-1\big\}$ be as in Theorem \[thm:AIS\] above but assume now that $\mu_{0}$ and $\mu_{\tau+1}$ are defined on a potentially different measurable space $(\mathsf{F},\mathcal{F})$. 
Further let $\overrightarrow{\Pi}_{1},\overleftarrow{\Pi}_{\tau}:\mathsf{F}\times\mathcal{E}\rightarrow[0,1]$ and $\overleftarrow{\Pi}_{1},\overrightarrow{\Pi}_{\tau}:\mathsf{E}\times\mathcal{F}\rightarrow[0,1]$ be Markov kernels satisfying the following properties $$\mu_{0}\big({\rm d}x_{0}\big)\overrightarrow{\Pi}_{1}\big(x_{0},{\rm d}x_{1}\big)=\mu_{1}\big({\rm d}x_{1}\big)\overleftarrow{\Pi}_{1}\big(x_{1},{\rm d}x_{0}\big),$$ and $$\mu_{\tau}\big({\rm d}x_{\tau-1}\big)\overrightarrow{\Pi}_{\tau}\big(x_{\tau-1},{\rm d}x_{\tau}\big)=\mu_{\tau+1}\big({\rm d}x_{\tau}\big)\overleftarrow{\Pi}_{\tau}\big(x_{\tau},{\rm d}x_{\tau-1}\big).$$ Define $$\overrightarrow{\varPi}:=\mu_{0}\times\overrightarrow{\Pi}_{1}\times\Pi_{2}\cdots\times\overrightarrow{\Pi}_{\tau},$$ and $$\overleftarrow{\varPi}:=\mu_{\tau+1}\times\overleftarrow{\Pi}_{\tau}\times\Pi_{\tau-1}\cdots\times\overleftarrow{\Pi}_{1}.$$ Then $$\overleftarrow{\varPi}\big({\rm d}(x_{\tau},\ldots,x_{0})\big)=\prod_{t=1}^{\tau-1}\frac{\mu_{t+1}\big({\rm d}x_{t}\big)}{\mu_{t}\big({\rm d}x_{t}\big)}\overrightarrow{\varPi}\big({\rm d}(x_{0},\ldots,x_{\tau})\big).$$ The proof follows from manipulations similar to those of Theorem \[thm:AIS\]. We have, $$\begin{aligned} \overleftarrow{\varPi} & \big({\rm d}(x_{\tau},\ldots,x_{0})\big)=\mu_{\tau+1}\big({\rm d}x_{\tau}\big)\overleftarrow{\Pi}_{\tau}\big(x_{\tau},{\rm d}x_{\tau-1}\big)\left[\prod_{t=2}^{\tau-1}\Pi_{\tau-t+1}\big(x_{\tau-t+1},{\rm d}x_{\tau-t}\big)\right]\overleftarrow{\Pi}_{1}(x_{1},\mathrm{d}x_{0})\\ & =\left[\frac{\mu_{\tau}\big({\rm d}x_{\tau-1}\big)}{\mu_{\tau-1}\big({\rm d}x_{\tau-1}\big)}\overrightarrow{\Pi}_{\tau}\big(x_{\tau-1},{\rm d}x_{\tau}\big)\right]\left[\mu_{\tau-1}\big({\rm d}x_{\tau-1}\big)\prod_{t=2}^{\tau-1}\Pi_{t}\big(x_{t},{\rm d}x_{t-1}\big)\right]\overleftarrow{\Pi}_{1}(x_{1},\mathrm{d}x_{0})\\ & =\left[\frac{\mu_{\tau}\big({\rm d}x_{\tau-1}\big)}{\mu_{\tau-1}\big({\rm d}x_{\tau-1}\big)}\overrightarrow{\Pi}_{\tau}\big(x_{\tau-1},{\rm d}x_{\tau}\big)\right]\left[\left\{ \prod_{t=2}^{\tau-1}\frac{\mu_{t}\big({\rm d}x_{t-1}\big)}{\mu_{t-1}\big({\rm d}x_{t-1}\big)}\right\} \prod_{t=2}^{\tau-1}\Pi_{t}\big(x_{t-1},{\rm d}x_{t}\big)\mu_{1}\big({\rm d}x_{1}\big)\right]\overleftarrow{\Pi}_{1}(x_{1},\mathrm{d}x_{0})\\ & =\left\{ \prod_{t=1}^{\tau-1}\frac{\mu_{t+1}\big({\rm d}x_{t}\big)}{\mu_{t}\big({\rm d}x_{t}\big)}\right\} \left[\overrightarrow{\Pi}_{\tau}\big(x_{\tau-1},{\rm d}x_{\tau}\big)\prod_{t=2}^{\tau-1}\Pi_{\tau-t+1}\big(x_{\tau-t},{\rm d}x_{\tau-t+1}\big)\overrightarrow{\Pi}_{1}\big(x_{0},{\rm d}x_{1}\big)\mu_{0}\big({\rm d}x_{0}\big)\right]\\ & =\left\{ \prod_{t=1}^{\tau-1}\frac{\mu_{t+1}\big({\rm d}x_{t}\big)}{\mu_{t}\big({\rm d}x_{t}\big)}\right\} \overrightarrow{\varPi}(\mathrm{d}(x_{0},\ldots,x_{\tau}))\end{aligned}$$ where on the second and fourth line we have used the two conditions on the arrowed kernels, and the third line is obtained by applying Theorem \[thm:AIS\]. The additional conditions are satisfied, for example, if $\mathsf{F}=\mathsf{E}$, $\mu_{0}=\mu_{1}$, $\mu_{\tau}=\mu_{\tau+1}$, $\overrightarrow{\Pi}_{1}=\overleftarrow{\Pi}_{1}$ is $\mu_{1}-$ reversible and likewise $\overrightarrow{\Pi}_{\tau}=\overleftarrow{\Pi}_{\tau}$ is $\mu_{\tau}-$reversible, taking us to the standard AIS setting (with repeats at the ends). 
However, the generalisation obtained by those additional conditions allows for more general scenarios of interest, in particular for annealing to occur on a space different from that where $\mu_{0}$ and $\mu_{\tau+1}$ are defined. The application of our methodology for trans-dimensional models indeed requires this generalisation; see Sections \[subsec: Generalisations of pseudo-marginal asymmetric MCMC\] and \[subsec: An application: trans-dimensional distributions\]. Auxiliary results and proofs for cSMC based algorithms \[sec: Auxiliary results and proofs for cSMC based algorithms \] ======================================================================================================================= First, we lay out some useful results on SMC and cSMC for the state-space model defined in Section \[subsec: State-space models and conditional SMC\], dropping $\theta$ from the notation. For notational simplicity we will consider the bootstrap particle filter where the particles are initiated from the initial distribution and propagated from the state transition, so that $h({\rm d}\zeta_{1}^{(i)})=\mu({\rm d}\zeta_{1}^{(i)})$ and $H(\zeta_{t-1}^{a_{t-1}^{(i)}},{\rm d}\zeta_{t}^{(i)})=f(\zeta_{t-1}^{a_{t-1}^{(i)}},{\rm d}\zeta_{t}^{(i)})$ in Algorithm \[alg: Conditional SMC\] and the particle weight is simply the observation density, $w_{t}(\zeta_{t}^{(i)})=g(y_{t}|\zeta_{t}^{(i)})$. Note that our results can be extended to other choices of $h$ and $H$. It is standard that the law of a particle filter with $M$ particles and multinomial resampling for $\zeta\in\mathsf{Z}^{MT}$ and $a\in[M]^{M(T-1)}$ is [@Andrieu_et_al_2010] $$\begin{aligned} \psi\big({\rm d}(\zeta,a)\big)= & \prod_{i=1}^{M}\mu({\rm d}\zeta_{1}^{(i)})\prod_{t=2}^{T}\left\{ \prod_{i=1}^{M}\frac{w_{t-1}(\zeta_{t-1}^{(a_{t-1}^{(i)})})}{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})}f(\zeta_{t-1}^{(a_{t-1}^{(i)})},{\rm d}\zeta_{t}^{(i)})\right\} .\end{aligned}$$ What is important for us is that the marginal distribution $\psi\big({\rm d}\zeta\big)$ has a simple form $$\psi\big({\rm d}\zeta\big)=\prod_{i=1}^{M}\mu({\rm d}\zeta_{1}^{(i)})\prod_{t=2}^{T}\left\{ \prod_{i=1}^{M}\frac{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})f(\zeta_{t-1}^{(j)},{\rm d}\zeta_{t}^{(i)})}{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})}\right\} .$$ Now, letting $C:=\ell(y)$ (recall $y=y_{1:T}$) and its estimator $\hat{C}(\zeta):=\prod_{t=1}^{T}\frac{1}{M}\sum_{i=1}^{M}w_{t}(\zeta_{t}^{(i)})$, we introduce $$\bar{\psi}\big({\rm d}\zeta\big):=\psi\big({\rm d}\zeta\big)\frac{\hat{C}(\zeta)}{C}.\label{eq: SMC to cSMC probability law}$$ We know from @Andrieu_et_al_2010 that this is a probability distribution, and is a way of justifying that $\hat{C}(\zeta)$ is an unbiased estimator of $C$; note that the ancestral history is here integrated out. For $\zeta\in\mathsf{Z}^{MT}$ and $\mathbf{k}=(k_{1},\ldots,k_{T})\in[M]^{T}$, let $\zeta^{(\mathbf{k})}=(\zeta_{1}^{(k_{1})},\ldots,\zeta_{T}^{(k_{T})})$. Furthermore, for $z\in\mathsf{Z}^{T}$ and $\zeta\in\mathsf{Z}^{MT}$, define the (extended) cSMC kernel $$\Phi(z,{\rm d}(\mathbf{k},\zeta)):=\frac{1}{M^{T}}\delta_{z}({\rm d}\zeta^{(\mathbf{k})})\prod_{i\neq k_{1}}^{M}\mu({\rm d}\zeta_{1}^{(i)})\prod_{t=2}^{T}\left\{ \prod_{i=1,i\neq k_{t}}^{M}\frac{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})f(\zeta_{t-1}^{(j)},{\rm d}\zeta_{t}^{(i)})}{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})}\right\} ,$$ with its marginal $\Phi(z,{\rm d}\zeta)=\sum_{\mathbf{k}\in[M]^{T}}\Phi(z,{\rm d}(\mathbf{k},\zeta))$.
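In practice $\hat{C}(\zeta)$ is computed on the log scale from the $T\times M$ array of log-weights; a minimal helper, with a synthetic stand-in for the weights of a particle filter sweep, is sketched below.

```python
import numpy as np

def log_C_hat(logw):
    """log of hat{C}(zeta) = prod_t (1/M) sum_i w_t(zeta_t^{(i)})."""
    T, M = logw.shape
    return float(np.sum(np.logaddexp.reduce(logw, axis=1) - np.log(M)))

# Synthetic (T x M) log-weight array standing in for a particle filter sweep.
rng = np.random.default_rng(5)
logw = -0.5 * rng.normal(size=(50, 100)) ** 2
print("log C_hat:", log_C_hat(logw))
```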
Recall the law of the indices used in the backward-sampling procedure in order to draw a path $\zeta^{(\mathbf{k})}$, $$\phi(\mathbf{k}|\zeta):=\frac{w_{T}(\zeta_{T}^{(k_{T})})}{\sum_{i=1}^{M}w_{T}(\zeta_{T}^{(i)})}\prod_{t=2}^{T}\frac{w_{t-1}(\zeta_{t-1}^{(k_{t-1})})f(\zeta_{t-1}^{(k_{t-1})},\zeta_{t}^{(k_{t})})}{\sum_{i=1}^{M}w_{t-1}(\zeta_{t-1}^{(i)})f(\zeta_{t-1}^{(i)},\zeta_{t}^{(k_{t})})},$$ with a convention for $f$ when $t=1$. Finally, define the joint distribution of the indices and path drawn via backward sampling, $$\check{\Phi}(\zeta,{\rm d}(\mathbf{k},z))=\phi(\mathbf{k}\mid\zeta)\delta_{\zeta^{(\mathbf{k})}}({\rm d}z),$$ and its marginal $\check{\Phi}(\zeta,{\rm d}z)=\sum_{\mathbf{k}\in[M]^{T}}\check{\Phi}(\zeta,{\rm d}(\mathbf{k},z))$. \[lem: cSMC semi-reversibility\]For any $z\in\mathsf{Z}^{T}$, $\mathbf{k}\in[M]^{T}$, and $\zeta\in\mathsf{Z}^{MT}$, $$\pi({\rm d}z)\Phi(z,{\rm d}(\mathbf{k},\zeta))=\bar{\psi}\big({\rm d}\zeta\big)\check{\Phi}(\zeta,{\rm d}(\mathbf{k},z)).$$ For the left hand side, we have $$\begin{gathered} \pi\big(\mathrm{d}z\big)\Phi(z,\mathrm{d}(\mathbf{k},\zeta))=\frac{1}{M^{T}}\pi(\mathrm{d}z)\delta_{z}(\mathrm{d}\zeta^{(\mathbf{k})})\\ \times\prod_{i=1,i\neq k_{1}}^{M}\mu({\rm d}\zeta_{1}^{(i)})\prod_{t=2}^{T}\left\{ \prod_{i=1,i\neq k_{t}}^{M}\frac{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})f(\zeta_{t-1}^{(j)},{\rm d}\zeta_{t}^{(i)})}{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})}\right\} .\end{gathered}$$ For the right hand side, first we note the identity $$\bar{\psi}\big({\rm d}\zeta\big)\phi(\mathbf{k}\mid\zeta)=\frac{1}{M^{T}}\pi(\mathrm{d}\zeta^{(\mathbf{k})})\prod_{i=1,i\neq k_{1}}^{M}\mu({\rm d}\zeta_{1}^{(i)})\prod_{t=2}^{T}\left\{ \prod_{i=1,i\neq k_{t}}^{M}\frac{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})f(\zeta_{t-1}^{(j)},{\rm d}\zeta_{t}^{(i)})}{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})}\right\}$$ so that we get $$\begin{gathered} \bar{\psi}\big({\rm d}\zeta\big)\check{\Phi}(\zeta,{\rm d}(\mathbf{k},z))=\frac{1}{M^{T}}\pi(\mathrm{d}\zeta^{(\mathbf{k})})\delta_{\zeta^{(\mathbf{k})}}(\mathrm{d}z)\\ \times\prod_{i=1,i\neq k_{1}}^{M}\mu({\rm d}\zeta_{1}^{(i)})\prod_{t=2}^{T}\left\{ \prod_{i=1,i\neq k_{t}}^{M}\frac{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})f(\zeta_{t-1}^{(j)},{\rm d}\zeta_{t}^{(i)})}{\sum_{j=1}^{M}w_{t-1}(\zeta_{t-1}^{(j)})}\right\} \end{gathered}$$ which is equal to $\pi\big(\mathrm{d}z\big)\Phi(z,\mathrm{d}(\mathbf{k},\zeta)).$ Lemma \[lem: cSMC semi-reversibility\] immediately leads to the corollaries below, stated after a short illustration of backward sampling, which will be useful in the subsequent proofs. 
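The backward-sampling law $\phi(\mathbf{k}|\zeta)$ can be sampled with a single backward pass over the stored particles and weights. The sketch below assumes the same toy Gaussian transition density as in the previous snippet; the particle values and weights passed to it are random placeholders and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_f(x_prev, x_next):
    # assumed toy transition density: x_next | x_prev ~ N(0.9 x_prev, 1)
    return -0.5 * (x_next - 0.9 * x_prev) ** 2

def backward_sample(particles, logw):
    """Draw k = (k_1, ..., k_T) from phi(k | zeta) and return the path zeta^{(k)}.

    particles: (T, M) array of zeta_t^{(i)}; logw: (T, M) array of log w_t(zeta_t^{(i)}).
    """
    T, M = particles.shape
    k = np.empty(T, dtype=int)
    p = np.exp(logw[-1] - logw[-1].max())
    k[-1] = rng.choice(M, p=p / p.sum())                     # k_T proportional to w_T
    for t in range(T - 2, -1, -1):
        logp = logw[t] + log_f(particles[t], particles[t + 1, k[t + 1]])
        p = np.exp(logp - logp.max())
        k[t] = rng.choice(M, p=p / p.sum())                  # k_t given k_{t+1}
    return k, particles[np.arange(T), k]

# toy usage with placeholder particles and weights
zeta = rng.normal(size=(5, 10))
logw = -0.5 * zeta ** 2
print(backward_sample(zeta, logw))
```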
\[cor: cSMC semi-reversibility\]For any $z\in\mathsf{Z}^{T}$ and $\zeta\in\mathsf{Z}^{MT}$, $$\pi({\rm d}z)\Phi(z,{\rm d}\zeta)=\bar{\psi}\big({\rm d}\zeta\big)\check{\Phi}(\zeta,{\rm d}z).$$ \[cor:exchangeability-SSM\]For any $N\geq1$, $(z,u^{(1)},\ldots,u^{(N)})\in\mathsf{Z}^{(N+1)T}$, $(\mathbf{k},\mathbf{k}^{(1)},\ldots,\mathbf{k}^{(N)})\in[M]^{(N+1)T}$, and $\zeta\in\mathsf{Z}^{MT}$, $$\pi({\rm d}z)\Phi\big(z,{\rm d}(\mathbf{k},\zeta)\big)\prod_{i=1}^{N}\check{\Phi}(\zeta,{\rm d}(\mathbf{k}^{(i)},u^{(i)}))=\bar{\psi}\big({\rm d}\zeta\big)\check{\Phi}(\zeta,{\rm d}(\mathbf{k},z))\prod_{i=1}^{N}\check{\Phi}(\zeta,{\rm d}(\mathbf{k}^{(i)},u^{(i)}))$$ which establishes that $z,u^{(1)},\ldots,u^{(N)}$ are exchangeable under the joint distribution $$\pi({\rm d}z)\int_{\zeta}\Phi\big(z,{\rm d}\zeta\big)\prod_{i=1}^{N}\check{\Phi}(\zeta,{\rm d}u^{(i)})=\bar{\psi}\big({\rm d}\zeta\big)\int_{\zeta}\check{\Phi}(\zeta,{\rm d}z)\prod_{i=1}^{N}\check{\Phi}(\zeta,{\rm d}u^{(i)}).$$ Proof of unbiasedness for the acceptance/likelihood ratio estimator of Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] \[subsec: Proof of unbiasedness for the Rao-Blackwellised estimator\] ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- From here on, we have $\theta$ back in the notation. Let $F:\mathsf{Z}^{T}\rightarrow\mathbb{R}$ be a real-valued function and given $\zeta\in\mathsf{Z}^{TM}$, denote by $\check{\Phi}_{\theta}(\zeta,F)$ its conditional expectation with respect to the backward sampling distribution $\check{\Phi}_{\theta}(\zeta,\cdot)$, $$\check{\Phi}_{\theta}(\zeta,F)=\sum_{\mathbf{k}\in[M]^{T}}F(\zeta^{(\mathbf{k})})\phi_{\theta}(\mathbf{k}|\zeta),$$ which is a function of $\zeta$. It is a result from @del2010backward [Theorem 5.2] that for any $F:\mathsf{Z}^{T}\rightarrow\mathbb{R}$, the expectation of $\check{\Phi}_{\theta}(\zeta,F)$, scaled by $\hat{C}_{\theta}(\zeta)/C_{\theta}$, with respect to the law of SMC, $\psi_{\theta}$, is $\pi_{\theta}(F)$: $$\psi_{\theta}\left(\frac{\hat{C}_{\theta}(\zeta)}{C_{\theta}}\check{\Phi}_{\theta}(\zeta,F)\right)=\pi_{\theta}(F).$$ The crucial point here is that we can rewrite the identity above in terms of $\bar{\psi}_{\theta}$ as $$\bar{\psi}_{\theta}\left(\check{\Phi}_{\theta}(\zeta,F)\right)=\pi_{\theta}(F).\label{eq: SMC Del Morals result rewritten in terms of cSMC}$$ owing to . Now, we have the necessary intermediate results to prove Theorem \[thm: SMC unbiased estimator of acceptance ratio\]. (Theorem \[thm: SMC unbiased estimator of acceptance ratio\]) Let $\gamma_{\theta}(z):=p_{\theta}(z,y)$ be the unnormalised density for $\pi_{\theta}(z)$ so that $\gamma_{\theta}(z)=\pi_{\theta}(z)\ell_{\theta}(y)$. 
We can write the estimator in as $$\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})=\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{\gamma_{\tilde{\theta}}(z)}{\gamma_{\theta}(z)}\check{\Phi}_{\tilde{\theta}}\left(\zeta,\frac{\gamma_{\theta'}}{\gamma_{\tilde{\theta}}}(\cdot)\right).$$ The expectation of $\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})$ with respect to the law of the mechanism described in Theorem \[thm: SMC unbiased estimator of acceptance ratio\] that generates $\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})$ is $$\int\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta}).$$ To see that this is indeed $r(\theta,\theta')$, firstly observe that $$\pi_{\theta}(\mathrm{d}z)\frac{\gamma_{\tilde{\theta}}(z)}{\gamma_{\theta}(z)}=\frac{\gamma_{\tilde{\theta}}(z)}{\gamma_{\theta}(z)}\frac{\pi_{\theta}(z)}{\pi_{\tilde{\theta}}(z)}\pi_{\tilde{\theta}}(\mathrm{d}z)=\frac{\ell_{\tilde{\theta}}(y)}{\ell_{\theta}(y)}\pi_{\tilde{\theta}}(\mathrm{d}z).\label{eq: SMC unbiasedness proof first step}$$ Secondly, using Corollary \[cor: cSMC semi-reversibility\], we have $$\pi_{\tilde{\theta}}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)\check{\Phi}_{\tilde{\theta}}\left(\zeta,\frac{\gamma_{\theta'}}{\gamma_{\tilde{\theta}}}(\cdot)\right)=\bar{\psi}_{\tilde{\theta}}(\mathrm{d}\zeta)\check{\Phi}_{\tilde{\theta}}\left(\zeta,\frac{\gamma_{\theta'}}{\gamma_{\tilde{\theta}}}(\cdot)\right)\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}z).\label{eq: SMC unbiasedness proof second step}$$ Therefore, we have $$\begin{aligned} \int\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta}) & =\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{\ell_{\tilde{\theta}}(y)}{\ell_{\theta}(y)}\int\bar{\psi}_{\tilde{\theta}}(\mathrm{d}\zeta)\check{\Phi}_{\tilde{\theta}}\left(\zeta,\frac{\gamma_{\theta'}}{\gamma_{\tilde{\theta}}}(\cdot)\right)\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}z)\\ & =\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{\ell_{\tilde{\theta}}(y)}{\ell_{\theta}(y)}\pi_{\tilde{\theta}}\left(\frac{\gamma_{\theta'}}{\gamma_{\tilde{\theta}}}(\cdot)\right)\\ & =r(\theta,\theta')\end{aligned}$$ where the first line is due to and , the second line follows from and the last line is due to the identity $\pi_{\tilde{\theta}}\left(\gamma_{\theta'}/\gamma_{\tilde{\theta}}\right)=\ell_{\theta'}(y)/\ell_{\tilde{\theta}}(y)$. Proof of reversibility for Algorithms \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] and \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] \[subsec: Proof of reversibility for Algorithms\] ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- First we show the reversibility of Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] that uses the Rao-Blackwellised estimator of the acceptance ratio. The transition probability of Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] satisfies the detailed balance with respect to $\pi(\mathrm{d}(\theta,z))$. Let $u=(\mathbf{k},\zeta,\mathbf{k}')\in[M]^{T}\times\mathsf{Z}^{TM}\times[M]^{T}$. 
The proposal kernels that correspond to the moves of Algorithm \[alg: MHAAR for SSM with cSMC - Rao-Blackwellised backward sampling\] are $$\begin{aligned} Q_{1}^{M}(x,\mathrm{d}(y,u)) & =q(\theta,\mathrm{d}\theta')\Phi_{\tilde{\theta}_{1}(\theta,\theta')}(z,\mathrm{d}(\mathbf{k},\zeta))\frac{\phi_{\tilde{\theta}_{1}(\theta,\theta')}(\mathbf{k}'|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{k}')}}(\theta,\theta';\tilde{\theta}_{1}(\theta,\theta'))}{\sum_{\mathbf{l}\in[M]^{T}}\phi_{\tilde{\theta}_{1}(\theta,\theta')}(\mathbf{l}|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{l})}}(\theta,\theta';\tilde{\theta}_{1}(\theta,\theta'))}\delta_{\zeta^{(\mathbf{k}')}}(\mathrm{d}z'),\\ Q_{2}^{M}(x,\mathrm{d}(y,u)) & =q(\theta,\mathrm{d}\theta')\Phi_{\tilde{\theta}_{2}(\theta,\theta')}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\tilde{\theta}_{2}(\theta,\theta')}(\zeta,{\rm d}(\mathbf{k}',z')).\end{aligned}$$ First, observe that, for any $z,z'\in\mathsf{Z}^{T}$, and $\theta,\theta',\tilde{\theta}\in\Theta$, equation can be rewritten as $$\begin{aligned} \mathring{r}_{z,z'}(\theta,\theta';\tilde{\theta}) & =\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\eta(\theta')}{\eta(\theta)}\frac{\pi(\mathrm{d}(\theta',z'))}{\pi(\mathrm{d}(\tilde{\theta},z))}\frac{\pi(\mathrm{d}(\tilde{\theta},z'))}{\pi(\mathrm{d}(\theta,z))}\nonumber \\ & =r(\theta,\theta')\frac{\pi_{\theta'}(\mathrm{d}z')}{\pi_{\tilde{\theta}}(\mathrm{d}z')}\frac{\pi_{\tilde{\theta}}(\mathrm{d}z)}{\pi_{\theta}(\mathrm{d}z)}.\label{eq: acceptance ratio modified}\end{aligned}$$ From Corollary \[cor:exchangeability-SSM\], for $(\theta,z)\in\mathrm{X}$, $u=(\mathbf{k},\zeta,\mathbf{k}')\in[M]^{T}\times\mathsf{Z}^{TM}\times[M]^{T}$, and $z'\in\mathsf{Z}{}^{T}$ we have $$\begin{aligned} \pi_{\theta}(z)\Phi_{\theta}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\theta}(\zeta,{\rm d}(\mathbf{k}',z')) & =\pi_{\theta}(\mathrm{d}z')\Phi_{\theta}(z',\mathrm{d}(\mathbf{k}',\zeta))\check{\Phi}_{\theta}(\zeta,{\rm d}(\mathbf{k},z)).\label{eq: result from corollary for exchangeability}\end{aligned}$$ Using those relations, and letting $\tilde{\theta}=\tilde{\theta}_{1}(\theta,\theta')=\tilde{\theta}_{2}(\theta',\theta)$, we arrive the Radon-Nikodym derivative $$\begin{aligned} \frac{\pi(\mathrm{d}\theta')\pi_{\theta'}(\mathrm{d}z')Q_{2}^{M}(y,\mathrm{d}(x,u))}{\pi(\mathrm{d}\theta)\pi_{\theta}(\mathrm{d}z)Q_{1}^{M}(x,\mathrm{d}(y,u))} & =r(\theta,\theta')\frac{\pi_{\theta'}(\mathrm{d}z')\Phi_{\tilde{\theta}}(z',\mathrm{d}(\mathbf{k}',\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}(\mathbf{k},z))}{\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\frac{\phi_{\tilde{\theta}}(\mathbf{k}'|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{k}')}}(\theta,\theta';\tilde{\theta})}{\sum_{\mathbf{l}\in[M]^{T}}\phi_{\tilde{\theta}}(\mathbf{l}|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{l})}}(\theta,\theta';\tilde{\theta})}\delta_{\zeta^{(\mathbf{k}')}}(\mathrm{d}z')}\nonumber \\ & =\frac{r(\theta,\theta')}{\mathring{r}_{z,z'}(\theta,\theta',\tilde{\theta})}\frac{\pi_{\theta'}(\mathrm{d}z')}{\pi_{\tilde{\theta}}(\mathrm{d}z')}\frac{\pi_{\tilde{\theta}}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}(\mathbf{k}',z'))}{\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}(\mathbf{k}',z'))}\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})\nonumber \\ & 
=\frac{r(\theta,\theta')}{\mathring{r}_{z,z'}(\theta,\theta',\tilde{\theta})}\frac{\pi_{\theta'}(\mathrm{d}z')\pi_{\tilde{\theta}}(\mathrm{d}z)}{\pi_{\tilde{\theta}}(\mathrm{d}z')\pi_{\theta}(\mathrm{d}z)}\frac{\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}(\mathbf{k}',z'))}{\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,\mathrm{d}(\mathbf{k}',z'))}\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})\nonumber \\ & =\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta}).\label{eq: RN derivative aMCMC for HMM all paths}\end{aligned}$$ The analysis in the proof above not only bears an alternative proof of Theorem \[thm: SMC unbiased estimator of acceptance ratio\] on the unbiasedness of but also implicitly proves Corollary \[cor: SMC unbiased estimator of acceptance ratio\]; as we show below. (Theorem \[thm: SMC unbiased estimator of acceptance ratio\]) Equation can be modified to obtain $$\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\frac{\phi_{\tilde{\theta}}(\mathbf{k}'|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{k}')}}(\theta,\theta';\tilde{\theta})}{\sum_{\mathbf{l}\in[M]^{T}}\phi_{\tilde{\theta}}(\mathbf{l}|\zeta)\mathring{r}_{z,\zeta^{(\mathbf{l})}}(\theta,\theta';\tilde{\theta})}\delta_{\zeta^{(\mathbf{k}')}}(\mathrm{d}z')\frac{\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})}{r(\theta,\theta')}=\pi_{\theta'}(\mathrm{d}z')\Phi_{\tilde{\theta}}(z',\mathrm{d}(\mathbf{k}',\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,{\rm d}(\mathbf{k},z)).$$ Integrating both sides with respect to all the variables except $\theta$ and $\theta'$ leads to $$\int\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})=r(\theta,\theta')$$ upon noticing that $\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})$ does not depend on $\mathbf{k}'$ or $z'$ and the right hand side is a probability distribution. Noting that $\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)$ is exactly the distribution of the mechanism described in Theorem \[thm: SMC unbiased estimator of acceptance ratio\] that generates $\mathring{r}_{z,\zeta}(\theta,\theta';\tilde{\theta})$, we prove Theorem \[thm: SMC unbiased estimator of acceptance ratio\]. 
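For intuition, the conditional expectation $\check{\Phi}_{\theta}(\zeta,F)$ appearing in the estimator can be computed exactly by enumerating all $\mathbf{k}\in[M]^{T}$ when $M$ and $T$ are tiny. The sketch below does this for the toy Gaussian model used in the earlier snippets; the particles, weights and the test function $F$ are placeholders (in the acceptance-ratio context, $F$ would play the role of $\gamma_{\theta'}/\gamma_{\tilde{\theta}}$).

```python
import numpy as np
from itertools import product

# Brute-force illustration of Phi_check(zeta, F) = sum_k F(zeta^{(k)}) phi(k | zeta)
# for a tiny particle system; all numerical values are placeholders.
rng = np.random.default_rng(2)
M, T = 3, 4
zeta = rng.normal(size=(T, M))                              # particles zeta_t^{(i)}
y = rng.normal(size=T)                                      # observations
w = np.exp(-0.5 * ((y[:, None] - zeta) / 0.5) ** 2)         # w_t(zeta_t^{(i)})

def f_dens(xp, xn):                                         # toy transition density, up to a constant
    return np.exp(-0.5 * (xn - 0.9 * xp) ** 2)

def phi(k):                                                 # backward-sampling probability phi(k | zeta)
    p = w[T - 1, k[T - 1]] / w[T - 1].sum()
    for t in range(T - 1, 0, -1):
        num = w[t - 1, k[t - 1]] * f_dens(zeta[t - 1, k[t - 1]], zeta[t, k[t]])
        den = (w[t - 1] * f_dens(zeta[t - 1], zeta[t, k[t]])).sum()
        p *= num / den
    return p

F = lambda path: np.sum(path)                               # any test function of the path
paths = list(product(range(M), repeat=T))
probs = np.array([phi(k) for k in paths])
assert abs(probs.sum() - 1.0) < 1e-8                        # phi is a probability mass function
rb = sum(pr * F(zeta[np.arange(T), list(k)]) for k, pr in zip(paths, probs))
print("Phi_check(zeta, F) =", rb)
```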
(Corollary \[cor: SMC unbiased estimator of acceptance ratio\]) Similarly to the previous proof, we can write $$\begin{gathered} \pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}(\mathbf{k},\zeta))\check{\Phi}_{\tilde{\theta}}(\zeta,{\rm d}(\mathbf{k}',z'))\frac{r(\theta',\theta)}{\mathring{r}_{z',\zeta}(\theta',\theta;\tilde{\theta})}\\ =\pi_{\theta'}(\mathrm{d}z')\Phi_{\tilde{\theta}}(z',\mathrm{d}(\mathbf{k}',\zeta))\frac{\phi_{\tilde{\theta}}(\mathbf{k}|\zeta)\mathring{r}_{z',\zeta^{(\mathbf{k})}}(\theta',\theta;\tilde{\theta})}{\sum_{\mathbf{l}\in[M]^{T}}\phi_{\tilde{\theta}}(\mathbf{l}|\zeta)\mathring{r}_{z',\zeta^{(\mathbf{l})}}(\theta',\theta;\tilde{\theta})}\delta_{\zeta^{(\mathbf{k})}}(\mathrm{d}z).\end{gathered}$$ Again, integrating both sides with respect to all the variables except $\theta$ and $\theta'$ leads to $$\int\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)\check{\Phi}_{\tilde{\theta}}(\zeta,{\rm d}z')(1/\mathring{r}_{z',\zeta}(\theta',\theta;\tilde{\theta}))=r(\theta,\theta').$$ Since $1/\mathring{r}_{z',\zeta}(\theta',\theta;\tilde{\theta})$ is the estimator in question in Corollary \[cor: SMC unbiased estimator of acceptance ratio\] and $\pi_{\theta}(\mathrm{d}z)\Phi_{\tilde{\theta}}(z,\mathrm{d}\zeta)\check{\Phi}_{\tilde{\theta}}(\zeta,{\rm d}z')$ is exactly the distribution of the described mechanism that generates it, we prove Corollary \[cor: SMC unbiased estimator of acceptance ratio\]. Next, we show the reversibility of Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] that uses a subsampled version of the Rao-Blackwellised acceptance ratio estimator. The transition probability of Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] satisfies the detailed balance with respect to $\pi(\mathrm{d}x)$. 
For any $\theta\in\Theta$ and $z\in\mathsf{Z}^{T},$ define the kernel on ($\mathsf{Z}^{TN},\mathscr{Z}^{\otimes TN}$) for $N$ paths drawn via backward sampling following cSMC at $\theta$ conditioned on $z$ $$R_{\theta}(z,\mathrm{d}(u^{(1)},\ldots,u^{(N)}))=\int_{\zeta}\Phi_{\theta}\big(z,{\rm d}\zeta\big)\prod_{i=1}^{N}\check{\Phi}_{\theta}(\zeta,{\rm d}u^{(i)}).$$ By the exchangeability result of Corollary \[cor:exchangeability-SSM\], it holds for any $0\leq k\leq N$ that $$\pi_{\theta}({\rm d}u^{(0)})R_{\theta}(u^{(0)},\mathrm{d}u^{(1:N)})=\pi_{\theta}({\rm d}u^{(k)})R_{\theta}(u^{(k)},\mathrm{d}u^{(-k)}),$$ where $u^{(-k)}=(u^{(0)},\ldots,u^{(k-1)},u^{(k+1)},\ldots,u^{(N)})$, and therefore $$\pi_{\theta}({\rm d}z)\delta_{z}(\mathrm{d}u^{(0)})R_{\theta}(u^{(0)},{\rm d}u^{(1:N)})\delta_{u^{(k)}}(\mathrm{d}z')=\pi_{\theta}({\rm d}z')\delta_{z'}(\mathrm{d}u^{(k)})R_{\theta}(u^{(k)},{\rm d}u^{(-k)})\delta_{u^{(0)}}(\mathrm{d}z).\label{eq: exchangeability of auxiliary variables}$$ Letting $\mathfrak{u}=(u^{(0)},\ldots,u^{(N)})$, the proposal kernels that correspond to the moves of Algorithm \[alg: MHAAR for SSM with cSMC - multiple paths from backward sampling\] are $$\begin{aligned} Q_{1}^{M,N}\big(x;{\rm d}(y,\mathfrak{u},k)\big) & =q(\theta,{\rm d}\theta')\delta_{z}(\mathrm{d}u^{(0)})R_{\tilde{\theta}_{1}(\theta,\theta')}(u^{(0)},\mathrm{d}u^{(1:N)})\frac{\mathring{r}_{z,u^{(k)}}(\theta,\theta';\tilde{\theta}_{1}(\theta,\theta'))}{\sum_{i=1}^{N}\mathring{r}_{z,u^{(i)}}(\theta,\theta';\tilde{\theta}_{1}(\theta,\theta'))}\delta_{u^{(k)}}({\rm d}z'),\\ Q_{2}^{M,N}\big(x;{\rm d}(y,\mathfrak{u},k)\big) & =q(\theta,{\rm d}\theta')\frac{1}{N}\delta_{z}(\mathrm{d}u^{(k)})R_{\tilde{\theta}_{2}(\theta',\theta)}(u^{(k)},\mathrm{d}u^{(-k)})\delta_{u^{(0)}}({\rm d}z).\end{aligned}$$ Now we use , letting $\tilde{\theta}=\tilde{\theta}_{1}(\theta,\theta')=\tilde{\theta}_{2}(\theta',\theta)$, we can write $$\begin{aligned} \pi_{\theta'}(\mathrm{d}z')Q_{2}^{M,N}\big(y;{\rm d}(x,\mathfrak{u},k)\big) & =q(\theta',{\rm d}\theta)\frac{\pi_{\theta'}(\mathrm{d}z')}{\pi_{\tilde{\theta}}(\mathrm{d}z')}\frac{1}{N}\pi_{\tilde{\theta}}({\rm d}z')\delta_{z'}(\mathrm{d}u^{(k)})R_{\tilde{\theta}}(u^{(k)},{\rm d}u^{(-k)})\delta_{u^{(0)}}(\mathrm{d}z)\\ & =q(\theta',{\rm d}\theta)\frac{\pi_{\theta'}(\mathrm{d}z')}{\pi_{\tilde{\theta}}(\mathrm{d}z')}\frac{1}{N}\pi_{\tilde{\theta}}({\rm d}z)\delta_{z}(\mathrm{d}u^{(0)})R_{\tilde{\theta}}(u^{(0)},{\rm d}u^{(1:N)})\delta_{u^{(k)}}(\mathrm{d}z').\end{aligned}$$ Exploiting the relation between above and $$\begin{gathered} \pi_{\theta}({\rm d}z)Q_{1}^{M,N}\big(x;{\rm d}(y,\mathfrak{u},k)\big)\\ =q(\theta,{\rm d}\theta')\frac{\pi_{\theta}(\mathrm{d}z)}{\pi_{\tilde{\theta}}(\mathrm{d}z)}\pi_{\tilde{\theta}}({\rm d}z)\delta_{z}(\mathrm{d}u^{(0)})R_{\tilde{\theta}}(u^{(0)},{\rm d}u^{(1:N)})\delta_{u^{(k)}}(\mathrm{d}z')\frac{\mathring{r}_{z,z'}(\theta,\theta';\tilde{\theta})}{\sum_{i=1}^{N}\mathring{r}_{z,u^{(i)}}(\theta,\theta';\tilde{\theta})},\end{gathered}$$ and finally noting , we conclude $$\frac{\pi({\rm d}y)Q_{2}^{M,N}\big(y;{\rm d}(x,\mathfrak{u},k)\big)}{\pi({\rm d}x)Q_{1}^{M,N}\big(x;{\rm d}(y,\mathfrak{u},k)\big)}=\frac{1}{N}\sum_{i=1}^{N}\mathring{r}_{z,u^{(i)}}(\theta,\theta';\tilde{\theta}).$$ Substituting SMC for AIS in the acceptance ratio in MHAAR \[sec: Substituting SMC for AIS in the acceptance ratio in MHAAR\] ============================================================================================================================ To avoid repeats, we restrict 
ourselves to the description of the generalising Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\], i.e. when $\pi(x)=\pi(\theta,z)$. Therefore, let us go back to the setting in Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\], where we have the joint distribution $\pi(x)=\pi(\theta,z)$, the unnormalised densities for the intermediate steps of AIS, $\pi_{\theta,\theta',t}\propto f_{\theta,\theta',t}$, $t=0,\ldots,T+1$, $R_{\theta},$ and $R_{\theta,\theta',t}$, $t=1,\ldots,T$, as detailed in Proposition \[prop:MHwithAISinside\]. Consider $Q_{1}$ of Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\]. Instead of AIS in Algorithm \[alg: MHAAR for pseudo-marginal ratio in latent variable models\], we want the sample paths $u_{0:T}^{(i)}$, $i=1,\ldots,N$ to interact via an SMC algorithm that uses resampling in the annealing steps. Recalling the definition for $Q_{\theta,\theta',z}$ in equation , the SMC algorithm that executes this change has the following unnormalised target distribution $$\hat{A}_{\theta,\theta'z}(\mathrm{d}u)=Q_{\theta,\theta',z}(\mathrm{d}u)\prod_{t=0}^{T}\frac{f_{\theta,\theta't+1}(u_{t})}{f_{\theta,\theta't}(u_{t})}.\label{eq: particle AIS SMC target}$$ Let us define $C_{\theta,\theta',z}:=\int\hat{A}_{\theta,\theta',z}(\mathrm{d}u)$ so that the normalised target distribution of the SMC is $$A_{\theta,\theta',z}(\mathrm{d}u)=\frac{\hat{A}_{\theta,\theta'z}(\mathrm{d}u)}{C_{\theta,\theta',z}}.\label{eq: particle AIS normalised SMC target}$$ One important observation is that $$\int\pi_{\theta}(\mathrm{d}z)C_{\theta,\theta',z}=\frac{\pi(\theta')}{\pi(\theta)}.$$ Denote all the particles generated by the SMC by $\mathfrak{\zeta}=u_{0:T}^{(1:N)}$ and let $\psi_{\theta,\theta',z}$ be the law of $\mathfrak{\zeta}$ with respect to the SMC that targets $A_{\theta,\theta',z}$. Notice that the ratio $$\hat{C}_{\theta,\theta',z}(\zeta)=\prod_{t=0}^{T}\frac{1}{N}\sum_{i=1}^{N}\frac{f_{\theta,\theta',t+1}(u_{t}^{(i)})}{f_{\theta,\theta',t}(u_{t}^{(i)})},$$ is the unbiased SMC estimator of $C_{\theta,\theta',z}$, so that $$\int\pi_{\theta}(\mathrm{d}z)\psi_{\theta,\theta',z}(\mathrm{d}\zeta)\hat{C}_{\theta,\theta',z}(\zeta)=\frac{\pi(\theta')}{\pi(\theta)}.$$ Then, a sensible candidate for the acceptance ratio would be $$\mathring{r}_{\zeta}^{N}(\theta,\theta'):=\frac{q(\theta',\theta)}{q(\theta,\theta')}\hat{C}_{\theta,\theta',z}(\zeta).$$ It turns out that we can develop an SMC based MHAAR algorithm that uses $\mathring{r}_{\zeta}^{N}(\theta,\theta')$; this is shown in Algorithm \[alg: MHAAR-SMC for general latent variable models\]. We prove its reversibility in the subsequent theorem. Sample $\theta'\sim q(\theta,\cdot)$ and $v\sim\mathcal{U}(0,1)$.\ \[thm: reversibility of particle AIS\]The transition kernel of Algorithm \[alg: MHAAR-SMC for general latent variable models\] satisfies the detailed balance with respect to $\pi$. Since $\hat{C}_{\theta,\theta'z}(\zeta)$ is an unbiased SMC estimator of $C_{\theta,\theta',z}$, we can define the probability distribution $$\bar{\psi}_{\theta,\theta',z}(\mathrm{d}\zeta)=\frac{\hat{C}_{\theta,\theta',z}(\zeta)}{C_{\theta,\theta',z}}\psi_{\theta,\theta',z}(\mathrm{d}\zeta).\label{eq: Particle AIS derivation SMC to cSMC}$$ Denote the law of all the particles in cSMC conditioned on $u$ by $\Phi_{\theta,\theta'}(u,\cdot)$ and the law of the path obtained by backward sampling given particles $\zeta$ by $\check{\Phi}_{\theta,\theta'}(\zeta,\cdot)$. 
Then, $Q_{1}$ and $Q_{2}$ of Algorithm can be written as $$\begin{aligned} Q_{1}(x,\mathrm{d}(y,\zeta,u)) & =q(\theta,\mathrm{d}\theta')\psi_{\theta,\theta',z}(\mathrm{d}\zeta)\check{\Phi}_{\theta,\theta'}(\zeta,\mathrm{d}u)R_{\theta'}(u_{T},\mathrm{d}z'),\\ Q_{2}(x,\mathrm{d}(y,\zeta,u)) & =q(\theta,\mathrm{d}\theta')\bar{Q}_{\theta,\theta',z}(\mathrm{d}u)R_{\theta'}(u_{0},\mathrm{d}z')\Phi_{\theta',\theta}(u,\mathrm{d}\zeta).\end{aligned}$$ where $\bar{Q}_{\theta,\theta',z}$ is defined with the involution $\varphi(u_{0},\ldots,u_{T})=(u_{T},\ldots,u_{0})$ as before. Note that in practice, we do not need to generate or store all the variables involved in $Q_{1}$ and $Q_{2}$. This is reflected in Algorithm \[alg: MHAAR-SMC for general latent variable models\] where there is no direct reference to $u$ in $Q_{1}(x,\mathrm{d}(y,\zeta,u))$ and $Q_{2}(x,\mathrm{d}(y,\zeta,u))$. Indeed, in the calculation of the acceptance ratio we only use $(\theta,z$), $\zeta$, and $(\theta',z')$. A similar shortcut taken in the implementation of the algorithm is in the labelling of the conditioned path in $Q_{2}$. However; we choose to formally define $Q_{1}$ and $Q_{2}$ as above, since with those definitions it is straightforward to show the detailed balance. Using equation , Corollary \[cor: cSMC semi-reversibility\], and in order, we have $$\begin{aligned} \psi_{\theta,\theta',z}(\mathrm{d}\zeta)\check{\Phi}_{\theta,\theta'}(\zeta,\mathrm{d}u) & =\frac{C_{\theta,\theta',z}}{\hat{C}_{\theta,\theta',z}(\zeta)}\bar{\psi}_{\theta,\theta',z}(\mathrm{d}\zeta)\check{\Phi}_{\theta,\theta'}(\zeta,\mathrm{d}u)\\ & =\frac{C_{\theta,\theta',z}}{\hat{C}_{\theta,\theta',z}(\zeta)}A_{\theta,\theta',z}(\mathrm{d}u)\Phi_{\theta,\theta'}(u,\mathrm{d}\zeta)\\ & =\frac{1}{\hat{C}_{\theta,\theta',z}(\zeta)}\hat{A}_{\theta,\theta',z}(\mathrm{d}u)\Phi_{\theta,\theta'}(u,\mathrm{d}\zeta)\\ & =\frac{1}{\hat{C}_{\theta,\theta',z}(\zeta)}\prod_{t=0}^{T}\frac{f_{\theta,\theta't+1}(u_{t})}{f_{\theta,\theta't}(u_{t})}Q_{\theta,\theta',z}(\mathrm{d}u)\Phi_{\theta,\theta'}(u,\mathrm{d}\zeta)\end{aligned}$$ Therefore, we arrive at the Radon-Nikodym derivative $$\begin{aligned} \frac{\pi(\mathrm{d}y)Q_{2}(y,\mathrm{d}(x,\zeta,u))}{\pi(\mathrm{d}x)Q_{1}(x,\mathrm{d}(y,\zeta,u))} & =\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\pi(\theta')}{\pi(\theta)}\frac{\pi_{\theta'}(\mathrm{d}z')\bar{Q}_{\theta',\theta,z'}(\mathrm{d}u)R_{\theta}(u_{0},\mathrm{d}z)}{\pi_{\theta}(\mathrm{d}z)Q_{\theta,\theta',z}(\mathrm{d}u)R_{\theta'}(u_{0},\mathrm{d}z')}\frac{\hat{C}_{\theta,\theta',z}(\zeta)}{\prod_{t=0}^{T}\frac{f_{\theta,\theta't+1}(u_{t})}{f_{\theta,\theta't}(u_{t})}}\\ & =\frac{q(\theta',\theta)}{q(\theta,\theta')}\frac{\pi(\theta')}{\pi(\theta)}\frac{\pi(\theta)}{\pi(\theta')}\prod_{t=0}^{T}\frac{f_{\theta,\theta't+1}(u_{t})}{f_{\theta,\theta't}(u_{t})}\frac{\hat{C}_{\theta,\theta',z}(\zeta)}{\prod_{t=0}^{T}\frac{f_{\theta,\theta't+1}(u_{t})}{f_{\theta,\theta't}(u_{t})}}\\ & =\frac{q(\theta',\theta)}{q(\theta,\theta')}\hat{C}_{\theta,\theta',z}(\zeta),\end{aligned}$$ we have used the identity in . It should now be clear how the generalisation introduced in Algorithm \[alg: MHAAR-SMC for general latent variable models\] can be modified for Algorithms \[alg: MHAAR-AIS exchange algorithm\] and \[alg: MHAAR-AIS exchange algorithm - reduced computation\], which were developed for $\pi(x)=\pi(\theta)$. 
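The estimator $\hat{C}_{\theta,\theta',z}(\zeta)=\prod_{t=0}^{T}N^{-1}\sum_{i=1}^{N}f_{\theta,\theta',t+1}(u_{t}^{(i)})/f_{\theta,\theta',t}(u_{t}^{(i)})$ can be sketched as follows for a generic geometric annealing path between two unnormalised densities. The densities, the random-walk Metropolis kernels playing the role of the intermediate Markov kernels, and all tuning constants are assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Annealed SMC sketch: intermediate unnormalised densities f_t interpolate
# geometrically between a N(0,1) start and a N(2, 0.5^2) end; the assumed
# Markov kernels are single random-walk Metropolis steps targeting f_t.
T, N = 10, 500
betas = np.linspace(0.0, 1.0, T + 2)                      # annealing schedule, t = 0, ..., T+1

def log_f(x, beta):
    return (1 - beta) * (-0.5 * x**2) + beta * (-0.5 * ((x - 2.0) / 0.5) ** 2)

def mh_move(x, beta, step=0.5):
    prop = x + step * rng.normal(size=x.shape)
    accept = np.log(rng.uniform(size=x.shape)) < log_f(prop, beta) - log_f(x, beta)
    return np.where(accept, prop, x)

u = rng.normal(size=N)                                    # u_0^{(i)} from the initial density
log_C_hat = 0.0
for t in range(T + 1):
    logw = log_f(u, betas[t + 1]) - log_f(u, betas[t])    # f_{t+1}/f_t at u_t^{(i)}
    m = logw.max()
    log_C_hat += m + np.log(np.exp(logw - m).mean())      # accumulate log of the average ratio
    if t < T:
        p = np.exp(logw - m); p /= p.sum()
        u = u[rng.choice(N, size=N, p=p)]                 # resample
        u = mh_move(u, betas[t + 1])                      # move towards the next target
print("log C_hat estimate:", log_C_hat)
```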
Let us remark the main difference that in the algorithms of Section \[sec: Improving pseudo-marginal ratio algorithms for doubly intractable models\], $u_{0}^{(i)}$’s are sampled from the initial distribution of the annealing schedule directly, whereas for the algorithms in Section \[sec: Pseudo-marginal ratio algorithms for latent variable models\], we exploit the $z$ component of the current sample to start the SMC (remember $u_{0}^{(i)}\sim R_{\theta}(z,\cdot)$, which becomes $u_{0}^{(i)}=z$ when $R_{\theta}(z,\cdot)=\delta_{z}(\cdot)$).
--- abstract: 'This paper considers the dynamic economic dispatch problem for a group of distributed energy resources (DERs) with storage that communicate over a weight-balanced strongly connected digraph. The objective is to collectively meet a certain load profile over a finite time horizon while minimizing the aggregate cost. At each time slot, each DER decides on the amount of generated power, the amount sent to/drawn from the storage unit, and the amount injected into the grid to satisfy the load. Additional constraints include bounds on the amount of generated power, ramp constraints on the difference in generation across successive time slots, and bounds on the amount of power in storage. We synthesize a provably-correct distributed algorithm that solves the resulting finite-horizon optimization problem starting from any initial condition. Our design consists of two interconnected systems, one estimating the mismatch between the injection and the total load at each time slot, and another using this estimate to reduce the mismatch and optimize the total cost of generation while meeting the constraints.' author: - 'Ashish Cherukuri Jorge Cortés[^1]' bibliography: - 'alias.bib' - 'Main.bib' - 'Main-add.bib' - 'JC.bib' title: | Distributed coordination of DERs with storage\ for dynamic economic dispatch[^2] --- Introduction {#sec:Intro} ============ The current electricity grid is up for a major transformation to enable the widespread integration of distributed energy resources and flexible loads to improve efficiency and reduce emissions without affecting reliability and performance. This presents the need for novel coordinated control and optimization strategies which, along with suitable architectures, can handle uncertainties and variability, are fault-tolerant and robust, and preserve privacy. With this context in mind, our objective here is to provide a distributed algorithmic solution to the dynamic economic dispatch problem with storage. We see the availability of such strategies as a necessary building block in realizing the vision of the future grid. *Literature review:* Static economic dispatch (SED) involves a group of generators collectively meeting a specified load for a single time slot while minimizing the total cost and respecting individual constraints. In recent years, distributed generation has motivated the shift from traditional solutions of the SED problem to decentralized ones, see e.g., [@ADDG-STC-CNH:12; @SK-GH:12; @WZ-WL-XW-LL-FF:14] and our own work [@AC-JC:15-tcns; @AC-JC:14-auto]. As argued in [@XX-AME:10; @MDI-LX-JJ:11], the dynamic version of the problem, termed dynamic economic dispatch (DED), results in better grid control as it optimally plans generation across a time horizon, specifically taking into account ramp limits and variability of power commitment from renewable sources. Conventional solution methods to the DED problem are centralized [@XX-AME:10]. Recent works [@MDI-LX-JJ:11; @XX-JZ-AE:11] have employed model predictive control (MPC)-based algorithms to deal more effectively with complex constraints and uncertainty, but the resulting methods are still centralized and do not provide theoretical guarantees on the optimality of the solution. The work [@ZL-WW-BZ-HS-QG:13] proposes a Lagrangian relaxation method to solve the DED problem, but the implementation requires a master agent that communicates with and coordinates the generators. 
MPC methods have also been employed by [@AH-JM-HM-HD:13] in the dynamic economic dispatch with storage ([[DEDS]{}]{}) problem, which adds storage units to the DED problem to lower the total cost, meet uncertain demand under uncertain generation, and smooth out the generation profile across time. The stochastic version of the [[DEDS]{}]{}problem adds uncertainty in demand and generation by renewables. Algorithmic solutions for this problem put the emphasis on breaking down the complexity to speed up convergence for large-scale problems and include stochastic MPC [@DZ-GH:14], dual decomposition [@YZ-NG-GBG:13], and optimal condition decomposition [@AS-AR-AK:15] methods. However, these methods are either centralized or need a coordinating central master. *Statement of contributions:* Our starting point is the formulation of the [[DEDS]{}]{}problem for a group of power DERs communicating over a weight-balanced strongly connected digraph. Since the cost functions are convex and all constraints are linear, the problem is convex in its decision variables, which are the power to be injected and the power to be sent to storage by each DER at each time slot. Using exact penalty functions, we reformulate the [[DEDS]{}]{}problem as an equivalent optimization that retains equality constraints but removes inequality ones. The structure of the modified problem guides our design of the provably-correct distributed strategy termed “dynamic average consensus ([`dac`]{}) + Laplacian nonsmooth gradient (${\mathsf{L}}\partial$) + nonsmooth gradient ($\partial$)” dynamics to solve the [[DEDS]{}]{}problem starting from any initial condition. This algorithm consists of two interconnected systems. A first block allows DERs to track, using [`dac`]{}, the mismatch between the current total power injected and the load for each time slot of the planning horizon. A second block has two components, one that minimizes the total cost while keeping the total injection constant (employing Laplacian-nonsmooth-gradient dynamics on injection variables and nonsmooth-gradient dynamics on storage variables) and an error-correcting component that uses the mismatch signal estimated by the first block to adjust, exponentially fast, the total injection towards the load for each time slot. *Notation:* Let ${{\mathbb{R}}}$, ${{\mathbb{R}}_{\ge 0}}$, ${{\mathbb{R}}_{>0}}$, ${\mathbb{Z}_{\geq 1}}$ denote the set of real, nonnegative real, positive real, and positive integer numbers, respectively. The $2$- and $\infty$-norm on ${{\mathbb{R}}}^n$ are denoted by ${\ensuremath{\| \cdot \|}}$ and ${\ensuremath{\| \cdot \|}}_{\infty}$, respectively. We let $B(x,\delta)$ denote the open ball centered at $x \in {{\mathbb{R}}}^n$ with radius $\delta > 0$. Given $r \in {{\mathbb{R}}}$, we denote ${\mathcal{H}}_r = {\{x \in {{\mathbb{R}}}^n \; | \; {\mathbf{1}}_n^\top x = r\}}$. For a symmetric matrix $A \in {{\mathbb{R}}}^{n \times n}$, the minimum and maximum eigenvalues of $A$ are $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$. The Kronecker product of $A \in {{\mathbb{R}}}^{n \times m}$ and $B \in {{\mathbb{R}}}^{p \times q}$ is $A \otimes B \in {{\mathbb{R}}}^{np \times mq}$. We use ${\mathbf{0}}_n = (0,\ldots,0) \in {{\mathbb{R}}}^n$, ${\mathbf{1}}_n=(1,\ldots,1) \in {{\mathbb{R}}}^n$, and ${\mathrm{I}}_n \in {{\mathbb{R}}}^{n \times n}$ for the identity matrix. For $x\in {{\mathbb{R}}}^n$ and $y \in {{\mathbb{R}}}^m$, the vector $(x;y) \in {{\mathbb{R}}}^{n+m}$ denotes the concatenation. 
Given $x,y\in {{\mathbb{R}}}^n$, $x_i$ denotes the $i$-th component of $x$, and $x \le y$ denotes $x_i \le y_i$ for $i \in {\{1,\dots,n\}}$. For ${\mathfrak{h}}> 0$, given $y \in {{\mathbb{R}}}^{n{\mathfrak{h}}}$ and $k \in {\{1,\dots,{\mathfrak{h}}\}}$, the vector containing the $nk-n+1$ to $nk$ components of $y$ is $y^{(k)} \in {{\mathbb{R}}}^n$, and so, $y = (y^{(1)};y^{(2)}; \dots; y^{({\mathfrak{h}})})$. We let $[u]^{+} = \max \{0,u\}$ for $u \in {{\mathbb{R}}}$. A set-valued map ${f:{{\mathbb{R}}}^{n} \rightrightarrows {{\mathbb{R}}}^{m}}$ associates to each point in ${{\mathbb{R}}}^{n}$ a set in ${{\mathbb{R}}}^{m}$. Preliminaries {#se:Prelim} ============= This section introduces concepts from graph theory, nonsmooth analysis, differential inclusions, and optimization. *Graph theory:* Following [@FB-JC-SM:08cor], a *weighted directed graph*, is a triplet ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}},{\mathsf{A}})$, where ${\mathcal{V}}$ is the vertex set, $ {\mathcal{E}}\subseteq {\mathcal{V}}\times {\mathcal{V}}$ is the edge set, and $ {\mathsf{A}}\in \mathbb{R}^{n\times n}_{\geq0} $ is the *adjacency matrix* with the property that $a_{ij}>0 $ if $ (v_i,v_j)\in {\mathcal{E}}$ and $ a_{ij}=0 $, otherwise. A path is an ordered sequence of vertices such that any consecutive pair of vertices is an edge. A digraph is *strongly connected* if there is a path between any pair of distinct vertices. For a vertex $v_i$, ${{N^{\textup{out}}}}(v_i) = {\{v_j \in {\mathcal{V}}\; | \; (v_i, v_j) \in {\mathcal{E}}\}}$ is the set of its out-neighbors. The *Laplacian* matrix is $ {\mathsf{L}}= {{\mathsf{D}_{\textup{out}}}}-{\mathsf{A}}$, where ${{\mathsf{D}_{\textup{out}}}}$ is the diagonal matrix defined by $({{\mathsf{D}_{\textup{out}}}})_{ii}= \sum_{j=1}^{n}a_{ij} $, for all $ i \in \{1,\ldots,n\}$. Note that ${\mathsf{L}}{\mathbf{1}}_n=0 $. If $ {\mathcal{G}}$ is strongly connected, then zero is a simple eigenvalue of ${\mathsf{L}}$. ${\mathcal{G}}$ is *weight-balanced* iff $ {\mathbf{1}}_n^\top {\mathsf{L}}=0 $ iff $ {\mathsf{L}}+{\mathsf{L}}^\top$ is positive semidefinite. If ${\mathcal{G}}$ is weight-balanced and strongly connected, then zero is a simple eigenvalue of ${\mathsf{L}}+{\mathsf{L}}^\top $ and, for $x \in {{\mathbb{R}}}^n$, $$\label{eq:LapBound} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) {\ensuremath{\big \| x-\frac{1}{n}({\mathbf{1}}_n^\top x){\mathbf{1}}_n \big \|}}^2 \le x^\top ({\mathsf{L}}+ {\mathsf{L}}^\top)x,$$ with $\lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top)$ the smallest non-zero eigenvalue of ${\mathsf{L}}+ {\mathsf{L}}^\top$. *Nonsmooth analysis:* Here, we introduce some notions on nonsmooth analysis from [@JC:08-csm-yo]. A function $f:{{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}^m$ is *locally Lipschitz* at $x \in {{\mathbb{R}}}^n$ if there exist $L, \epsilon \in {{\mathbb{R}}_{>0}}$ such that ${\ensuremath{\| f(y) - f(y') \|}} \le L{\ensuremath{\| y - y' \|}}$, for all $y, y'\in B(x,\epsilon)$. A function $f:{{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}$ is *regular* at $x \in {{\mathbb{R}}}^n$ if, for all $v \in {{\mathbb{R}}}^n$, the right directional derivative and the generalized directional derivative of $f$ at $x$ along the direction $v$ coincide, see [@JC:08-csm-yo] for these definitions. A convex function is regular. 
A set-valued map ${\mathcal{H}}:{{\mathbb{R}}}^n {\rightrightarrows}{{\mathbb{R}}}^n$ is *upper semicontinuous* at $x \in {{\mathbb{R}}}^n$ if, for all $\epsilon \in {{\mathbb{R}}_{>0}}$, there exists $\delta \in {{\mathbb{R}}_{>0}}$ such that ${\mathcal{H}}(y) \subset {\mathcal{H}}(x) + B(0,\epsilon)$ for all $y \in B(x,\delta)$. Also, ${\mathcal{H}}$ is *locally bounded* at $x \in {{\mathbb{R}}}^n$ if there exist $\epsilon, \delta \in {{\mathbb{R}}_{>0}}$ such that ${\ensuremath{\| z \|}} \le \epsilon$ for all $z \in {\mathcal{H}}(y)$, and all $y \in B(x,\delta)$. Given a locally Lipschitz function $f:{{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}$, let $\Omega_f$ be the set (of measure zero) of points where $f$ is not differentiable. The *generalized gradient* $\partial f: {{\mathbb{R}}}^n {\rightrightarrows}{{\mathbb{R}}}^n$ of $f$ is $$\partial f(x) = \mathrm{co} {\{ \lim_{i \rightarrow \infty} {\nabla}f(x_i) \; | \; x_i \rightarrow x, x_i \notin S \cup \Omega_f\}},$$ where $\mathrm{co}$ is the convex hull and $S \subset {{\mathbb{R}}}^n$ is any set of measure zero. The set-valued map $\partial f$ is locally bounded, upper semicontinuous, and takes non-empty, compact, and convex values. For a function ${f:{{\mathbb{R}}}^n \times {{\mathbb{R}}}^m \rightarrow {{\mathbb{R}}}}$, $(x,y) \mapsto f(x,y)$, the partial generalized gradient with respect to $x$ and $y$ are denoted by $\partial_x f$ and $\partial_y f$, respectively. *Differential inclusions:* We gather here tools from [@JC:08-csm-yo; @AC-JC:14-auto] to analyze the stability properties of differential inclusions, $$\label{eq:ddsys} \dot x \in {F}(x) ,$$ where ${F}: {{\mathbb{R}}}^n {\rightrightarrows}{{\mathbb{R}}}^n$ is a set-valued map. A solution of  on $[0,T] \subset {{\mathbb{R}}}$ is an absolutely continuous map $x:[0,T]\rightarrow {{\mathbb{R}}}^n$ that satisfies  for almost all $t \in [0,T]$. If the set-valued map ${F}$ is locally bounded, upper semicontinuous, and takes non-empty, compact, and convex values, then the existence of solutions is guaranteed. The set of equilibria of  is ${\mathrm{Eq}({F})} = {\{x \in {{\mathbb{R}}}^n \; | \; 0 \in {F}(x) \}}$. Given a locally Lipschitz function $W: {{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}$, the *set-valued Lie derivative* ${{\mathcal{L}}}_{{F}}W: {{\mathbb{R}}}^n {\rightrightarrows}{{\mathbb{R}}}$ of $W$ with respect to  at $x \in {{\mathbb{R}}}^n$ is $$\begin{aligned} {{\mathcal{L}}}_{{F}}W = {\{a \in {{\mathbb{R}}}\; | \; \exists v \in {F}(x) \text{ s.t. } \zeta^\top v=a, \, \forall \zeta \in \partial W(x)\}} .\end{aligned}$$ The *$\omega$-limit set* of a trajectory $t \mapsto \varphi(t)$, $\varphi(0) \in {{\mathbb{R}}}^n$ of , denoted $\Omega(\varphi)$, is the set of all points $y \in {{\mathbb{R}}}^n$ for which there exists a sequence of times $\{t_k\}_{k=1}^\infty$ with $t_k \to \infty$ such that $\lim_{k \to \infty} \varphi(t_k) = y$. If the trajectory is bounded, then the $\omega$-limit set is nonempty, compact, connected. The next result from [@AC-JC:14-auto] is a refinement of the LaSalle Invariance Principle for differential inclusions that establishes convergence of . \[pr:refined-lasalle-nonsmooth\] Let ${{F}:{{\mathbb{R}}}^n \rightrightarrows {{\mathbb{R}}}^n}$ be upper semicontinuous, taking nonempty, convex, and compact values everywhere in ${{\mathbb{R}}}^n$. 
Let $t \mapsto \varphi(t)$ be a bounded solution of  whose $\omega$-limit set $\Omega(\varphi)$ is contained in ${\mathcal{S}}\subset {{\mathbb{R}}}^n$, a closed embedded submanifold of ${{\mathbb{R}}}^n$. Let ${\mathcal{O}}$ be an open neighborhood of ${\mathcal{S}}$ where a locally Lipschitz, regular function ${W:{\mathcal{O}}\rightarrow {{\mathbb{R}}}}$ is defined. Then, $\Omega(\varphi) \subset {\mathcal{E}}$ if the following holds, 1. $\!{\mathcal{E}}\!=\! {\{x \in {\mathcal{S}}\! \; | \; \!0 \in {{\mathcal{L}}}_{{F}} W(x) \}}$ belongs to a level set of $W$ \[as:refined-lasalle-1\] 2. for any compact set ${\mathcal{M}}\subset {\mathcal{S}}$ with ${\mathcal{M}}\cap {\mathcal{E}}= \emptyset$, there exists a compact neighborhood ${\mathcal{M}}_c$ of ${\mathcal{M}}$ in ${{\mathbb{R}}}^n$ and $\delta < 0$ such that $\sup_{x \in {\mathcal{M}}_c} \max {{\mathcal{L}}}_{{F}} W(x) \le \delta$. \[as:refined-lasalle-2\] *Constrained optimization and exact penalty functions:* Here, we introduce some notions on constrained convex optimization following [@SB-LV:09; @DPB:75b]. Consider the optimization problem, \[eq:GenConsOpt\] $$\begin{aligned} \mathrm{minimize} \quad & f(x), \label{eq:GenConsObjective} \\ \text{subject to} \quad & g(x) \le {\mathbf{0}}_m, \quad h(x) ={\mathbf{0}}_p, \label{eq:GenConsConstraints} \end{aligned}$$ where $f:{{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}$, $g:{{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}^m$, are continuously differentiable and convex, and $h:{{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}^p$ with $p\le n$ is affine. The *refined Slater condition* is satisfied by  if there exists $x \in {{\mathbb{R}}}^n$ such that $h(x) = {\mathbf{0}}_p$, $g(x) \le {\mathbf{0}}_m$, and $g_i(x) < 0$ for all nonaffine functions $g_i$. The refined Slater condition implies that strong duality holds. A point $x \in {{\mathbb{R}}}^n$ is a Karush-Kuhn-Tucker (KKT) point of  if there exist Lagrange multipliers $\lambda \in {{\mathbb{R}}_{\ge 0}}^m$ and $\nu \in {{\mathbb{R}}}^p$ such that $$\begin{aligned} & g(x) \le {\mathbf{0}}_m, \quad h(x) = {\mathbf{0}}_p, \quad \lambda^\top g(x) = 0, \\ & {\nabla}f(x)+ \sum_{i=1}^m \lambda_i {\nabla}g_i(x) + \sum_{i=1}^p \nu_i {\nabla}h_i(x) = 0.\end{aligned}$$ If strong duality holds then, a point is a solution of  iff it is a KKT point. The optimization  satisfies the *strong Slater condition* with parameter $\rho \in {{\mathbb{R}}_{>0}}$ and feasible point $x^\rho \in {{\mathbb{R}}}^n$ if $g(x^\rho) < - \rho {\mathbf{1}}_m$ and $h(x^\rho) = {\mathbf{0}}_p$. \[le:lagrange-bound\] If  satisfies the strong Slater condition with parameter $\rho \in {{\mathbb{R}}_{>0}}$ and feasible point $x^\rho \in {{\mathbb{R}}}^n$, then any primal-dual optimizer $(x,{\lambda},\nu)$ of  satisfies $$\begin{aligned} {\ensuremath{\| {\lambda}\|}_{\infty}} \le \frac{f(x^\rho) - f(x)}{\rho}. \end{aligned}$$ We are interested in eliminating the inequality constraints in  while keeping the equality constraints intact. To this end, we use [@DPB:75b] to construct a nonsmooth exact penalty function $f^{\epsilon}: {{\mathbb{R}}}^n \rightarrow {{\mathbb{R}}}$, given as $ f^{\epsilon}(x) = f(x) + \frac{1}{\epsilon} \sum_{i=1}^m [g_i(x)]^+$, with $\epsilon >0$, and define the minimization problem \[eq:ExactPenalty\] $$\begin{aligned} \mathrm{minimize} \quad & f^{\epsilon}(x), \label{eq:ExactPenalty1} \\ \text{subject to} \quad & h(x) = {\mathbf{0}}_p. 
\label{eq:ExactPenalty2} \end{aligned}$$ Note that $f^{\epsilon}$ is convex as $f$ and $t \mapsto \frac{1}{\epsilon} [t]^+$ are convex. Hence, the problem  is convex. The following result, see e.g. [@DPB:75b Proposition 1], identifies conditions under which the solutions of the problems  and  coincide. \[pr:EquivalenceExactPenalty\] Assume  has nonempty, compact solution set, and satisfies the refined Slater condition. Then,  and  have the same solutions if $ \frac{1}{\epsilon} > {\ensuremath{\| \lambda \|}}_\infty$, for some Lagrange multiplier $\lambda \in {{\mathbb{R}}_{\ge 0}}^m$ of . Problem statement {#sec:problem} ================= Consider a network of $n \in {\mathbb{Z}_{\geq 1}}$ distributed energy resources (DERs) whose communication topology is a strongly connected and weight-balanced digraph ${\mathcal{G}}=({\mathcal{V}},{\mathcal{E}},{\mathsf{A}})$. For simplicity, we assume DERs to be generator units. In our discussion, DERs can also be flexible loads (where the cost function corresponds to the negative of the load utility function). An edge $(i,j)$ represents the capability of unit $j$ to transmit information to unit $i$. Each unit $i$ is equipped with storage capabilities with minimum $C^m_i \in {{\mathbb{R}}_{\ge 0}}$ and maximum $C^M_i \in {{\mathbb{R}}_{>0}}$ capacities. The network collectively aims to meet a power demand profile during a finite-time horizon ${\mathcal{K}}= {\{1,\dots,{\mathfrak{h}}\}}$ specified by $l \in {{\mathbb{R}}_{>0}}^{\mathfrak{h}}$, that is, $l^{(k)}$ is the demand at time slot $k \in {\mathcal{K}}$. This demand can either correspond to a load requested from an outside entity, denoted $L^{(k)} \ge 0$ for slot $k$, or each DER $i$ might have to satisfy a load at the bus it is connected to, denoted $\tilde{l}_i^{(k)} \ge 0$ for slot $k$. Thus, for each $k \in {\mathcal{K}}$, $l^{(k)} = L^{(k)} + \sum_{i=1}^n \tilde{l}_i^{(k)}$. We assume that the external demand $L = (L^{(1)}, \dots, L^{({\mathfrak{h}})}) \in {{\mathbb{R}}_{\ge 0}}^{{\mathfrak{h}}}$ is known to an arbitrarily selected unit $r \in {\{1,\dots,n\}}$, whereas the demand at bus $i$, $\tilde{l}_i = (\tilde{l}_i^{(1)}, \dots, \tilde{l}_i^{({\mathfrak{h}})}) \in {{\mathbb{R}}_{\ge 0}}^{{\mathfrak{h}}}$, is known to unit $i$. For convenience, $\tilde{l} = (\tilde{l}^{(1)}, \dots, \tilde{l}^{{\mathfrak{h}}})$, where $\tilde{l}^{(k)} = (\tilde{l}^{(k)}_1, \dots, \tilde{l}^{(k)}_n)$ collects the load known to each unit at slot $k \in {\mathcal{K}}$. Along with load satisfaction, the group also aims to minimize the total cost of generation and to satisfy the individual physical constraints for each DER. We make these elements precise next. Each unit $i$ decides at every time slot $k$ in ${\mathcal{K}}$ the amount of power it generates, the portion $I^{(k)}_i \in {{\mathbb{R}}}$ of it that it injects into the grid to meet the load, and the remaining part $S^{(k)}_i \in {{\mathbb{R}}}$ that it sends to the storage unit. The power generated by $i$ at $k$ is then $I^{(k)}_i + S^{(k)}_i$. We denote by $I^{(k)} = (I_1^{(k)}, \dots, I_n^{(k)}) \in {{\mathbb{R}}}^n$ and $S^{(k)} = (S_1^{(k)}, \dots, S_n^{(k)}) \in {{\mathbb{R}}}^n$ the collective injected and stored power at time $k$, respectively. The load satisfaction is then expressed as ${\mathbf{1}}_n^\top I^{(k)} = l^{(k)} = L^{(k)} + {\mathbf{1}}_n^\top \tilde{l}^{(k)}$, for all $k \in {\mathcal{K}}$. 
The cost $f_i^{(k)}(I_i^{(k)}+S_i^{(k)})$ of power generation $I^{(k)}_i + S^{(k)}_i$ by unit $i$ at time $k$ is specified by the function $f_i^{(k)}:{{\mathbb{R}}}\rightarrow {{\mathbb{R}}_{\ge 0}}$, which we assume convex and continuously differentiable. Given $(I^{(k)},S^{(k)})$, the cost incurred by the network at time slot $k$ is $$\begin{aligned} f^{(k)}(I^{(k)}+S^{(k)}) = \sum_{i=1}^{n}f_i^{(k)}(I_i^{(k)}+S_i^{(k)}) .\end{aligned}$$ The cumulative cost of generation for the network across the time horizon is ${f:{{\mathbb{R}}}^{n{\mathfrak{h}}} \rightarrow {{\mathbb{R}}_{\ge 0}}}$, $f(x) = \sum_{k=1}^{\mathfrak{h}}f^{(k)}(x^{(k)})$. Given injection $I = (I^{(1)}, \dots, I^{({\mathfrak{h}})}) \in {{\mathbb{R}}}^{n{\mathfrak{h}}}$ and storage $S = (S^{(1)}, \dots, S^{({\mathfrak{h}})}) \in {{\mathbb{R}}}^{n{\mathfrak{h}}}$ values, the total network cost is $$\begin{aligned} f(I+S) = \sum_{k=1}^{\mathfrak{h}}f^{(k)} (I^{(k)}+S^{(k)}).\end{aligned}$$ The functions $\{f^{(k)}\}_{k \in {\mathcal{K}}}$ and $f$ are also convex and continuously differentiable. Next, we describe the physical constraints on the DERs. Each unit’s power must belong to the range $[P^m_i, P^M_i] \subset {{\mathbb{R}}_{>0}}$, representing lower and upper bounds on the amount of power it can generate at each time slot. Each unit $i$ also respects upper and lower ramp constraints: the change in the generation level from any time slot $k$ to $k+1$ is upper and lower bounded by $R^u_i$ and $-R^l_i$, respectively, with $ R^u_i$, $R^l_i \in {{\mathbb{R}}_{>0}}$. At each time slot, the power injected into the grid by each unit must be nonnegative, i.e., $I^{(k)}_i \ge 0$. Furthermore, the amount of power stored in any storage unit $i$ at any time slot $k \in {\mathcal{K}}$ must belong to the range $[C^m_i, C^M_i]$. Finally, we assume that at the beginning of the time slot $k=1$, each storage unit $i$ starts with some stored power $S^{(0)}_i \in [C^m_i,C^M_i]$. With the above model, the *dynamic economic dispatch with storage* ([[DEDS]{}]{}) problem is formally defined by the following convex optimization problem, \[eq:deds\] $$\begin{aligned} \underset{(I,S) \in {{\mathbb{R}}}^{2n{\mathfrak{h}}}}{\text{minimize}} & \quad f(I+S), \label{eq:conobjective-s} \\ \text{subject to}& \, \, \, \text{for } k \in {\mathcal{K}}, \notag \\ & \quad {\mathbf{1}}_n^\top I^{(k)} = l^{(k)}, \label{eq:load-cond} \\ & \quad P^{m} \le I^{(k)} + S^{(k)} \le P^{M}, \label{eq:box-cons} \\ & \textstyle \quad C^m \le S^{(0)} + \sum_{k' = 1}^k S^{(k')} \le C^M, \label{eq:storage-cons} \\ & \quad {\mathbf{0}}_n \le I^{(k)}, \label{eq:injection-cons} \\ & \, \, \, \text{for } k \in {\mathcal{K}}\setminus \{{\mathfrak{h}}\}, \notag \\ & \!-\! R^l \le I^{(k+1)} \!+\! S^{(k+1)} \!-\! I^{(k)} \!-\! S^{(k)} \le R^u. \label{eq:ramp-cons} \end{aligned}$$ We refer to – as the *load conditions*, *box constraints*, *storage limits*, *injection constraints*, and *ramp constraints*, respectively. We denote by ${{\mathcal{F}}_{\mathrm{DEDS}}}$ and ${{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$ the feasibility set and the solution set of the [[DEDS]{}]{}problem , respectively, and assume them to be nonempty. Since ${{\mathcal{F}}_{\mathrm{DEDS}}}$ is compact, so is ${{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$. Moreover, the refined Slater condition is satisfied for [[DEDS]{}]{}as all the constraints – are affine in the decision variables. 
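For reference, the DEDS problem above can be prototyped directly with an off-the-shelf convex solver, which is useful for checking any distributed scheme on small instances. The sketch below uses the cvxpy modelling package (an assumption of this sketch, not a tool used in the paper) with quadratic generation costs; every numerical value (n, ${\mathfrak{h}}$, loads, bounds, ramp limits) is a placeholder.

```python
import cvxpy as cp
import numpy as np

# Centralised sketch of the DEDS program on a toy instance; all data are placeholders.
n, h = 3, 4
l = np.array([2.0, 2.5, 3.0, 2.0])          # load profile l^{(k)}
Pm, PM = 0.1, 2.0                           # generation bounds (uniform, for brevity)
Cm, CM = 0.0, 1.0                           # storage bounds
S0 = 0.5 * np.ones(n)                       # initial stored power S^{(0)}
Rl = Ru = 0.7                               # ramp bounds
a = np.array([1.0, 1.2, 0.8])               # f_i^{(k)}(u) = a_i u^2

I = cp.Variable((n, h))                     # injected power I_i^{(k)}
S = cp.Variable((n, h))                     # power sent to storage S_i^{(k)}
gen = I + S                                 # generated power
cost = cp.sum(cp.multiply(np.outer(a, np.ones(h)), cp.square(gen)))

cons = [cp.sum(I[:, k]) == l[k] for k in range(h)]        # load conditions
cons += [gen >= Pm, gen <= PM, I >= 0]                    # box and injection constraints
for k in range(h):                                        # storage limits
    stored = S0 + cp.sum(S[:, :k + 1], axis=1)
    cons += [stored >= Cm, stored <= CM]
for k in range(h - 1):                                    # ramp constraints
    cons += [gen[:, k + 1] - gen[:, k] <= Ru, gen[:, k + 1] - gen[:, k] >= -Rl]

prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()
print("optimal cost:", prob.value)
```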
Additionally, we assume that the [[DEDS]{}]{}problem satisfies the strong Slater condition with parameter $\rho \in {{\mathbb{R}}_{>0}}$ and feasible point $(I^\rho,S^\rho) \in {{\mathbb{R}}}^{2n{\mathfrak{h}}}$. \[re:storage-subset\] The [[DEDS]{}]{}formulation above can be modified to consider scenarios where only some DERs ${\mathcal{V}}_{gs}$ are equipped with storage and others ${\mathcal{V}}_{g}$ are not, with ${\{1,\dots,n\}} = {\mathcal{V}}_{gs} {\mathbin{\mathaccent\cdot\cup}}{\mathcal{V}}_{g}$. The formulation can also be extended to consider the cost of storage, inefficiencies, and constraints on (dis)charging of the storage units, as in [@AH-JM-HM-HD:13; @YZ-NG-GBG:13]. These factors either affect the constraint , add additional conditions on the storage variables, or modify the objective function. As long as the resulting cost and constraints are convex in $S$, all these can be treated within  without affecting the design methodology. Our aim is to design a distributed algorithm that allows the network interacting over ${\mathcal{G}}$ to solve the [[DEDS]{}]{}problem. Distributed algorithmic solution {#sec:main-result} ================================ We describe here the distributed algorithm that asymptotically finds the optimizers of the [[DEDS]{}]{}problem. Our design strategy builds on an alternative formulation of the optimization problem using penalty functions (cf. Section \[sec:main-result\]-A). This allows us to get rid of the inequality constraints, resulting into an optimization whose structure guides our algorithmic design (cf. Section \[sec:main-result\]-B). *A. Alternative formulation of the [[DEDS]{}]{}problem:* The procedure here follows closely the theory of exact penalty functions outlined in Section \[se:Prelim\]. For an $\epsilon \in {{\mathbb{R}}_{>0}}$, consider the modified cost function ${f^{{\epsilon}}:{{\mathbb{R}}}^{n{\mathfrak{h}}} \times {{\mathbb{R}}}^{n{\mathfrak{h}}} \rightarrow {{\mathbb{R}}_{\ge 0}}}$, $$\begin{aligned} f^{{\epsilon}}(I,& S) = f(I+S) + \frac{1}{{\epsilon}} \Bigl( \sum_{k = 1}^{{\mathfrak{h}}} {\mathbf{1}}_n^\top \bigl( [T^{(k)}_1]^+ + [T^{(k)}_2]^+ + [T^{(k)}_3]^+ \\ & + [T^{(k)}_4]^+ + [T^{(k)}_5]^+ \bigr) + \sum_{k=1}^{{\mathfrak{h}}-1} {\mathbf{1}}_n^\top \bigl( [T^{(k)}_6]^+ + [T^{(k)}_7]^+ \bigr) \Bigr),\end{aligned}$$ where $$\begin{aligned} \label{eq:T-defs} & T^{(k)}_1 = P^m - I^{(k)} - S^{(k)}, \, T^{(k)}_2 = I^{(k)} + S^{(k)} - P^M, \notag \\ & \textstyle T^{(k)}_3 = C^m - S^{(0)}- \sum_{k'=1}^k S^{(k')}, \notag \\ & \textstyle T^{(k)}_4 = S^{(0)} + \sum_{k'=1}^k S^{(k')} - C^M, \, \, T^{(k)}_5 = - I^{(k)}, \notag \\ &T^{(k)}_6 = -R^l - I^{(k+1)}- S^{(k+1)} + I^{(k)}+ S^{(k)}, \notag \\ &T^{(k)}_7 = I^{(k+1)} + S^{(k+1)} - I^{(k)}- S^{(k)} - R^u. \end{aligned}$$ This cost contains the penalty terms for all the inequality constraints of the [[DEDS]{}]{}problem. Note that $f^{\epsilon}$ is locally Lipschitz, jointly convex in $I$ and $S$, and regular. Thus, the partial generalized gradients $\partial_I f^{\epsilon}$ and $\partial_S f^{\epsilon}$ take nonempty, convex, compact values and are locally bounded and upper semicontinuous. Consider the modified [[DEDS]{}]{}problem \[eq:deds-m\] $$\begin{aligned} \mathrm{minimize} \quad & f^{{\epsilon}}(I,S), \label{eq:conobjective-ms} \\ \text{subject to} \quad & {\mathbf{1}}_n^\top I^{(k)} = l^{(k)}, \, \forall k \in {\mathcal{K}}. 
\label{eq:equalitycons-ms} \end{aligned}$$ The next result provides a criteria for selecting ${\epsilon}$ such that the modified [[DEDS]{}]{}and the [[DEDS]{}]{}problems have the exact same solutions. The proof is a direct application of Lemmas \[le:lagrange-bound\] and \[pr:EquivalenceExactPenalty\] using that the [[DEDS]{}]{}problem satisfies the strong Slater condition with parameter $\rho$ and feasible point $(I^\rho,S^\rho)$. \[le:equiv-ded\] Let $(I^*,S^*) \in {{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$. Then, the optimizers of the problems  and  are the same for ${\epsilon}\in {{\mathbb{R}}_{>0}}$ satisfying $$\begin{aligned} \label{eq:eps-bound} {\epsilon}< \frac{\rho}{f(I^\rho+S^\rho) - f(I^*+S^*)}. \end{aligned}$$ As a consequence, if ${\epsilon}$ satisfies  then, writing the Lagrangian and the KKT conditions for  gives the following characterization of the solution set of the [[DEDS]{}]{}problem $$\begin{aligned} \label{eq:solution-set} {{\mathcal{F}}_{\mathrm{DEDS}}^{*}}= & {\{(I,S) \in {{\mathbb{R}}}^{2n{\mathfrak{h}}} \; | \; {\mathbf{1}}_n^\top I^{(k)} = l^{(k)} \text{ for all } k \in {\mathcal{K}}, \notag \\ & 0 \in \partial_S f^{\epsilon}(I,S), \text{ and } \exists \nu \in {{\mathbb{R}}}^{\mathfrak{h}}\text{ such that }\notag \\ & (\nu^{(1)} {\mathbf{1}}_n; \dots ; \nu^{({\mathfrak{h}})} {\mathbf{1}}_n) \in \partial_I f^{\epsilon}(I,S)\}}.\end{aligned}$$ Recall that ${{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$ is bounded. Next, we stipulate a mild regularity assumption on this set which implies that perturbing it by a small parameter does not result into an unbounded set. This property is of use in our convergence analysis later. \[as:regular\] For $p \in {{\mathbb{R}}_{\ge 0}}$, define the map $p \mapsto {\mathcal{F}}(p) \subset {{\mathbb{R}}}^{2n{\mathfrak{h}}}$ as $$\begin{aligned} {\mathcal{F}}(p) = & {\{(I,S) \in {{\mathbb{R}}}^{2n{\mathfrak{h}}} \; | \; {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top I^{(k)} - l^{(k)}}\right\rvert}} \le p \text{ for all } k \in {\mathcal{K}}, \\ & 0 \in \partial_S f^{\epsilon}(I,S) + p B(0,1), \text{ and } \exists \nu \in {{\mathbb{R}}}^{\mathfrak{h}}\text{ such that } \\ & (\nu^{(1)} {\mathbf{1}}_n; \dots ; \nu^{({\mathfrak{h}})} {\mathbf{1}}_n) \in \partial_I f^{\epsilon}(I,S) + p B(0,1)\}}. \end{aligned}$$ Note that ${\mathcal{F}}(0) = {{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$. Then, there exists a $\bar{p} > 0$ such that ${\mathcal{F}}(p)$ is bounded for all $p \in [0,\bar{p})$. We end this section by stating a property of the generalized gradient of $f^{\epsilon}$ that will be employed later in the analysis. \[le:size-gen-gradient\] For $(I,S) \in {{\mathbb{R}}}^{2n{\mathfrak{h}}}$, any two elements $\zeta_1 \in \partial_I f^{\epsilon}(I,S)$ and $\zeta_2 \in \partial_S f^{\epsilon}(I,S)$ satisfy $$\begin{aligned} {\ensuremath{\| \zeta_1 - \zeta_2 \|}_{\infty}} \le \frac{{\mathfrak{h}}+4}{{\epsilon}}. \end{aligned}$$ Write $f^{\epsilon}(I,S) = f_a(I+S) + f_b(I) + f_c(S)$ where the functions $f_a,f_b,f_c: {{\mathbb{R}}}^{n{\mathfrak{h}}} \to {{\mathbb{R}}_{\ge 0}}$ are $$\begin{aligned} f_a(I+S) & = f(I+S) + \frac{1}{{\epsilon}} \Bigl(\sum_{k=1}^{\mathfrak{h}}{\mathbf{1}}_n^\top ( [T_1^{(k)}]^+ + [T_2^{(k)}]^+ ) \\ & \qquad \qquad \quad + \sum_{k=1}^{{\mathfrak{h}}-1} {\mathbf{1}}_n^\top ( [T_6^{(k)}]^+ + [T_7^{(k)}]^+) \Bigr), \\ f_b(I) & = \frac{1}{{\epsilon}} \sum_{k=1}^{\mathfrak{h}}{\mathbf{1}}_n^\top [T_5^{(k)}]^+, \\ f_c(S) & = \frac{1}{{\epsilon}} \sum_{k=1}^{\mathfrak{h}}{\mathbf{1}}_n^\top ([T_3^{(k)}]^+ + [T_4^{(k)}]^+). 
\end{aligned}$$ From the sum rule of generalized gradients [@JC:08-csm-yo], any element $\zeta_1 \in \partial_I f^{\epsilon}(I,S)$ can be expressed as a sum of the vectors $\zeta_{1,a}$ and $\zeta_{1,b} \in {{\mathbb{R}}}^{n{\mathfrak{h}}}$ such that $\zeta_{1,a} \in \partial f_a(I+S)$ and $\zeta_{1,b} \in \partial f_b(I)$. Similarly, $\zeta_2 = \zeta_{2,a} + \zeta_{2,c}$ where $\zeta_{2,a} \in \partial f_a(I+S)$ and $\zeta_{2,c} \in \partial f_c(S)$. By the definition of $f_b$, we get ${\ensuremath{\| \zeta_{1,b} \|}_{\infty}} \le \frac{1}{{\epsilon}}$. For the function $f_c$, note that for any $i \in {\{1,\dots,n\}}$ and any $k \in {\mathcal{K}}$, either $([T_3^{(k)}]^+)_i$ is zero or $([T_4^{(k)}]^+)_i$ is zero. Considering extreme case, if for a particular $i$, either $([T_3^{(k)}]^+)_i > 0$ or $([T_4^{(k)}]^+)_i > 0$ for all $k \in {\mathcal{K}}$ then, we obtain ${\ensuremath{\left\lvert{(\zeta_{2,c})_i^{(1)}}\right\rvert}} = \frac{{\mathfrak{h}}}{{\epsilon}}$. This implies that ${\ensuremath{\| \zeta_{2,c} \|}_{\infty}} \le \frac{{\mathfrak{h}}}{{\epsilon}}$. Now consider any two elements $\zeta_{1,a}, \zeta_{2,a} \in \partial f_a(I+S)$. Note that for any $i \in {\{1,\dots,n\}}$, either $([T_1^{(k)}]^+)_i$ is zero or $([T_2^{(k)}]^+)_i$ is zero. Similarly, either $([T_6^{(k)}]^+)_i$ or $([T_7^{(k)}]^+)_i$ is zero. Further, note that $I^{(k)}_i + S^{(k)}_i$ appears in $([T^{(k)}_6]^+)_i$ and $([T^{(k)}_7]^+)_i$ as well as in $([T^{(k-1)}_6]^+)_i$ and $([T^{(k-1)}_7]^+)_i$. At the same time, only two of these four terms are nonzero for any $k \in {\mathcal{K}}\setminus {{\mathfrak{h}}}$ and any $i \in {\{1,\dots,n\}}$. Using these facts one can obtain the bound ${\ensuremath{\| \zeta_{1,a}-\zeta_{2,a} \|}_{\infty}} \le \frac{3}{{\epsilon}}$. Finally, the proof concludes noting $$\begin{aligned} {\ensuremath{\| & \zeta_1 - \zeta_2 \|}_{\infty}} = {\ensuremath{\| \zeta_{1,a} + \zeta_{1,b} - \zeta_{2,a}- \zeta_{2,c} \|}_{\infty}} \\ & \qquad \le {\ensuremath{\| \zeta_{1,a}-\zeta_{2,a} \|}_{\infty}} + {\ensuremath{\| \zeta_{1,b} \|}_{\infty}} + {\ensuremath{\| \zeta_{2,c} \|}_{\infty}} = \frac{{\mathfrak{h}}+4}{{\epsilon}}. \quad \IEEEQED \end{aligned}$$ *B. The [`dac+`$({\mathsf{L}}\partial,\partial)$]{}coordination algorithm:* Here, we present our distributed algorithm and establish its asymptotic convergence to the set of solutions of the [[DEDS]{}]{}problem starting from any initial condition. Our design combines ideas of Laplacian-gradient dynamics [@AC-JC:15-tcns] and dynamic average consensus [@SSK-JC-SM:15-ijrnc]. Consider the set-valued dynamics, \[eq:dac-lap\] $$\begin{aligned} \dot I & \in -({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \partial_I f^{\epsilon}(I,S) + \nu_1 z, \label{eq:dac-lap-1} \\ \dot S & \in -\partial_S f^{\epsilon}(I,S), \label{eq:dac-lap-2} \\ \dot z & = -\alpha z - \beta ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) z - v + \nu_2(L \otimes e_r + \tilde{l} - I),\label{eq:dac-lap-3} \\ \dot v & = \alpha \beta ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) z,\label{eq:dac-lap-4} \end{aligned}$$ where $\alpha, \beta, \nu_2, \nu_2 \in {{\mathbb{R}}_{>0}}$ are design parameters and $e_r \in {{\mathbb{R}}}^n$ is the unit vector along the $r$-th coordinate. 
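To make the structure of this flow concrete, the following minimal sketch (an illustration added here, not part of the original development) integrates a forward-Euler discretization of the [`dac+`$({\mathsf{L}}\partial,\partial)$]{}dynamics in Python. It assumes quadratic generation costs $f_i(x)=a_i x^2+b_i x$, uses one particular selection from $\partial_I f^{\epsilon}$ and $\partial_S f^{\epsilon}$ built from the penalty terms $T_1,\dots,T_7$, and drives the $(z,v)$-component with the componentwise input $\nu_2(l^{(k)}e_r-I^{(k)})$ used later in the analysis. The digraph (a weight-balanced directed ring), the loads, the bounds, the penalty parameter ${\epsilon}$, and the gains are hypothetical values chosen only for illustration.

```python
import numpy as np

# --- hypothetical problem data (n units, h time slots) ---
n, h, r = 4, 3, 0                     # r: index of the unit that receives the load signal
L = np.eye(n) - np.roll(np.eye(n), -1, axis=1)   # out-Laplacian of a directed ring (weight-balanced)
a = np.array([1.0, 1.2, 0.8, 1.5])    # quadratic cost coefficients, f_i(x) = a_i x^2 + b_i x
b = np.array([2.0, 1.0, 1.5, 0.5])
l = np.array([6.0, 8.0, 7.0])         # aggregate load per slot, l^{(k)}
Pm, PM = 0.0, 4.0                     # injection bounds P^m, P^M (uniform, for brevity)
Cm, CM, S0 = 0.0, 2.0, 1.0            # storage bounds C^m, C^M and initial charge S^{(0)}
Rl = Ru = 2.0                         # ramp bounds R^l, R^u
eps = 0.05                            # penalty parameter (the lemma above gives a principled bound)
alpha, beta, nu1, nu2 = 2.0, 1.0, 0.5, 0.5   # gains chosen to satisfy the convergence condition below

def subgradients(I, S):
    """One selection of (partial_I f_eps, partial_S f_eps); arrays have shape (n, h)."""
    X = I + S
    g = 2.0 * a[:, None] * X + b[:, None]                            # gradient of the smooth cost f
    g += ((X > PM).astype(float) - (X < Pm).astype(float)) / eps     # T1, T2 box penalties on I + S
    d6 = ((X[:, :-1] - X[:, 1:] - Rl) > 0).astype(float)             # T6 active (down-ramp violated)
    d7 = ((X[:, 1:] - X[:, :-1] - Ru) > 0).astype(float)             # T7 active (up-ramp violated)
    g[:, :-1] += (d6 - d7) / eps
    g[:, 1:] += (d7 - d6) / eps
    zeta1 = g - (I < 0).astype(float) / eps                          # add the T5 = [-I]^+ term
    level = S0 + np.cumsum(S, axis=1)                                # stored charge after each slot
    act = (level > CM).astype(float) - (level < Cm).astype(float)    # T4 minus T3 indicators
    zeta2 = g + np.cumsum(act[:, ::-1], axis=1)[:, ::-1] / eps       # suffix sums give the T3, T4 terms
    return zeta1, zeta2

# --- forward-Euler integration of the dac+(L d, d) flow, starting in M_g (v(0) = 0) ---
e_r = np.zeros(n); e_r[r] = 1.0
I = np.zeros((n, h)); S = np.zeros((n, h)); z = np.zeros((n, h)); v = np.zeros((n, h))
dt = 1e-3
for _ in range(200000):
    zeta1, zeta2 = subgradients(I, S)
    dI = -L @ zeta1 + nu1 * z
    dS = -zeta2
    dz = -alpha * z - beta * (L @ z) - v + nu2 * (np.outer(e_r, l) - I)
    dv = alpha * beta * (L @ z)
    I, S, z, v = I + dt * dI, S + dt * dS, z + dt * dz, v + dt * dv

print("load mismatch per slot:", l - I.sum(axis=0))   # decays to (numerically) zero
print("total cost:", np.sum(a[:, None] * (I + S) ** 2 + b[:, None] * (I + S)))
```

In this lumped form the entire load signal enters through the $r$-th unit only, mirroring the role of $e_r$; in the formulation above each unit additionally injects its own local load $\tilde{l}_i$.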
This dynamics is an interconnected system with two parts: the $(I,S)$-component seeks to adjust the injection levels to satisfy the load profile and search for the optimizers of the [[DEDS]{}]{}problem, while the $(z,v)$-component corresponds to the dynamic average consensus part, with $z^{(k)}_i$ aiming to track, via dynamic average consensus, the network-wide mismatch between the load $l^{(k)} = L^{(k)} + {\mathbf{1}}_n^\top \tilde{l}^{(k)}$ and the total injection ${\mathbf{1}}_n^\top I^{(k)}$. Our terminology [`dac+`$({\mathsf{L}}\partial,\partial)$]{}dynamics to refer to  is motivated by this “dynamic average consensus in $(z,v)$ + Laplacian gradient in $I$ + gradient in $S$” structure. For convenience, we denote  by ${{X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}:{{\mathbb{R}}}^{4n{\mathfrak{h}}} \rightrightarrows {{\mathbb{R}}}^{4n{\mathfrak{h}}}}$. Note ${\mathrm{Eq}({X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}})} = {{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$ and since $\partial_I f^{\epsilon}$ and $\partial_S f^{\epsilon}$ are locally bounded, upper semicontinuous, and take nonempty convex compact values, the solutions of ${X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}$ exist starting from any initial condition (cf. Section \[se:Prelim\]). \[re:dist-imp\] Writing the $(z,v)$ dynamics componentwise, one can see that for each $i$ and each $k$, the values $(\dot z_i^{(k)},\dot v_i^{(k)})$ can be computed using the state variables $(z_i^{(k)}, \{z_j^{(k)}\}_{j \in {{N^{\textup{out}}}}(i)}, v_i^{(k)}, I_i^{(k)})$ only. Hence,  and  can be implemented in a distributed manner, where each unit only requires information from its out-neighbors. Moreover, $f^{\epsilon}$ can be written in the separable form $$\begin{aligned} f^{\epsilon}(I,S) = \sum_{i=1}^n f^{\epsilon}_i(I^{(1)}_i, \dots, I^{({\mathfrak{h}})}_i, S^{(1)}_i, \dots, S^{({\mathfrak{h}})}_i). \end{aligned}$$ Thus, if $\zeta_1 \in \partial_I f^{\epsilon}(I,S)$ and $\zeta_2 \in \partial_S f^{\epsilon}(I,S)$ then, for all $k \in {\mathcal{K}}$, $(\zeta_1)_i^{(k)},(\zeta_2)_i^{(k)} \in {{\mathbb{R}}}$ only depend on the state of unit $i$, i.e., $(I^{(1)}_i, \dots, I^{({\mathfrak{h}})}_i, S^{(1)}_i, \dots, S^{({\mathfrak{h}})}_i)$, and are computable by $i$. Hence, the $S$-dynamics can be implemented by each DER using only its own state and, to execute the $I$-dynamics, each unit $i$ needs information only from its out-neighbors. We next address the convergence analysis of . For convenience, let ${\mathfrak{M}_g}= {{\mathbb{R}}}^{n{\mathfrak{h}}} \times {{\mathbb{R}}}^{n{\mathfrak{h}}} \times {{\mathbb{R}}}^{n{\mathfrak{h}}} \times ({\mathcal{H}}_0)^{\mathfrak{h}}$ and ${\mathfrak{M}_o}= \prod_{k=1}^{\mathfrak{h}}{\mathcal{H}}_{l^{(k)}} \times {{\mathbb{R}}}^{n{\mathfrak{h}}} \times ({\mathcal{H}}_0)^{\mathfrak{h}}\times ({\mathcal{H}}_0)^{\mathfrak{h}}$. \[th:convergence\] Let ${{\mathcal{F}}_{\mathrm{DEDS}}^{*}}$ satisfy Assumption \[as:regular\], ${\epsilon}$ satisfy , and $\alpha, \beta, \nu_1, \nu_2 > 0$ satisfy $$\label{eq:alpha-beta-cond-n} \frac{\nu_1}{\beta \nu_2 \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top)} + \frac{\nu_2^2 \lambda_{\max}({\mathsf{L}}^\top {\mathsf{L}})}{2 \alpha} < \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top).$$ Then, any trajectory of  starting in ${\mathfrak{M}_g}$ converges to ${{\mathcal{F}}_{\mathrm{aug}}^*}$ $={\{(I,S,z,v) \in {{\mathcal{F}}_{\mathrm{DEDS}}^{*}}\times \{0\} \times {{\mathbb{R}}}^{n{\mathfrak{h}}} \; | \; v= \nu_2 (l \otimes e_r - I)\}}$. 
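Before proceeding to the proof of this result, note that the condition above involves the network only through $\lambda_2({\mathsf{L}}+{\mathsf{L}}^\top)$ and $\lambda_{\max}({\mathsf{L}}^\top{\mathsf{L}})$, so a candidate gain tuple $(\alpha,\beta,\nu_1,\nu_2)$ can be validated numerically before the dynamics is deployed. A small sketch, using a weight-balanced directed ring as a stand-in test graph (both the graph and the parameter values are hypothetical):

```python
import numpy as np

def condition_holds(L, alpha, beta, nu1, nu2):
    """Check nu1/(beta*nu2*lam2) + nu2^2*lam_max/(2*alpha) < lam2 for a weight-balanced digraph Laplacian L."""
    lam2 = np.sort(np.linalg.eigvalsh(L + L.T))[1]      # second-smallest eigenvalue of L + L^T
    lam_max = np.linalg.eigvalsh(L.T @ L).max()         # largest eigenvalue of L^T L
    return nu1 / (beta * nu2 * lam2) + nu2 ** 2 * lam_max / (2 * alpha) < lam2

n = 10
L = np.eye(n) - np.roll(np.eye(n), -1, axis=1)          # directed ring with unit weights
print(condition_holds(L, alpha=2.0, beta=1.0, nu1=0.02, nu2=0.2))   # True: modest gains
print(condition_holds(L, alpha=2.0, beta=1.0, nu1=0.5, nu2=0.5))    # False: nu_1 too aggressive here
```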
Our first step is to show that the $\omega$-limit set of any trajectory of  with initial condition $(I_0,S_0,z_0,v_0) \in {\mathfrak{M}_g}$ is contained in ${\mathfrak{M}_o}$. To this end, write  as $$\begin{aligned} \dot v^{(k)} = \alpha \beta {\mathsf{L}}z^{(k)} \quad \text{ for all } k \in {\mathcal{K}}. \end{aligned}$$ Note that ${\mathbf{1}}_n^\top \dot v^{(k)} = \alpha \beta {\mathbf{1}}_n^\top {\mathsf{L}}z^{(k)} = 0$ for all $k \in {\mathcal{K}}$ because ${\mathcal{G}}$ is weight-balanced. Therefore, the initial condition $v_0 \in ({\mathcal{H}}_0)^{\mathfrak{h}}$ implies that $v(t) \in ({\mathcal{H}}_0)^{\mathfrak{h}}$ for all $t \ge 0$ along any trajectory of  starting at $(I_0,S_0,z_0,v_0)$. Now, if $\zeta \in \partial_I f^{\epsilon}(I,S)$ then, from  and , we get for any $k \in {\mathcal{K}}$ $$\begin{aligned} \dot I^{(k)} & = - {\mathsf{L}}\zeta^{(k)} + \nu_1 z^{(k)}, \\ \dot z^{(k)} & = - \alpha z^{(k)} - \beta {\mathsf{L}}z^{(k)} - v^{(k)} + \nu_2 (l^{(k)} e_r - I^{(k)}). \end{aligned}$$ Let $\xi_k = {\mathbf{1}}_n^\top I^{(k)} - l^{(k)}$. Then, from the above equations we get $\dot \xi_k = {\mathbf{1}}_n^\top \dot I^{(k)} = \nu_1 {\mathbf{1}}_n^\top z^{(k)}$. Further, we have $$\begin{aligned} \ddot \xi_k & = \nu_1 {\mathbf{1}}_n^\top \dot z^{(k)} = - \alpha \nu_1 {\mathbf{1}}_n^\top z^{(k)} + \nu_1 \nu_2 (l^{(k)} - {\mathbf{1}}^\top I^{(k)}) \\ & = - \alpha \dot \xi_k - \nu_1 \nu_2 \xi_k, \end{aligned}$$ forming a second-order linear system for $\xi_k$. The LaSalle Invariance Principle [@HKK:02] with the function $\nu_1 \nu_2 {\ensuremath{\| \xi_k \|}}^2 + {\ensuremath{\| \dot \xi_k \|}}^2$ implies that as $t \to \infty$ we have $(\xi_k(t); \dot \xi_k(t)) \to 0$ and so ${\mathbf{1}}_n^\top I^{(k)}(t) \to l^{(k)}$ and ${\mathbf{1}}_n^\top z^{(k)}(t) \to 0$ as $t \to \infty$. Next, proceeding to the convergence analysis, consider the change of coordinates ${D:{{\mathbb{R}}}^{4n{\mathfrak{h}}} \rightarrow {{\mathbb{R}}}^{4n{\mathfrak{h}}}}$ defined by $$\begin{aligned} (I,S,\omega_1,\omega_2) & = D(I,S,z,v) \\ & = (I,S,z,v +\alpha z- \nu_2 (l \otimes e_r -I)) . \end{aligned}$$ In these coordinates, the set-valued map  takes the form $$\begin{aligned} {X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}&(I,S,\omega_1,\omega_2) = {\{( -({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}})\zeta_1 + \nu_1 \omega_1, -\zeta_2, \notag \\ & -\beta({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \omega_1 - \omega_2, \label{eq:rewrite-edp2-2} \\ & \nu_1 \nu_2 \omega_1 -\alpha \omega_2 - \nu_2 ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \zeta_1) \in {{\mathbb{R}}}^{4n{\mathfrak{h}}} \; | \; \notag \\ & \zeta_1 \in \partial_I f^{\epsilon}(I,S), \zeta_2 \in \partial_S f^{\epsilon}(I,S)\}} . \notag \end{aligned}$$ This transformation helps in identifying the LaSalle-type function for the dynamics. We now focus on proving that, in the new coordinates, the trajectories of  converge to $$\begin{aligned} {\overline{{\mathcal{F}}}_{\mathrm{aug}}}& = D({{\mathcal{F}}_{\mathrm{aug}}^*}) = {{\mathcal{F}}_{\mathrm{DEDS}}^{*}}\times \{0\} \times \{0\}. \end{aligned}$$ Note that $D({\mathfrak{M}_o}) = {\mathfrak{M}_o}$ and so, from the property of the $\omega$-limit set of trajectories above, we get that $t \mapsto (I(t),S(t),\omega_1(t),\omega_2(t))$ starting in $D({\mathfrak{M}_g})$ belongs to ${\mathfrak{M}_o}$. 
Next, we show the hypotheses of Proposition \[pr:refined-lasalle-nonsmooth\] are satisfied, where ${\mathfrak{M}_o}$ plays the role of ${\mathcal{S}}\subset {{\mathbb{R}}}^{4n{\mathfrak{h}}}$ and ${V:{{\mathbb{R}}}^{4n{\mathfrak{h}}} \rightarrow {{\mathbb{R}}_{\ge 0}}}$, $$\begin{aligned} V(I,S,\omega_1,\omega_2) = f^{\epsilon}(I,S) + \tfrac{1}{2}(\nu_1 \nu_2 {\ensuremath{\| \omega_1 \|}}^2 + {\ensuremath{\| \omega_2 \|}}^2). \end{aligned}$$ plays the role of $W$, resp. Let $(I,S,\omega_1,\omega_2) \in {\mathfrak{M}_o}$ then any element of ${{\mathcal{L}}}_{{X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}}V(I,S,\omega_1,\omega_2)$ can be written as $$\begin{aligned} \label{eq:liederivative-element} & - \zeta_1^\top ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \zeta_1 + \nu_1 \zeta_1^\top \omega_1 - {\ensuremath{\| \zeta_2 \|}}^2 - \beta \nu_1 \nu_2 \omega_1^\top ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \omega_1 \notag \\ & - \alpha {\ensuremath{\| \omega_2 \|}}^2 - \nu_2 \omega_2^\top ({\mathrm{I}}_{\mathfrak{h}}\otimes L) \zeta_1, \end{aligned}$$ where $\zeta_1 \in \partial_I f^{\epsilon}(I,S)$ and $\zeta_2 \in \partial_S f^{\epsilon}(I,S)$. Since the digraph ${\mathcal{G}}$ is strongly connected and weight-balanced, we use  and ${\mathbf{1}}_{n{\mathfrak{h}}}^\top \omega_1 = 0$ to bound the above expression as $$\begin{aligned} & - \tfrac{1}{2}\lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) {\ensuremath{\| \eta \|}}^2 + \nu_1 \eta^\top \omega_1 - {\ensuremath{\| \zeta_2 \|}}^2 \\ & - \tfrac{1}{2} \beta \nu_1 \nu_2 \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) {\ensuremath{\| \omega_1 \|}}^2 \notag - \alpha {\ensuremath{\| \omega_2 \|}}^2 - \nu_2 \omega_2^\top ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \eta \\ & = \gamma^\top M \gamma - {\ensuremath{\| \zeta_2 \|}}^2 , \end{aligned}$$ where $\eta = (\eta^{(1)}; \dots; \eta^{({\mathfrak{h}})})$ with $\eta^{(k)} = \zeta^{(k)} - \tfrac{1}{n} ({\mathbf{1}}_n^\top \zeta^{(k)}){\mathbf{1}}_n$, the vector $\gamma = (\eta; \omega_1; \omega_2)$, and the matrix $$\begin{aligned} M = \begin{bmatrix} -\tfrac{1}{2} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) {\mathrm{I}}_{n{\mathfrak{h}}} & B^\top \\ B & C \end{bmatrix}, \end{aligned}$$ with $B^\top = \begin{bmatrix} \tfrac{1}{2} \nu_1 {\mathrm{I}}_{n{\mathfrak{h}}} & - \tfrac{1}{2} \nu_2 ({\mathrm{I}}_{{\mathfrak{h}}} \otimes {\mathsf{L}})^\top \end{bmatrix}$, and $$\begin{aligned} C =\begin{bmatrix} -\tfrac{1}{2} \beta \nu_1 \nu_2 \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) {\mathrm{I}}_{n{\mathfrak{h}}} & 0 \\ 0 & -\alpha {\mathrm{I}}_{n{\mathfrak{h}}} \end{bmatrix}. \end{aligned}$$ Resorting to the Schur complement [@SB-LV:09], $M \in {{\mathbb{R}}}^{3n{\mathfrak{h}}\times 3n{\mathfrak{h}}}$ is neg. definite if $ -\tfrac{1}{2} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) {\mathrm{I}}_{n{\mathfrak{h}}} - B^\top C^{-1} B$, that equals $$\begin{aligned} -\tfrac{1}{2} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top){\mathrm{I}}_{n{\mathfrak{h}}} + \tfrac{\nu_1}{2 \beta \nu_2 \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top)} {\mathrm{I}}_{n{\mathfrak{h}}}+ \tfrac{\nu_2^2}{4\alpha} ({\mathrm{I}}_{{\mathfrak{h}}} \otimes {\mathsf{L}})^\top ({\mathrm{I}}_{{\mathfrak{h}}} \otimes {\mathsf{L}}) , \end{aligned}$$ is negative definite, which follows from . 
Hence, for any $(I,S,\omega_1,\omega_2) \in {\mathfrak{M}_o}$, we have $\max {{\mathcal{L}}}_{{X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}} V(I,S,\omega_1,\omega_2) \le 0$ and also $0 \in {{\mathcal{L}}}_{{X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}} V(I,S,\omega_1,\omega_2)$ iff $\eta = \zeta_2 = \omega_1 = \omega_2 =0$, which means $\zeta^{(k)} \in {\mathrm{span}}\{{\mathbf{1}}_n\}$ for each $k \in {\mathcal{K}}$. Consequently, using the characterization of optimizers in , we deduce that $(I,S)$ is a solution of  and so, $(I,S,\omega_1,\omega_2) \in {\overline{{\mathcal{F}}}_{\mathrm{aug}}}$. Since, ${\overline{{\mathcal{F}}}_{\mathrm{aug}}}$ belongs to a level set of $V$, we conclude that Proposition \[pr:refined-lasalle-nonsmooth\]\[as:refined-lasalle-1\] holds. Further, using [@AC-JC:14-auto Lemma A.1] one can show that Proposition \[pr:refined-lasalle-nonsmooth\]\[as:refined-lasalle-2\] holds too (we omit the details due to space constraints). To apply Proposition \[pr:refined-lasalle-nonsmooth\], it remains to show that the trajectories starting from $D({\mathfrak{M}_g})$ are bounded. We reason by contradiction. Assume there exists $t \mapsto (I(t), S(t), \omega_1(t),\omega_2(t))$, with $(I(0),S(0),\omega_1(0),\omega_2(0)) \in D({\mathfrak{M}_g})$, of ${X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}$ such that ${\ensuremath{\| (I(t),S(t),\omega_1(t),\omega_2(t) \|}} \to \infty$. Since $V$ is radially unbounded, this implies $V(I(t),S(t),\omega_1(t),\omega_2(t)) \to \infty$. Also, as established above, we know ${\mathbf{1}}_n^\top I^{(k)}(t) \to l^{(k)}$ and ${\mathbf{1}}_n^\top \omega_1^{(k)}(t) \to 0$ for each $k \in {\mathcal{K}}$. Thus, there exist times $\{t_m\}_{m=1}^{\infty}$ with $t_m \to \infty$ such that for all $m \in {\mathbb{Z}_{\geq 1}}$, $$\begin{aligned} \label{eq:xi-bound} {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \omega_1^{(k)}(t_m)}\right\rvert}} < {1}/{m} \text{ for all } k \in & {\mathcal{K}}, \\ \max {{\mathcal{L}}}_{{X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}} V(I(t_m),S(t_m),\omega_1(t_m), \omega_2(t_m)) & > 0. \notag \end{aligned}$$ The second inequality implies the existence of $\{\zeta_{1,m}\}_{m = 1}^{\infty}$ and $\{\zeta_{2,m}\}_{m=1}^{\infty}$ with $(\zeta_{1,m},\zeta_{2,m}) \in (\partial_I f^{{\epsilon}}(I(t_m),S(t_m)), \partial_S f^{{\epsilon}}(I(t_m),S(t_m)))$, such that $$\begin{aligned} - \zeta_{1,m}^\top & ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \zeta_{1,m} + \nu_1 \zeta_{1,m}^\top \omega_1(t_m) - {\ensuremath{\| \zeta_{2,m} \|}}^2 \notag \\ & - \beta \nu_1 \nu_2 \omega_1(t_m)^\top ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \omega_1(t_m) - \alpha {\ensuremath{\| \omega_2(t_m) \|}}^2 \notag \\ & - \nu_2 \omega_2(t_m)^\top ({\mathrm{I}}_{\mathfrak{h}}\otimes {\mathsf{L}}) \zeta_{1,m} > 0, \end{aligned}$$ for all $m \in {\mathbb{Z}_{\geq 1}}$, where we have used  to write an element of ${{\mathcal{L}}}_{{X_{\texttt{dac+}({\mathsf{L}}\partial,\partial)}}} V (I,S,\omega_1,\omega_2)$. 
Letting $\eta_m^{(k)} = \zeta_{1,m}^{(k)} - \tfrac{1}{n}({\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}) {\mathbf{1}}_n$, using , and using the relation ${\ensuremath{\| \omega_1^{(k)}(t_m) - \tfrac{1}{n} ({\mathbf{1}}_n^\top \omega_1^{(k)}(t_m)) {\mathbf{1}}_n \|}}^2 = {\ensuremath{\| \omega_1^{(k)}(t_m) \|}}^2 - \tfrac{1}{n} ({\mathbf{1}}_n^\top \omega_1^{(k)}(t_m))^2$, the above inequality can be rewritten as $$\begin{gathered} \label{eq:auxxx} \gamma_m^\top M \gamma_m + \tfrac{1}{n} \nu_1 \sum_{k\in {\mathcal{K}}} ({\mathbf{1}}_n^\top \zeta_{1,m}^{(k)})({\mathbf{1}}_n^\top \omega_1^{(k)}(t_m)) - {\ensuremath{\| \zeta_{2,m} \|}}^2 \\ + \tfrac{\beta \nu_1 \nu_2}{2 n } \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) \sum_{k \in {\mathcal{K}}} ({\mathbf{1}}_n^\top \omega_1^{(k)}(t_m))^2 > 0, \end{gathered}$$ with $\gamma_m = (\eta_{m}; \omega_1(t_m); \omega_2(t_m))$. Using  on , $$\begin{gathered} \label{eq:lie-bound} \gamma_m^\top M \gamma_m - {\ensuremath{\| \zeta_{2,m} \|}}^2 + \tfrac{\nu_1}{nm} \sum_{k \in {\mathcal{K}}} {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \\ + \tfrac{\beta \nu_1 \nu_2 {\mathfrak{h}}}{2 n m^2} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) > 0 \end{gathered}$$ for all $m \in {\mathbb{Z}_{\geq 1}}$. Next, we consider two cases, depending on whether the sequence $\{(I(t_m),S(t_m))\}_{m=1}^\infty$ is (a) bounded or (b) unbounded. In case (a), $\{(\omega_1(t_m),\omega_2(t_m))\}_{m=1}^\infty$ must be unbounded. Since $M$ is negative definite, we have $\gamma_m^\top M \gamma_m \le \lambda_{\max}(M) {\ensuremath{\| (\omega_1(t_m), \omega_2(t_m)) \|}}^2$. Thus, by  $$\begin{aligned} \label{eq:omega-inf-bound} \lambda_{\max} (M) {\ensuremath{\| (\omega_1(t_m),& \omega_2(t_m)) \|}}^2 + \tfrac{\nu_1}{nm} \sum_{k \in {\mathcal{K}}} {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \notag \\ & + \tfrac{\beta \nu_1 \nu_2 {\mathfrak{h}}}{2 n m^2} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) > 0. \end{aligned}$$ Since $\partial_I f^{\epsilon}$ is locally bounded and $\{(I(t_m),S(t_m))\}_{m=1}^\infty$ is bounded, we deduce $\{\zeta_{1,m}\}$ is bounded [@JBHU-CL:93 Proposition 6.2.2]. Combining these facts with $\lambda_{\max} (M) <0$ and ${\ensuremath{\| (\omega_1(t_m),\omega_2(t_m)) \|}} \to \infty$, one can find $\bar{m} \in {\mathbb{Z}_{\geq 1}}$ such that  is violated for all $m \ge \bar{m}$, a contradiction. Now consider case (b) where $\{(I(t_m),S(t_m))\}_{m=1}^\infty$ is unbounded. We divide this case further into two, based on the sequence $\bigl\{\sum_{k=1}^{\mathfrak{h}}{\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \bigr\}_{m=1}^\infty$ being bounded or not. Using $\gamma_m^\top M \gamma_m \le \lambda_{\max}(M){\ensuremath{\| \eta_m \|}}^2$, the inequality  implies $$\begin{gathered} \label{eq:state-bound} \lambda_{\max} (M) {\ensuremath{\| \eta_m \|}}^2 - {\ensuremath{\| \zeta_{2,m} \|}}^2 + \frac{\nu_1}{nm} \sum_{k=1}^{\mathfrak{h}}{\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \\ + \frac{\beta \nu_1 \nu_2 {\mathfrak{h}}}{2 n m^2} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top) > 0. \end{gathered}$$ Consider the case when $\bigl\{\sum_{k=1}^{\mathfrak{h}}{\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \bigr\}_{m=1}^\infty$ is unbounded. 
Partition ${\mathcal{K}}$ into disjoint sets ${\mathcal{K}}_u$ and ${\mathcal{K}}_b$ such that ${\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \to \infty$ for all $k \in {\mathcal{K}}_u$ and $\bigl\{{\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}}\bigr\}_{m=1} ^\infty$ is uniformly bounded for all $k \in {\mathcal{K}}_b$. For convenience, rewrite  as $\sum_{k = 1}^{\mathfrak{h}}U_{k,m} + \frac{Z_1}{m} > 0$, where $ Z_1 = \frac{\beta \nu_1 \nu_2 {\mathfrak{h}}}{2nm} \lambda_2({\mathsf{L}}+ {\mathsf{L}}^\top)$ and, for each $k \in {\mathcal{K}}$, $$\begin{gathered} U_{k,m} = \lambda_{\max}(M) {\ensuremath{\| \eta_m^{(k)} \|}}^2 - {\ensuremath{\| \zeta_{2,m}^{(k)} \|}}^2 + \frac{\nu_1}{nm} {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}}. \end{gathered}$$ By definition of ${\mathcal{K}}_b$, there exists $Z_2 > 0$ with $\sum_{k \in {\mathcal{K}}_b} U_{k,m} \le \frac{Z_2}{m}$. Hence, if  holds for all $m \in {\mathbb{Z}_{\geq 1}}$, then so is $$\begin{aligned} \sum_{k \in {\mathcal{K}}_u} U_{k,m} + \frac{Z_1 + Z_2}{m} > 0. \end{aligned}$$ Next we show that for each $k \in {\mathcal{K}}_u$ there exists $m_k \in {\mathbb{Z}_{\geq 1}}$ such that $U_{k,m} + \frac{Z_1 + Z_2}{m} < 0$ for all $m \ge m_k$. This will lead to the desired contradiction. Assume without loss of generality that ${\mathbf{1}}_n^\top \zeta_{1,m}^{(k)} \to \infty$ (reasoning for the case when the sequence approaches negative infinity follows analogously). Then, for $$\begin{aligned} \lambda_{\max}(M) {\ensuremath{\| \eta_m^{(k)} \|}}^2 - {\ensuremath{\| \zeta_{2,m} \|}}^2 + \frac{\nu_1}{nm} {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} + \frac{Z_1 + Z_2}{m} > 0, \end{aligned}$$ for all $m \in {\mathbb{Z}_{\geq 1}}$, we require $(\zeta_{1,m}^{(k)})_i \to \infty$ for all $i \in {\{1,\dots,n\}}$. Indeed, otherwise, recalling that $\eta_m^{(k)} = \zeta_{1,m}^{(k)} - \frac{1}{n}({\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}) {\mathbf{1}}_n$, it can be shown that there exist an $\bar{m}$ such that $$\begin{aligned} \lambda_{\max} {\ensuremath{\| \eta_m^{(k)} \|}}^2 < \frac{\nu_1}{nm} {\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} + \frac{Z_1 + Z_2}{m} \, \, \text{ for all } m \ge \bar{m}. \end{aligned}$$ Note that from Lemma \[le:size-gen-gradient\] we have ${\ensuremath{\| \zeta_{1,m}^{(k)} - \zeta_{2,m}^{(k)} \|}_{\infty}} \le \frac{{\mathfrak{h}}+4}{{\epsilon}}$ which further implies that $(\zeta_{2,m}^{(k)})_i \to \infty$ for all $i \in {\{1,\dots,n\}}$. With these facts in place, we write $$\begin{aligned} U_{k,m} + \frac{Z_1 + Z_2 }{m} & < - \sum_{i=1}^n (\zeta_{2,m}^{(k)})_i^2 + \frac{\nu_1}{m} {\ensuremath{\left\lvert{\sum_{i=1}^n (\zeta_{1,m}^{(k)})_i}\right\rvert}} \\ & \qquad + \frac{Z_1 + Z_2}{m} \end{aligned}$$ and deduce that there exists an $m_k \in {\mathbb{Z}_{\geq 1}}$ such that the right-hand side of the above expression is negative for all $m \ge m_k$, which is what we wanted to show. Finally, consider the case when the sequence $\bigl \{\sum_{k=1}^{\mathfrak{h}}{\ensuremath{\left\lvert{{\mathbf{1}}_n^\top \zeta_{1,m}^{(k)}}\right\rvert}} \bigr \}_{m=1}^\infty$ is bounded. For  to be true for all $m \in {\mathbb{Z}_{\geq 1}}$, we require ${\ensuremath{\| \gamma_m \|}} \to 0$ and ${\ensuremath{\| \zeta_{2,m} \|}} \to 0$ as $m \to \infty$. 
This further implies that $\eta_m \to 0$ and, from Assumption \[as:regular\], this is only possible if $\{(I(t_m),S(t_m))\}_{m=1}^\infty$ is bounded, which is a contradiction. \[re:storage-subset-re\] The [`dac+`$({\mathsf{L}}\partial,\partial)$]{}dynamics  can be modified to scenarios that include more general descriptions of storage capabilities, as in Remark \[re:storage-subset\]. For instance, if only a subset of units have storage capabilities, the only modification is to set the variables $\{S_i^{(k)}\}_{i \in {\mathcal{V}}_{g}, k\in {\mathcal{K}}}$ to zero and execute  only for the variables $\{S_i^{(k)}\}_{i \in {\mathcal{V}}_{gs}, k\in {\mathcal{K}}}$. The resulting strategy converges to the solution set of the corresponding [[DEDS]{}]{}problem. \[re:dist-selection\] The implementation of the [`dac+`$({\mathsf{L}}\partial,\partial)$]{}dynamics requires the selection of parameters $\alpha,\beta,\nu_1,\nu_2,{\epsilon}$ satisfying  and . Condition  involves knowledge of network-wide quantities, but the units can resort to various distributed procedures to collectively select appropriate values. Regarding , an upper bound on the denominator of the right-hand side can be computed aggregating, using consensus, the difference between the max and the min values that each DER’s aggregate cost function takes in its respective feasibility set (neglecting load conditions). The challenge for the units, however, is to estimate the parameter $\rho$ if it is not known a priori. Simulations {#sec:sims} =========== We illustrate the application of the [`dac+`$({\mathsf{L}}\partial,\partial)$]{}dynamics to solve the [[DEDS]{}]{}problem for a group of $n=10$ generators with communication defined by a directed ring with bi-directional edges $\{(1,5),(2,6),(3,7),(4,8)\}$ (all edge weights are $1$). The planning horizon is ${\mathfrak{h}}=6$ and the load profile consists of the external load $L = (1950, 1980, 2700, 2370, 1900,1850)$ and the load at each generator $i$ for each slot $k$ given by $\tilde{l}_i^{(k)} = 10i$. Thus, for each slot $k$, $\tilde{l}^{(k)} = \sum_{i=1}^{10} \tilde{l}_i^{(k)} = 550$ and so, $l = (2500, 2530, 3250, 2920, 2450, 2400)$. Generators have storage capacities determined by $C^M = 100 {\mathbf{1}}_n$ and $C^m = S^{(0)} = 5 {\mathbf{1}}_n$. The cost function of each unit is quadratic and constant across time. Table \[tb:Cost1\] details the cost function coefficients, generation limits, and ramp constraints, which are modified from the data for $39$-bus New England system [@RDQ-CEMS-RJT:11]. Figure \[fig:evolution\] illustrates the evolution of the total power injected at each time slot and the total cost incurred by the network, respectively. As established in Theorem \[th:convergence\] and shown in Figure \[fig:conv\], the total injection asymptotically converges to the load profile $l$, the total aggregate cost converges to the minimum $201092$ and the converged solution satisfies -. \ Conclusions =========== We have studied the [[DEDS]{}]{}problem for a group of generators with storage capabilities that communicate over a strongly connected, weight-balanced digraph. Using exact penalty functions, we have provided an alternative problem formulation, upon which we have built to design the distributed [`dac+`$({\mathsf{L}}\partial,\partial)$]{}dynamics. This dynamics provably converges to the set of solutions of the problem from any initial condition. 
For future work, we plan to extend the scope of our formulation to include power flow equations, constraints on the power lines, various losses, and stochasticity of the available data (loads, costs, and generator availability). We also intend to explore the use of our dynamics as a building block in solving grid control problems across different time scales (e.g., implementations at long time scales on high-inertia generators and at short time scales on low-inertia generators in the face of highly-varying demand) and hierarchical levels (e.g., in multi-layer architectures where aggregators at one layer coordinate their response to a request for power production, and feed their decisions as load requirements to the devices in lower layers). [^1]: Ashish Cherukuri and Jorge Cortés are with the Department of Mechanical and Aerospace Engineering, University of California, San Diego, `{acheruku,cortes}@ucsd.edu`. [^2]: A preliminary version appeared as [@AC-JC:15-cdc] at the IEEE Conference on Decision and Control.
--- abstract: 'The generation of atomic entanglement is discussed in a system in which atoms are trapped in separate cavities connected via optical fibers. Two distant atoms can be projected onto a Bell state by synchronously turning off the local laser fields and then performing a single quantum measurement by a distant controller. The distinct advantage of this scheme is that it works in the regime $\Delta\approx\kappa\gg g$, which makes the scheme insensitive to strong cavity leakage. Moreover, the fidelity is not affected by atomic spontaneous emission.' author: - 'Y. Q. Guo[^1], H. Y. Zhong, Y. H. Zhang' - 'H. S. Song' title: Remote generation of entanglement for individual atoms via optical fibers --- Very recently, much attention has been paid to the possibility of quantum information processing via optical fibers $^{[1,2]}$. Generating an entangled state of distant qubits is a basic aim of quantum computation. It has been pointed out that implementing a quantum entangling gate that works for spatially separated local processors connected by quantum channels is crucial in distributed quantum computation. Many schemes have been put forward to engineer entanglement of atoms trapped in separate optical cavities by creating direct or indirect interaction between them $^{[3-10]}$. Some of the schemes involve direct connection of separate cavities via optical fibers, while others rely on detection of the photons leaking from the cavities. All the implemented quantum gates work in a probabilistic way. To improve the corresponding success probability and fidelity, one must construct precisely controlled coherent evolutions of the global system and weaken the effect of photon detection inefficiency. In the system considered by Serafini et al. $^{[5]}$, the only required local control is synchronized switching on and off of the atom-field interaction in the distant cavities. In the scheme proposed by Mancini and Bose $^{[11]}$, a direct interaction between two atoms trapped in distant cavities is engineered; the only control required for implementing the quantum entangling gate is turning off the interaction between the atoms and the locally applied laser fields. In the present letter, we propose an alternative scheme with particular focus on the establishment of three-qubit entanglement, which is suitable and effective for the generation of three-atom W-type states and two-atom Bell states. To generate a three-atom W-type state, the only control required is synchronized turning off of the locally applied laser fields. To generate a two-atom Bell state, an additional quantum measurement performed on one of the atoms is needed. We demonstrate that the scheme works with a high success probability and that the atomic spontaneous emission does not affect the fidelity. The schematic setup of the system is shown in Fig. 1. Three two-level atoms 1, 2 and 3 are located in separate optical cavities $C_{1}$, $C_{2}$ and $C_{3}$, respectively. The cavities are assumed to be single-sided. Three off-resonant driving external fields $\varepsilon _{1}$, $\varepsilon _{2}$ and $\varepsilon _{3}$ are applied to $C_{1}$, $C_{2}$ and $C_{3}$, respectively. In each cavity, a local weak laser field is applied to resonantly interact with the atom. Two neighboring cavities are connected via an optical fiber. The global system is located in vacuum. Using the input-output theory, taking the adiabatic approximation $^{[12]}$ and applying the methods developed in Refs. 
\[11\] and \[13\], we obtain the effective Hamiltonian of the global system as $$\begin{aligned} H_{eff}=J_{12}\sigma _{1}^{z}\sigma _{2}^{z}+J_{23}\sigma _{2}^{z}\sigma _{3}^{z}+J_{31}\sigma _{3}^{z}\sigma _{1}^{z}+\Gamma\sum\limits_{i}(\sigma _{i}^{-}+\sigma _{i}^{+}),\end{aligned}$$ where $\sigma_{i}^{z}$ and $\sigma_{i}^{+} (\sigma_{i}^{-})$, $i=1,2,3$, are spin and spin raising (lowering) operators of atom $i$, $\Gamma$ represents the local laser field added on the atom. To keep the validity of adiabatic approximation, we assume $\Gamma \ll J_{12}(J_{23},J_{31})$. And $$\begin{aligned} J_{12} &=&2\kappa\chi ^{2}Im\left\{ \alpha _{1} \alpha _{2} ^{\ast }(Me^{i\phi _{21}}+\kappa e^{i\phi_{32}+\phi_{13}})/(M^{3}-W^{3})\right\}, \nonumber \\ J_{23} &=&2\kappa \chi ^{2}Im\left\{ \alpha _{2} \alpha _{3} ^{\ast }(Me^{i\phi _{32}}+\kappa e^{i\phi_{13}+\phi_{21}})/(M^{3}-W^{3})\right\}, \nonumber \\ J_{31} &=&2\kappa \chi ^{2}Im\left\{ \alpha _{3} \alpha _{1} ^{\ast }(Me^{i\phi _{13}}+\kappa e^{i\phi_{21}+\phi_{32}})/(M^{3}-W^{3})\right\},\end{aligned}$$ where $\kappa$ is the cavity leaking rate, $\chi=\frac{g^2}{\Delta}$, $g$ is the coupling strength between atom and cavity field, $\Delta$ is the detuning. In deducing Eq. (1), the condition $\Delta\approx\kappa\gg g$ is assumed, $M=i\Delta+\kappa$, $W^{3}=\kappa ^{3}e^{i(\phi _{21}+\phi _{32}+\phi _{13})}$. The phase factors $\phi _{21}$, $\phi _{32}$, and $\phi _{13}$ are the phases delay caused by the photon transmission along the optical fibers. And $$\begin{aligned} \alpha_{1}&=&\frac{M^2\varepsilon_{1}+\kappa^{2}e^{i(\phi_{32}+\phi_{13})} \varepsilon_{2}+M\kappa e^{i\phi_{13}}\varepsilon_{3}}{M^{3}-W^3}, \nonumber\\ \alpha_{2}&=&\frac{M^2\varepsilon_{2}+\kappa^{2}e^{i(\phi_{13}+\phi_{21})} \varepsilon_{3}+M\kappa e^{i\phi_{21}}\varepsilon_{1}}{M^{3}-W^3}, \nonumber\\ \alpha_{3}&=&\frac{M^2\varepsilon_{3}+\kappa^{2}e^{i(\phi_{21}+\phi_{32})} \varepsilon_{1}+M\kappa e^{i\phi_{32}}\varepsilon_{2}}{M^{3}-W^3},\end{aligned}$$ We assume that $\varepsilon_{1}=\varepsilon_{2}=\varepsilon_{3}=\varepsilon_{0}$, $\phi _{21}=\phi _{32}=\phi _{13}=\phi _{0}$. This leads to $$\begin{aligned} \alpha_{1}=\alpha_{2}=\alpha_{3}=\alpha_{0},\nonumber\\ J_{12}=J_{23}=J_{31}=J_{0}.\end{aligned}$$ The Hamiltonian in Eq. (1) is now written as $$\begin{aligned} H_{eff}=H_{zz}+H_{x},\end{aligned}$$ where $$\begin{aligned} H_{zz}=J_{0}(\sigma _{1}^{z}\sigma _{2}^{z}+\sigma _{2}^{z}\sigma _{3}^{z}+\sigma _{3}^{z}\sigma _{1}^{z}), H_{x}=\Gamma\sum\limits_{i}(\sigma _{i}^{-}+\sigma _{i}^{+}).\end{aligned}$$ Eq. (5) represents the Hamiltonian of an Ising ring model. The entanglement of the ground state of the above Hamiltonian has already been discussed $^{[14]}$. Here, we study the entanglement of the evolved system state governed by the Hamiltonian. Under the condition $\Gamma\ll J_{0}$, the secular part of the effective Hamiltonian can be obtained through the transformation $UH_{x}U^{-1}$, $U=e^{-iH_{zz}t}$, as $^{[15]}$ $$\begin{aligned} \tilde{H}=\frac{\Gamma}{2}[\sigma_{1}^{x}(1-\sigma_{2}^{z}\sigma_{3}^{z}) +\sigma_{2}^{x}(1-\sigma_{3}^{z}\sigma_{1}^{z}) +\sigma_{3}^{x}(1-\sigma_{1}^{z}\sigma_{2}^{z})].\end{aligned}$$ The straight forward interpretation of this Hamiltonian is: the spin of an atom in the Ising ring flips *if and only if* its two neighbors have opposite spins. 
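For orientation, the reduction of Eqs. (2)–(4) to a single coupling $J_0$ and drive amplitude $\alpha_0$ is easy to evaluate numerically. The sketch below does this for hypothetical parameter values in the regime $\Delta\approx\kappa\gg g$ assumed above (the numbers are illustrative only, not taken from the text), and shows how a local drive $\Gamma$ can then be chosen with $\Gamma\ll J_0$ so that the secular Ising-ring picture of Eq. (7) applies.

```python
import numpy as np

# Hypothetical rates (in units of g): large detuning and cavity decay, Delta ~ kappa >> g
g, Delta, kappa = 1.0, 20.0, 20.0
phi0 = 0.3                       # common fiber phase delay, phi_21 = phi_32 = phi_13
eps0 = 1.0                       # common driving amplitude, eps_1 = eps_2 = eps_3

chi = g**2 / Delta               # dispersive coupling chi = g^2 / Delta
M = 1j * Delta + kappa           # M = i*Delta + kappa
W3 = kappa**3 * np.exp(3j * phi0)                       # W^3 = kappa^3 e^{i(phi_21+phi_32+phi_13)}

# Eq. (3) with equal drives and phases: alpha_1 = alpha_2 = alpha_3 = alpha_0
alpha0 = eps0 * (M**2 + kappa**2 * np.exp(2j * phi0) + M * kappa * np.exp(1j * phi0)) / (M**3 - W3)

# Eq. (2) with equal phases and amplitudes: J_12 = J_23 = J_31 = J_0
J0 = 2 * kappa * chi**2 * np.imag(
    abs(alpha0)**2 * (M * np.exp(1j * phi0) + kappa * np.exp(2j * phi0)) / (M**3 - W3))

Gamma = 0.05 * abs(J0)           # local laser drive chosen so that Gamma << J_0
print("|alpha_0|, J_0, Gamma/J_0 =", abs(alpha0), J0, Gamma / abs(J0))
```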
For the initial states that one or two of the atoms are excited, the system state is restricted within the subspace spanned by the following basis vectors $$\begin{aligned} |\phi_{1}\rangle&=&|egg\rangle, |\phi_{2}\rangle=|eeg\rangle, |\phi_{3}\rangle=|geg\rangle,\nonumber \\ |\phi_{4}\rangle&=&|gee\rangle, |\phi_{5}\rangle=|gge\rangle, |\phi_{6}\rangle=|ege\rangle.\end{aligned}$$ The Hamiltonian in Eq. (7) can be written as $$\begin{aligned} \tilde{H}=\left(\begin{array}{cccccc} 0 & \Gamma & 0 & 0 & 0 & \Gamma\\ \Gamma & 0 & \Gamma & 0 & 0 & 0\\ 0 & \Gamma & 0 & \Gamma & 0 & 0\\ 0 & 0 & \Gamma & 0 & \Gamma & 0\\ 0 & 0 & 0 & \Gamma & 0 & \Gamma\\ \Gamma & 0 & 0 & 0 & \Gamma & 0 \end{array}\right).\end{aligned}$$ The eigenvalues of the Hamiltonian can be obtained as $E_{12}=\pm\Gamma, E_{34}=\pm\Gamma, E_{56}=\pm2\Gamma$, and the corresponding eigenvectors are $$\begin{aligned} |\psi_{12}\rangle&=&\frac{1}{2}(-|\phi_{1}\rangle\mp|\phi_{2}\rangle\pm|\phi_{4}\rangle+|\phi_{5}\rangle),\nonumber \\ |\psi_{34}\rangle&=&\frac{1}{2}(\pm|\phi_{1}\rangle\mp|\phi_{3}\rangle-|\phi_{4}\rangle+|\phi_{6}\rangle),\nonumber \\ |\psi_{56}\rangle&=&\frac{1}{\sqrt{6}}(|\phi_{1}\rangle\pm|\phi_{2}\rangle+|\phi_{3}\rangle\pm|\phi_{4}\rangle+|\phi_{5}\rangle\pm|\phi_{6}\rangle).\end{aligned}$$ For initial system state $|\Psi(0)\rangle=\sum\limits_{i}c_{i}(0)|\phi_{i}\rangle$, the evolving system state can be written as $|\Psi(t)\rangle=\sum\limits_{i}c_{i}(t)|\phi_{i}\rangle$, where the coefficients $c_{i}(t)$ are given by $^{[8]}$ $$\begin{aligned} c_{i}(t)=\sum\limits_{j}[S^{-1}]_{ij}[Sc(0)]_{j}e^{-iE_{j}t},\end{aligned}$$ where $c(0)=[c_{1}(0),c_{2}(0),c_{3}(0),c_{4}(0),c_{5}(0),c_{6}(0)]^{T}$, and $S$ is the $6\times6$ unitary transformation matrix between eigenvectors and basis vectors. Now we discuss the evolving system state for initial state that only one atom is excited, that is, $|\Psi(0)\rangle=|\phi_{1}\rangle$, which leads to $c(0)=[1,0,0,0,0,0]^{T}$, we can obtain $$\begin{aligned} c_{1}(t)&=&\frac{2}{3}\textrm{cos}\Gamma t+\frac{1}{3}\textrm{cos}2\Gamma t,\nonumber\\ c_{2}(t)&=&-\frac{1}{3}\textrm{sin}\Gamma t-\frac{1}{3}\textrm{sin}2\Gamma t,\nonumber\\ c_{3}(t)&=&-\frac{1}{3}\textrm{cos}\Gamma t+\frac{1}{3}\textrm{cos}2\Gamma t,\nonumber\\ c_{4}(t)&=&\frac{2}{3}\textrm{sin}\Gamma t-\frac{1}{3}\textrm{sin}2\Gamma t,\nonumber\\ c_{5}(t)&=&-\frac{1}{3}\textrm{cos}\Gamma t+\frac{1}{3}\textrm{cos}2\Gamma t,\nonumber\\ c_{6}(t)&=&-\frac{1}{3}\textrm{sin}\Gamma t-\frac{1}{3}\textrm{sin}2\Gamma t.\end{aligned}$$ It should be noted that the Hamiltonian in Eq. (6) or Eq. (7) remains invariant under the permutation of atoms 1, 2 and 3. We also note that the initial state $|egg\rangle$ has exchange symmetry for atoms 2 and 3. So, there is no doubt that $c_{5}(t)\equiv c_{3}(t)$ and $c_{6}(t)\equiv c_{2}(t)$. Eqs. (12) lead to an resolvable analyzing of the three-atom or two-atom entanglement nature of the involving system state. 
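The coefficients in Eq. (12), and the special times discussed below, can be checked directly by exponentiating the $6\times 6$ Hamiltonian of Eq. (9). A short numerical sketch (illustrative only; it uses the basis ordering of Eq. (8) and sets $\Gamma=1$):

```python
import numpy as np

Gamma = 1.0
# Eq. (9): in the ordered basis |phi_1>,...,|phi_6> the secular Hamiltonian couples
# nearest neighbours on a six-site ring with strength Gamma.
H = np.zeros((6, 6))
for a in range(6):
    H[a, (a + 1) % 6] = H[(a + 1) % 6, a] = Gamma

def amplitudes(t, c0=np.eye(6)[0]):
    """c(t) = exp(-i H t) c(0) via the spectral decomposition, cf. Eq. (11)."""
    E, U = np.linalg.eigh(H)
    return U @ (np.exp(-1j * E * t) * (U.conj().T @ c0))

# Population of |phi_1> = |egg> from the closed form of Eq. (12)
t = np.linspace(0.0, 2 * np.pi, 7)
c1_closed = (2 / 3) * np.cos(Gamma * t) + (1 / 3) * np.cos(2 * Gamma * t)
c_num = np.array([amplitudes(ti) for ti in t])
print(np.allclose(np.abs(c_num[:, 0]) ** 2, c1_closed ** 2))   # populations of |egg> agree

# At Gamma*t_1 = pi (cf. Eq. (16)): atom 1 is found in |g> with probability 8/9;
# |geg>, |gee>, |gge> are the basis vectors with atom 1 in the ground state.
c_t1 = amplitudes(np.pi / Gamma)
print(np.abs(c_t1) ** 2)                      # ~ (1/9, 0, 4/9, 0, 4/9, 0)
print(np.sum(np.abs(c_t1[[2, 3, 4]]) ** 2))   # ~ 0.888..., i.e., 8/9
```

The populations at $\Gamma t=\pi$ come out as $(1/9,0,4/9,0,4/9,0)$, so a measurement finding atom 1 in its ground state occurs with probability $8/9$, as used in the discussion below.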
The entanglement of three-partite pure states can be measured by intrinsic three-partite entanglement which is defined as $^{[16]}$ $$C_{abc}=C_{a(bc)}-C_{ab}^{2}-C_{ac}^{2},$$ where $C_{a(bc)}$, which represents the tangle between a subsystem $a$ and the rest of the global system (denoted as $b, c$), is represented as $$C_{a(bc)}=4\textrm{Det}\rho_{a}=2(1-\textrm{Tr}\rho _{a}^{2}).$$ and $C_{ab}$ ($C_{ac}$) is the well known Concurrence that is used for entanglement measurement of qubits $a$ and $b$ ($a$ and $c$) $^{[17]}$ $$\begin{aligned} C_{ab}=C(\rho_{ab})=\max \{0,\lambda _{1}-\lambda _{2}-\lambda _{3}-\lambda _{4}\},\end{aligned}$$ where $\rho_{ab}$ is the reduced density matrix of qubits $a$ and $b$, $\lambda _{1}, \lambda_{2}, \lambda_{3}$ and $\lambda_{4}$ are four non-negative square roots of the eigenvalues of the non-Hermitian matrix $\rho_{ab}(\sigma _{y}\otimes \sigma _{y})\rho_{ab}^{\ast }(\sigma _{y}\otimes \sigma _{y})$ in decreasing order. The entanglement is described in Fig. 2. The solid line represents three-atom intrinsic entanglement, the dotted line represents the entanglement of atoms $2$ and $3$, and the dashed line represents the tangle between atom 2 and the rest two atoms. In most region of the time interval $(0,2)$, the tangle between atom 2 and the rest two atoms does not alter too much. It seems that two-atom entanglement makes the largest contribution to the variety of three-atom entanglement, since the three-atom entanglement is expressed by the difference between the tangle and the two-atom entanglement \[see Eq. (13)\]. The peak entanglement of atoms 2 and 3 is much larger than that of atoms 1 and 2. These may suggest the following physical picture: for the initial state only one atom is excited, the interaction between distant atoms can generate strong and relatively steady entanglement shared by one atom and the rest. Also, the interaction can cause strong entanglement shared by any two atoms, while only the atoms that are initially in ground state share the largest two-atom entanglement. In detail, two-atom entanglements $C_{12}$ and $C_{23}$ approach their maximum at $\Gamma t_{1}=(2k+1)\pi$ ($k=0,1,2,3\cdots$), where three-atom entanglement $C_{123}$ turns out to be zero. It is clearly shown in Fig. 2 that $C_{23}$ is always larger than $C_{12}$ in the whole region. In fact, it can be analytically proved that $C_{23}=2C_{12}$ at $\Gamma t_{1}$. At $\Gamma t_{2,3}=(2k+1)\pi\pm\frac{1}{3}\pi$, three-atom entanglement $C_{123}$ periodically reaches a maximum, the corresponding two-atom entanglement is zero. At the points, the initial state evolves into the following states $$\begin{aligned} |\Psi(t_{1})\rangle&=&-\frac{1}{3}|egg\rangle+\frac{2\sqrt{2}}{3}|\Psi_{123}\rangle,\nonumber \\ |\Psi(t_{2,3})\rangle&=&-\frac{1}{2}|egg\rangle\mp\frac{\sqrt{3}}{2}|gee\rangle,\end{aligned}$$ where $|\Psi_{123}\rangle=|g\rangle_{1}(|eg\rangle_{23}+|ge\rangle_{23})/\sqrt{2}=|g\rangle_{1}|\Psi^{+}\rangle_{23}$. We can name $|\Psi_{123}\rangle$ as a Bell-correlated state. In Eq. (16), $|\Psi(t_{1})\rangle$ is a combination of the initial state and a Bell-correlated state, also it is a W-type state. $|\Psi(t_{2,3})\rangle$ are linear combinations of the initial state and a state with atomic population inverse with respect to the initial state. The results imply possible applications in practical distant quantum communication. For example, it can be applied in the preparation of maximally entangled state of distant atoms, and thus acts as an atomic entangling gate. 
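Equations (13)–(15) are straightforward to evaluate for the evolved state. The following sketch (illustrative only) applies them to $|\Psi(t_{1})\rangle$ of Eq. (16), the state reached at $\Gamma t_{1}=\pi$, and reproduces the behaviour quoted above: the pairwise entanglements peak there with $C_{23}=2C_{12}$, while the three-atom entanglement $C_{123}$ vanishes.

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix, Eq. (15)."""
    sy = np.array([[0, -1j], [1j, 0]])
    YY = np.kron(sy, sy)
    R = rho @ YY @ rho.conj() @ YY
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R))))[::-1]   # decreasing order
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# |Psi(t_1)> = -1/3 |egg> + 2/3 |geg> + 2/3 |gge>, with |e> -> index 0 and |g> -> index 1
psi = np.zeros(8, dtype=complex)
psi[0b011] = -1 / 3      # |egg>
psi[0b101] = 2 / 3       # |geg>
psi[0b110] = 2 / 3       # |gge>

rho = np.outer(psi, psi.conj()).reshape([2] * 6)        # indices (i1, i2, i3, j1, j2, j3)
rho12 = np.trace(rho, axis1=2, axis2=5).reshape(4, 4)   # trace out atom 3
rho13 = np.trace(rho, axis1=1, axis2=4).reshape(4, 4)   # trace out atom 2
rho23 = np.trace(rho, axis1=0, axis2=3).reshape(4, 4)   # trace out atom 1
rho1 = np.trace(rho12.reshape(2, 2, 2, 2), axis1=1, axis2=3)   # reduced state of atom 1

C12, C13, C23 = concurrence(rho12), concurrence(rho13), concurrence(rho23)
tangle1 = 4 * np.linalg.det(rho1).real                  # C_{1(23)} = 4 Det(rho_1), Eq. (14)
C123 = tangle1 - C12**2 - C13**2                        # Eq. (13) with a = 1

print(C12, C13, C23)      # ~ 0.444, 0.444, 0.889, so C_23 = 2 C_12
print(tangle1, C123)      # ~ 0.395 and ~ 0: the three-atom entanglement vanishes at t_1
```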
In this case, we assume Alice, Bob, and Charles hold atoms 1, 2, and 3, respectively. To do this, Alice, Bob, and Charles synchronously turn off their locally applied laser fields at $t_{1}$. Now, they together share the W-type state $|\Psi(t_{1})\rangle$. Then Alice performs a $\sigma^{z}$ measurement on her atom. She finds her atom in the ground state with probability $\frac{8}{9}$, in which case the atoms held by Bob and Charles are projected onto the Bell state $|\Psi^{+}\rangle_{23}$; otherwise, with probability $\frac{1}{9}$, the system is projected back onto the initial state $|egg\rangle$. The advantage of the scheme is that neither Bob nor Charles needs any measurement to entangle their atoms. All that is required, after the locally applied laser fields are turned off, is a single $\sigma^{z}$ measurement performed by Alice. Thus, a maximally entangled two-atom state can be generated by remote operation. Notably, the measurement performed by Alice does not damage the initial state of the global system if she fails to entangle the other two atoms. Here, Alice can be regarded as a distant controller, and her atom acts as a control qubit. In this process, the main obstacles are the spontaneous emission of the atoms and the leakage of the optical fibers. We first investigate the effect of atomic spontaneous emission. The evolution of the global system is now described by the non-Hermitian conditional Hamiltonian $H_{s}=-i\gamma\sum \limits_{i}|e\rangle_{i}\langle e|+\tilde{H}$ $^{[3]}$, where $\gamma$ denotes the atomic spontaneous emission rate. In Fig. 3, we plot the success probability $P$ of preparing the Bell state $|\Psi^{+}\rangle_{23}$ with respect to time for different atomic spontaneous emission rates: $\gamma=0.001\Gamma$, $\gamma=0.002\Gamma$, and $\gamma=0.01\Gamma$. The success probability is clearly sensitive to the atomic spontaneous emission: the maximum probability drops to $0.881$, $0.872$, and $0.809$, respectively. However, for any $\gamma$, the corresponding fidelity is not affected, since $c_{3}(t)\equiv c_{5}(t)$ (recall that both the Hamiltonian and the initial state remain invariant under the permutation of atoms 2 and 3). The dissipation due to photon leakage along the optical fibers can be included in the spin-spin coupling coefficients by the replacement $e^{i\phi _{ij}}\rightarrow e^{i\phi _{ij}-\nu L}$, where $\nu$ is the decay per meter and $L$ is the length of the optical fiber between atoms $i$ and $j$. For typical fibers $^{[18]}$, the decay per meter is $\nu=0.08$. The spin-spin coupling coefficient is then about $90\%$ of that in Eq. (6), and the adiabatic approximation $\Gamma \ll J_{0}$ is still fully maintained, so the entangling gate still works with high fidelity. Another dissipation channel is cavity leakage. However, in the adiabatic approximation we have assumed $\Delta \approx \kappa \gg g$, so the entanglement is insensitive to variations of this strong leakage rate. In summary, we have discussed the remote generation of atomic entanglement in a system containing three distant atoms, for the initial state in which only one atom is excited. The atoms that are initially in the ground state share the largest two-atom entanglement. Two-atom entanglement turns out to make the largest contribution to the variation of the three-atom entanglement. 
In an application of preparing entangled state of two atoms, a quantum measurement of $\sigma^{z}$ performed on the atom that is initially excited at typical time is required after synchronized turning off the locally applied laser fields. The success probability that two atoms are prepared in Bell-state $|\Psi^{+}\rangle _{23}$ can approach $\frac{8}{9}$. The distinct advantage of this scheme lies in the large detuning and large cavity leakage, that is $\Delta\approx\kappa\gg g$ which loosens the requirement of cavity dissipation. Furthermore, we show that the fidelity of the scheme is not affected by the atomic spontaneous emission. We think this scheme may work as a candidate for scalable long-distance quantum communication or one-way quantum computation $^{[3]}$. This work is supported by NSF of China under Grant Nos. 10647107 and 10575017. [99]{} Moehring D L, Maunz P, Olmschenk S, Younge K C, Matsukevich D N, Duan L M and Monroe C 2007 *Nature* **449** 68 Rosenfeld W, Berner S, Volz J, Weber M and Weinfurter H 2007 *Phys. Rev. Lett.* **98** 050504 Cho J and Lee H W 2005 *Phys. Rev. Lett.* **95** 160501 Razavi M and Shapiro J H 2006 *Phys. Rev. A* **73** 042303 Serafini A, Mancini S and Bose S 2006 *Phys. Rev. Lett.* **96** 010503 Zheng S B and Guo G C 2006 *Phys. Rev. A* **73** 032329 Duan L M, Madsen M J, Moehring D L, Maunz P, Kohn R N and Monroe C 2006 *Phys. Rev. A* **73** 062324 Yin Z Q and Li F L 2007 *Phys. Rev. A* **75** 012324 Lu D M and Zheng S B 2007 *Chin. Phys. Lett.* **24** 596 Ou Y C, Yuan C H and Zhang Z M 2006 *J. Phys. B: At. Mol. Opt. Phys.* **39** 7 Mancini S and Bose S 2004 *Phys. Rev. A* **70** 022307 Walls D F and Milburn G J 1994 *Quantum Optics* (Berlin: Springer)chap 7 p121 Guo Y Q, Chen J and Song H S 2006 *Chin. Phys. Lett.* **23** 1088 Štemlmachovič P and Bužek V 2004 *Phys. Rev. A* **70** 032313 Lee J S and Khitrin A K 2005 *Phys. Rev. A* **71** 062338 Coffman V, Kundu J and Wootters W K 2000 *Phys. Rev. A* **61** 052306 Wootters W K 1998 *Phys. Rev. Lett.* **80** 2245 Tittel W, Brendel J, Gisin B, Herzog T, Zbinden H and Gisin N 1998 *Phys. Rve. A* **57** 3229 [^1]: Corresponding author: [email protected]
--- address: 'FB 6 - Mathematik, Universität Essen, 45117 Essen, Germany' author: - Silke Lekaus title: Relation between the dimensions of the ring generated by a vector bundle of degree zero on an elliptic curve and a torsor trivializing this bundle --- Introduction and Notations ========================== Let $X$ be a complete, connected, reduced scheme over a perfect field $k$. We define Vect$(X)$ to be the set of isomorphism classes $[V]$ of vector bundles $V$ on $X$. We can define an addition and a multiplication on Vect$(X)$: $$\begin{aligned} & [V]+[V']=[V\oplus V']\\ & [V] \cdot [V']=[V\otimes V'].\end{aligned}$$ The (naive) Grothendieck ring $K(X)$ (see [@no]) is the ring associated to the additive monoid Vect$(X)$, that means $$K(X)=\frac{{{{\Bbb Z}}}[\mbox{Vect}(X)]}{H},$$ where $H$ is the subgroup of ${{{\Bbb Z}}}[\mbox{Vect}(X)]$ generated by all elements of the form $[V\oplus V'] - [V] - [V']$.The indecomposable vector bundles on $X$ form a free basis of $K(X)$. Since H$^0(X,\mbox{End}(V))$ is finite dimensional, the Krull-Schmidt theorem ([@at2]) holds on $X$. This means that a decomposition of a vector bundle in indecomposable components exists and is unique up to isomophism.\ We want to generalize a theorem of M. Nori on finite vector bundles. A vector bundle $V$ on $X$ is called finite, if the collection $S(V)$ of all indecomposable components of $V^{\otimes n}$ for all integers $n\in {{{\Bbb Z}}}$ is finite.\ In the following, we denote by R(V) the ${{\Bbb Q}}$-subalgebra of $K(X)\otimes_{{{\Bbb Z}}}{{{\Bbb Q}}}$ generated by the set $S(V)$. Thus a vector bundle $V$ is finite if and only if the ring $R(V)$ is of Krull dimension zero. In [@no], Nori proves the following theorem:For every finite vector bundle $V$ on $X$ there exists a finite group scheme $G$ and a principal $G$-bundle $\pi : P\to X$, such that $\pi^*V$ is trivial on $P$. In particular, the equality $$\dim R(V)=\dim G\, (=0)$$ holds.\ As every vector bundle $V$ on $X$ of rank $r$ trivializes on its associated principal GL($r$)-bundle, we can look for a group scheme $G$ of smallest dimension and a principal $G$-bundle on which the pullback of the vector bundle $V$ is trivial. We might also compare the dimension of the group scheme to dim $R(V)$.\ In this article we consider the family of vector bundles of degree zero on an elliptic curve. We will prove in propositions 2 and 3 that they trivialize on a principal $G$-bundle with $G$ a group scheme of smallest dimension one.As in the situation of Nori’s theorem, this dimension turns out to be equal to the dimension of the ring $R(V)$.\ I am grateful to Hélène Esnault for suggesting the problem treated here and for many useful discussions. Dimension relation for vector bundles of degree zero on an elliptic curve ========================================================================= Let $X$ be an elliptic curve over an algebraically closed field $k$ of characteristic zero. We consider vector bundles of degree zero on $X$ which can be classified according to Atiyah (see [@at]). By ${\mathcal E}(r,0)$ we denote the set of indecomposable vector bundles of rank $r$ and degree zero. (Atiyah [@at]) 1. There exists a vector bundle $F_r\in {\mathcal E}(r,0)$, unique up to isomorphism, with $\Gamma (X,F_r)\neq 0$.Moreover we have an exact sequence $$\begin{array}{ccccccccc} 0 & \to & {\mathcal O}_X & \to & F_r & \to & F_{r-1} & \to & 0. \end{array}$$ 2. 
Let $E\in {\mathcal E}(r,0)$, then $E\cong L\otimes F_r$ where $L$ is a line bundle of degree zero, unique up to isomorphism (and such that $L^r\cong \det E$.)    1. The ${{\Bbb Q}}$-subalgebra $R(F_r)$ of $K(X)\otimes_{{{\Bbb Z}}} {{{\Bbb Q}}}$ generated by $S(F_r)$ is ${{\Bbb Q}}[x]$, where $x=[F_2]$, if $r$ is even, and $x=[F_3]$, if $r$ is odd. In particular, $R(F_r)$ is of Krull dimension zero. 2. There exists a principal ${{{\Bbb G}}}_a$-bundle $\pi : P \to X$ such that $\pi^*(F_r)$ is trivial for all $r\ge 2$. Remark: As in Nori’s case we have a correspondence of dimensions $$\mbox{dim }R(F_r)=\mbox{dim }{{{\Bbb G}}}_a = 1.$$ Proof:As proved by Atiyah in [@at], the vector bundles $F_r$ are self-dual and fulfill the formula $$F_r\otimes F_s= F_{r-s+1}\oplus F_{r-s+3} \oplus \cdots \oplus F_{(r-s)+(2s-1)}\, \mbox{ for } s\le r.$$ For even $r$, it follows by induction that there exist integers $a_i(n)$ such that $$F_r^{\otimes n} = a_2(n)F_2\oplus a_4(n) F_4 \oplus \cdots \oplus a_{(r-1)n -1}(n)F_{(r-1)n -1} \oplus F_{(r-1)n+1}$$ for odd $n\ge 3$, and $$F_r^{\otimes n} = a_1(n){\mathcal O}_X \oplus a_3(n) F_3 \oplus \cdots \oplus a_{(r-1)n-1}(n)F_{(r-1)n-1} \oplus F_{(r-1)n+1}$$ for even $n\ge 2$ .\ Therefore we obtain $$S(F_r)=\{F_i \, |\, i=1,2,3,\dots\}\, , \mbox{ if } r \mbox{ even },$$ and $S(F_r)$ generates the subring ${{{\Bbb Q}}}[F_2]$ of $K(X)\otimes {{{\Bbb Q}}}$, because inductively we can write every vector bundle $F_i$ as $p(F_2)$ for some polynomial $p \in {{\Bbb Z}}[x]$. For odd $r$, Atiyah’s multiplication formula gives $$F_r^{\otimes n} = a_1(n){\mathcal O}_X \oplus a_3(n) F_3 \oplus \cdots \oplus a_{(r-1)n -1}(n)F_{(r-1)n -1} \oplus F_{(r-1)n+1}$$ for all $n \ge 2$. It follows that $$S(F_r)=\{F_i \, |\, i \mbox{ odd }\}\, , \mbox{ if } r \mbox{ odd }.$$ For odd $r$, the set $S(F_r)$ generates the ring $R(F_r)={{{\Bbb Q}}}[F_3]$, as for odd $i$ each $F_i$ is $p(F_3)$ for a polynomial $p \in {{\Bbb Z}}[x]$. The vector bundle $F_2 $ is an element of $H^1(X,\mbox{GL}(2,\mathcal O))$. Because of the exact sequence $$\begin{array}{ccccccccc} 0 & \to & {\mathcal O}_X & \to & F_2 & \to & {\mathcal O}_X & \to & 0, \end{array}$$ $F_2$ is even an element of $H^1(X,{{{\Bbb G}}}_a)$. Here we embed ${{\Bbb G}}_a$ into GL($2,{\mathcal O}$) via $u \to \left( \begin{array}{cc} 1 & u \\ 0 & 1 \\ \end{array} \right) $. Hence $F_2$ trivializes on a principal ${{{\Bbb G}}}_a$-bundle. As $F_r= S^{r-1}F_2\, , r\ge 3,$ each $F_r$ trivializes on the same principal ${{{\Bbb G}}}_a$-bundle as $F_2$.\ As the classes $[F_r]$ are not torsion elements in $H^1(X,\mbox{GL}(2,\mathcal O))$, none of the bundles $F_r$ can trivialize on a principal $G$-bundle with $G$ a finite group scheme. [**Remark:**]{} In the given examples of vector bundles $E$ there was so far not only a correspondence of the dimensions of the group scheme and the ring $R(E)$. The algebra $R(E)$ was also the Hopf algebra corresponding to the group scheme. The following proposition shows that this is not true in general. Let $E\cong L\otimes F_r \in {\mathcal E}(r,0)$ (see theorem 1). 1. If $L$ is not torsion, the ring $R(E)$ is isomorphic to ${{\Bbb Q}}[x,x^{-1}] \otimes {{\Bbb Q}}[y]$ and $E$ trivializes on a principal ${{\Bbb G}}_m \times {{\Bbb G}}_a$-bundle. 2. If $L$ is torsion, let $n\in {{\Bbb N}}$, $n\ge 1$, be the minimal number such that $L^{\otimes n}\cong {\mathcal O}_X$. 
If $n$ and $r$ are both even, the ring $R(E)$ is isomorphic to $${{\Bbb Q}}[x]/<x^{n/2} -1> \otimes{{\Bbb Q}}[y]$$ and $E$ trivializes on a principal $\mu_n \times{{\Bbb G}}_a$-bundle. There is no principal $\mu_{n/2}\times{{\Bbb G}}_a$-bundle where $E$ is trivial.\ If $n$ and $r$ are not both even, the ring $R(E)$ is isomorphic to $${{\Bbb Q}}[x]/<x^n -1> \otimes{{\Bbb Q}}[y]$$ and $E$ trivializes on a principal $\mu_n\times{{\Bbb G}}_a$-bundle. Proof: Let $E\in {\mathcal E}(r,0)$ with $\Gamma(X,E)=0$. (If $\Gamma(X,E)\neq 0$, then $E\cong F_r$. This case was already dealt with in proposition 2.)\ First we consider the case that $L$ is not torsion.We must distinguish between odd and even $r$. For odd $r$, Atiyah’s multiplication formula ( see proof of proposition 4) gives the following result:\ For $m\in {{\Bbb N}}$, $m\ge 2$, the tensor power $E^{\otimes m} \cong L^{\otimes m}\otimes F_r^{\otimes m}$ has the indecomposable components $ L^{\otimes m}\otimes {\mathcal O}_X, \, L^{\otimes m}\otimes F_3, \dots , L^{\otimes m}\otimes F_{(r-1)m +1}$,\ the tensor power $E^{\otimes -m} \cong L^{\otimes -m}\otimes F_r^{\otimes m}$ has the indecomposable components $L^{\otimes -m}\otimes {\mathcal O}_X, \, L^{\otimes -m}\otimes F_3, \dots , L^{\otimes -m}\otimes F_{(r-1)m+1}$.\ Thus we obtain $$S(E)= \left\{ \begin{array}{l} {\mathcal O}_X, L\otimes F_r, L^{-1}\otimes F_r,\\ L^{\otimes \pm i}\otimes F_3, L^{\otimes \pm i}\otimes F_5, \dots , L^{\otimes \pm i}\otimes F_{(r-1)i +1}, \mbox{ i }\in {{\Bbb N}}\\ \end{array} \right\}.$$ The algebra $R(E)$ which is generated by $S(E)$ is the subalgebra of $K(X)\otimes_{{{\Bbb Z}}}{{\Bbb Q}}$ generated by $L$, $L^{-1}$ and$F_3$, thus $$R(E)={{\Bbb Q}}[L,L^{-1}] \otimes_{{{\Bbb Z}}} {{\Bbb Q}}[F_3].$$ For even $r$, a similar computation gives that $$S(E)= \left\{ \begin{array}{l} {\mathcal O}_X, L\otimes F_r, L^{-1}\otimes F_r,\\ L^{\otimes \pm 2i}, L^{\otimes \pm 2i}\otimes F_3, \dots , L^{\otimes \pm 2i}\otimes F_{(r-1)2i +1},\mbox{ i }\in {{\Bbb N}}\\ L^{\otimes \pm (2i+1)}\otimes F_2, L^{\otimes \pm (2i+1)}\otimes F_4,\dots,\\ \mbox{ }\; \; L^{\otimes \pm (2i+1)}\otimes F_{(r-1)(2i+1) +1},\mbox{ i }\in {{\Bbb N}}\\ \end{array} \right\}.$$ The ring $R(E)$, generated by $S(E)$, is the subring of $K(X)\otimes_{{{\Bbb Z}}} {{{\Bbb Q}}}$ which is generated by the elements $L^{\otimes 2}$, $L^{\otimes -2}$, $L^{-1}\otimes F_2$, therefore $$R(E)={{\Bbb Q}}[L^{\otimes 2}, L^{\otimes -2}]\otimes _{{{\Bbb Z}}} {{\Bbb Q}}[L^{-1}\otimes F_2].$$ If $L$ is not a torsion bundle, it is clear that $L$ trivializes on a principal ${{\Bbb G}}_m$-bundle $P_L$. The vector bundle $E\cong L\otimes F_2$ trivializes on the ${{\Bbb G}}_m \times {{\Bbb G}}_a$-bundle $P_L \times_X P$, where $P$ is the principal ${{\Bbb G}}_a$-bundle from proposition 2, where $F_2$ and hence all the $F_r$ trivialize.\ Let now L be torsion and $n\in {{\Bbb N}}$, $n\ge 2$, the minimal number with $L^{\otimes n}\cong{\mathcal O}_X$. As the $F_r$ are selfdual and $L^{\otimes n-1}=L^{-1}$, it suffices to consider positive tensor powers.Again we compute the tensor powers using Atiyah’s formula to find the indecomposable components. If $r$ is even and $n$ is odd, the set $S(E)$ contains the following bundles: $$S(E)=\{ {\mathcal O}_X, L^{\otimes i}\otimes F_j \, | \, i=0,1,\dots , n-1, \, j\in {{\Bbb N}}\}.$$ With the help of the multiplication formula for $F_2$ it is easy to show that all elements of $S(E)$ can be generated by $L$ and $F_2$. 
In additon, the relation $L^{\otimes n}\cong {\mathcal O}_X$ holds. Hence we obtain $$R(E)=\frac{{{\Bbb Q}}[L]}{<L^{\otimes n} -1>}\otimes_{{{\Bbb Z}}} {{\Bbb Q}}[F_2].$$ If $r$ is odd and $n$ is even or odd, the result is $$S(E)=\{ L^{\otimes i}\otimes F_j \, | \, i=0,1,\dots , n-1, \, j\in {{\Bbb N}}\mbox{ odd} \}.$$ The bundles $L$ and $F_3$ are in $S(E)$ and generate all elements of $S(E)$. Because of the relation $L^{\otimes n}\cong {\mathcal O}_X$, the algebra $R(E)$ is $$R(E)=\frac{{{\Bbb Q}}[L]}{<L^{\otimes n} -1>}\otimes_{{{\Bbb Z}}} {{\Bbb Q}}[F_3].$$ If $r$ and $n$ are both even $$S(E)=\{ L^{\otimes 2i}\otimes F_{2j-1}, L^{\otimes 2i+1}\otimes F_{2j} \, | \, i=0,1, \dots , n/2, \, j\in {{\Bbb N}}\}.$$ The algebra R(E) is generated by $L^{\otimes 2}$ and $L\otimes F_2$. The generators are subject to the relation $L^{\otimes n}\cong {\mathcal O}_X$, thus $$R(E)= \frac{{{\Bbb Q}}[L^{\otimes 2}]}{<({L^{\otimes 2}})^{\otimes m} -1>} \otimes {{\Bbb Q}}[L\otimes F_2],$$ where $m=n/2$. Recall that $n \ge 2$ is the minimal number such that $L^{\otimes n} \cong {\mathcal O}_X$. Thus the bundle $L$ trivializes on a $\mu_n$-bundle $P_L$ and not on a $\mu_m$-torsor for $m< n$.The bundle $E\cong L\otimes F_r$ then trivializes on the $\mu_n\times {{\Bbb G}}_a$-bundle $P_L\times_X P$, where $P$ is again the principal ${{\Bbb G}}_a$-bundle from proposition 2. We will now show that the bundle $E$ does not trivialize on a $\mu_{n/2}\times {{\Bbb G}}_a$-bundle:If $E\cong L\otimes F_r$ trivializes on $Q\times_X P$, where $Q$ is a $\mu_m$-torsor and $P$ a ${{\Bbb G}}_a$-torsor, then det$(L\otimes F_r) = L$ is the identity element in the group Pic($Q\times_X P$). But one has Pic($Q\times_X P) =$Pic$(Q)$ by homotopy invariance. Thus $L$ must trivialize on the $\mu_m$-torsor $Q$, which is impossible for $m<n$. [**Remark:**]{} The correspondence between the dimension of the “minimal” group scheme and the dimension of the ring $R(E)$ also occurs in the case of vector bundles on the projective line, as one easily sees.\ Let $X$ be the complex projective line ${{{\Bbb P}}}^1$ and $E:={\mathcal O} (a)$ a line bundle.\ If $a=0$ we have $S(E)=\{ \mathcal O\}$ and $R(E)=Q$.\ We define the group scheme $G$ to be $G=\mbox{Spec } {{{\Bbb Q}}}$ and the trivializing torsor is simply ${{{\Bbb P}}}^1$.\ If $a\neq 0$ we can easily compute that $S(E)=\{{\mathcal O}(\lambda\cdot a) | \lambda \in {{{\Bbb Z}}}\}$ and $R(E)={{{\Bbb Q}}}[x,x^{-1}]$. We define the group scheme to be $G={{{\Bbb G}}}_m=\mbox{Spec } {{{\Bbb Q}}}[x,x^{-1}]$.\ The given line bundle $E$ trivializes on a principal ${{{\Bbb G}}}_m$-bundle $P_a$, which depends on $a$.\ Thus we get the correspondence of $\dim R(E)$ and $\dim G$ in the case of a line bundle on ${{{\Bbb P}}}^1$. This computation can easily be generalized to the case of vector bundles of higher rank. We illustrate this for bundles of rank two.\ Let now $E$ be a vector bundle of rank 2 on ${{{\Bbb P}}}^1$, $E={\mathcal O}(a)\oplus {\mathcal O}(b)$.\ The case $(a,b)=(0,0)$ is trivial. We can see at once that $S(E)=\{{\mathcal O}\}$ and therefore $R(E)={{{\Bbb Q}}}$.\ The vector bundle $E$ trivializes on the principal Spec ${{{\Bbb Q}}}$ - bundle ${{{\Bbb P}}}^1$.\ If $(a,b)\neq (0,0)$ the computation gives that $S({\mathcal O}(a)\oplus {\mathcal O}(b))\, = \, S({\mathcal O}(c))$,\ where $c=(a,b)$ (with $(a,0)=a$ and $(0,b)=b$) and therefore $R(E)={{{\Bbb Q}}}[x,x^{-1}]$. 
$E$ trivializes on the principal ${{{\Bbb G}}}_m$-bundle $P_c$ that belongs to ${\mathcal O}(c)$ as ${\mathcal O}(a)={\mathcal O}(c)^\lambda$ and ${\mathcal O}(b)={\mathcal O}(c)^\mu$ for appropriate integers $\lambda$ and $\mu$.\ [99]{} Nori, M.V.: On the representations of the fundamental group, Compositio Mathematica [**33**]{}, Fasc. 1, 1976, 29-41 Atiyah, M.F.: Vector bundles over an elliptic curve, Proc. London Math. Soc. (3) 7, 1957, 414-452 Atiyah, M.F.: On the Krull-Schmidt theorem with application to sheaves, Bull. Soc. Math. France 84, 1956, 307-317
--- abstract: 'Using archival [*Chandra*]{} observations with a total exposure of 510 ks, we present an updated catalog of point sources for Globular Cluster 47 Tucanae. Our study covers an area of $\sim 176.7$ arcmin$^{2}$ (i.e., with $R\lesssim7.5\arcmin$) with 537 X-ray sources. We show that the surface density distribution of X-ray sources in 47 Tuc is highly peaked in cluster center, rapidly decreases at intermediate radii, and finally rises again at larger radii, with two distribution dips at $R\sim 100\arcsec$ and $R\sim 170\arcsec$ for the faint ($L_{X}\lesssim 5.0\times 10^{30} {\rm\ erg\,s^{-1}}$) and bright ($L_{X}\gtrsim 5.0\times 10^{30} {\rm\ erg\,s^{-1}}$) groups of X-ray sources, separately. These distribution features are similar to those of Blue Straggler Stars (BSS), where the distribution dip is located at $R\sim 200\arcsec$ [@ferraro2004]. By fitting the radial distribution of each group of sources with a “generalized King model", we estimated an average mass of $1.51\pm0.17\ M_{\odot}$, $1.44\pm0.15\ M_{\odot}$ and $1.16\pm0.06\ M_{\odot}$ for the BSS, bright and faint X-ray sources, respectively. These results are consistent with the mass segregation effect of heavy objects in GCs, where more massive objects drop to the cluster center faster and their distribution dip propagates outward further. Besides, the peculiar distribution profiles of X-ray sources and BSS are also consistent with the mass segregation model of binaries in GCs, which suggests that in addition to the dynamical formation channel, primordial binaries are also a significant contributor to the X-ray source population in GCs.' author: - 'Zhongqun Cheng$^{1,2,3}$, Zhiyuan Li$^{2,3}$, Xiangdong Li$^{2,3}$, Xiaojie Xu$^{2,3}$ and Taotao Fang$^{1}$' title: 'Exploring the Mass Segregation Effect of X-ray Sources in Globular Clusters: The Case of 47 Tucanae' --- Introduction ============ Globular Clusters (GCs) are ancient stellar systems that evolve with some fundamental dynamical processes taking place on timescales shorter than (or comparable to) their absolute age, which make them a unique laboratory for learning about two-body relaxation, mass segregation, stellar collisions and mergers, and gravitational core collapse [@heggie2003]. Among all of these dynamical interactions, the two-body relaxation plays a fundamental role in driving cluster evolution, as it dominates the transportation of energy and mass in GCs. Stars are driven to reach a state of energy equipartition by two-body relaxation, massive stars (or binaries) therefore tend to lose energy and drop to the lower potential well of GCs. Whereas for lower mass stars, they tend to obtain energy and move faster, and will migrate outwards and even escape from the host clusters [@heggie2003]. The concept of core collapse, linked to the instability caused by the negative heat capacity of all self-gravitating systems, was first investigated theoretically in the 1960s and confirmed observationally in the 1980s (see @meylan1997 for a review). During the phase of core-collapse, the stellar density in the cluster core may increase by several orders of magnitude, which significantly increases the frequency of interactions and collisions between stars. Binaries are thought to play an essential role in the evolution of GCs, as they have much larger cross section and hence much higher encounter frequency than the single stars in GCs [@hut1992]. 
More importantly, binaries in GCs may serve as the reservoir of energy: encounters of binaries in GCs will obey the Hills-Heggie law, soft binaries (with bound energy $|E_{b}|$ less than the averaged kinetic energy $E_{k}$ of the GC stars) tend to absorb energy and become softer or be disrupted, while hard binaries (with $|E_{b}|> E_{k}$) tend to transfer energy to passing stars and become harder [@hills1975; @heggie1975; @hut1993]. Encounters of hard binaries in GCs can thus strongly influence the cluster evolution—sufficient to delay, halt, and even reverse core collapse [@heggie2003]. Observationally, many exotic objects have been detected in GCs, including Blue Straggler Stars (BSS) [@sandage1953], low-mass X-ray binaries (LMXBs) [@clark1975; @katz1975], millisecond pulsars (MSPs) [@camilo2005; @ransom2008], cataclysmic variables (CVs) and coronally active binaries (ABs) [@grindlay2001; @pooley2002a; @edmonds2003a; @edmonds2003b; @heinke2005]. All of these objects either are experiencing the drastic binary evolution stage (i.e., LMXBs, CVs, ABs) or are the immediate remnants of close binaries (i.e., BSS, MSPs). Compared with normal binaries, exotic objects hosted in GCs are brighter in luminosity (either in optical, X-ray or in radio band) and can be easily detected and picked out from the dense core of clusters, making them ideal tracing particles of studying stellar dynamical interactions and cluster evolution. For example, in searching of BSS in GCs, @ferraro1993 found that the radial distribution of BSS in M3 are bimodal, with two populations of BSS located at cluster center and larger radii, separately, and a region devoid of BSS exists at the medium radii. Furthermore, @ferraro2012 found that GCs can be classified into three families (i.e., Family I, II and III) with increasing dynamical ages. In this scenario, BSS is flatly distributed with respect to the radial distribution of normal stars in dynamically young (Family I) GCs. However, two-body relaxation will drives the BSS sedimentation toward the cluster center, modifying the flat BSS distribution into a bimodal shape, with a central peak, a dip and an outer rising branch in intermediate dynamical age (Family II) GCs. As time goes on, the radial distribution dip will propagate outward gradually, and eventually leading to a unimodal BSS distribution that monotonically moves outward in dynamically old (Family III) GCs. Based on these findings, @ferraro2012 suggested that BSS in GCs can be utilized to build a “dynamical clock" of evaluating cluster evolution. For X-ray sources detected in GCs, majority of them are found to be CVs and ABs, while few have been identified as LMXBs and MSPs (see @heinke2010 for a review). These objects are suggested to be dynamically formed in GCs [@pooley2003; @pooley2006], and the abundances (i.e., number per unit stellar mass) of LMXBs and MSPs are found to be orders of magnitude higher in GCs than in the Galactic field [@clark1975; @katz1975; @camilo2005; @ransom2008]. Meanwhile, the cumulative X-ray emissivity (i.e., X-ray luminosity per unit stellar mass) of many GCs are found to be slightly lower than that of the solar neighborhood stars and the dwarf elliptical galaxies [@ge2015; @cheng2018a]. This suggests a dearth rather than over-abundance of CVs and ABs in most GCs relative to the field. 
To explain this contrast, @cheng2018a argued that stellar interactions (i.e., binary-single or binary-binary encounters) in GCs are efficient in destroying binaries, and a large fraction of soft primordial binaries have been disrupted in GCs before they can otherwise evolve into CVs and ABs. For the remaining hard primordial binaries, they are likely being transformed into X-ray-emitting close binaries by stellar interactions, leading to a strong correlation between binary hardness ratio (i.e., the abundance ratio of X-ray-emitting close binaries to main sequence binaries) and stellar velocity dispersion in GCs [@cheng2018b]. However, most of the previous work (i.e., [@pooley2003; @pooley2006; @cheng2018a; @cheng2018b]) on GC X-ray sources were confined within the cluster half-light radius, where stellar density is high and stellar dynamical interactions cannot be ignored. For X-ray sources located in the cluster halo, they are more likely to descend from primordial binaries and have not experienced strong encounters; however, two-body relaxation is efficient in driving these systems sedimentation to cluster center. This process was ignored in previous work. Hence, it remains an open question whether radial distribution of X-ray sources shall be similar to BSS in GCs. Globular cluster 47 Tuc is massive ($M\sim 10^{6} M_{\odot}$, [@harris1996]) and of relatively high stellar concentration ($c=2.01\pm0.12$, [@mcLaughlin2006]), making it one of the clusters with the highest predicted stellar encounter rate in the Milky Way [@bahramian2013]. Observationally, main sequence binary fraction in 47 Tuc was found to be lower ($f_{b}=1.8\pm0.6\%$) than that of other GCs (with a typical value of $f_{b}\sim1-20\%$, [@milone2012]), which suggests a substantial fraction of primordial binaries had been disrupted or altered by close encounters in this cluster. 47 Tuc is a typical Family II cluster with a bimodal radial distribution of BSS [@ferraro2004]: the BSS distribution dip is located at $170\arcsec<R<230\arcsec$, which is slightly larger than the half-light radius ($r_{h}=3.17\arcmin$, [@harris1996]). In X-ray, 47 Tuc is well known for its large number of X-ray sources, many of which have been identified as CVs and ABs [@edmonds2003a; @edmonds2003b; @heinke2005], quiescent LMXBs [@grindlay2001; @edmonds2002] and MSPs [@bogdanov2006]. With a deep [*Chandra*]{} exposure, @bhattacharya2017 presented the largest X-ray sources catalogue for 47 Tuc with 370 X-ray sources. However, due to a smaller X-ray sources searching region (i.e., within $R=2.79\arcmin$), they did not find a significant mass segregation effect of X-ray sources. In this work, we present the most sensitive and full-scale [*Chandra*]{} X-ray point source catalog for 47 Tuc, which covers an area of $\sim 176.7$ arcmin$^{2}$ (i.e., with $R\lesssim 7.5\arcmin$ in 47 Tuc) and has a maximum effective exposure of $\sim510$ ks (same as [@bhattacharya2017]) at the center of the cluster. Therefore, the X-ray source catalog presented in our work is much larger than that of @bhattacharya2017 (see Section 3 for details). This paper is organized as follows. In Section 2 we present the [*Chandra*]{} observations and data preparation. In Section 3, we describe the creation of the X-ray source catalog. We analyze the X-ray source radial distributions in Section 4, and explore its relation to GC mass segregation effect in Section 5. A brief summary of our main conclusions is given in Section 6. 
[*Chandra*]{} Observations and Data Preparation =============================================== Observations and Data Reduction ------------------------------- The central region of 47 Tuc has been observed 19 times by the [*Chandra*]{} Advanced CCD Imaging Spectrometer (ACIS) from March 2000 to February 2015. Five observations in March of 2000 were performed with the ACIS-I CCD array at the telescope focus, while the rest were taken with the aim-point at the S3 chip. As listed in Table-\[tab:obslog\], 13 observations were performed with a subarry mode, to minimize the effect of pileup for bright sources. Starting from the level-1 events file, we used the [*Chandra*]{} Interactive Analysis Observations (CIAO, version 4.8) tools and the [*Chandra*]{} Calibration Database (CALDB, version 4.73) to reprocess the data. Following the standard procedures[^1], we used the CIAO tool [*acis\_build\_badpix*]{} to create a bad pixel file and employed [*acis\_find\_afterglow*]{} to identify and flag the cosmic-ray afterglows events. We updated the charge transfer inefficiency, time-dependent gain and pulse height with [*acis\_process\_events*]{}. Most of the [*acis\_process\_events*]{} parameters are set as the default values and the VFAINT option is set for certain observations as appropriate. We then removed bad columns, bad pixels and filtered the events file using the standard (ASCA) grades (0,2,3,4,6). We also inspected the background light curve of each observation and removed the background flares using [*lc\_clean*]{} routine. We further adopted the clean data from chips I0-I3 (for ACIS-I) and chips S2 and S3 (for ACIS-S) for this work. Image Alignment and Merging --------------------------- ![image](F1.eps){height="0.66\textheight" width="100.00000%"} After generating a cleaned event file for each of the 19 observations, we needed to create a merged image for 47 Tuc in order to detect the faintest sources. We created a flux image with a binsize of 0.5 pixel in $0.5-7$ keV band for each observation, and created a PSF map with CIAO tool [*mkpsfmap*]{} by assuming a power-law spectrum with a photon index of $\Gamma=1.4$ and an enclosed counts fraction (ECF) of 0.4. Supplied with the flux image and PSF map, we then searched X-ray sources within 4 arcmin off-axis of each observation, by using the CIAO tool [*wavdetect*]{} with a “$\sqrt{2}$ sequence" wavelet scales (i.e., 1, 1.414, 2, 2.828, 4, 5.656, and 8 pixels) and a false-positive probability threshold of $10^{-6}$. Depending on the exposure time of the observations, $\sim 10-250$ X-ray sources are detected in the individual data sets. In order to improve the alignment between each observation, we refined the position of the detected X-ray sources with the ACIS Extract (AE, [@broos2010]) package, and adopted the centroid positions determined by AE from its “CHECK\_POSITIONS" stage as the improved positions of these sources. AE also provides a tool (i.e., [*ae\_interObsID\_astrometry*]{}) to verify the astrometric alignment between each observations, which is performed during the pruning stage of the candidate source list (i.e., Section 3.1), to guarantee that the astrometric offsets between two observations are better than $\sim 0.1\arcsec$ [@broos2010]. To correct the relative astrometry among the X-ray observations, we chose the observation (ID:2738) with the longest exposure time as reference of coordinate, and employed the CIAO tool [*wcs\_match*]{} to match and align the X-ray source lists. 
We set the search radius of source-pairs to $1\arcsec$ and allow the incrementally elimination of source-pairs with the highest positional errors by setting [*residtype=1*]{} and [*residlim=0.5*]{}. The [*wcs\_match*]{} creates a transformation matrix for each individual observation, with the values of matrix elements range from $-1.168\arcsec$ to $1.195\arcsec$ in linear translation, $-0.084^{\circ}$ to $0.216^{\circ}$ in rotation, and 0.9985 to 1.0019 in scaling (Table-\[tab:obslog\]). This matrix was used with [*wcs\_update*]{} to update the aspect solution file, which was then supplied to [*acis\_process\_events*]{} tool to reprocess the event file. The parameters of [*acis\_process\_events*]{} are set as described in Section 2.1, except that sub-pixel event repositioning algorithm EDSER[^2] [@li2004] was used to preserve the spatial resolution of point sources on-axis. This algorithm is helpful in improving the resolution of nearby sources in the core of 47 Tuc, where stellar density is high and X-ray sources are crowded. We combined these 19 cleaned event files into merged event files and exposure-corrected images with tool [*merge\_obs*]{}. Three groups of images have been created in soft (0.5–2 keV), hard (2–7 keV) and full (0.5–7 keV) bands and with binsize of 0.5, 1 and 2 pixels, respectively. By setting an ECF of 0.4 and a photon energy of 1.2 keV, 3.8 keV and 2.3 keV for soft, hard and full band, respectively, we also calculated for each of the 19 observations PSF maps with [*mkpsfmap*]{}, and combined them into exposure time weighted averaged PSF maps with tool [*dmimgcalc*]{}. These merged images are only used for detecting X-ray sources in this work. Catalog of Sources ================== Source Detection ---------------- To generate the candidate X-ray source list for 47 Tuc, we ran [*wavdetect*]{} on each of the 9 merged images, using a “$\sqrt{2}$ sequence" of wavelet scales (i.e., 1, 1.414, 2, 2.828, 4, 5.656, 8, 11.314, and 16 pixels). We confined the X-ray sources searching region as a circle with radius of 7.5 acrmin (Figure-\[fig:rawimage\]), and adopted a false-positive probability threshold of $1 \times 10^{-7}$, $5 \times 10^{-6}$ and $1 \times 10^{-5}$ for event files with scale of 0.5, 1.0 and 2.0 pixels, respectively. We then combined the [*wavdetect*]{} results into a master source list with the [*match\_xy*]{} tool from the Tools for ACIS Review and Analysis (TARA) packages[^3]. The resulting source list includes 559 sources. Although the relatively loose source-detection threshold introduces a non-negligible number of spurious sources to the candidate list, we found that dozens of faint sources are failed to be identified by [*wavdetect*]{} within the crowded core of 47 Tuc, and several sources within bright neighbours also cannot be properly resolved. We picked out these sources by eye and added them into the master source list, which resulting a final candidate list of 617 sources. To obtain a reliable source list, we utilized AE to filter and refine the candidate source list, which was developed for analyzing multiple overlapping [*Chandra*]{} ACIS observations [@broos2010]. By analyzing the level-2 event files of each observation, AE builds source and background regions with local PSF contours[^4], and creates the source and background spectra with appropriate extraction apertures. 
Then by merging the extractions of multiple observations into “composite" data products (i.e., event files and spectra for source and background regions, PSF model, ARF and RMF, etc.), we are able to perform further analysis of source properties (such as source validation, position refining, photometry and spectral fitting, etc.) with AE [@broos2010]. Standard AE point-source extraction workflow usually involves many iterations of extracting, merging, pruning, and repositioning of sources in the candidate catalog[^5], which is helpful to our evaluation of the significance of the X-ray sources in 47 Tuc. As illustrated in Figure-\[fig:rgbimage\], X-ray sources in the core of 47 Tuc are overcrowded, thus extraction of point sources is strongly affected by their neighbours. ![image](F2.eps){height="0.75\textheight" width="100.00000%"} For these reasons, we adopted an iterative pruning strategy to examine the validation of each candidate source in 47 Tuc. Our source pruning processes is similar to the steps outlined in the validation procedure[^6] presented by the AE authors: all candidate sources were extracted, merging with multiple extractions, pruning insignificant sources, repositioning and checking astrometry repeatedly, until a stable source list with refined positions was obtained. AE provides an important output parameter, the binomial no-source probability $P_{B}$, to evaluate the significance of a source. The evaluation of $P_{B}$ is based on the null hypothesis that a source does not exist in the source-extraction aperture, and the observed excess number of counts over background is purely due to background fluctuations [@weisskopf2007]. The formula to obtain $P_{B}$ is given by $$P_{\rm B}(X\ge S)=\sum_{X=S}^{N}\frac{N!}{X!(N-X)!}p^X(1-p)^{N-X},$$ where $S$ is the total number of counts in the source extraction region and $B$ is the total number of counts in the background extraction region; and $N$ is the sum of $S$ and $B$; $p=1/(1+BACKSCAL)$, with $BACKSCAL$ being the area ratio of the background and source-extraction regions. For each source, AE computed a $P_{B}$ value in each of the three bands, and we adopted the minimum of the three as the final $P_{B}$ value for the source. By adopting an invalid threshold value of $P_{B}< 1\times 10^{-3}$ for the candidate source list, we obtained a final stable catalogue with 537 X-ray sources for 47 Tuc. As comparison, our final catalog of X-ray sources is much larger than that obtained by @bhattacharya2017, which contains 370 X-ray sources within a radius of $2.79\arcmin$ in 47 Tuc. Although our X-ray source searching region is larger (i.e. $R\sim$ 7.5), we found that 61 of the new detections (blue sources in Figure-\[fig:rawimage\] and \[fig:rgbimage\]) are located within $R=2.79\arcmin$ in our catalog, while 28 sources (green sources in Figure-\[fig:rawimage\] and \[fig:rgbimage\]) in @bhattacharya2017 are failed to be recovered in our catalog. Discrepancy may result from the different source detection and pruning strategies. By adopting a loose source-detection threshold in the [*wavdetect*]{} script, we allow more fainter X-ray sources to be detected, which ensure the source completeness (i.e., recovery number of real sources) of our catalog, but with a sacrifice to enroll some spurious sources. 
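For reference, Equation (1) is straightforward to evaluate numerically, and the same expression, inverted, yields the limiting counts used later for the sensitivity maps (Section 3.3). A minimal sketch (Python/SciPy; the counts and $BACKSCAL$ values in the example calls are purely illustrative, not values from our extractions):

```python
from scipy.stats import binom

P_B_THRESH = 1e-3          # survey detection criterion adopted in this work

def p_b(src_counts, bkg_counts, backscal):
    """Binomial no-source probability of Equation (1).

    src_counts : total counts S in the source aperture
    bkg_counts : total counts B in the background region
    backscal   : area ratio of the background to the source region
    """
    n = src_counts + bkg_counts
    p = 1.0 / (1.0 + backscal)
    # P(X >= S) for X ~ Binomial(N, p), i.e. background fluctuations alone
    return binom.sf(src_counts - 1, n, p)

def limiting_counts(bkg_counts, backscal):
    """Smallest S with P_B < P_B_THRESH; this inversion of Equation (1)
    is what a per-pixel sensitivity estimate is built from."""
    s = 1
    while p_b(s, bkg_counts, backscal) >= P_B_THRESH:
        s += 1
    return s

# Illustrative numbers only: 12 aperture counts over a background of 40
# counts collected in a region 20x the aperture area.
print(p_b(12, 40, 20.0) < P_B_THRESH)     # True -> source is retained
print(limiting_counts(40, 20.0))          # minimum counts for a detection
```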
In order to optimize the balance between the source completeness and reliability (i.e., the fraction of potential spurious sources), we adopted a more strict binomial no-source probability threshold (i.e., $P_{B}< 1\times 10^{-3}$) in AE source pruning procedure, which is much less than the value ($P_{B}< 1\times 10^{-1}$) used by @bhattacharya2017. ![image](F3.eps){height="0.34\textheight" width="100.00000%"} To compare the performance of our source detection and pruning strategies with that of @bhattacharya2017, we plot $P_{B}$ as a function of net source counts and photon flux in Figure-\[fig:pb\]. Obviously, the new detections (i.e., blue sources in Figure-\[fig:rawimage\] and \[fig:rgbimage\]) of our work are uniformly mixed with the validated detections (i.e., red sources that have been cross-identified in [@bhattacharya2017] and this work), which supports the better source completeness in our catalog. For green sources in Figure-\[fig:rawimage\] and \[fig:rgbimage\], many of them were found to have few photons ($\lesssim 5$) and much higher value of binomial no-source probability ($1.5\times 10^{-2} \lesssim P_{B}\lesssim 1.0\times 10^{-1}$). They were labelled as “$c$" and were thought to be marginal detections in the catalog of @bhattacharya2017. With a more stringent pruning $P_{B}$ threshold (i.e., $P_{B}< 1\times 10^{-3}$), these sources have been removed from our catalog. We note that the choice of the $P_{B}$ threshold is an empirical decision, which may range from $10^{-2}$ (AE default) to $10^{-3}$ in literature. Source Properties ----------------- Following source pruning and position refinement, we utilized AE to extract final source and background spectra for the X-ray sources. The default AE extraction aperture was defined to enclose $\sim 90\%$ (evaluated at 1.5 keV) of the PSF power, which may lead to overlapped extraction regions in the dense core of 47 Tuc. By reducing the aperture sizes iteratively and calculating the energy-dependent photometry correction factors for the individual sources, AE provides a sophisticated method to perform source extraction for crowding environment. In our AE usage, we adopted the default AE polygonal region as the source extraction regions for most sources in 47 Tuc, and reduced the extraction regions to enclose $\approx 40\%-90\%$ of the local PSF power for sources with close neighbours. For the background extractions, we adopted the AE “BETTER\_BACKGROUNDS" algorithm to build the background regions. This algorithm models the spatial distributions of flux for the source of interest and its neighboring sources using unmasked data, and then computes local background counts within background regions that subtract contributions from the source and its neighboring sources. By setting a minimum number of 100 counts for each merged background spectrum to ensure the photometric accuracy, we obtained the background regions of each source through the “ADJUST\_BACKSCAL" stage in AE. Spectral extractions are first performed independently for each source and each observation, then using AE we have merged the extraction results to construct composite products (i.e., event lists, spectra, light curves, response matrices and effective area files) for each source through the “MERGE\_OBSERVATIONS” procedure. Aperture-corrected net source counts are derived in 0.5–2 keV (soft), 2–8 keV (hard) and 0.5–8 keV (full), respectively. 
About $45\%$ (i.e., 244/537) of the X-ray sources have net counts greater than $\sim 30$ in the full band, and are available for spectral analysis with the AE automated spectral fitting script. We modeled the spectra of these sources with an absorbed power-law spectrum. In all cases the neutral hydrogen column density ($N_{H}$) is constrained to no less than $2.3 \times 10^{20} {\rm \ cm^{-2}}$, calculated from the color excess E(B–V) of 47 Tuc. For the remaining sources, their net counts are less than $\sim 30$ and the X-ray spectra are therefore poorly constrained. We converted their net count rates into unabsorbed fluxes, by using the AE-generated merged spectral response files and assuming a power-law model with fixed photon-index ($\Gamma=1.4$) and column density ($N_{H}=2.3 \times 10^{20} {\rm \ cm^{-2}}$). Taking a distance of 4.02 kpc for 47 Tuc [@mcLaughlin2006], we converted the flux of each X-ray source into unabsorbed luminosity in soft, hard and full bands, respectively. Finally, we collated the source extraction and spectral fitting results into a main X-ray source catalog for 47 Tuc. The X-ray sources are sorted by their Right Ascension. We calculated their distance to cluster center using the optical central coordinate $\alpha =00^{h}24^{m}05^{s}.67$ and $\delta =-72\arcdeg 04\arcmin 52.62\arcsec$ [@mcLaughlin2006]. Our catalog contains 22 columns of information for the 537 X-ray sources (Table-\[tab:mcat\]), and the full table is available in the electronic edition of the Journal. Sensitivity ----------- As shown in Figure-\[fig:rawimage\], the merged images have the deepest exposure near the cluster core and much shallow exposure at the cluster halo, which leads to significant variation of detection sensitivity (i.e., the minimum flux at which a source would be detected) across the searching region. In order to estimate the sensitivity of each point in the survey field, we created background and sensitivity maps following the procedures described by @xue2011 and @luo2017 in [*Chandra*]{} deep field-south survey. Briefly, we first masked out the main catalog sources from the full band merged image using circular regions with mask radii ($r_{msk}$) provided by AE. We then filled in the masked region for each source with the background counts from a local annulus with inner to outer radii of $r_{msk}-2.5r_{msk}$. Due to the crowding of X-ray sources, the default annuli for background counts are strongly overlapped with neighbouring sources in the core of 47 Tuc. Therefore, for X-ray sources located in $R\leq20\arcsec$ and $20\arcsec<R\leq40\arcsec$, we filled in them with background counts from an uniform background region separately. This uniform background region was defined by subtracting the mask regions of detected X-ray sources within the circle region with $R\leq 20\arcsec$ and the annulus region with $20\arcsec\leq R\leq40\arcsec$, respectively. The resulted source-free background map was then used to calculate the limiting sensitivity map, which is the flux limit required for a source to be selected by our $P_{B}$ criterion. To follow the behavior of AE in photometry extraction of the main-catalog sources, we defined for each pixel a circular source extraction aperture with the local $90\%$ PSF ECF radii. Due to the off-axis effects, the value of $BACKSCAL$ (i.e., area ratio of the source to background extraction regions) is depends on the off-axis angles in AE. 
Therefore, for a given pixel in the survey field, we also computed its off-axis angle $\theta_{p}$, and set the value of $BACKSCAL$ to the maximum value of the main-catalog sources that are located in an annulus with the inner/outer radius being $\theta_{p}-0.25\arcmin/\theta_{p}+0.25\arcmin$ (note that the adopted maximum $BACKSCAL$ value corresponds to the highest sensitivity). With the defined source extraction aperture, $BACKSCAL$ and background map, we then calculated for each pixel the background counts ($B$). The detection sensitivity ($S$), which is the minimum number of source counts required for a detection, can be obtained by solving Equation (1) with survey $P_{B}$ threshold value of $1\times 10^{-3}$. Finally, We computed for each pixel the limiting detection count rates with the exposure map, and converted it into limiting fluxes by assuming an absorption power-law model with photon-index $\Gamma=1.4$ and column density $N_{H}=2.3 \times 10^{20} {\rm \ cm^{-2}}$. The limiting detection count rates of each pixel are shown as a function of projected distance from the cluster center in Figure-\[fig:pflux\](a). The median, minimum and maximum value of detection limit at each radial bin is represented as red, green and blue dotted lines, respectively. For comparison, we also plot the photon flux of the 537 X-ray sources as diagonal crosses in Figure-\[fig:pflux\] (a). As the figure suggests, the merged images have a deepest and relatively flat limiting sensitivity ($F_{p}\sim6\times 10^{-8} {\rm \ photon\ cm^{-2} s^{-1}}$) within $R\lesssim 2.8\arcmin$. At larger cluster radii, the defined survey field is poorly covered by the [*Chandra*]{} (subarry) ACIS-S Field of Views (see also Figure 1), thus, the median of the sensitivity curve decreases significantly with increasing $R$. However, almost all of the main-catalog X-ray sources are located above the minimum sensitivity line. Analysis: Source Radial Distribution ==================================== Despite the sensitivity variation across the survey field, we found that X-ray sources in 47 Tuc can be divided into several groups according to their locations in Figure-\[fig:pflux\](a). Specifically, the radial distribution of the bright X-ray sources (i.e., sources with photon flux $F_{p}\gtrsim 1\times 10^{-6} {\rm\ photon\ cm^{-2}\,s^{-1}}$ or luminosity $L_{X}\gtrsim 5.0\times 10^{30} {\rm\ erg\,s^{-1}}$) is bimodal, with two peaks located at the cluster center and $R\sim 6\arcmin$, respectively. A broad dip exists between $1\arcmin\lesssim R\lesssim 4\arcmin$ (marked by magenta down arrow), and a narrow dip is evident around $1.5\arcmin\lesssim R\lesssim 1.8\arcmin$ (magenta up arrow) for the faint X-ray sources (i.e., $F_{p}\lesssim 1\times 10^{-6} {\rm\ photon\ cm^{-2}\,s^{-1}}$ or $L_{X}\lesssim 5.0\times 10^{30} {\rm\ erg\,s^{-1}}$). These distribution dips can also be clearly identified from Figure-\[fig:pflux\](b), \[fig:pflux\](c) and \[fig:pflux\](d), where the radial distributions of total, bright and faint X-ray sources are presented as black, red and olive histograms, respectively. Since the 47 Tuc is very close (with an angular distance of $\sim2.3^{\circ}$) to the Small Magellanic Cloud (SMC), some of the X-ray sources could be associated with the SMC. We check this possibility by examining the azimuthal distribution of sources located within $3\arcmin \lesssim R \lesssim 7.5\arcmin$, and found that there is no significant excess of X-ray sources toward the direction of the SMC. 
We also examined whether these distribution dips are created by the coverage effects (i.e., CCD gaps or variation of detection sensitivity) of the merged observations, and found that such effects are negligible (Figure-\[fig:rawimage\]). However, classification of X-ray sources in Figure-\[fig:pflux\] is challenging, due to the contamination of cosmic X-ray background (CXB). Assuming a uniform spatial distribution, the CXB contamination will become dominant outside the cluster core since the survey area grows rapidly with increasing $R$. To estimate the contamination of CXB sources, we plot the X-ray source surface density as a function of $R$ in Figure-\[fig:surfd\]. The observed X-ray sources are displayed as red and olive dots, while the CXB contributions are shown as black dashed lines. Here we adopted the ${\rm log}N-{\rm log}S$ relations determined by @kim2007 to estimate the contribution of CXB sources. The cumulative number count of CXB sources of a limiting sensitivity flux $S$ can be estimated with function $${N(>S)}=2433(S/10^{-15})^{-0.64}-186\ {\rm deg^{-2}}.$$ This function is derived from Equation (5) of @kim2007, by assuming a photon index of $\Gamma=1.4$ for the [*Chandra*]{} ACIS observations in 0.5-8 keV band. Figure-\[fig:surfd\] suggests the excess of X-ray sources over the CXB can be reasonably accounted for by the GC X-ray sources. For the faint group of X-ray sources, the excess ($\sim336$ sources) drops rapidly with increasing $R$ and becomes insignificant at $R\sim 6.0\arcmin$. Beyond $R\sim 6.0\arcmin$, the predicted CXB contribution matches well with the observed surface density profile. For the bright group of X-ray sources, the excess ($\sim 70$ sources) over the CXB is evident within $R\lesssim 2.5\arcmin$ and $4\arcmin \lesssim R\lesssim 7.5\arcmin$, and becomes negligible at $2.5\arcmin \lesssim R\lesssim 4\arcmin$. Apparently, even when the CXB sources are taken into account, the two distribution dips are still evident in Figure-\[fig:surfd\]. To quantitatively evaluate the significance of the distribution dips, we modeled the radial distribution of GC X-ray sources with the stellar density profile of 47 Tuc. Through out this work, we utilized the King model [@king1962] to calculate the radial distribution of normal stars in 47 Tuc. The cluster concentration ($c=2.01$) and core radius ($r_{c}=20.84\arcsec$) are adopted from @mcLaughlin2006, which were determined using stars with mass of $0.85M_{\odot}\lesssim M_{\ast} \lesssim 0.9 M_{\odot}$. We then convolved the King model profile with the sensitivity function, and normalized the model profile to match the total number of GC X-ray sources in each group. The King model profiles are plotted as black solid lines in Figure-\[fig:surfd\], while the sum of the CXB and King model components are represented as blue solid lines. We defined the significance of the distribution dip as $(N_{K}+N_{B}-N_{X})/\sqrt{N_{K}^{2}\sigma_{K}^{2}+N_{B}^{2}\sigma_{c}^{2}+N_{X}^{2}\sigma_{P}^{2}}$. Here $N_{K}$ is the number of X-ray sources predicted by the King model, $\sigma_{K}$ is the fitting error for the King model, $\sigma_{P}=1/\sqrt{N_{X}}$ is the the Poisson error for the observed number of sources ($N_{X}$), and $\sigma_{c}^{2}$ is the variance of CXB sources ($N_{B}$). 
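Both the CXB estimate of Equation (2) and the dip significance just defined reduce to short computations; a sketch (Python; the example inputs are placeholders, and reading $\sigma_{c}^{2}$ as a fractional variance on the CXB counts is our interpretation of the definition above):

```python
import math

def cxb_counts(flux_limit, area_deg2):
    """Expected number of CXB sources brighter than flux_limit
    (erg cm^-2 s^-1, 0.5-8 keV) over area_deg2, from the logN-logS
    relation of Equation (2) (Kim et al. 2007, Gamma = 1.4)."""
    n_per_deg2 = 2433.0 * (flux_limit / 1.0e-15) ** -0.64 - 186.0
    return max(n_per_deg2, 0.0) * area_deg2

def dip_significance(n_king, n_cxb, n_obs, sigma_k=0.05, sigma_c2=0.0225):
    """Deficit significance as defined in the text; note that
    N_X^2 * sigma_P^2 reduces to N_X because sigma_P = 1/sqrt(N_X)."""
    var = (n_king * sigma_k) ** 2 + n_cxb ** 2 * sigma_c2 + n_obs
    return (n_king + n_cxb - n_obs) / math.sqrt(var)

# Placeholder example: a 0.01 deg^2 annulus with a 1e-15 cgs flux limit,
# and a dip containing 16 King-model, 4 CXB and 5 observed sources.
print(cxb_counts(1.0e-15, 0.01))
print(dip_significance(16.0, 4.0, 5.0))
```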
By adopting a nominal error of $\sigma_{K}=5\%$ for the King model estimation and a CXB variance of $\sigma_{c}^{2}=2.25\%$ for the [*Chandra*]{} surveyed field (with an area of $\sim 0.05 \rm \, deg^{2}$) in 47 Tuc [@moretti2009], we adjust the locations and widths of the distribution dips, and found a maximum significance of $\sim7.3\sigma$ and $\sim6.3\sigma$ for the bright (with $119\arcsec \leqslant R\leqslant 253\arcsec$, $N_{K}=16.5$, $N_{B}=10.9$ and $N_{X}=7$) and faint (with $99\arcsec \leqslant R\leqslant 113\arcsec$, $N_{K}=16.9$, $N_{B}=3.5$ and $N_{X}=4$) distribution dips, respectively. Compared with BSS, the split of X-ray sources into two sub-populations in radial direction suggests that they may have experienced similar mass segregation effect. To test this hypothesis, we set normal stars as reference and compare their cumulative radial distribution with that of the X-ray sources in 47 Tuc. Due to the variation of detection sensitivity across the surveying field, we need to ensure that the selected sample of the X-ray sources are observationally unbiased. To maximize the sizes of source samples and minimize the CXB contamination, we set a lower photon flux limit of $\sim 1\times 10^{-7} {\rm \ photon\ cm^{-2} s^{-1}}$ for the faint group of X-ray sources, and confined the source selection region as $R \lesssim 200\arcsec$ (i.e., red dotted box in Figure-\[fig:pflux\](a)). The final sample contains 250 X-ray sources. About 35 of them are CXB sources. For the bright group of X-ray sources, 113 sources are located within $R \lesssim 450\arcsec$ (olive dotted box in Figure-\[fig:pflux\](a)), where $\sim 43$ sources are from the CXB. Assuming that the CXB sources are uniformly distributed across the survey field, we corrected the cumulative distribution of X-ray source samples for CXB contamination, following the Monte Carlo procedure described in @grindlay2002. A total of 1000 bootstrap resamplings were generated for each group of X-ray sources. For each of the bootstrap sample, we simulated the CXB sources using a Poisson distribution with a mean number of 35 (or 43). The CXB sources were set to have a uniform spatial distribution, and the closest actual sources to these positions were removed from the bootstrap sample. The average of the 1000 background-corrected sample distributions was then adopted as the best estimate for the corrected cumulative distribution. Following the procedures presented by @ferraro1993 and @ferraro2004, we first plot the cumulative distribution of X-ray sources as a function of $R$ in the upper panels of Figure-\[fig:distribution\]. The faint and bright group of X-ray sources are shown as red and olive step lines, while the cumulative distributions of the reference normal stars are displayed as blue lines. Apparently, the X-ray sources are more concentrated than the normal stars in the central region, and less concentrated in the outer region. This feature is better illustrated in the middle panels of Figure-\[fig:distribution\], in which the two sub-populations of each group of X-ray sources are shown separately. These distribution profiles are similar to that of BSS (right panels of Figure-\[fig:distribution\]), which suggests a universal mass segregation effect for X-ray sources and BSS in 47 Tuc. As suggested by @ferraro1993, the mass segregation effect of X-ray sources in GCs can also be studied by using the number of X-ray sources normalized to the integrated cluster light. 
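Before turning to that, the Monte Carlo background correction applied to the cumulative distributions above can be sketched as follows (Python/NumPy; the input radii are simulated placeholders rather than our catalog positions):

```python
import numpy as np

rng = np.random.default_rng(0)

def cxb_corrected_cdf(radii, r_max, n_cxb_mean, grid, n_boot=1000):
    """Average of n_boot bootstrap resamplings in which a Poisson-distributed
    number of CXB interlopers, placed uniformly in projected area, is drawn
    and, for each interloper, the closest actual source is removed
    (following the procedure of @grindlay2002)."""
    cdfs = np.zeros((n_boot, grid.size))
    for b in range(n_boot):
        sample = list(rng.choice(radii, size=radii.size, replace=True))
        n_cxb = rng.poisson(n_cxb_mean)
        # uniform surface density => P(R < r) proportional to r^2
        r_cxb = r_max * np.sqrt(rng.random(n_cxb))
        for r_bkg in r_cxb:
            if sample:
                sample.pop(int(np.argmin(np.abs(np.asarray(sample) - r_bkg))))
        sample = np.sort(sample)
        cdfs[b] = np.searchsorted(sample, grid, side="right") / max(len(sample), 1)
    return cdfs.mean(axis=0)

# Placeholder data: 250 projected radii (arcsec) within R = 200 arcsec and
# an expected CXB contamination of 35 sources, as for the faint sample.
radii = 200.0 * np.sqrt(rng.random(250))
grid = np.linspace(0.0, 200.0, 101)
print(cxb_corrected_cdf(radii, 200.0, 35, grid)[-1])   # 1.0 by construction
```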
In this case the survey area is divided into a set of concentric annuli, and the number of X-ray sources ($N_{X}$) in each annulus is counted and then normalized to the total light emitted in that annulus. In order to correct the contamination (i.e., $N_{B}$) of CXB sources, We modified the specific frequency equation of @ferraro1993 to $$R_{X}=\frac{(N_{i}/N_{t})}{(L_{i}/L_{t})},$$ where $N_{i}=N_{X}-N_{B}$ ($N_{t}=\Sigma_{i} N_{i}$) is the background subtracted number of X-ray sources within each annulus (survey area), and $L_{i}$ ($L_{t}$) is the corresponding V-band luminosity from the annulus (survey area). Following the method used by @ferraro1993, we utilized the King model to estimate the integrated light ratio (i.e., $L_{i}/L_{t}$) for each annulus, and assigned a nominal error of 5% for the King model estimation. By assuming a Poisson error ($\delta N_{i}=\sqrt{N_{i}}$) for X-ray source counts, we calculated the error of $R_{X}$ with the formula $$\sigma_{R}=(\beta\sigma_{b}+\sigma_{a})/b,$$ where $\beta=a/b$, $\sigma_{a}$ and $\sigma_{b}$ is the error of $a$ and $b$, respectively. We listed the data and results for each annulus in Table-\[tbl:spec\_freq\]. In the bottom panels of Figure-\[fig:distribution\], we plot the specific frequency of X-ray sources as a function of $R$. The faint and bright group of X-ray sources are presented as red and olive dots, respectively. For comparison, we also plot a blue dashed horizontal line (i.e., $R_{X}=1$) in the bottom panels of Figure-\[fig:distribution\], which describes a uniform distribution of X-ray sources with respect to the integrated light of cluster. As shown in the figure, the specific frequency of X-ray source ($R_{X}$) reaches its maximum at the cluster center, decreases to the minimum at the distribution dips and then rises again. Compared with the dashed horizontal line, the distribution profile of $R_{X}$ suggests that the mass segregation is efficient within the distribution dips and many X-ray sources have been drifted into the cluster core, leading to an over-abundance of X-ray sources at the center of 47 Tuc. Beyond the distribution dips, mass segregation is inefficient, and lots of X-ray sources remain a large orbit from the cluster center. As a result, an annulus region devoid of X-ray sources is formed at the distribution dips. For comparison, we also plot the cumulative and specific frequency distributions of BSS (orange lines and symbols) in the right panels of Figure-\[fig:distribution\]. The data of BSS in 47 Tuc were adopted from @ferraro2004, and only BSS with $0<R<700\arcsec$ have been illustrated in Figure-\[fig:distribution\]. We note that the locations and widths (i.e., range of $R$ with $R_{X}\lesssim 1.0$) of the distribution dips are $R \sim 100\arcsec$ and $\Delta R \sim 100\arcsec$ for the faint group of X-ray sources, $R \sim 170\arcsec$ and $\Delta R \sim 300\arcsec$ for the bright group of X-ray sources, $R \sim 200\arcsec$ and $\Delta R \sim 500\arcsec$ for the BSS, respectively. Discussion ========== Mass Segregation of X-ray Sources and BSS in 47 Tuc --------------------------------------------------- As described by @ferraro2012, the two-body relaxation is the main process that drives more massive objects (such as binaries and BSS) sedimentation to the cluster center, and modifies an initially flat BSS radial distribution into a bimodal shape. Due to the highest stellar density, mass segregation is thought to take place at cluster center first. 
As time goes on, heavy objects orbiting at larger radii are expected to drift toward the core. As a consequence, the region devoid of these heavy objects propagates outward progressively, and the two-body relaxation timescale at the distribution dip roughly corresponds to the dynamical age of the cluster. The two-body relaxation timescale in GCs can be expressed as [@binney2008]: $$t_{\rm relax}\propto\frac{\sigma^{3}(r)}{G^{2}M\rho(r)\ln\Lambda},$$ where $\rho(r)$ and $\sigma(r)$ are the stellar density and stellar velocity dispersion profiles, respectively, $G$ is the gravitational constant, $\ln\Lambda$ is the Coulomb logarithm, and $M$ is the average mass of the heavy objects. In GCs, the stellar density profile $\rho(r)$ decreases dramatically outside the cluster core, which leads to a steep increase of the two-body relaxation timescale from the cluster center to the halo[^7]. For 47 Tuc, the two-body relaxation time is $t_{rh}\approx 3.55\, \rm Gyr$ at the half-light radius $r_h=3.17\arcmin$ [@harris1996], which is much smaller than the age ($t=13.06 \, \rm Gyr$, [@forbes2010]) of this cluster. Therefore, a significant mass segregation effect is expected in this cluster. In Figure-\[fig:distribution\], we show that the distribution dips of the X-ray sources and the BSS are different in 47 Tuc. Considering that the initial conditions and the cluster environment (i.e., $\rho(r)$, $\sigma(r)$ and $\ln\Lambda$) should be the same for all of these objects, the mass of the heavy objects becomes the only factor that affects their distribution dips in this cluster. According to Equation (5), massive objects drift to the cluster center faster than less massive ones, and their radial distribution dip propagates outward further. If this is the case, Figure-\[fig:distribution\] suggests that the BSS in 47 Tuc should be more massive than the X-ray sources, and the bright X-ray sources should be more massive than the faint X-ray sources. Thus, estimating the average mass of each group of heavy objects helps to clarify this issue. In GCs, the radial distribution of heavy objects can be used to estimate their average mass. For example, by analyzing the distribution of the projected radial distances of LMXBs from cluster centers, @grindlay1984 estimated the typical mass of LMXBs to be $\sim 1.5^{+0.4}_{-0.6}\ M_{\odot}$. Their method assumes that the distributions of LMXBs and normal stars are in thermal equilibrium. Therefore, LMXBs will be more centrally concentrated than the normal stars, which allows an estimation of the average mass ratio ($q=M_{X}/M_{\ast}$) between these two groups of objects. @grindlay2002 presented a maximum likelihood procedure for fitting “generalized King models” to the radial distribution of heavy objects in clusters. In this model, the projected surface density profile of heavy objects takes the form (see also [@heinke2005]) $$S(r)=S_{0}{\biggl[1+\biggl(\frac{r}{r_{c}}\biggr)^{2}\biggr]}^{(1-3q)/2},$$ where $r_{c}$ is the cluster core radius, $S_{0}$ is the normalization constant, and $q=M_{X}/M_{\ast}$ is the average mass ratio of the heavy objects over the reference stars. As suggested previously, X-ray sources located within the distribution dips are dynamically relaxed, hence energy equilibrium can be established between them and the normal stars. This is the presumption for estimating the mass of heavy objects from their radial distributions.
Beyond the distribution dips, the local two-body relaxation timescale of each group of heavy objects could be much larger than the cluster dynamical age, thus no energy equilibrium can be established and we have to exclude these sources from our fitting samples. Finally, in order to better illustrate and compare the radial distribution of each group of heavy objects, we further constrain the “generalized King model” fitting region to $R<100\arcsec$ for all selected sources in Figure-\[fig:distribution\]. In Figure-\[fig:fit\], we plot the CXB-corrected radial distribution of each group of heavy objects as step lines. Among all the considered heavy objects, the BSS (orange) show the highest degree of concentration, followed by the bright (olive) and faint (red) groups of X-ray sources. We applied the K-S test to these distributions to check the statistical significance of the differences. The test yields that the BSS and bright X-ray sources are different at the $15.5\%$ confidence level, while the bright and faint X-ray sources, and the BSS and faint X-ray sources, are different at the $97.9\%$ and $99.3\%$ confidence levels, respectively. The distributions of the BSS and the bright X-ray sources are similar, but they belong to different populations in GCs, thus the lower probability of a statistical difference may result from a similar average mass between these two types of objects. We fit these distributions (shown as dashed lines) using a maximum-likelihood algorithm, which gives $q=1.73\pm0.19$ for the BSS, $q=1.65\pm0.17$ for the bright X-ray sources, and $q=1.33\pm0.06$ for the faint X-ray sources, respectively. Taking an average mass of $0.875\pm0.025\ M_{\odot}$ for the reference normal stars [@mcLaughlin2006], we derived an average mass of $1.51\pm0.17\ M_{\odot}$, $1.44\pm0.15\ M_{\odot}$ and $1.16\pm0.06\ M_{\odot}$ for the BSS, bright and faint X-ray sources, respectively. Here, the fitting uncertainties are quoted at the 1$\sigma$ level. Our results for the BSS average mass are consistent with those reported in the literature, where the mass of BSS in 47 Tuc ranges from $0.99 M_{\odot}$ [@de2005] to $1.7 M_{\odot}$ [@shara1997; @gilliland1998]. By comparing the velocity dispersion profiles of BSS and main-sequence turn-off stars in GCs, @baldwin2016 estimated an average mass of $1.7^{+0.56}_{-0.35} M_{\odot}$ for BSS in 47 Tuc, which is consistent within errors with our fit results in Figure-\[fig:fit\]. For the X-ray sources, the average mass depends on which type of object they belong to. Quiescent LMXBs in GCs are thought to have an average mass of $1.5^{+0.3}_{-0.2} M_{\odot}$ [@heinke2003], which is consistent with the derived mass of luminous LMXBs in @grindlay1984. For the MSPs, CVs and ABs identified in 47 Tuc, a similar “generalized King model” fit estimated average masses of $1.47\pm0.19 M_{\odot}$, $1.31\pm0.22 M_{\odot}$ and $0.99\pm0.13 M_{\odot}$, respectively [@heinke2005]. More importantly, @heinke2005 found that the majority of bright X-ray sources in 47 Tuc are either CVs or quiescent LMXBs, which are much more luminous than most of the ABs (see Figures 9 and 10 of @heinke2005 for details). Besides, the mass of ABs in 47 Tuc was found to be correlated with their X-ray flux, and ABs brighter in X-rays are expected to be more massive [@edmonds2003b]. This evidence may indicate that the bright X-ray sources are dominated by CVs, quiescent LMXBs or luminous ABs, and thus their average mass is significantly larger than that of the faint group of X-ray sources, which is mainly dominated by faint ABs.
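As an illustration of this procedure, the sketch below (Python/SciPy) fits the “generalized King model” of Equation (6) to a set of projected radii by unbinned maximum likelihood; the radii are simulated placeholders and the simple grid search merely stands in for the full maximum-likelihood machinery of @grindlay2002:

```python
import numpy as np
from scipy.integrate import quad

R_C, R_MAX = 20.84, 100.0              # core radius and fit boundary (arcsec)

def surface_density(r, q):
    """Generalized King profile of Equation (6), up to normalization."""
    return (1.0 + (r / R_C) ** 2) ** ((1.0 - 3.0 * q) / 2.0)

def neg_log_like(q, radii):
    """Unbinned likelihood of projected radii for a mass ratio q = M_X/M_*."""
    norm, _ = quad(lambda r: 2.0 * np.pi * r * surface_density(r, q), 0.0, R_MAX)
    return -np.sum(np.log(2.0 * np.pi * radii * surface_density(radii, q) / norm))

def fit_q(radii, q_grid=np.linspace(1.0, 3.0, 201)):
    return q_grid[int(np.argmin([neg_log_like(q, radii) for q in q_grid]))]

# Placeholder sample: 300 radii drawn from the q = 1.5 profile by rejection.
rng = np.random.default_rng(1)
radii = []
while len(radii) < 300:
    r = R_MAX * rng.random()
    if rng.random() < r * surface_density(r, 1.5) / R_MAX:
        radii.append(r)

q_best = fit_q(np.asarray(radii))
print(q_best, 0.875 * q_best)          # mass ratio and implied mass (M_sun)
```

With the reference-star mass of $0.875\ M_{\odot}$ adopted above, the recovered $q$ translates directly into an average mass for the heavy population.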
Formation of Weak X-ray Sources in GCs: Primordial Binary Evolution versus Dynamical Encounters
-----------------------------------------------------------------------------------------------

Since their discovery, the origin of weak X-ray sources (i.e., mainly CVs, ABs, MSPs and quiescent LMXBs, [@heinke2010]) in GCs has received much attention. In earlier studies, in which the exposures were shallow and only brighter X-ray sources (i.e., with $L_{X}\gtrsim 4\times 10^{31}\rm\,erg\,s^{-1}$, [@pooley2003]) were detected, a strong correlation between the source counts and the stellar encounter rate was found in a dozen GCs, which was taken as evidence of the dynamical formation of X-ray sources in GCs [@pooley2003]. But when more GCs and fainter X-ray sources (i.e., $L_{X}\lesssim 4\times 10^{31}\rm\,erg\,s^{-1}$) are included, this correlation becomes less significant[^8], which suggests that the contribution of the primordial binary channel cannot be ignored in GCs [@pooley2006; @cheng2018a]. The bimodal distribution of X-ray sources in 47 Tuc provides further support for the contribution of primordial binaries to the weak X-ray source population in GCs. Note that the distribution profiles of $R_{X}$ and $R_{BSS}$ are similar in Figure-\[fig:distribution\], which may suggest a universal origin for X-ray sources and BSS in GCs: namely, both are descendants of primordial binaries, and their radial distributions are consistent with the model of mass segregation of binaries in GCs. This prediction can be further tested by searching for X-ray sources and BSS in the outskirts of 47 Tuc, where the stellar density decreases dramatically and dynamical interactions are less important, so that the primordial binary formation channel is mainly responsible for the creation of these sources. Nevertheless, two-body relaxation is not the only process that affects the evolution of primordial binaries in GCs. As binaries migrate into the dense cores of GCs, they may suffer frequent encounters with other stars, which can modify the properties of binaries greatly and abruptly. According to the Hills-Heggie law, the evolution of binaries in GCs depends on their binding energy with respect to the kinetic energy of intruding stars. Stellar encounters involving hard binaries tend to make them harder, whereas encounters involving soft binaries make them softer and eventually disrupt them [@hills1975; @heggie1975; @hut1993]. It is possible for MS binaries to exchange one of their primordial members with an intruding compact object, leading to the formation of CVs or LMXBs in GCs. Alternatively, the interactions between binaries and other stars may accelerate the evolution of MS binaries and transform them into ABs, CVs or BSS. As a result, even for the X-ray sources and BSS detected at the cluster center, many of them are likely formed through the encounters of primordial binaries [@ferraro1997; @mapelli2004; @knigge2009; @cheng2018b]. Recently, observations have revealed two populations of CVs in GCs. The X-ray luminosity distribution of CVs in NGC 6397 [@cohn2010], NGC 6752 [@lugger2017] and 47 Tuc [@sandoval2018] was found to be bimodal. The bright CVs (i.e., young CVs with $P_{orb}\gtrsim P_{gap}$ and $L_{X}\sim 10^{32} \rm erg\,s^{-1}$) were found to be more concentrated than the faint CVs in these clusters, especially in the core-collapsed GCs (e.g., NGC 6397 and NGC 6752). This suggests that the bright CVs were formed recently and may have suffered dynamical encounters in the cluster cores.
The faint CVs, by contrast, are primordial CVs or CVs that were dynamically formed long ago, and they may represent the highly evolved population of CVs (i.e., CVs with $P_{orb}\lesssim P_{gap}$ and $L_{X}\sim 10^{30} \rm erg\,s^{-1}$) in GCs [@cohn2010; @lugger2017]. Compared with the faint CVs, simulations suggest that stellar encounters (such as exchange encounters) are non-negligible in the formation of bright CVs in GCs [@hong2017; @belloni2017], although the mass segregation effect is important in shaping the radial distribution profile of CVs in GCs [@belloni2018]. Therefore, it is likely that both primordial binaries and dynamical interactions are indispensable factors in the formation of weak X-ray sources in GCs. Depending on the relative timescales of binary evolution and strong dynamical encounters, primordial binaries could be transformed into weak X-ray sources in GCs either through normal stellar evolution or through strong dynamical encounters. On the other hand, weak dynamical interactions, such as two-body relaxation, may affect the evolutionary tracks of primordial binaries by driving their sedimentation into the dense cores of GCs, thus enhancing the encounter probability of binaries. We note that such a scenario is not only consistent with the observed mass segregation effect of X-ray sources in 47 Tuc, but also consistent with the recent simulation results of @belloni2018. By modeling a large sample of GCs, @belloni2018 show that the majority of CVs in GCs are descendants of primordial binaries (either through the primordial binary evolution channel or through the dynamical formation channel involving primordial binaries). More importantly, they found that the fraction of CVs inside/outside the half-light radius is modulated by the cluster half-mass relaxation time $T_{\rm rel}$: the longer $T_{\rm rel}$, the higher the fraction of CVs located outside the half-light radius. Besides, bright CVs tend to be more massive and migrate to the cluster center faster, leading to a higher concentration than faint CVs in denser GCs (which also have smaller $T_{\rm rel}$; see Figures 9 and 10 of [@belloni2018] for details).

Summary
=======

In this work, we have presented the most sensitive and full-scale X-ray source catalogue for the Globular Cluster 47 Tuc. By analyzing the radial properties of the X-ray sources in this cluster, our main findings are as follows.

1\. Our catalogue consists of 537 X-ray sources and covers a total area of $\sim 176.7$ arcmin$^{2}$ (i.e., within a radius of $7.5\arcmin$) in 47 Tuc.

2\. The radial specific frequency of X-ray sources in 47 Tuc peaks strongly in the cluster center, rapidly decreases at intermediate radii, and finally rises again at larger radii, with two distribution dips at $R\sim 100\arcsec$ and $R\sim 170\arcsec$ for the faint ($L_{X}\lesssim 5.0\times 10^{30} {\rm\ erg\,s^{-1}}$) and bright ($L_{X}\gtrsim 5.0\times 10^{30} {\rm\ erg\,s^{-1}}$) groups of X-ray sources, respectively. These distribution shapes are similar to those of the Blue Straggler Stars (BSS), whose distribution dip is located at $R\sim 200\arcsec$ [@ferraro2004].

3\. By fitting the radial distribution of each group of heavy objects (i.e., BSS, bright and faint X-ray sources) with a “generalized King model”, we estimated an average mass of $1.51\pm0.17\ M_{\odot}$, $1.44\pm0.15\ M_{\odot}$ and $1.16\pm0.06\ M_{\odot}$ for the BSS, bright and faint X-ray sources, respectively.
These results are qualitatively consistent with the observed distribution dips of BSS and X-ray sources in 47 Tuc, and suggests that mass segregation plays an important role in creating these distribution features. 4\. The distribution profiles of X-ray sources and BSS are consistent with the mass segregation model of binaries in GCs, which suggests that primordial binaries are a significant contributor (at least part of the contribution for the sources in cluster center and nearly a full contribution for sources located in the outskirts of 47 Tuc) to X-ray source population in GCs. [lrrcrrrcrrrc]{} 78 & 2000-03-16 & ACIS-I & 3.87 & 5.97704 & -72.07297 & 190.4 & 0.94 & -0.078 & -0.155 & 0.028 & 1.0014\ 953 & 2000-03-16 & ACIS-I & 31.67 & 5.97695 & -72.07304 & 190.3 & 3.24 & 0.078 & -0.230 & 0.032 & 0.9999\ 954 & 2000-03-16 & ACIS-I & 0.85 & 5.97716 & -72.07304 & 189.9 & 0.54 & 0.269 & -0.564 & -0.088 & 0.9989\ 955 & 2000-03-16 & ACIS-I & 31.67 & 5.97696 & -72.07294 & 189.9 & 3.24 & 0.142 & -0.281 & -0.013 & 0.9989\ 956 & 2000-03-17 & ACIS-I & 4.69 & 5.97700 & -72.07282 & 189.5 & 0.94 & 0.098 & -0.680 & 0.035 & 0.9994\ 2735 & 2002-09-29 & ACIS-S & 65.24 & 6.07523 & -72.08251 & 0.4 & 3.14 & 0.066 & 0.116 & 0.021 & 1.0005\ 2736 & 2002-09-30 & ACIS-S & 65.24 & 6.07516 & -72.08274 & 359.6 & 3.14 & 0.027 & 0.161 & 0.016 & 1.0002\ 2737 & 2002-10-02 & ACIS-S & 65.24 & 6.07491 & -72.08330 & 357.5 & 3.14 & 0.034 & 0.164 & -0.004 & 1.0008\ 2738 & 2002-10-11 & ACIS-S & 68.77 & 6.07322 & -72.08552 & 349.2 & 3.14 & 0.000 & 0.000 & 0.000 & 1.0000\ 3384 & 2002-09-30 & ACIS-S & 5.31 & 6.07515 & -72.08263 & 359.6 & 0.84 & 0.261 & -0.126 & -0.066 & 0.9985\ 3385 & 2002-10-01 & ACIS-S & 5.31 & 6.07498 & -72.08296 & 358.8 & 0.84 & -0.028 & -0.240 & 0.038 & 1.0003\ 3386 & 2002-10-03 & ACIS-S & 5.54 & 6.07480 & -72.08349 & 356.7 & 0.84 & 0.133 & 0.073 & 0.105 & 0.9989\ 3387 & 2002-10-11 & ACIS-S & 5.73 & 6.07299 & -72.08544 & 349.2 & 0.84 & -0.133 & 0.028 & 0.079 & 1.0009\ 15747 & 2014-09-09 & ACIS-S & 50.04 & 6.01653 & -72.07805 & 22.2 & 0.44 & -0.254 & 1.122 & 0.020 & 0.9987\ 15748 & 2014-10-02 & ACIS-S & 16.24 & 6.01984 & -72.07846 & 2.2 & 0.44 & -0.955 & 1.195 & 0.216 & 1.0008\ 16527 & 2014-09-05 & ACIS-S & 40.88 & 6.01654 & -72.07804 & 22.2 & 0.44 & 0.276 & 0.141 & 0.062 & 0.9999\ 16528 & 2015-02-02 & ACIS-S & 40.28 & 6.01851 & -72.08395 & 236.5 & 0.44 & -0.316 & 0.300 & -0.084 & 0.9985\ 16529 & 2014-09-21 & ACIS-S & 24.7 & 6.01989 & -72.07845 & 2.2 & 0.44 & -0.713 & 0.489 & -0.061 & 1.0005\ 17420 & 2014-09-30 & ACIS-S & 9.13 & 6.01989 & -72.07840 & 2.2 & 0.44 & -1.168 & 0.681 & 0.007 & 1.0019\ \[tab:obslog\] [cllccccccccc]{} 1 & 5.666170 & -72.061835 & 0.5 & 402.3 & $<$-5 & $15.3^{+4.5}_{-3.9}$ & $11.5^{+3.8}_{-3.2}$ & 1.08E-6 & 5.45E-7 & 30.83 & 30.31\ 2 & 5.691507 & -72.022617 & 0.3 & 424.7 & $<$-5 & $31.9^{+6.3}_{-5.5}$ & $30.7^{+6.0}_{-5.3}$ & 3.53E-6 & 2.18E-6 & 31.12 & 31.04\ 3 & 5.692831 & -72.021512 & 0.2 & 425.4 & $<$-5 & $46.2^{+7.3}_{-6.7}$ & $37.1^{+6.5}_{-6.0}$ & 4.07E-6 & 2.13E-6 & 31.31 & 30.97\ 4 & 5.720357 & -72.045905 & 0.2 & 359.5 & $<$-5 & $183^{+15}_{-14} $ & $139^{+12}_{-13}$ & 4.37E-6 & 1.98E-6 & 31.28 & 30.9\ 5 & 5.728573 & -72.123989 & 0.6 & 360.8 & -4.5 & $14.1^{+8.5}_{-7.9}$ & $15.4^{+6.1}_{-5.6}$ & 1.53E-7 & 9.92E-8 & 29.90 & 29.38\ 6 & 5.755388 & -72.077648 & 0.4 & 297.4 & $<$-5 & $30.9^{+8.0}_{-7.4}$ & $31.4^{6.8}_{-6.1}$ & 3.78E-7 & 2.22E-7 & 30.30 & 29.78\ 7 & 5.762490 & -72.120313 & 0.4 & 321.3 & $<$-5 & $26.1^{+8.3}_{-7.8}$ & $9.4^{5.0}_{-4.6}$ & 2.81E-7 & 6.02E-8 & 30.18 & 
29.67\ 8 & 5.765549 & -72.128034 & 0.2 & 331.4 & $<$-5 & $134^{+13}_{-13}$ & $98.8^{+10.0}_{-10.6}$ & 1.78E-6 & 7.71E-7 & 30.94 & 30.47\ \[tab:mcat\] [@ll\*[8]{}[c]{}rrr@]{} \ 0-15 & 46 & 0.20 & 45.80 & 0.213 & 0.122 & 1.75 & 0.46\ 15-34 & 64 & 0.81 & 63.19 & 0.294 & 0.198 & 1.48 & 0.36\ 34-51 & 33 & 1.26 & 31.74 & 0.148 & 0.143 & 1.03 & 0.31\ 51-68 & 18 & 1.76 & 16.24 & 0.075 & 0.111 & 0.68 & 0.25\ 68-85 & 12 & 2.27 & 9.73 & 0.045 & 0.089 & 0.51 & 0.22\ 85-107 & 13 & 3.68 & 9.32 & 0.043 & 0.093 & 0.47 & 0.21\ 107-127 & 12 & 4.08 & 7.92 & 0.037 & 0.069 & 0.53 & 0.25\ 127-147 & 13 & 4.78 & 8.22 & 0.038 & 0.058 & 0.66 & 0.31\ 147-168 & 15 & 5.77 & 9.23 & 0.043 & 0.052 & 0.83 & 0.37\ 168-200 & 24 & 10.27& 13.73 & 0.064 & 0.065 & 0.98 & 0.38\ \ 0-30 & 36 & 0.19 & 35.81 & 0.514 & 0.220 & 2.34 & 0.79\ 30-60 & 14 & 0.58 & 13.42 & 0.193 & 0.192 & 1.00 & 0.44\ 60-120 & 11 & 2.31 & 8.69 & 0.125 & 0.217 & 0.57 & 0.29\ 120-220 & 8 & 7.28 & 0.72 & 0.010 & 0.184 & 0.06 & 0.01\ 220-320 & 14 & 11.56& 2.44 & 0.035 & 0.104 & 0.34 & 0.27\ 320-450 & 30 & 21.43& 8.57 & 0.123 & 0.083 & 1.48 & 0.76\ \[tbl:spec\_freq\] We thank the anonymous referee for valuable comments that helped to improve our manuscript. This work is supported by the National Key R&D Program of China No. 2017YFA0402600, the National Science Fundation of China under grants 11525312, 11133001, 11333004 and 11303015. Bahramian, A., Heinke, C.O., Sivakoff, G.R., & Gladstone, J.C. 2013, , 766, 136 Baldwin, A. T., Watkins, L. L., van der Marel, R. P., et al. 2016, , 827, 12 Bhattacharya, S., Heinke, C. O., Chugunov, A. I., et al. 2017, , 472, 3706 Belloni, D., Zorotovic, M., Schreiber, M. R., et al. 2017, , 468, 2429 Belloni, D., Giersz, M., Rivera Sandoval, L. E., Askar, A., & Cieciel[a]{}g, P. 2018, arXiv:1811.04937 Binney, J., & Tremaine, S. 2008, Galactic Dynamics: Second Edition (Princeton University Press) Bogdanov, S., Grindlay, J. E., Heinke, C. O., et al. 2006, , 646, 1104 Broos P. S., Townsley L. K., Feigelson E. D., Getman K. V., Bauer F. E., Garmire G. P., 2010, , 714, 1582 Camilo, F., Rasio, F. A., 2005, Binary Radio Pulsars, ASP Conf. Ser., vol. 328, p. 147, arXiv:astro-ph/0501226 Cheng, Z., Li, Z., Xu, X., & Li, X. 2018, , 858, 33 Cheng, Z., Li, Z., Xu, X., et al. 2018, , 869, 52 Clark, G. W. 1975, , 199, L143 Cohn, H. N., Lugger, P. M., Couch, S. M., et al. 2010, , 722, 20 De Marco, O., Shara, M. M., Zurek, D., et al. 2005, , 632, 894 Edmonds, P. D., Heinke, C. O., Grindlay, J. E., & Gilliland, R. L. 2002, , 564, 17 Edmonds, P. D., Gilliland, R. L., Heinke, C. O., & Grindlay, J. E. 2003a, , 569, 1177 Edmonds, P. D., Gilliland, R. L., Heinke, C. O., & Grindlay, J. E. 2003b, , 596, 1197 Ferraro, F. R., Pecci, F. F., et al. 1993, , 106, 2324 Ferraro, F. R., et al. 1997, , 324, 915 Ferraro, F. R., Beccari, G., et al. 2004, , 603, 127 Ferraro, F. R., Lanzoni, B., Dalessandro, E., et al. 2012, , 492, 393 Forbes, D. A., & Bridges, T. 2010, , 404, 1203 Ge, C., Li, Z-Y., Xue, X-J., Gu, Q-S., et al. 2015, , 812, 130 Gilliland, R. L., Bono, G., Edmonds, P. D., et al. 1998, , 507, 818 Grindlay, J. E., Hertz, P., et al. 1984, , 282, 13 Grindlay, J. E., Heinke, C. O., Edmonds, P. D., et al. 2001, Science, 292, 2290 Grindlay, J. E., Camilo, F., Heinke, C. O., et al. 2002, , 581, 470 Harris, W. E. 1996(2010 edition), , 112, 1487 Heggie, D. C., 1975, , 173, 729 Heggie D. C., Hut P., 2003, The Gravitational Million-Body Problem: A Multidisciplinary Approach to Star Cluster Dynamics (Cambridge: Cambridge University Press) Heinke, C. O., Grindlay, J. 
E., Lugger, P. M., et al. 2003, , 598, 501 Heinke, C. O., Grindlay, J. E., Cohn, H. N., et al. 2005, , 625, 796 Heinke, C. O. 2010, American Institute of Physics Conference Series, 1314, 135 Hills, J. G., 1975, , 80, 809 Hong, J., Vesperini, E., Belloni, D., & Giersz, M. 2017, , 464, 2511 Hut, P., McMillan, S., Goodman, J., et al. 1992, , 104, 981 Hut, P., 1993, , 403, 256 Katz, J. I. 1975, , 253, 698 Knigge C., Leigh N., Sills A., 2009, , 457, 288 Kim, M., Wilkes, B. J., Kim, D.-W., et al. 2007, , 659, 29 King, I., 1962, , 67, 471 Li, J., Kastner, J. H., Prigozhin, G. Y., et al. 2004, , 610, 1204 Lugger, P. M., Cohn, H. N., Cool, A. M., Heinke, C. O., & Anderson, J. 2017, , 841, 53 Luo, B., Brandt, W. N., Xue, Y. Q., et al. 2017, , 228, 2 Mapelli M., Sigurdsson S., Colpi M., Ferraro F. R., et al., 2004, , 605, 29 McLaughlin, D. E., Anderson, J., Meylan, G., et al. 2006, , 166, 249 Meylan, G., Heggie, D. C., 1997, , 8, 1 Milone, A. P., Piotto, G., Bedin, L. R., et al. 2012, , 540, 16 Moretti, A., Pagani, C., Cusumano, G., et al. 2009, , 493, 501 Pooley, D., Lewin, W. H. G., Homer L., et al. 2002, , 569, 405 Pooley, D., Lewin, W. H. G., Verbunt, F., et al. 2002, , 573, 184 Pooley, D., et al. 2003, , 591, 131 Pooley, D., & Hut, P. 2006, , 646, 143 Ransom, S. M., 2008, 40 Years of Pulsars: Millisecond Pulsars, Magnetars and More, AIP Conf. Ser., vol. 983, p. 415, arXiv:astro-ph/0710.3626 Rivera Sandoval, L. E., van den Berg, M., Heinke, C. O., et al. 2018, , 475, 4841 Sandage, A. R., 1953, , 58, 61 Shara, M. M., Saffer, R. A., & Livio, M. 1997, , 489, 59 Weisskopf, M. C., Wu, K., Trimble, V., et al. 2007, , 657, 1026 Xue, Y. Q., Luo, B., Brandt, W. N., et al. 2011, , 195, 10 \[lastpage\] [^1]: http://cxc.harvard.edu/ciao [^2]: http://cxc.harvard.edu/ciao/why/acissubpix.html [^3]: http://www2.astro.psu.edu/xray/docs/TARA/ [^4]: Two sets of regions can be constructed with AE in the photometric analysis of point sources: The contours of the local PSF, which is defined with a given enclosed counts fraction ($0<$ECF$<1$) and can be set as the source extraction regions; The “mask regions", which completely cover the sources (the default mask region in AE is chosen to be 1.1 times a radius that encloses 99% of the PSF) and can be used to construct the background regions. [^5]: See Figure 8 of the AE User’s Guide for details [^6]: http://www2.astro.psu.edu/xray/docs/TARA/ae\_users\_guide/procedures/ [^7]: For example, the typical relaxation timescale in cluster core ($t_{rc}$) is about 1-3 orders of magnitude lower than the Hubble timescale ($t_{H}$), while typical relaxation timescale at half-light radius ($t_{rh}$) is slightly less than or comparable to $t_{H}$ [@harris1996]. [^8]: For example, see the fainter group of X-ray sources in the bottom panels of Figure-2 in @pooley2006, or the Section 3 of @cheng2018a for details.
--- abstract: 'Proportional representation (PR) is one of the central principles in voting. Elegant rules with compelling PR axiomatic properties have the potential to be adopted for several important collective decision-making settings. I survey some recent ideas and results on axioms and rules for proportional representation in committee voting.' author: - Haris Aziz title: | Proportional Representation in\ Approval-based Committee Voting and Beyond[^1] --- **JEL Classification**: C70 $\cdot$ D61 $\cdot$ D63 $\cdot$ D71 Introduction ============ When making collective decisions, fairness entails that the decision is made in accordance with the will and desire of the people and that each person has equal influence. A natural principle that captures this requirement is proportional representation: the bigger a group, the more representation it should have. This general principle of proportionality is engrained in just societies.[^2] We discuss the issue of proportional representation in the context of approval-based committee voting (also called multi-winner voting with approvals). The setting involves a set $N=\{1,\ldots, n\}$ of voters and a set $C$ of candidates. Each voter $i\in N$ submits an approval ballot $A_i\subseteq C$, which represents the subset of candidates that she approves. We refer to the list ${{\vec{A}}}= (A_1,\ldots, A_n)$ of approval ballots as the [*ballot profile*]{}. Based on the approvals of the voters, the goal is to select a target number $k$ of candidates. The setting has inspired a number of natural voting rules (see e.g. the survey by @Kilg10a). Many of the voting rules are designed with the goal of achieving some form of just representation. However, it is not entirely obvious what axiom captures proportional representation requirements. How should proportional representation be defined in approval-based committee voting? We first note that it can be defined in a straightforward manner for a restricted version of approval-based committee voting that we will refer to as ‘*polarized*’. In a polarized profile, voters can be partitioned into disjoint groups such that the approvals of voters in the same group coincide, and approvals from two different groups do not intersect. For polarized preferences, the proportional representation requirement can easily be formalized as follows: for any group $G$ that approves candidates in set $C_G$, we can require that at least $\min({{\lfloor k\frac{|G|}{n} \rfloor }},|C_G|)$ candidates from $C_G$ are selected. Not only can the requirement be easily defined, it can also be achieved by the following rule: > ***[$\mathit{GroupSeqPAV}$]{}**: Sequentially select candidates to be placed in the committee. In each round, consider the group $G$ that has the largest value $|G|/(r(G)+1)$ (where $r(G)$ is the current number of representatives of $G$) and still has an approved candidate $c$ that is not yet selected. Place $c$ in the committee. Repeat until $k$ candidates are selected.* Polarized preferences are typically prevalent in ‘closed list’ party elections in which voters vote for parties and each party gets seats in proportion to its votes [@Jans16a]. These seats are then filled up by representatives from the corresponding party. If each party has a sufficient number of representatives, then the problem reduces to giving each party at least the integer part of the target quota and then *apportioning* the remaining seats [see e.g., @BLS16a; @SFF16a]. 
There are several ways to do this and there is a substantial body of work on proportional representation via apportionment [see e.g., @BaYo01a; @Puke14a; @PeTe90a]. In this restricted setting that models ‘closed list’ party elections, [$\mathit{GroupSeqPAV}$]{}corresponds to the D’Hondt method (also called the Jefferson method) for apportionment. Representation becomes more challenging to formalize when voters in a group may approve candidates approved by voters outside the group. The challenge stems from the fact that the approval-based committee voting setting does not even assume pre-specified groups since each individual voter is free to approve any subset of candidates. In what follows we describe recent work on formalising proportional representation axioms that are referred to as justified representation axioms. Justified Representation Properties =================================== We present justified representation axioms that are all based on the proportionality representation principle. The idea behind all the axioms is that a cohesive and large enough group of voters deserves sufficient number of approved candidates in the winning set of candidates. Given a ballot profile ${{\vec{A}}}= (A_1, \dots, A_n)$ over a candidate set $C$ and a target committee size $k$, we say that a set of candidates $W$ of size $|W|=k$ [*satisfies justified representation for $({{\vec{A}}}, k)$*]{} if $\forall X\subseteq N: |X|\geq \frac{n}{k} \text{ and } |\cap_{i\in X}A_i|\geq 1 \implies (|W\cap (\cup_{i\in X}A_i)|\geq 1).$ [$\mathit{JR}$]{}was proposed by @ABC+15a [@ABC+16a]. The rationale behind [$\mathit{JR}$]{}is that if $k$ candidates are to be selected, then, intuitively, each group of $\frac{n}{k}$ voters “deserves” a representative. Therefore, a set of $\frac{n}{k}$ voters that have at least one candidate in common should not be completely unrepresented. [$\mathit{JR}$]{}can be strengthened to [$\mathit{PJR}$]{}and [$\mathit{EJR}$]{}. Given a ballot profile $(A_1, \dots, A_n)$ over a candidate set $C$, a target committee size $k$, $k\le m$, and integer $\ell$ we say that a set of candidates $W$, $|W|=k$, [*satisfies $\ell$-proportional justified representation for $({{\vec{A}}}, k)$*]{} if $\forall X\subseteq N: |X|\geq \ell\frac{n}{k} \text{ and } |\cap_{i\in X}A_i|\geq \ell \implies (|W\cap (\cup_{i\in X}A_i)|\geq \ell).$ We say that $W$ [*satisfies proportional justified representation for $({{\vec{A}}}, k)$*]{} if it [satisfies $\ell$-proportional justified representation for $({{\vec{A}}}, k)$]{} and all integers $\ell\leq k$. [$\mathit{PJR}$]{}was formally studied by @SFF+17a. Given a ballot profile $(A_1, \dots, A_n)$ over a candidate set $C$, a target committee size $k$, $k\le m$, we say that a set of candidates $W$, $|W|=k$, [*satisfies $\ell$-extended justified representation for $({{\vec{A}}}, k)$*]{} and integer $\ell$ if $\forall X\subseteq N: |X|\geq \ell\frac{n}{k} \text{ and } |\cap_{i\in X}A_i|\geq \ell \implies (\exists i\in X: |W\cap A_i|\geq \ell).$ We say that $W$ [*satisfies extended justified representation for $({{\vec{A}}}, k)$*]{} if it [satisfies $\ell$-extended justified representation for $({{\vec{A}}}, k)$]{} and all integers $\ell\leq k$. [$\mathit{EJR}$]{}was proposed by @ABC+16a. 
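To make these definitions concrete, the following Python sketch checks whether a given committee satisfies $\ell$-[$\mathit{EJR}$]{}(and hence [$\mathit{JR}$]{}for $\ell=1$) by brute force over voter groups. It is meant purely as an executable restatement of the definitions rather than a practical procedure (the enumeration is exponential in $n$), and all function names are ours.

```python
from itertools import combinations

def satisfies_ell_ejr(ballots, committee, k, ell):
    """ballots: list of sets A_i; committee: set W with |W| = k.
    Checks ell-EJR: every group X with |X| >= ell*n/k and at least ell
    commonly approved candidates must contain a voter with >= ell winners."""
    n = len(ballots)
    min_size = -(-(ell * n) // k)              # ceil(ell*n/k) with integer arithmetic
    for size in range(max(min_size, 1), n + 1):
        for group in combinations(range(n), size):
            common = set.intersection(*[ballots[i] for i in group])
            if len(common) >= ell and all(
                    len(ballots[i] & committee) < ell for i in group):
                return False                   # a cohesive group is underserved
    return True

def satisfies_jr(ballots, committee, k):
    # JR coincides with the ell = 1 case of EJR (and of PJR)
    return satisfies_ell_ejr(ballots, committee, k, ell=1)

def satisfies_ejr(ballots, committee, k):
    return all(satisfies_ell_ejr(ballots, committee, k, ell)
               for ell in range(1, k + 1))
```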
It is easy to observe the following relations: ${\ensuremath{\mathit{EJR}}\xspace}\implies {\ensuremath{\mathit{PJR}}\xspace}\implies {\ensuremath{\mathit{JR}}\xspace}.$ Also note that if we only consider $\ell=1$ in the definitions of [$\mathit{PJR}$]{}, and [$\mathit{EJR}$]{}we get [$\mathit{JR}$]{}. We also observe that for $k=1$, [$\mathit{JR}$]{}, [$\mathit{PJR}$]{}, and [$\mathit{EJR}$]{}are equivalent. Achieving Proportional Representation ===================================== We say that a rule satisfies [$\mathit{JR}$]{}/[$\mathit{PJR}$]{}/[$\mathit{EJR}$]{}if it always returns a committee satisfying the corresponding property. For preferences that are not polarized, the definition of [$\mathit{GroupSeqPAV}$]{}needs to be extended since there are no clear-cut groups for general approval ballots. One such generalisation is called [$\mathit{SeqPAV}$]{}. Let $H$ be a function defined on integers such that $H(p)=0$ for $p=0$ and $H(p)=\sum_{j=1}^p\frac{1}{j}$ otherwise. Let the [$\mathit{PAV}$]{}score of a committee $W$ be $ \sum_{i\in N}H(|W\cap A_i|)$. Then the [$\mathit{SeqPAV}$]{}rule is defined as follows. > **[$\mathit{SeqPAV}$]{}**: Set $W=\emptyset$. Then in round $j$, $j=1, \dots, k$, add a new candidate to $W$ so that the [$\mathit{PAV}$]{}score of $W$ is maximised. [$\mathit{SeqPAV}$]{}was originally proposed by @Thie95a. Although [$\mathit{SeqPAV}$]{}seems like a reasonable extension of [$\mathit{GroupSeqPAV}$]{}, it has been shown that [$\mathit{SeqPAV}$]{}does not even satisfy [$\mathit{JR}$]{} [@ABC+16a]. Incidentally, [$\mathit{SeqPAV}$]{}is not the only rule that may violate [$\mathit{JR}$]{}. @ABC+16a pointed out that several well-known rules that are designed for representation purposes fail to satisfy [$\mathit{JR}$]{}.[^3] Whereas SeqPAV iteratively builds a committee while trying to maximize the [$\mathit{PAV}$]{}score, one could also try to find a committee that globally maximizes the [$\mathit{PAV}$]{}score. Such a rule is popularly known as [$\mathit{PAV}$]{}and was originally proposed by @Thie95a. In contrast to [$\mathit{SeqPAV}$]{}, [$\mathit{PAV}$]{}always returns a committee that satisfies [$\mathit{EJR}$]{} [@ABC+16a] thereby giving a constructive argument for the existence of a committee that satisfies [$\mathit{EJR}$]{}. Although [$\mathit{PAV}$]{}satisfies [$\mathit{EJR}$]{}, it does have some drawbacks. From a computational perspective, finding a [$\mathit{PAV}$]{}outcome is NP-hard [@AGG+15a; @SFL16a]. The computational intractability renders the rule impractical for large scale voting. From an axiomatic perspective, [$\mathit{PAV}$]{}does not satisfy certain desirable axioms such as committee monotonicity.[^4] When [$\mathit{EJR}$]{}was proposed it was not clear whether it can be achieved in polynomial time. In view of this, researchers turned to designing polynomial-time algorithms to achieve the weaker property of [$\mathit{PJR}$]{}. @BFJL16a proved that SeqPhragmén (an algorithm proposed by Swedish mathematician Phragmén in the 19th century) is polynomial-time and returns a committee satisfying [$\mathit{PJR}$]{}. Independently and around the same time as the result by @BFJL16a, @SFF16a presented a different algorithm that finds a [$\mathit{PJR}$]{}committee and also satisfies other desirable monotonicity axioms. Like [$\mathit{SeqPAV}$]{}, both algorithms sequentially build a committee while optimising a corresponding load balancing objective. However the algorithms may not return a committee that satisfies [$\mathit{EJR}$]{}. 
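Before moving on, here is a minimal Python sketch of the [$\mathit{PAV}$]{}score and the greedy [$\mathit{SeqPAV}$]{}rule defined earlier in this section; the names are ours, ties are broken arbitrarily, and the exhaustive [$\mathit{PAV}$]{}rule would instead maximize the same score over all size-$k$ committees, which is what makes it intractable in general. On a polarized profile the greedy step should reduce to the $|G|/(r(G)+1)$ selection of [$\mathit{GroupSeqPAV}$]{}, since every voter in a group with $r$ selected candidates contributes exactly $1/(r+1)$ to the marginal gain.

```python
def pav_score(ballots, committee):
    """PAV score: sum over voters of H(|A_i & W|), with H(p) = 1 + 1/2 + ... + 1/p."""
    harmonic = lambda p: sum(1.0 / j for j in range(1, p + 1))   # H(0) = 0
    return sum(harmonic(len(A & committee)) for A in ballots)

def seq_pav(ballots, candidates, k):
    """SeqPAV: greedily add the candidate whose inclusion maximizes the PAV score.
    Assumes k <= |candidates|; ties are broken arbitrarily by max()."""
    committee = set()
    for _ in range(k):
        best = max((c for c in candidates if c not in committee),
                   key=lambda c: pav_score(ballots, committee | {c}))
        committee.add(best)
    return committee
```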
Recently, three different groups [@AzHu17a; @SLES17a; @SEL17a] have independently and around the same time shown that a committee satisfying [$\mathit{EJR}$]{}can be computed in polynomial time.[^5] Two of the groups [@AzHu17a; @SLES17a] have essentially the same idea of maximizing the [$\mathit{PAV}$]{}score via local search and implementing swaps of candidates. Discussion ========== We focussed on proportional representation under approvals and discussed natural axioms for this purpose. It will be interesting to see how ideas from recent developments can be used to design voting rules that are compelling for proportional representation for dichotomous preferences as well as more general preferences. For example, it will be interesting to design or identify rules that satisfy a strong notion of proportional representation along with other natural axioms such as candidate monotonicity[^6] and committee monotonicity. When considering approvals, [$\mathit{EJR}$]{}can be further strengthened to [$\mathit{CJR}$]{}(core justified representation). Given a ballot profile $(A_1, \dots, A_n)$ over a candidate set $C$, a target committee size $k$, $k\le m$, we say that a set of candidates $W$, $|W|=k$, satisfies *core representation ([$\mathit{CJR}$]{})* if there exist no integer $\ell\geq 1$ and coalition $X\subseteq N$ such that $|X|\geq \ell n/k$ and there is a set $D\subset C$ such that $|D|=\ell$ and $|A_i\cap D| > |A_i\cap W|$ for each $i\in X$. We call such a coalition $X$ a [$\mathit{CJR}$]{}blocking coalition. A core concept equivalent to [$\mathit{CJR}$]{}but formalized in a different way was discussed by @ABC+16a. It is interesting that core stability, one of the central ideas of economic design, is also meaningful in the context of proportional representation. It remains open whether a committee satisfying [$\mathit{CJR}$]{}always exists and whether such a committee can be computed in polynomial time.[^7] Considering that proportional representation for approvals (which capture dichotomous preferences) is a non-trivial task, this leads to the question of how it should be defined in the context of preferences that are not dichotomous. The axioms [$\mathit{JR}$]{}, [$\mathit{PJR}$]{}, and [$\mathit{EJR}$]{}can also be extended to the case where voters have strict or weak orders over candidates. However, for a natural generalisation of [$\mathit{JR}$]{}to the case of linear orders, it turns out that not only may a committee satisfying the property fail to exist, it is also NP-hard to compute [@AEF+17a]. It will be interesting to see if compelling proportional representation axioms can be proposed for general preferences that guide the design and analysis of rules. There is scope for substantial and fruitful research in formalizing and achieving proportional representation for more general or complex voting settings in which simultaneous or sequential decisions are made. Finally, multi-winner voting deserves a thorough research investigation with respect to goals other than proportional representation as well [@FSST17a]. This paper was written as a companion paper to the author's talk at the Dagstuhl Seminar on Voting: Beyond Simple Majorities and Single-Winner Elections (25–30 June 2017). The author is supported by a Julius Career Award. He thanks all of his collaborators on this topic for several insightful discussions. He also thanks Barton Lee for feedback. H. Aziz and S. Huang. Computational complexity of testing proportional justified representation. 
Technical Report arXiv:1612.06476, arXiv.org, 2016. H. Aziz and S. Huang. A polynomial-time algorithm to achieve extended justified representation. Technical Report 1703.10415, arXiv.org, 2017. H. Aziz, M. Brill, V. Conitzer, E. Elkind, R. Freeman, and T. Walsh. Justified representation in approval-based committee voting. In *Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI)*, pages 784–790. AAAI Press, 2015. H. Aziz, S. Gaspers, J. Gudmundsson, S. Mackenzie, N. Mattei, and T. Walsh. Computational aspects of multi-winner approval voting. In *Proceedings of the 14th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS)*, pages 107–115. IFAAMAS, 2015. H. Aziz, M. Brill, V. Conitzer, E. Elkind, R. Freeman, and T. Walsh. Justified representation in approval-based committee voting. *Social Choice and Welfare*, 480 (2):0 461–485, 2017. H. Aziz, E. Elkind, P. Faliszewski, M. Lackner, and P. Skowron:. The [C]{}ondorcet principle for multiwinner elections: From shortlisting to proportionality. Technical Report arXiv:1701.08023, arXiv.org, 2017. M. Balinski and H. P. Young. *Fair Representation: [M]{}eeting the Ideal of One Man, One Vote*. Brookings Institution Press, 2nd edition, 2001. M. Brill, R. Freeman, S. Janson, and M. Lackner. Phragmén’s voting methods and justified representation. In *Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI)*, pages 406–413. AAAI Press, 2017. M. Brill, J.-F. Laslier, and P. Skowron. Multiwinner approval rules as apportionment methods. In *Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI)*, pages 414–420. AAAI Press, 2017. P. Faliszewski, P. Skowron, A. Slinko, and N. Talmon. Multiwinner voting: A new challenge for social choice theory. In U. Endriss, editor, *Trends in Computational Social Choice*, chapter 2. 2017. S. Janson. Phragm[é]{}n’s and [T]{}hiele’s election methods. Technical Report arXiv:1611.08826 \[math.HO\], arXiv.org, 2016. D. M. Kilgour. Approval balloting for multi-winner elections. In J.-F. Laslier and M. R. Sanver, editors, *Handbook on Approval Voting*, chapter 6, pages 105–124. Springer, 2010. J-L. Petit and E. Terouanne. A theory of proportional representation. *SIAM Journal on Discrete Mathematics*, 30 (1):0 116–139, 1990. F. Pukelsheim. *Proportional Representation: Apportionment Methods and Their Applications*. Springer, 2014. L. S[á]{}nchez-Fern[á]{}ndez, N. Fern[á]{}ndez, and L. A. Fisteus. Fully open extensions to the [D]{}’[H]{}ondt method. Technical Report arXiv:1609.05370 \[cs.GT\], arXiv.org, 2016. L. S[á]{}nchez-Fern[á]{}ndez, E. Elkind, and M. Lackner. Committees providing ejr can be computed efficiently. Technical Report arXiv:1704.00356, arXiv.org, 2017. L. S[á]{}nchez-Fern[á]{}ndez, E. Elkind, M. Lackner, N. Fern[á]{}ndez, J. A. Fisteus, P. [Basanta Val]{}, and P. Skowron. Proportional justified representation. In *Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI)*. AAAI Press, 2017. P. Skowron, M. Lackner, E. Elkind, and L. S[á]{}nchez-Fern[á]{}ndez. Optimal average satisfaction and extended justified representation in polynomial time. Technical Report arXiv:1704.00293, arXiv.org, 2017. P. K. Skowron, P. Faliszewski, and J. Lang. Finding a collective set of items: From proportional multirepresentation to group recommendation. *Artificial Intelligence*, 241:0 191–216, 2016. T. N. Thiele. Om flerfoldsvalg. *Oversigt over det Kongelige Danske Videnskabernes Selskabs Forhandlinger*, pages 415–441, 1895. 
[^1]: [^2]: Aristotle said “\[...\] *what the just is-the proportional; the unjust is what violates the proportion*.” (Nicomachean Ethics, Written 350 B.C). [^3]: There is a natural dual version of [$\mathit{SeqPAV}$]{}called *RevSeqPAV* (in which candidates are iteratively deleted from $C$ that leads to minimal decrease in total [$\mathit{PAV}$]{}score) which also violates [$\mathit{JR}$]{}. [^4]: Committee monotonicity requires that for any outcome $W$ of size $k$, there is a possible outcome $W'$ of size $k+1$ such that $W'\supset W$. [^5]: Although a committee satisfying [$\mathit{EJR}$]{}can be computed in polynomial time, testing whether a given committee satisfies a representation property is coNP-complete for both [$\mathit{EJR}$]{} [@AGG+15a; @ABC+16a] and [$\mathit{PJR}$]{} [@AzHu16a]. [^6]: Candidate monotonicity requires that increasing the support for candidate should never make a selected candidate unselected. [^7]: If the definition is strengthened to *strict* core (a set of candidates $W$, $|W|=k$, satisfies *strict core representation ([$\mathit{CJR}$]{})* if there exists no coalition $X\subseteq N$ such that $|X|\geq \ell n/k$ and there is a set $D\subset C$ such that $|D|=\ell$ and $|A_i\cap D| \geq |A_i\cap W|$ for each $i\in X$ and $|A_i\cap D| >|A_i\cap W|$ for some $i\in X$), one can obtain simple examples for which no stable outcome exists.
--- abstract: 'We consider the problem of estimating a vector from its noisy measurements using a prior specified only through a denoising function. Recent work on plug-and-play priors (PnP) and regularization-by-denoising (RED) has shown the state-of-the-art performance of estimators under such priors in a range of imaging tasks. In this work, we develop a new block coordinate RED algorithm that decomposes a large-scale estimation problem into a sequence of updates over a small subset of the unknown variables. We theoretically analyze the convergence of the algorithm and discuss its relationship to the traditional proximal optimization. Our analysis complements and extends recent theoretical results for RED-based estimation methods. We numerically validate our method using several denoiser priors, including those based on convolutional neural network (CNN) denoisers.' author: - | Yu Sun$^\ast$ , Jiaming Liu$^\ast$ ,\ and Ulugbek S. Kamilov, [^1] [^2] [^3] [^4] [^5] title: Block Coordinate Regularization by Denoising --- [Sun, Liu, and Kamilov]{} **EDICS: CIF-OBI, CIF-SBI, IMT-SIM** Introduction ============ Problems involving estimation of an unknown vector $\xbm \in \R^n$ from a set of noisy measurements $\ybm \in \R^m$ are important in many areas, including computational imaging, machine learning, and compressive sensing. Consider the scenario in Fig. \[Fig:Setting\], where a vector $\xbm \sim p_\xbm$ passes through the measurement channel $p_{\ybm | \xbm}$ to produce the measurement vector $\ybm$. When the estimation problem is ill-posed, it becomes essential to include the prior $p_\xbm$ in the estimation process. However, in high-dimensional settings, it is difficult to directly obtain the true prior $p_\xbm$ for certain signals (such as natural images) and one is hence restricted to various indirect sources of prior information on $\xbm$. This paper considers the cases where the prior information on $\xbm$ is specified only via a denoising function, $\Dsf: \R^n \rightarrow \R^n$, designed for the removal of additive white Gaussian noise (AWGN). ![The estimation problem considered in this work. The vector $\xbm \in \R^n$, with a prior $p_\xbm(\xbm)$, passes through the measurement channel $p_{\ybm|\xbm}(\ybm|\xbm)$ to result in the measurements $\ybm \in \R^m$. The estimation algorithm $f_\Dsf(\ybm)$ does not have a direct access to the prior, but can rely on a denoising function $\Dsf: \R^n \rightarrow \R^n$, specifically designed for the removal of additive white Gaussian noise (AWGN). We propose block coordinate RED as a scalable algorithm for obtaining $\xbm$ given $\ybm$ and $\Dsf$.[]{data-label="Fig:Setting"}](figures/overview){width="8.5cm"} There has been considerable recent interest in leveraging denoisers as priors for the recovery of $\xbm$. One popular strategy, known as plug-and-play priors (PnP) [@Venkatakrishnan.etal2013], extends traditional proximal optimization [@Parikh.Boyd2014] by replacing the proximal operator with a general off-the-shelf denoiser. It has been shown that the combination of proximal algorithms with advanced denoisers, such as BM3D [@Dabov.etal2007] or DnCNN [@Zhang.etal2017], leads to the state-of-the-art performance for various imaging problems [@Danielyan.etal2012; @Chan.etal2016; @Sreehari.etal2016; @Ono2017; @Kamilov.etal2017; @Meinhardt.etal2017; @Zhang.etal2017a; @Buzzard.etal2017; @Sun.etal2018a; @Teodoro.etal2019; @Ryu.etal2019]. 
A similar strategy has also been adopted in the context of a related class of algorithms known as approximate message passing (AMP) [@Tan.etal2015; @Metzler.etal2016; @Metzler.etal2016a; @Fletcher.etal2018]. Regularization-by-denoising (RED) [@Romano.etal2017], and the closely related deep mean-shift priors [@Bigdeli.etal2017], represent an alternative, in which the denoiser is used to specify an explicit regularizer that has a simple gradient. More recent work has clarified the existence of explicit RED regularizers [@Reehorst.Schniter2019], demonstrated its excellent performance on phase retrieval [@Metzler.etal2018], and further boosted its performance in combination with a deep image prior [@Mataev.etal2019]. In short, the use of advanced denoisers has proven to be essential for achieving the state-of-the-art results in many contexts. However, solving the corresponding estimation problem is still a significant computational challenge, especially in the context of high-dimensional vectors $\xbm$, typical in modern applications. In this work, we extend the current family of RED algorithms by introducing a new *block coordinate RED (BC-RED)* algorithm. The algorithm relies on random partial updates on $\xbm$, which makes it scalable to vectors that would otherwise be prohibitively large for direct processing. Additionally, as we shall see, the overall computational complexity of BC-RED can sometimes be lower than corresponding methods operating on the full vector. This behavior is consistent with the traditional coordinate descent methods that can outperform their full gradient counterparts by being able to better reuse local updates and take larger steps [@Tseng2001; @Nesterov2012; @Beck.Tetruashvili2013; @Wright2015; @Fercoq.Gramfort2018]. We present two theoretical results related to BC-RED. We first theoretically characterize the convergence of the algorithm under a set of transparent assumptions on the data-fidelity and the denoiser. Our analysis complements the recent theoretical analysis of full-gradient RED algorithms in [@Reehorst.Schniter2019] by considering block-coordinate updates and establishing the explicit worst-case convergence rate. Our second result establishes backward compatibility of BC-RED with the traditional proximal optimization. We show that when the denoiser corresponds to a proximal operator, BC-RED can be interpreted as an approximate MAP estimator, whose approximation error can be made arbitrarily small. To the best of our knowledge, this explicit link with proximal optimization is missing in the current literature on RED. BC-RED thus provides a flexible, scalable, and theoretically sound algorithm applicable to a wide variety of large-scale estimation problems. We demonstrate BC-RED on image recovery from linear measurements using several denoising priors, including those based on convolutional neural network (CNN) denoisers. A preliminary version of this work has appeared in [@Sun.etal2019b]. The current paper contains all the proofs, more detailed descriptions and additional simulations. Background ========== It is common to formulate the estimation in Figure \[Fig:Setting\] as an optimization problem $$\label{Eq:RegMin} \xbmhat = \argmin_{\xbm \in \R^n} f(\xbm) \quad\textrm{with}\quad f(\xbm) = g(\xbm) + h(\xbm),$$ where $g$ is the data-fidelity term and $h$ is the regularizer. 
For example, the maximum a posteriori probability (MAP) estimator is obtained by setting $$g(\xbm) = -\log(p_{\ybm|\xbm}(\ybm | \xbm)) \quad\textrm{and}\quad h(\xbm) = -\log(p_{\xbm}(\xbm)),$$ where $p_{\ybm|\xbm}$ is the likelihood that depends on $\ybm$ and $p_\xbm$ is the prior. One of the most popular data-fidelity terms is least-squares $g(\xbm) = \frac{1}{2}\|\ybm-\Abm\xbm\|_2^2$, which assumes a linear measurement model under AWGN. Similarly, one of the most popular regularizers is based on a sparsity-promoting penalty $h(\xbm) = \tau \|\Dbm\xbm\|_1$, where $\Dbm$ is a linear transform and $\tau > 0$ is the regularization parameter [@Rudin.etal1992; @Tibshirani1996; @Candes.etal2006; @Donoho2006]. Many widely used regularizers, including the ones based on the $\ell_1$-norm, are nondifferentiable. Proximal algorithms [@Parikh.Boyd2014], such as the proximal-gradient method (PGM) [@Figueiredo.Nowak2003; @Daubechies.etal2004; @Bect.etal2004; @Beck.Teboulle2009] and alternating direction method of multipliers (ADMM) [@Eckstein.Bertsekas1992; @Afonso.etal2010; @Ng.etal2010; @Boyd.etal2011], are a class of optimization methods that can circumvent the need to differentiate nonsmooth regularizers by using the proximal operator $$\label{Eq:ProximalOperator} \prox_{\mu h}(\zbm) \defn \argmin_{\xbm \in \R^n} \left\{\frac{1}{2}\|\xbm-\zbm\|_2^2 + \mu h(\xbm)\right\},\; \mu > 0.$$ The observation that the proximal operator can be interpreted as the MAP denoiser for AWGN has prompted the development of PnP [@Venkatakrishnan.etal2013], where the proximal operator $\prox_{\mu h}(\cdot)$, within ADMM or PGM, is replaced with a more general denoising function $\Dsf(\cdot)$. Consider the following alternative to PnP that also relies on a denoising function [@Bigdeli.etal2017; @Romano.etal2017] $$\begin{aligned} \label{Eq:REDGM} &\xbm^t \leftarrow \xbm^{t-1} - \gamma \left(\nabla g(\xbm^{t-1})+\Hsf(\xbm^{t-1})\right) \nonumber\\ &\text{where}\quad \Hsf(\xbm) \defn \tau(\xbm - \Dsf(\xbm)), \quad \tau >0.\end{aligned}$$ Under some conditions on the denoiser, it is possible to relate $\Hsf(\cdot)$ in  to some explicit regularization function $h$. For example, when the denoiser is locally homogeneous and has a symmetric Jacobian [@Romano.etal2017; @Reehorst.Schniter2019], the operator $\Hsf(\cdot)$ corresponds to the gradient of the following function $$\label{Eq:REDReg} h(\xbm) = \frac{\tau}{2}\xbm^\Tsf(\xbm-\Dsf(\xbm)).$$ On the other hand, when the denoiser corresponds to the minimum mean squared error (MMSE) estimator $\Dsf(\zbm) = \E[\xbm | \zbm]$ for the AWGN denoising problem [@Bigdeli.etal2017; @Reehorst.Schniter2019], $\zbm = \xbm + \ebm$, with $\xbm \sim p_\xbm(\xbm)$ and $\ebm \sim \Ncal(\zerobm, \sigma^2\Ibm)$, the operator $\Hsf(\cdot)$ corresponds to the gradient of $$\label{Eq:DMSP} h(\xbm) = -\tau\sigma^2 \log(p_\zbm(\xbm)),$$ where $$p_\zbm(\xbm) = (p_\xbm \ast p_\ebm)(\xbm) = \int_{\R^n} p_\xbm(\zbm) \phi_\sigma(\xbm-\zbm) \dsf \zbm, \nonumber$$ where $\phi_\sigma$ is the Gaussian probability density function of variance $\sigma^2$ and $\ast$ denotes convolution. In this paper, we will use the term RED to denote *all* methods seeking the fixed points of . 
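The correspondence between $\Hsf(\xbm) = \tau(\xbm - \Dsf(\xbm))$ and an explicit regularizer is easy to verify numerically in a toy case. The Python sketch below (our own illustration, not taken from the original papers) uses a linear denoiser $\Dsf(\xbm) = W\xbm$ with a symmetric matrix $W$; such a denoiser is homogeneous and has a symmetric Jacobian, so the gradient of $h(\xbm) = \frac{\tau}{2}\xbm^\Tsf(\xbm - \Dsf(\xbm))$ should match $\Hsf(\xbm)$ exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 6, 1.0

# Toy linear denoiser D(x) = W x with symmetric W, rescaled to be nonexpansive.
B = rng.standard_normal((n, n))
W = 0.5 * (B + B.T)
W /= np.abs(np.linalg.eigvalsh(W)).max() + 1e-3

D = lambda x: W @ x
h = lambda x: 0.5 * tau * x @ (x - D(x))     # explicit RED regularizer
H = lambda x: tau * (x - D(x))               # candidate gradient field

x = rng.standard_normal(n)
eps = 1e-6
num_grad = np.array([(h(x + eps * e) - h(x - eps * e)) / (2 * eps)
                     for e in np.eye(n)])
print(np.allclose(num_grad, H(x), atol=1e-5))   # expected: True
```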
The key benefits of the RED methods [@Romano.etal2017; @Bigdeli.etal2017; @Metzler.etal2018; @Reehorst.Schniter2019; @Mataev.etal2019] are their explicit separation of the forward model from the prior, their ability to accommodate powerful denoisers (such as the ones based on CNNs) without differentiating them, and their state-of-the-art performance on a number of imaging tasks. The next section further extends the scalability of RED by designing a new block coordinate RED algorithm. Block Coordinate RED {#Sec:Algorithm} ==================== All the current RED algorithms operate on vectors in $\R^n$. We propose BC-RED, shown in Algorithm \[Alg:BCRED\], to allow for partial randomized updates on $\xbm$. Consider the decomposition of $\R^n$ into $b \geq 1$ subspaces $$\R^n = \R^{n_1} \times \R^{n_2} \times \cdots \times \R^{n_b}\quad\text{with}\quad n = n_1 + n_2 + \cdots + n_b.$$ For each $i \in \{1, \dots, b\}$, we define the matrix $\Usf_i: \R^{n_i} \rightarrow \R^n$ that injects a vector in $\R^{n_i}$ into $\R^n$ and its transpose $\Usf_i^\Tsf$ that extracts the $i$th block from a vector in $\R^n$. Then, for any $\xbm = (\xbm_1, \dots, \xbm_b) \in \R^n$ $$\label{Eq:SubspaceDecomposition} \xbm = \sum_{i = 1}^b \Usf_i \xbm_i \quad\text{with}\quad \xbm_i = \Usf_i^\Tsf\xbm \in \R^{n_i}, \; i = 1, \dots, b$$ which is equivalent to $\sum_{i = 1}^b \Usf_i \Usf_i^\Tsf = \Isf$. Note that  directly implies the norm preservation $\|\xbm\|_2^2 = \|\xbm_1\|_2^2 + \cdots + \|\xbm_b\|_2^2$ for any $\xbm \in \R^n$. We are interested in a block-coordinate algorithm that uses only a subset of operator outputs corresponding to coordinates in some block $i \in \{1, \dots, b\}$. Hence, for an operator $\Gsf: \R^n \rightarrow \R^n$, we define the block-coordinate operator $\Gsf_i: \R^n \rightarrow \R^{n_i}$ as $$\Gsf_i(\xbm) \defn [\Gsf(\xbm)]_i = \Usf_i^\Tsf\Gsf(\xbm) \in \R^{n_i}, \quad \xbm \in \R^n.$$ We now introduce the proposed BC-RED algorithm summarized in Algorithm \[Alg:BCRED\]. Note that when $b = 1$, we have $n = n_1$ and $\Usf_1 = \Usf_1^\Tsf = \Isf$. Hence, the theoretical analysis in this paper is also applicable to the full-gradient RED algorithm in . **input:** initial value $\xbm^0 \in \R^n$, parameter $\tau > 0$, and step-size $\gamma > 0$. Choose an index $i_k \in \{1, \dots, b\}$ $\xbm^k \leftarrow \xbm^{k-1} - \gamma \Usf_{i_k} \Gsf_{i_k}(\xbm^{k-1})$\ where $\Gsf_i(\xbm) \defn \Usf_i^\Tsf\Gsf(\xbm)$ with $\Gsf(\xbm) \defn \nabla g(\xbm) + \tau(\xbm-\Dsf(\xbm))$. As with traditional coordinate descent methods (see [@Wright2015] for a review), BC-RED can be implemented using different block selection strategies. The strategy adopted for our theoretical analysis selects block indices $i_k$ as i.i.d. random variables distributed uniformly over $\{1, \dots, b\}$. An alternative is to proceed in epochs of $b$ consecutive iterations, where at the start of each epoch the set $\{1, \dots, b\}$ is reshuffled, and $i_k$ is then selected consecutively from this ordered set. We numerically compare the convergence of both BC-RED variants in Section \[Sec:Simulations\]. BC-RED updates its iterates one randomly picked block at a time using the output of $\Gsf$. 
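A schematic Python rendering of Algorithm \[Alg:BCRED\] for the least-squares data-fidelity $g(\xbm) = \frac{1}{2}\|\ybm - \Abm\xbm\|_2^2$ is given below; index slices play the role of $\Usf_i^\Tsf$ and $\Usf_i$, the denoiser is passed in as a function, and using a single block recovers the full-gradient RED iteration. This is our own sketch rather than the authors' released implementation, and for simplicity it recomputes the residual $\Abm\xbm - \ybm$ at every step instead of caching it.

```python
import numpy as np

def bc_red(A, y, denoiser, blocks, tau, gamma, num_iter, rng=None):
    """Schematic BC-RED for g(x) = 0.5 * ||y - A x||^2.

    blocks  : list of index arrays partitioning {0, ..., n-1}
    denoiser: maps a full-length vector to its denoised version
    gamma   : step size; the convergence analysis suggests gamma <= 1/(Lmax + 2*tau)
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros(A.shape[1])
    for _ in range(num_iter):
        idx = blocks[rng.integers(len(blocks))]      # i.i.d. uniform block choice
        grad_i = A[:, idx].T @ (A @ x - y)           # [grad g(x)]_i
        G_i = grad_i + tau * (x[idx] - denoiser(x)[idx])
        x[idx] -= gamma * G_i                        # x <- x - gamma * U_i G_i(x)
    return x
```

Epoch-style selection would simply replace the random draw with a reshuffled pass over all blocks.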
When the algorithm converges, it converges to the vectors in the zero set of $\Gsf$ $$\begin{aligned} &\Gsf(\xbmast) = \nabla g(\xbmast) + \tau(\xbmast - \Dsf(\xbmast)) = \zerobm \nonumber\\ &\quad\Leftrightarrow\quad \xbmast \in \zer(\Gsf) \defn \{\xbm \in \R^n : \Gsf(\xbm) = \zerobm\}.\end{aligned}$$ Consider the following two sets $$\begin{aligned} &\zer(\nabla g) \defn \{\xbm \in \R^n : \nabla g(\xbm) = \zerobm\} \nonumber\\ \text{and}\quad& \fix(\Dsf) \defn \{\xbm \in \R^n : \xbm = \Dsf(\xbm)\},\end{aligned}$$ where $\zer(\nabla g)$ is the set of all critical points of the data-fidelity and $\fix(\Dsf)$ is the set of all fixed points of the denoiser. Intuitively, the fixed points of $\Dsf$ correspond to all the vectors that are not denoised, and therefore can be interpreted as vectors that are *noise-free* according to the denoiser. Note that if $\xbmast \in \zer(\nabla g)\cap\fix(\Dsf)$, then $\Gsf(\xbmast) = \zerobm$ and $\xbmast$ is one of the solutions of BC-RED. Hence, any vector that is consistent with the data for a convex $g$ and noiseless according to $\Dsf$ is in the solution set. On the other hand, when $\zer(\nabla g)\cap \fix(\Dsf) = \varnothing$, then $\xbmast \in \zer(\Gsf)$ corresponds to a tradeoff between the two sets, explicitly controlled via $\tau > 0$ (see Fig. \[Fig:imageFlow\] in the supplement for an illustration). This explicit control is one of the key differences between RED and PnP. BC-RED benefits from considerable *flexibility* compared to the full-gradient RED. Since each update is restricted to only one block of $\xbm$, the algorithm is suitable for parallel implementations and can deal with problems where the vector $\xbm$ is distributed in space and in time. However, the maximal benefit of BC-RED is achieved when $\Gsf_i$ is efficient to evaluate. Fortunately, it was systematically shown in [@Peng.etal2016] that many operators—common in machine learning, image processing, and compressive sensing—admit *coordinate friendly* updates. For a specific example, consider the least-squares data-fidelity $g$ and a block-wise denoiser $\Dsf$. Define the residual vector $r(\xbm) \defn \Abm\xbm-\ybm$ and consider a single iteration of BC-RED that produces $\xbm^+$ by updating the $i$th block of $\xbm$. Then, the update direction and the residual update can be computed as $$\begin{aligned} &\Gsf_i(\xbm) = \Abm_i^\Tsf r(\xbm) + \tau (\xbm_i - \Dsf(\xbm_i)) \nonumber\\ \text{and}\quad& r(\xbm^+) = r(\xbm) - \gamma \Abm_i \Gsf_i(\xbm),\end{aligned}$$ where $\Abm_i \in \R^{m \times n_i}$ is a submatrix of $\Abm$ consisting of the columns corresponding to the $i$th block. In many problems of practical interest [@Peng.etal2016], the complexity of working with $\Abm_i$ is roughly $b$ times lower than with $\Abm$. Also, many advanced denoisers can be effectively applied on image patches rather than on the full image [@Elad.Aharon2006; @Buades.etal2010; @Zoran.Weiss2011]. Therefore, in such settings, the speed of $b$ iterations of BC-RED is expected to be at least comparable to a single iteration of the full-gradient RED (see also Section \[Sec:ComputationalComplexity\]). Convergence Analysis and Compatibility with Proximal Optimization {#Sec:TheoretcalResults} ================================================================= In this section, we present two theoretical results related to BC-RED. We first establish its convergence to an element of $\zer(\Gsf)$ and then discuss its compatibility with the theory of proximal optimization. 
Fixed Point Convergence of BC-RED --------------------------------- Our analysis requires three assumptions that together serve as sufficient conditions for convergence. \[As:NonemptySet\] The operator $\Gsf$ is such that $\zer(\Gsf) \neq \varnothing$. There is a finite number $R_0$ such that the distance of the initial $\xbm^0 \in \R^n$ to the farthest element of $\zer(\Gsf)$ is bounded, that is $$\max_{\xbmast \in \zer(\Gsf)} \|\xbm^0-\xbmast\|_2 \leq R_0.$$ This assumption is necessary to guarantee convergence and is related to the existence of the minimizers in the literature on traditional coordinate minimization [@Tseng2001; @Nesterov2012; @Beck.Tetruashvili2013; @Wright2015]. The next two assumptions rely on Lipschitz constants along directions specified by specific blocks. We say that $\Gsf_i$ is *block Lipschitz continuous* with constant $\lambda_i > 0$ if $$\begin{aligned} &\|\Gsf_i(\xbm)-\Gsf_i(\ybm)\|_2 \leq \lambda_i \|\hbm_i\|_2, \nonumber\\ \text{where}\quad& \xbm = \ybm + \Usf_i\hbm_i, \; \ybm \in \R^n, \hbm_i \in \R^{n_i}.\end{aligned}$$ When $\lambda_i = 1$, we say that $\Gsf_i$ is *block nonexpansive*. Note that if an operator $\Gsf$ is globally $\lambda$-Lipschitz continuous, then it is straightforward to see that each $\Gsf_i = \Usf_i^\Tsf\Gsf$ is also block $\lambda$-Lipschitz continuous. \[As:DataFitConvexity\] The function $g$ is continuously differentiable and convex. Additionally, for each $i \in \{1, \dots, b\}$ the block gradient $\nabla_i g$ is block Lipschitz continuous with constant $L_i > 0$. We define the largest block Lipschitz constant as $\Lmax \defn \max\{L_1, \dots, L_b\}.$ Let $L > 0$ denote the global Lipschitz constant of $\nabla g$. We always have $\Lmax \leq L$ and, for some $g$, it may even happen that $\Lmax = L/b$ [@Wright2015]. As we shall see, the largest possible step-size $\gamma$ of BC-RED depends on $\Lmax$, while that of the full-gradient RED on $L$. Hence, one natural advantage of BC-RED is that it can often take more aggressive steps compared to the full-gradient RED. \[As:NonexpansiveDen\] The denoiser $\Dsf$ is such that each block denoiser $\Dsf_i$ is block nonexpansive. Since the proximal operator is nonexpansive [@Parikh.Boyd2014], it automatically satisfies this assumption. We revisit this scenario in a greater depth in Section \[Sec:ProximalConvergence\]. We can now establish the following result for BC-RED. \[Thm:ConvThm1\] Run BC-RED for $t \geq 1$ iterations with random i.i.d. block selection under Assumptions \[As:NonemptySet\]-\[As:NonexpansiveDen\] using a fixed step-size $0 < \gamma \leq 1/(\Lmax+2\tau)$. Then, we have $$\begin{aligned} \label{Eq:BCREDConv} \E &\left[\min_{k \in \{1, \dots, t\}} \|\Gsf(\xbm^{k-1})\|_2^2\right] \nonumber\\ &\leq \E\left[\frac{1}{t}\sum_{k = 1}^t \|\Gsf(\xbm^{k-1})\|_2^2\right] \leq \frac{b(\Lmax+2\tau)}{\gamma t}R_0^2.\end{aligned}$$ A proof of the theorem is provided in the supplement. Theorem \[Thm:ConvThm1\] establishes the fixed-point convergence of BC-RED in expectation to $\zer(\Gsf)$ with $O(1/t)$ rate. The proof relies on the monotone operator theory [@Bauschke.Combettes2017; @Ryu.Boyd2016], widely used in the context of convex optimization [@Parikh.Boyd2014], including in the unified analysis of various traditional coordinate descent algorithms [@Peng.etal2016a; @Chow.etal2017]. Note that the theorem does *not* assume the existence of any regularizer $h$, which makes it applicable to denoisers beyond those characterized with explicit functions in  and . 
Since $\Lmax \leq L$, one important implication of Theorem \[Thm:ConvThm1\], is that the worst-case convergence rate (in expectation) of $b$ iterations of BC-RED is better than that of a single iteration of the full-gradient RED (to see this, note that the full-gradient rate is obtained by setting $b = 1$, $\Lmax = L$, and removing the expectation in ). This implies that in *coordinate friendly settings* (as discussed at the end of Section \[Sec:Algorithm\]), the overall computational complexity of BC-RED can be lower than that of the full-gradient RED. This gain is primarily due to two factors: (a) possibility to pick a larger step-size $\gamma = 1/(\Lmax+2\tau)$; (b) immediate reuse of each local block-update when computing the next iterate (the full-gradient RED updates the full vector before computing the next iterate). In the special case of $\Dsf(\xbm) = \xbm - (1/\tau)\nabla h(\xbm)$, for some convex function $h$, BC-RED reduces to the traditional coordinate descent method applied to . Hence, under the assumptions of Theorem \[Thm:ConvThm1\], one can rely on the analysis of traditional randomized coordinate descent methods in [@Wright2015] to obtain $$\label{Eq:CordDesConv} \E\left[f(\xbm^t)\right] - f^\ast \leq \frac{2b}{\gamma t}R_0^2$$ where $f^\ast$ is the minimum value in . A proof of  is provided in the supplement for completeness. Therefore, such denoisers lead to explicit convex RED regularizers and $O(1/t)$ convergence of BC-RED in terms of the objective. However, as discussed in Section \[Sec:ProximalConvergence\], when the denoiser is a proximal operator of some convex $h$, BC-RED is *not* directly solving , but rather its approximation. Finally, note that the analysis in Theorem \[Thm:ConvThm1\] only provides *sufficient conditions* for the convergence of BC-RED. As corroborated by our numerical studies in Section \[Sec:Simulations\], the actual convergence of BC-RED is more general and often holds beyond nonexpansive denoisers. One plausible explanation for this is that such denoisers are *locally nonexpansive* over the set of input vectors used in testing. On the other hand, the recent techniques for spectral-normalization of CNNs [@Miyato.etal2018; @Sedghi.etal2019; @Gouk.etal2018] provide a convenient tool for building *globally nonexpansive* neural denoisers that result in provable convergence of BC-RED. Convergence for Proximal Operators {#Sec:ProximalConvergence} ---------------------------------- One of the limitations of the current RED theory is in its limited backward compatibility with the theory of proximal optimization. For example, as discussed in [@Romano.etal2017] (see section *“Can we mimic any prior?”*), the popular total variation (TV) denoiser [@Rudin.etal1992] cannot be justified with the original RED regularization function . In this section, we show that BC-RED (and hence also the full-gradient RED) can be used to solve  for any convex, closed, and proper function $h$. We do this by establishing a formal link between RED and the concept of Moreau smoothing, widely used in nonsmooth optimization [@Moreau1965; @Rockafellar.Wets1998; @Yu2013]. In particular, we consider the following proximal-operator denoiser $$\begin{aligned} \label{Eq:ProximalDenoiser} \Dsf(\zbm) = \prox_{\frac{1}{\tau} h}(\zbm) = \argmin_{\xbm \in \R^n}\left\{\frac{1}{2}\|\xbm-\zbm\|_2^2 + (1/\tau) h(\xbm)\right\},\end{aligned}$$ where $\tau>0$, $\zbm\in\R^n$, and $h$ is a closed, proper, and convex function [@Parikh.Boyd2014]. 
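A standard concrete instance (our example, not one used in the paper's experiments) is $h(\xbm) = \|\xbm\|_1$, whose proximal operator is soft-thresholding; the sketch below could be passed as the `denoiser` argument of the BC-RED sketch given earlier, in which case the iterates approximate the corresponding $\ell_1$-regularized least-squares solution.

```python
import numpy as np

def prox_l1_denoiser(z, tau):
    """D(z) = prox_{(1/tau) ||.||_1}(z): soft-thresholding at level 1/tau.
    Being a proximal operator, it is nonexpansive, so it fits the setting
    of this section; larger tau means a smaller threshold, i.e. weaker denoising."""
    return np.sign(z) * np.maximum(np.abs(z) - 1.0 / tau, 0.0)
```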
Since the proximal operator is nonexpansive, it is also block nonexpansive, which means that Assumption \[As:NonexpansiveDen\] is automatically satisfied. Our analysis, however, requires an additional assumption using the constant $R_0$ defined in Assumption \[As:NonemptySet\]. \[As:Subgradient\] There is a finite number $G_0$ that bounds the largest subgradient of $h$, that is $$\max\{\|\gbm(\xbm)\|_2 : \gbm(\xbm) \in \partial h(\xbm), \xbm \in \Bcal(\xbm^0, R_0)\} \leq G_0,$$ where $\Bcal(\xbm^0, R_0) \defn \{\xbm \in \R^n : \|\xbm-\xbm^0\|_2 \leq R_0\}$ denotes a ball of radius $R_0$, centered at $\xbm^0$. This assumption on boundedness of the subgradients holds for a large number of regularizers used in practice, including both TV and the $\ell_1$-norm penalties. We can now establish the following result. \[Thm:ProxConv\] Run BC-RED for $t \geq 1$ iterations with random i.i.d. block selection and the denoiser  under Assumptions \[As:NonemptySet\]-\[As:Subgradient\] using a fixed step-size $0 < \gamma \leq 1/(\Lmax+2\tau)$. Then, we have $$\label{Eq:BCREDProx} \E\left[f(\xbm^t)\right] - f^\ast \leq \frac{2b}{\gamma t} R_0^2 + \frac{G_0^2}{2\tau},$$ where the function $f$ is defined in  and $f^\ast$ is its minimum. The theorem is proved in the supplement. It establishes that BC-RED in expectation *approximates* the solution of  with an error bounded by $(G_0^2/(2\tau))$. For example, by setting $\tau = \sqrt{t}$ and ${\gamma = 1/(\Lmax+2\sqrt{t})}$, one obtains the following bound $$\label{Eq:BCREDProx2} \E\left[f(\xbm^t)\right] - f^\ast \leq \frac{1}{\sqrt{t}}\left[2b(\Lmax+2)R_0^2 + G_0^2\right].$$ When $h(\xbm) = -\log(p_\xbm(\xbm))$, the proximal operator corresponds to the MAP denoiser, and the solution of BC-RED corresponds to an *approximate* MAP estimator. This approximation can be made as precise as desired by considering larger values for the parameter $\tau > 0$. Note that this further justifies the RED framework by establishing that it can be used to compute a minimizer of any proper, closed, and convex (but not necessarily differentiable) $h$. Therefore, our analysis strengthens RED by showing that it can accommodate a much larger class of explicit regularization functions, beyond those characterized in  and . Numerical Validation {#Sec:Simulations} ==================== There is a considerable recent interest in using advanced priors in the context of image recovery from underdetermined ($m < n$) and noisy measurements. Recent work [@Romano.etal2017; @Bigdeli.etal2017; @Reehorst.Schniter2019; @Metzler.etal2018; @Mataev.etal2019] suggests significant performance improvements due to advanced denoisers (such as BM3D [@Dabov.etal2007] or DnCNN [@Zhang.etal2017]) over traditional sparsity-driven priors (such as TV [@Rudin.etal1992]). Our goal is to complement these studies with several simulations validating our theoretical analysis and providing additional insights into [BC-RED]{}. The code for our implementation of BC-RED is available through the following link[^6]. We consider inverse problems of form $\ybm = \Abm\xbm + \ebm,$ where ${\ebm \in \R^m}$ is an AWGN vector and ${\Abm \in \R^{m \times n}}$ is a matrix corresponding to either a sparse-view Radon transform, i.i.d. zero-mean Gaussian random matrix of variance $1/m$, or radially subsampled two-dimensional Fourier transform. 
Such matrices are commonly used in the context of computerized tomography (CT) [@Kak.Slaney1988], compressive sensing [@Candes.etal2006; @Donoho2006], and magnetic resonance imaging (MRI) [@Knoll.etal2011], respectively. In all simulations, we set the measurement ratio to be approximately $m/n = 0.5$ with AWGN corresponding to input signal-to-noise ratio (SNR) of 30 dB and 40 dB. The images used correspond to 10 images randomly selected from the NYU fastMRI dataset [@Zbontar.etal2018], resized to be $160 \times 160$ pixels (see Fig. \[Fig:TestImages\] in the supplement). BC-RED is set to work with 16 blocks, each of size $40 \times 40$ pixels. The reconstruction quality is quantified using SNR averaged over all ten test images. In addition to well-studied denoisers, such as TV and BM3D, we design our own CNN denoiser denoted $\DnCNNast$, which is a simplified version of the popular DnCNN denoiser (see Supplement \[Sec:TechnicalDetails\] for details). This simplification reduces the computational complexity of denoising, which is important when running many iterations of BC-RED. Additionally, it makes it easier to control the global Lipschitz constant of the CNN via spectral-normalization [@Sedghi.etal2019]. We train $\DnCNNast$ for the removal of AWGN at four noise levels corresponding to $\sigma \in \{5,10,15,20\}$. For each experiment, we select the denoiser achieving the highest SNR value. Note that the $\sigma$ parameter of BM3D is also fine-tuned for each experiment from the same set $\{5,10,15,20\}$. ![image](figures/convergence){width="80.00000%"} \[Tab:SNR\] [13.3cm]{}[L[90pt]{}C[30pt]{}C[30pt]{}cC[30pt]{}C[30pt]{}cC[30pt]{}C[30pt]{}]{} **Methods** & & & & &\ & **30 dB** & **40 dB** & & **30 dB** & **40 dB** & & **30 dB** & **40 dB**\ **PGM (TV)** & 20.66 & 24.40 & & 26.07 & ***28.42*** & & 28.74 & 29.99\ **U-Net** & ***21.90*** & 21.72 & & 16.37 & 16.40 & & 22.11 & 22.11\ **RED (TV)** & 20.79 & 24.46 & & 25.64 & & & 28.67 & 29.97\ **BC-RED (TV)** & 20.78 & 24.42 & & 25.70 & & & 28.71 & 29.99\ **RED (BM3D)** & & & & 26.46 & 27.82 & & 28.89 & 29.79\ **BC-RED (BM3D)** & & & & 26.50 & 27.88 & & 28.85 & 29.80\ **RED ($\text{DnCNN}^\ast$)** & 20.89 & 24.38 & & & 28.05 & & &\ **BC-RED ($\text{DnCNN}^\ast$)** & 20.88 & 24.42 & & & 28.12 & & &\ Theorem \[Thm:ConvThm1\] establishes the convergence of BC-RED in expectation to an element of $\zer(\Gsf)$. This is illustrated in Fig. \[Fig:convergenceCT\] (left) for the Radon matrix with $30$ dB noise and a nonexpansive $\DnCNNast$ denoiser (see also Fig. \[Fig:ConvergencePlots\] in the supplement). The average value of $\|\Gsf(\xbm^k)\|_2^2/\|\Gsf(\xbm^0)\|_2^2$ is plotted against the iteration number for the full-gradient RED and BC-RED, with $b$ updates of BC-RED (each modifying a single block) represented as one iteration. We numerically tested two block selection rules for BC-RED (*i.i.d.* and *epoch*) and observed that processing in randomized epochs leads to a faster convergence. For reference, the figure also plots the normalized squared norm of the gradient mapping vectors produced by the traditional PGM with TV [@Beck.Teboulle2009a]. The shaded areas indicate the range of values taken over $10$ runs corresponding to each test image. The results highlight the potential of BC-RED to enjoy a better convergence rate compared to the full-gradient RED, with BC-RED (epoch) achieving the accuracy of $10^{-10}$ in 104 iterations, while the full-gradient RED achieves the same accuracy in 190 iterations. 
Theorem \[Thm:ProxConv\] establishes that for proximal-operator denoisers, BC-RED computes an approximate solution to  with an accuracy controlled by the parameter $\tau$. This is illustrated in Fig. \[Fig:convergenceCT\] (right) for the Fourier matrix with $40$ dB noise and the TV-regularized least-squares problem. The average value of ${(f(\xbm^k)-f^\ast)/(f(\xbm^0)-f^\ast)}$ is plotted against the iteration number for BC-RED with ${\tau \in \{0.01,0.1, 1\}}$. The optimal value $f^\ast$ is obtained by running the traditional PGM until convergence. As before, the figure groups $b$ updates of BC-RED as a single iteration. The results are consistent with our theoretical analysis and show that, as $\tau$ increases, BC-RED provides an increasingly accurate solution to the TV problem. On the other hand, since the range of possible values for the step-size $\gamma$ depends on $\tau$, the speed of convergence to $f^\ast$ is also influenced by $\tau$. The benefits of the full-gradient RED algorithms have been well discussed in prior work [@Romano.etal2017; @Bigdeli.etal2017; @Reehorst.Schniter2019; @Metzler.etal2018; @Mataev.etal2019]. Table \[Tab:SNR\] summarizes the average SNR performance of BC-RED in comparison to the full-gradient RED for all three matrix types and several priors. Unlike the full-gradient RED, BC-RED is implemented using block-wise denoisers that work on image patches rather than on the full images. We empirically found that 40-pixel padding on the denoiser input is sufficient for BC-RED to match the performance of the full-gradient RED. The table also includes the results for the traditional PGM with TV [@Beck.Teboulle2009a] and the widely-used end-to-end U-Net approach [@Jin.etal2017a; @Han.etal2017]. The latter first backprojects the measurements into the image domain and then denoises the result using U-Net [@Ronneberger.etal2015]. The model was specifically trained end-to-end for the Radon matrix with 30 dB noise and applied as such to other measurement settings. All the algorithms were run until convergence with hyperparameters optimized for SNR. The $\DnCNNast$ denoiser in the table corresponds to the residual network with a Lipschitz constant of two (see Supplement \[Sec:ArchitectureTraining\] for details). The overall best SNR in the table is highlighted in bold-italic, while the best RED prior is highlighted in light-green. First, note the excellent agreement between BC-RED and the full-gradient RED. This close agreement between the two methods is encouraging as BC-RED relies on block-wise denoising and our analysis does not establish uniqueness of the solution, yet, in practice, both methods seem to yield solutions of nearly identical quality. Second, note that BC-RED and RED provide excellent approximations to PGM-TV solutions. Third, note how (unlike U-Net) BC-RED and RED with $\DnCNNast$ generalize to different measurement models. Finally, no prior seems to be universally good on all measurement settings, which indicates the potential benefit of tailoring specific priors to specific measurement models. ![image](figures/galaxy){width="90.00000%"} Coordinate descent methods are known to be highly beneficial in problems where both $m$ and $n$ are very large, but each measurement depends only on a small subset of the unknowns [@Niu.etal2011]. Fig. \[Fig:galaxyImages\] demonstrates BC-RED in such a large-scale setting by adopting the experimental setup from a recent work [@Farrens.etal2017] (see also Fig. \[Fig:MoreGalaxies\] in the supplement). 
Specifically, we consider the recovery of an $8292 \times 8364$ pixel galaxy image degraded by 597 known point spread functions (PSFs) corresponding to different spatial locations. The natural sparsity of the problem makes it ideal for BC-RED, which is implemented to update $41 \times 41$ pixel blocks in a randomized fashion by only picking areas containing galaxies. The computational complexity of BC-RED is further reduced by considering a simpler variant of $\DnCNNast$ that has only four convolutional layers (see Fig. \[Fig:DnCNNstar\] in the supplement). For comparison, we additionally show the result obtained by using the low-rank recovery method from [@Farrens.etal2017] with all the parameters kept at the values set by the authors. Note that our intent here is not to justify $\DnCNNast$ as a prior for image deblurring, but to demonstrate that BC-RED can indeed be applied to a realistic, nontrivial image recovery task on a large image.

Conclusion and Future Work {#Sec:Conclusion}
==========================

Coordinate descent methods have become increasingly important in optimization for solving large-scale problems arising in data analysis. We have introduced BC-RED as a coordinate descent extension to the current family of RED algorithms and theoretically analyzed its convergence. Preliminary experiments suggest that BC-RED can be an effective tool in large-scale estimation problems arising in image recovery. More experiments are certainly needed to better assess the promise of this approach in various estimation tasks. For future work, we would like to explore accelerated and asynchronous variants of BC-RED to further enhance its performance in parallel settings.

We adopt the monotone operator theory [@Ryu.Boyd2016; @Bauschke.Combettes2017] for a unified analysis of BC-RED. In Supplement \[Sec:Proof1\], we prove the convergence of BC-RED to an element of $\zer(\Gsf)$. In Supplement \[Sec:Proof2\], we prove that for proximal-operator denoisers, BC-RED converges to an approximate solution of . For completeness, in Supplement \[Sec:CoordinateAnalysis\], we discuss the well-known convergence results for traditional coordinate descent [@Tseng2001; @Nesterov2012; @Beck.Tetruashvili2013; @Wright2015; @Fercoq.Gramfort2018]. In Supplement \[Sec:BackgroundMaterial\], we provide the background material used in Supplement \[Sec:Proof1\] and Supplement \[Sec:Proof2\], expressed in a form convenient for block-coordinate analysis. In Supplement \[Sec:TechnicalDetails\], we provide additional technical details omitted from the main paper due to space, such as the details on computational complexity and CNN architectures. In Supplement \[Sec:AdditionalSimulations\], we present additional simulations that were also omitted from the main paper due to space.

Proof of Theorem \[Thm:ConvThm1\] {#Sec:Proof1}
=================================

A fixed-point convergence of averaged operators is well-known under the name of the Krasnosel’skii-Mann theorem (see Section 5.2 in [@Bauschke.Combettes2017]) and was recently applied to the analysis of PnP [@Sun.etal2018a] and several full-gradient RED algorithms in [@Reehorst.Schniter2019]. Our analysis here extends these results to the block-coordinate setting and provides explicit worst-case convergence rates for BC-RED. We consider the following operators $$\Gsf_i = \nabla_i g + \Hsf_i \quad\text{with}\quad \Hsf_i = \tau \Usf_i^\Tsf(\Isf-\Dsf)$$ and proceed in several steps. 1.
Since $\nabla_i g$ is block $L_i$-Lipschitz continuous, it is also block $\Lmax$-Lipschitz continuous. Hence, we know from Proposition \[Prop:BlockCocoer\] in Supplement \[Sec:Convexity\] that it is block $(1/\Lmax)$-cocoercive. Then from Proposition \[Prop:NonexpEquiv\] in Supplement \[Sec:AveragedOp\], we know that the operator ${(\Usf_i^\Tsf-(2/\Lmax)\nabla_i g)}$ is block nonexpansive. 2. From the definition of $\Hsf_i$ and the fact that $\Dsf_i$ is block nonexpansive, we know that ${(\Usf_i^\Tsf-(1/\tau)\Hsf_i) = \Dsf_i}$ is block nonexpansive. 3. From Proposition \[Prop:BlockConvNonexp\] in Supplement \[Sec:Prelims\], we know that a convex combination of block nonexpansive operators is also block nonexpansive, hence we conclude that $$\begin{aligned} &\Usf_i^\Tsf - \frac{2}{\Lmax+2\tau}\Gsf_i \\ &= \left(\frac{2}{\Lmax+2\tau}\cdot \frac{\Lmax}{2}\right)\left[\Usf_i^\Tsf-\frac{2}{\Lmax}\nabla_i g\right] \nonumber\\ &\quad\quad\quad+ \left(\frac{2}{\Lmax+2\tau}\cdot \frac{2\tau}{2}\right)\left[\Usf_i^\Tsf - \frac{1}{\tau}\Hsf_i\right],\end{aligned}$$ is block nonexpansive. Then from Proposition \[Prop:NonexpEquiv\] in Supplement \[Sec:AveragedOp\], we know that $\Gsf_i$ is block $1/(\Lmax+2\tau)$-cocoercive. 4. Consider any $\xbmast \in \zer(\Gsf)$, an index $i \in \{1, \dots, b\}$ picked uniformly at random, and a single iteration of BC-RED $\xbm^+ = \xbm - \gamma\Usf_i\Gsf_i\xbm$. Define a vector $\hbm_i \defn \Usf_i^\Tsf(\xbm-\xbmast) \in \R^{n_i}$. We then have $$\begin{aligned} \label{Eq:SingleIter} \nonumber&\|\xbm^+-\xbmast\|^2 \\ \nonumber=& \|\xbm - \xbmast - \gamma \Usf_i\Gsf_i\xbm\|^2\\ \nonumber=& \|\xbm-\xbmast\|^2 - 2\gamma (\Usf_i\Gsf_i\xbm)^\Tsf(\xbm-\xbmast) + \gamma^2\|\Gsf_i\xbm\|^2 \\ \nonumber=& \|\xbm-\xbmast\|^2 - 2\gamma (\Gsf_i\xbm-\Gsf_i\xbmast)^\Tsf\hbm_i + \gamma^2\|\Gsf_i\xbm\|^2 \\ \nonumber\leq& \|\xbm-\xbmast\|^2 - \frac{2\gamma-(\Lmax+2\tau)\gamma^2}{\Lmax+2\tau}\|\Gsf_i\xbm\|^2 \\ \leq& \|\xbm-\xbmast\|^2 - \frac{\gamma}{\Lmax+2\tau}\|\Gsf_i \xbm\|^2,\end{aligned}$$ where in the third line we used $\Gsf_i\xbmast = \Usf_i^\Tsf\Gsf\xbmast = \zerobm$, in the fourth line the block cocoercivity of $\Gsf_i$, and in the last line the fact that $0 < \gamma \leq 1/(\Lmax+2\tau)$. 5. By taking a conditional expectation on both sides and rearranging the terms, we obtain $$\begin{aligned} &\frac{\gamma}{\Lmax + 2\tau} \E\left[\|\Gsf_i\xbm\|^2 | \xbm\right] \nonumber\\ &= \frac{\gamma}{b(\Lmax + 2\tau)} \sum_{i = 1}^b \|\Gsf_i\xbm\|^2 = \frac{\gamma}{b(\Lmax + 2\tau)} \|\Gsf\xbm\|^2 \nonumber\\ &\leq \E\left[\|\xbm-\xbmast\|^2 - \|\xbm^+ - \xbmast\|^2 | \xbm \right]\end{aligned}$$ 6. Hence by averaging over $t \geq 1$ iterations and taking the total expectation $$\begin{aligned} \E\left[\frac{1}{t}\sum_{k = 1}^{t} \|\Gsf\xbm^{k-1}\|^2\right] &\leq \frac{1}{t}\left[\frac{b(\Lmax+2\tau)}{\gamma}\|\xbm^0-\xbmast\|^2\right] \nonumber\\ &\leq \frac{1}{t}\left[\frac{b(\Lmax+2\tau)}{\gamma}R_0^2\right].\end{aligned}$$ The last inequality directly leads to the result. **Remark**. Eq.  implies that, under Assumptions \[As:NonemptySet\]-\[As:NonexpansiveDen\], the iterates of BC-RED satisfy $$\label{Eq:DistanceReduction} \|\xbm^t-\xbmast\| \leq \|\xbm^{t-1}-\xbmast\| \leq \cdots \leq \|\xbm^0-\xbmast\| \leq R_0,$$ which means that the distance of the iterates of BC-RED to $\zer(\Gsf)$ is nonincreasing. **Remark**. Suppose we are solving a *coordinate friendly problem* [@Peng.etal2016], in which the cost of the full gradient update is $b$ times the cost of block update. 
Consider the step-size $\gamma = 1/(L + 2\tau)$ where $L$ is the global Lipschitz constant of the gradient method. A similar analysis as above would yield the following convergence rate for the gradient method $$\frac{1}{t}\sum_{k = 1}^{t} \|\Gsf\xbm^{k-1}\|^2 \leq \frac{(L+2\tau)^2R_0^2}{t}.$$ Now, consider the step-size $\gamma = 1/(\Lmax + 2\tau)$ and suppose that we run $(t \cdot b)$ updates of BC-RED with $t \geq 1$. Then, we have that $$\E\left[\frac{1}{tb}\sum_{k = 1}^{tb} \|\Gsf\xbm^{k-1}\|^2\right] \leq \frac{(\Lmax + 2\tau)^2 R_0^2}{t}.$$ Since $\Lmax \leq L \leq b\Lmax$, where the upper bound can sometimes be tight, we conclude that the expected complexity of the block-coordinate algorithm is lower than that of the full-gradient algorithm.

Proof of Theorem \[Thm:ProxConv\] {#Sec:Proof2}
=================================

The concept of Moreau smoothing is well-known and has been extensively used in other contexts (see for example [@Yu2013]). Our contribution is to formally connect the concept to RED-based algorithms, which leads to its novel justification as an approximate MAP estimator. The basic review of relevant concepts from proximal optimization is given in Supplement \[Sec:MoreauTheory\]. For $\tau > 0$, we consider the Moreau envelope of $h$ $$h_{(1/\tau)}(\xbm) \defn \min_{\zbm \in \R^n}\left\{\frac{1}{2}\|\zbm-\xbm\|^2 + (1/\tau) h(\zbm)\right\}.$$ From Proposition \[Prop:UniformBoundMoreau\] in Supplement \[Sec:MoreauTheory\] we know that $$\label{Eq:MorApprox} 0 \leq h(\xbm) - \tau h_{(1/\tau)}(\xbm) \leq \frac{G_0^2}{2\tau}$$ and from Proposition \[Prop:GradMorProxRes\] in Supplement \[Sec:MoreauTheory\], we know that $$\label{Eq:MorGrad} \tau\nabla h_{(1/\tau)}(\xbm) = \tau(\xbm - \prox_{(1/\tau)h}(\xbm)).$$ Hence, we can express the function $f$ as follows $$\begin{aligned} f(\xbm) &= g(\xbm) + h(\xbm) \\ &= (g(\xbm) + \tau h_{(1/\tau)}(\xbm)) + (h(\xbm) - \tau h_{(1/\tau)}(\xbm)) \\ &= f_{(1/\tau)}(\xbm) + (h(\xbm) - \tau h_{(1/\tau)}(\xbm)),\end{aligned}$$ where $f_{(1/\tau)} \defn g + \tau h_{(1/\tau)}$. From eq. , we conclude that a single iteration of BC-RED $$\xbm^+ = \xbm - \gamma \Usf_i \Gsf_i \xbm \quad\text{with}\quad \Gsf_i = \Usf_i^\Tsf(\nabla g(\xbm) + \tau \nabla h_{(1/\tau)}(\xbm))$$ is performing a block-coordinate descent on the function $f_{(1/\tau)}$. From eq.  and the convexity of the Moreau envelope, we have $$f_{(1/\tau)}^\ast = f_{(1/\tau)}(\xbmast) \leq f_{(1/\tau)}(\xbm) \leq f(\xbm), \quad \xbm \in \R^n, \xbmast \in \zer(\Gsf).$$ Hence, there exists a finite $f^\ast$ such that $f(\xbm) \geq f^\ast$ with $f_{(1/\tau)}^\ast \leq f^\ast$. Consider iteration $t \geq 1$ of BC-RED; then we have that $$\begin{aligned} \E[f(\xbm^t)] - f^\ast &\leq \E[f(\xbm^t)] - f_{(1/\tau)}^\ast \\ &= (\E[f_{(1/\tau)}(\xbm^t)]-f_{(1/\tau)}^\ast) \nonumber\\ &\quad\quad\quad\quad+ \E[h(\xbm^t)-\tau h_{(1/\tau)}(\xbm^t)] \\ &\leq \frac{2b}{\gamma t}R_0^2 + \frac{G_0^2}{2\tau},\end{aligned}$$ where we applied , which is further discussed in Supplement \[Sec:CoordinateAnalysis\]. The proof of eq.  is directly obtained by setting $\tau = \sqrt{t}$, $\gamma = 1/(\Lmax+2\sqrt{t})$, and noting that $t \geq \sqrt{t}$, for all $t \geq 1$.

Convergence of the Traditional Coordinate Descent {#Sec:CoordinateAnalysis}
=================================================

The following analysis has been adopted from [@Wright2015]. We include it here for completeness.
Consider the following denoiser $$\Dsf(\xbm) = \xbm - \frac{1}{\tau}\nabla h(\xbm), \quad \tau > 0, \quad \xbm \in \R^n,$$ and the following function $$f(\xbm) = g(\xbm) + h(\xbm)$$ where $g$ and $h$ are both convex and continuously differentiable. For this denoiser, we have that $$\Gsf(\xbm) = \nabla g(\xbm) + \tau (\xbm-\Dsf(\xbm)) = \nabla g(\xbm) + \nabla h(\xbm) = \nabla f(\xbm).$$ Therefore, in this case, BC-RED is minimizing a convex and smooth function $f$, which means that any $\xbmast \in \zer(\Gsf)$ is a global minimizer of $f$. Additionally, due to Proposition \[Prop:NonexpCocoerOp\] in Supplement \[Sec:Prelims\] and Proposition \[Prop:BlockCocoer\] in Supplement \[Sec:Convexity\], we have $$\begin{aligned} &\Dsf_i \text{ is block nonexpansive} \nonumber\\ \Leftrightarrow\quad &\nabla_i h \text{ is block $2\tau$-Lipschitz continuous}.\end{aligned}$$ Hence, for such denoisers, Assumption \[As:NonexpansiveDen\] is equivalent to the $2\tau$-Lipschitz smoothness of the block gradients $\nabla_i h$. To prove eq. \[Eq:CordDesConv\], we consider the following iteration $$\xbm^+ = \xbm - \gamma\Usf_i \Gsf_i\xbm \quad\text{with}\quad \Gsf_i = \nabla_i f = \nabla_i g + \nabla_i h,$$ which under our assumptions is a special case of the setting for Theorem \[Thm:ConvThm1\]. 1. From the block $(\Lmax+2\tau)$-Lipschitz continuity of $\nabla_i f$, we conclude that $$\begin{aligned} f(\xbm^+) &\leq f(\xbm) + \nabla f(\xbm)^\Tsf(\xbm^+-\xbm) \nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad+\frac{(\Lmax+2\tau)}{2}\|\xbm^+-\xbm\|^2 \\ &= f(\xbm) - \gamma \|\nabla_i f(\xbm)\|^2 \nonumber\\ &\quad\quad\quad\quad\quad\quad\quad\quad+ \frac{\gamma^2(\Lmax+2\tau)}{2}\|\nabla_i f(\xbm)\|^2 \\ &\leq f(\xbm) - \frac{\gamma}{2} \|\nabla_i f(\xbm)\|^2,\end{aligned}$$ where the last inequality comes from the fact that $\gamma \leq 1/(\Lmax+2\tau)$. 2. For all $t \geq 1$, define $$\varphi_t \defn \E\left[f(\xbm^t)\right] - f(\xbmast).$$ Then from (a), we can conclude that $$\begin{aligned} \varphi_t &\leq \varphi_{t-1} - \frac{\gamma}{2b}\E\left[\|\nabla f(\xbm^{t-1})\|^2\right] \nonumber\\ &\leq \varphi_{t-1} - \frac{\gamma}{2b}\E\left[\|\nabla f(\xbm^{t-1})\|\right]^2,\end{aligned}$$ where in the last inequality we used Jensen’s inequality, and the fact that $$\begin{aligned} \E\left[\|\nabla_i f(\xbm^{t-1})\|^2\right] &= \E\left[\E\left[\|\nabla_i f(\xbm^{t-1})\|^2 | \xbm^{t-1}\right]\right] \nonumber\\ &= \E\left[\frac{1}{b} \sum_{i = 1}^b \|\nabla_i f(\xbm^{t-1})\|^2 \right] \nonumber\\ &= \frac{1}{b}\E\left[\|\nabla f(\xbm^{t-1})\|^2\right].\end{aligned}$$ 3. From convexity, we know that $$\begin{aligned} \varphi_t = \E\left[f(\xbm^t)\right] - f(\xbmast) &\leq \E\left[\nabla f(\xbm^t)^\Tsf(\xbm^t-\xbmast)\right] \nonumber\\ &\leq \E\left[\|\nabla f(\xbm^t)\| \|\xbm^t-\xbmast\|\right] \nonumber\\ &\leq R_0 \cdot \E\left[\|\nabla f(\xbm^t)\|\right],\end{aligned}$$ where in the last inequality, we used eq. . This combined with the result of (b) implies that $$\varphi_t \leq \varphi_{t-1} - \frac{\gamma}{2b} \frac{\varphi_{t-1}^2}{R_0^2}.$$ 4.
Note that from (c), we can obtain $$\frac{1}{\varphi_t}-\frac{1}{\varphi_{t-1}} = \frac{\varphi_{t-1}-\varphi_t}{\varphi_t\varphi_{t-1}} \geq \frac{\varphi_{t-1}-\varphi_t}{\varphi_{t-1}^2} \geq \frac{\gamma}{2b R_0^2}.$$ By iterating this inequality, we get the final result $$\frac{1}{\varphi_t} \geq \frac{1}{\varphi_0} + \frac{\gamma t}{2b\|\xbm^0-\xbmast\|^2} \geq \frac{\gamma t}{2bR_0^2} \;\Rightarrow\; \varphi_t \leq \frac{2b}{\gamma t}R_0^2.$$ Background Material {#Sec:BackgroundMaterial} =================== The results in this section are well-known in the optimization literature and can be found in different forms in standard textbooks [@Rockafellar.Wets1998; @Boyd.Vandenberghe2004; @Nesterov2004; @Bauschke.Combettes2017]. For completeness, we summarize the key results useful for our analysis by restating them in a block-coordinate form. Properties of Block-Coordinate Operators {#Sec:Prelims} ---------------------------------------- Most of the concepts in this part come from the traditional monotone operator theory [@Ryu.Boyd2016; @Bauschke.Combettes2017] adapted for block-coordinate operators. We define the *block-coordinate operator* $\Tsf_i: \R^n \rightarrow \R^{n_i}$ of $\Tsf: \R^n \rightarrow \R^n$ as $$\Tsf_i\xbm \defn [\Tsf\xbm]_i = \Usf_i^\Tsf \Tsf\xbm \in \R^{n_i}, \quad \xbm \in \R^n.$$ The operator $\Tsf_i$ applies $\Tsf$ to its input vector and then extracts the subset of outputs corresponding to the coordinates in the block $i \in \{1, \dots, b\}$. **Remark**. When $b = 1$, we have that $n = n_1$ and $\Usf_1 = \Usf_1^\Tsf = \Isf$. Then, all the properties in this section reduce to their standard counterparts from the monotone operator theory in $\R^n$. In such settings, we simply drop the word *block* from the name of the property. $\Tsf_i$ is *block Lipschitz continuous with constant $\lambda_i > 0$* if $$\|\Tsf_i\xbm - \Tsf_i\ybm\| \leq \lambda_i\|\hbm_i\|,\quad \xbm = \ybm + \Usf_i\hbm_i, \quad \ybm \in \R^n, \hbm_i \in \R^{n_i}.$$ When $\lambda_i = 1$, we say that $\Tsf_i$ is *block nonexpansive*. An operator $\Tsf_i$ is *block cocoercive with constant $\beta_i > 0$* if $$(\Tsf_i\xbm-\Tsf_i\ybm)^\Tsf\hbm_i \geq \beta_i\|\Tsf_i\xbm-\Tsf_i\ybm\|^2,$$ $$\xbm = \ybm + \Usf_i\hbm_i, \quad \ybm \in \R^n, \hbm_i \in \R^{n_i}.$$ When $\beta_i = 1$, we say that $\Tsf_i$ is *block firmly nonexpansive*. The following propositions are conclusions derived from the definition of above. \[Prop:BlockConvNonexp\] Let $\Tsf_{ij}: \R^n \rightarrow \R^{n_i}$ for $j \in J$ be a set of block nonexpansive operators. Then, their convex combination $$\Tsf_i \defn \sum_{j \in J} \theta_j \Tsf_{ij}, \quad\text{with}\quad \theta_j > 0 \text{ and } \sum_{j \in J} \theta_j = 1,$$ is nonexpansive. By using the triangular inequality and the definition of block nonexpansiveness, we obtain $$\|\Tsf_i\xbm-\Tsf_i\ybm\| \leq \sum_{j \in J} \theta_j \|\Tsf_{ij}\xbm-\Tsf_{ij}\ybm\| \leq \left(\sum_{j \in J}\theta_j\right) \|\hbm_i\| = \|\hbm_i\|,$$ for all $\ybm \in \R^n$ and $\hbm_i \in \R^{n_i}$ where $\xbm = \ybm + \Usf_i\hbm_i$ . \[Prop:NonexpCocoerOp\] Consider $\Rsf_i = \Usf_i^\Tsf - \Tsf_i$ where $\Tsf_i: \R^n \rightarrow \R^{n_i}$. $$\Tsf_i \text{ is block nonexpansive } \quad\Leftrightarrow\quad \Rsf_i \text{ is $(1/2)$-block cocoercive.}$$ First suppose that $\Rsf_i$ is $1/2$ block cocoercive. Let $\xbm = \ybm + \Usf_i\hbm_i$ for all $\ybm \in \R^n$ and $\hbm_i \in \R^{n_i}$. 
We then have $$\frac{1}{2}\|\Rsf_i\xbm-\Rsf_i\ybm\|^2 \leq (\Rsf_i\xbm-\Rsf_i\ybm)^\Tsf\hbm_i = \|\hbm_i\|^2 - (\Tsf_i\xbm-\Tsf_i\ybm)^\Tsf\hbm_i.$$ We also have that $$\frac{1}{2}\|\Rsf_i\xbm-\Rsf_i\ybm\|^2 = \frac{1}{2}\|\hbm_i\|^2 - (\Tsf_i\xbm-\Tsf_i\ybm)^\Tsf\hbm_i + \frac{1}{2}\|\Tsf_i\xbm-\Tsf_i\ybm\|^2.$$ By combining these two and simplifying the expression, we obtain that $$\|\Tsf_i\xbm-\Tsf_i\ybm\| \leq \|\hbm_i\|.$$ The converse can be proved by following this logic in reverse. Block Averaged Operators {#Sec:AveragedOp} ------------------------ It is well known that the iteration of a nonexpansive operator does not necessarily converge. To see this consider a nonexpansive operator $\Tsf = -\Isf$, where $\Isf$ is identity. However, it is also well known that the convergence can be established for averaged operators. For a constant $\alpha \in (0, 1)$, we say that the operator $\Tsf$ is *$\alpha$-averaged*, if there exists a nonexpansive operator $\Nsf$ such that $\Tsf = (1-\alpha)\Isf + \alpha \Nsf$. For a constant $\alpha \in (0, 1)$, we say that $\Tsf_i: \R^n \rightarrow \R^{n_i}$ is *block $\alpha$-averaged*, if there exists a block nonexpansive operator $\Nsf_i$ such that $\Tsf_i = (1-\alpha)\Usf_i^\Tsf + \alpha \Nsf_i$. **Remark**. It is clear that if $\Tsf$ is $\alpha$-averaged, then $\Tsf_i = \Usf_i^\Tsf\Tsf$ is block $\alpha$-averaged. The following characterization is often convenient. \[Prop:BlockAveragedEquiv\] For a block nonexpansive operator $\Tsf_i$, a constant $\alpha \in (0, 1)$, and the operator ${\Rsf_i \defn \Usf_i^\Tsf-\Tsf_i}$, the following are equivalent 1. $\Tsf_i$ is block $\alpha$-averaged 2. $(1-1/\alpha)\Usf_i^\Tsf + (1/\alpha)\Tsf_i$ is block nonexpansive 3. $\|\Tsf_i\xbm - \Tsf_i\ybm\|^2 \leq \|\hbm_i\|^2 - \left(\frac{1-\alpha}{\alpha}\right)\|\Rsf_i\xbm-\Rsf_i\ybm\|^2$, $\hspace{2cm}\xbm = \ybm + \Usf_i\hbm_i, \quad \ybm \in \R^n, \hbm_i \in \R^{n_i}$ The equivalence of (a) and (b) is clear from the definition. To establish the equivalence with (c), consider an operator $\Nsf_i$ and $\Tsf_i = (1-\alpha)\Usf_i^\Tsf + \alpha \Nsf_i$. Note that $$\Rsf_i = \Usf_i^\Tsf - \Tsf_i = \alpha (\Usf_i^\Tsf - \Nsf_i).$$ Then, for all $\ybm \in \R^n$ and $\hbm_i \in \R^{n_i}$, with $\xbm = \ybm + \Usf_i \hbm_i$, we have that $$\begin{aligned} \label{Equ:AvgExpansion1} \|\Tsf_i\xbm - \Tsf_i\ybm\|^2 \nonumber&= \|(1-\alpha)\hbm_i + \alpha (\Nsf_i\xbm-\Nsf_i\ybm)\|^2\\ \nonumber&= (1-\alpha) \|\hbm_i\|^2 + \alpha \|\Nsf_i\xbm-\Nsf_i\ybm\|^2 - \nonumber\\ &\hspace{1.6cm}\alpha(1-\alpha)\|\hbm_i - (\Nsf_i\xbm-\Nsf_i\ybm)\|^2 \nonumber\\ &= (1-\alpha) \|\hbm_i\|^2 + \alpha \|\Nsf_i\xbm-\Nsf_i\ybm\|^2 \nonumber\\ &\hspace{1.6cm}- \left(\frac{1-\alpha}{\alpha}\right)\|\Rsf_i\xbm-\Rsf_i\ybm\|^2,\end{aligned}$$ where we used the fact that $$\|(1-\alpha) \xbm + \alpha \ybm\|^2 = (1-\alpha) \|\xbm\|^2 + \alpha \|\ybm\|^2 - \alpha(1-\alpha)\|\xbm-\ybm\|^2,$$ where $\theta \in \R$ and $\xbm, \ybm \in \R^n$. 
Consider also $$\begin{aligned} \label{Equ:AvgExpansion2} &\|\hbm_i\|^2 - \left(\frac{1-\alpha}{\alpha}\right)\|\Rsf_i\xbm-\Rsf_i\ybm\|^2 \nonumber\\ &= (1-\alpha)\|\hbm_i\|^2 + \alpha \|\hbm_i\|^2 - \left(\frac{1-\alpha}{\alpha}\right)\|\Rsf_i\xbm-\Rsf_i\ybm\|^2.\end{aligned}$$ It is clear that we have $$\begin{aligned} \eqref{Equ:AvgExpansion1} \leq \eqref{Equ:AvgExpansion2} &\quad\Leftrightarrow\quad \Nsf_i \text{ is block nonexpansive} \nonumber\\ &\quad\Leftrightarrow\quad \Tsf_i \text{ is block $\alpha$-averaged},\end{aligned}$$ where for the last equivalence, we used the definition of block averagedness. \[Prop:NonexpEquiv\] Consider a block-coordinate operator ${\Tsf_i = \Usf_i^\Tsf\Tsf}$ with $\Tsf: \R^n \rightarrow \R^n$. Let $\xbm = \ybm + \Usf_i\hbm_i$ with ${\ybm \in \R^n}$, ${\hbm_i \in \R^{n_i}}$ and consider $\beta_i > 0$. Then, the following are equivalent 1. $\Tsf_i$ is block $\beta_i$-cocoercive 2. $\beta_i\Tsf_i$ is block firmly nonexpansive 3. $\Usf_i^\Tsf-\beta_i\Tsf_i$ is block firmly nonexpansive. 4. $\beta_i\Tsf_i$ is block $(1/2)$-averaged. 5. $\Usf_i^\Tsf-2\beta_i\Tsf_i$ is block nonexpansive. The equivalence between (a) and (b) is readily observed by defining $\Psf_i \defn \beta_i\Tsf_i$ and noting that $$\begin{aligned} (\Psf_i\xbm - \Psf_i\ybm)^\Tsf\hbm_i = \beta_i(\Tsf_i\xbm - \Tsf_i\ybm)^\Tsf\hbm_i \nonumber\\ \quad\text{and}\quad \|\Psf_i\xbm-\Psf_i\ybm\|^2 = \beta_i^2 \|\Tsf_i\xbm-\Tsf_i\ybm\|^2.\end{aligned}$$ Define $\Rsf_i \defn \Usf_i^\Tsf - \Psf_i$ and suppose (b) is true, then $$\begin{aligned} (\Rsf_i\xbm-\Rsf_i\ybm)^\Tsf\hbm_i &= \|\hbm_i\|^2 - (\Psf_i\xbm-\Psf_i\ybm)^\Tsf\hbm_i \\ &= \|\Rsf_i\xbm-\Rsf_i\ybm\|^2 + (\Psf_i\xbm-\Psf_i\ybm)^\Tsf\hbm_i \nonumber\\ &\hspace{3cm}- \|\Psf_i\xbm-\Psf_i\ybm\|^2 \\ &\geq \|\Rsf_i\xbm-\Rsf_i\ybm\|^2.\end{aligned}$$ By repeating the same argument for $\Psf_i = \Usf_i^\Tsf - \Rsf_i$, we establish the full equivalence between (b) and (c). The full equivalence of (b) and (d) can be established by observing that $$\begin{aligned} &\hspace{-2.9cm}2\|\Psf_i\xbm-\Psf_i\ybm\|^2 \leq 2(\Psf_i\xbm-\Psf_i\ybm)^\Tsf\hbm_i \\ \Leftrightarrow\quad\|\Psf_i\xbm-\Psf_i\ybm\|^2 &\leq 2(\Psf_i\xbm-\Psf_i\ybm)^\Tsf\hbm_i - \|\Psf_i\xbm-\Psf_i\ybm\|^2 \\ &= \|\hbm_i\|^2-(\|\hbm_i\|^2 - 2(\Psf_i\xbm-\Psf_i\ybm)^\Tsf\hbm_i\nonumber\\ &\hspace{3cm} + \|\Psf_i\xbm-\Psf_i\ybm\|^2)\\ &= \|\hbm_i\|^2 - \|\Rsf_i\xbm-\Rsf_i\ybm\|^2.\end{aligned}$$ To show the equivalence with (e), first suppose that ${\Nsf_i \defn \Usf_i^\Tsf - 2 \Psf_i}$ is block nonexpansive, then ${\Psf_i = \frac{1}{2}(\Usf_i^\Tsf + (-\Nsf_i))}$ is block $1/2$-averaged, which means that it is block firmly nonexpansive. On the other hand, if $\Psf_i$ is block firmly nonexpansive, then it is block $1/2$-averaged, which means that from Proposition \[Prop:BlockAveragedEquiv\](b) we have that $(1-2)\Usf_i^\Tsf + 2\Psf_i = 2\Psf_i - \Usf_i^\Tsf = -\Nsf_i$ is block nonexpansive. This directly means that $\Nsf_i$ is block nonexpansive.

Operator Properties for Convex Functions {#Sec:Convexity}
----------------------------------------

It is convenient to link properties of a function $f: \R^n \rightarrow \R$, $\xbm \mapsto y = f(\xbm)$, to the properties of operators derived from it. The key properties for our analysis are related to continuity and convexity. Let $f$ be a continuously differentiable function with $\nabla_i f$ that is block $L_i$-Lipschitz continuous.
Then, $$\begin{aligned} f(\ybm) &\leq f(\xbm) + \nabla f(\xbm)^\Tsf(\ybm-\xbm) + \frac{L_i}{2}\|\ybm-\xbm\|^2 \nonumber\\ &= f(\xbm) + \nabla_i f(\xbm)^\Tsf\hbm_i + \frac{L_i}{2}\|\hbm_i\|^2 \nonumber\end{aligned}$$ for all $\xbm \in \R^n$ and $\hbm_i \in \R^{n_i}$, where $\ybm = \xbm + \Usf_i\hbm_i$. The proof is a minor variation of the one presented in Section 2.1 of [@Nesterov2004]. \[Prop:LipBound2\] Consider a continuously differentiable $f$ such that $\nabla_i f$ is block $L_i$-Lipschitz continuous. Let $\xbmast \in \R^n$ denote the global minimizer of $f$. Then, we have that $$\begin{aligned} \frac{1}{2L_i} \|\nabla_i f(\xbm)\|^2 \leq (f(\xbm)-f(\xbmast)) \leq \frac{L_i}{2}\|\xbm-\xbmast\|^2, \nonumber\\ \text{where}\quad \xbm = \xbmast + \Usf_i \hbm_i, \quad \xbm \in \R^n, \hbm_i \in \R^{n_i}. \nonumber\end{aligned}$$ The proof is a minor variation of the discussion in Section 9.1.2 of [@Boyd.Vandenberghe2004]. \[Prop:BlockCocoer\] For a convex and continuously differentiable function $f$, we have $$\begin{aligned} \nabla_i f \text{ is block $L_i$-Lipschitz continuous} \nonumber\\ \Leftrightarrow\quad \nabla_i f \text{ is block $(1/L_i)$-cocoercive}. \nonumber\end{aligned}$$ The proof is a minor variation of the one presented as Theorem 2.1.5 in Section 2.1 of [@Nesterov2004]. Moreau smoothing and proximal operators {#Sec:MoreauTheory} --------------------------------------- In this section, we consider a class of functions that are proper, closed, and convex, but are not necessarily differentiable. The proximal operator is a widely-used concept in such nonsmooth optimization problems [@Moreau1965; @Rockafellar.Wets1998]. \[Def:MoreauEnv\] Consider a proper, closed, and convex $h$ and a constant $\mu > 0$. We define the *proximal operator* $$\prox_{\mu h}(\xbm) \defn \argmin_{\zbm \in \R^n}\left\{\frac{1}{2}\|\zbm-\xbm\|^2 + \mu h(\zbm)\right\}$$ and the *Moreau envelope* $$h_\mu(\xbm) \defn \min_{\zbm \in \R^n} \left\{\frac{1}{2}\|\zbm-\xbm\|^2 + \mu h(\zbm)\right\}.$$ \[Prop:GradMorProxRes\] The function $h_\mu$ is convex and continuously differentiable with a $1$-Lipschitz gradient $$\nabla h_\mu(\xbm) = \xbm - \prox_{\mu h}(\xbm), \quad \xbm \in \R^n.$$ We first show that $h_\mu$ is convex. Consider $$q(\xbm, \zbm) \defn \frac{1}{2}\|\zbm-\xbm\|^2 + \mu h(\zbm),$$ which is convex $(\xbm, \zbm)$. Then, for any $0 \leq \theta \leq 1$ and $(\xbm_1, \zbm_1), (\xbm_2, \zbm_2) \in \R^{2n}$, we have $$\begin{aligned} h_\mu(\theta\xbm_1+(1-\theta)\xbm_2) &\leq q(\theta\xbm_1+(1-\theta)\xbm_2, \theta\zbm_1+(1-\theta)\zbm_2) \nonumber\\ &\leq \theta q(\xbm_1, \zbm_1) + (1-\theta) q(\xbm_2, \zbm_2),\end{aligned}$$ where we used the convexity of $q$. Since this inequality holds everywhere, we have $$h_\mu(\theta\xbm_1+(1-\theta)\xbm_2) \leq \theta h_\mu (\xbm_1) + (1-\theta) h_\mu(\xbm_2),$$ with $$h_\mu(\xbm_1) = \min_{\zbm_1}q(\xbm_1, \zbm_1) \quad\text{and}\quad h_\mu(\xbm_2) = \min_{\zbm_2}q(\xbm_2, \zbm_2).$$ To show the differentiability, note that $$\begin{aligned} h_\mu(\xbm) &= \frac{1}{2}\|\xbm\|^2 - \max_{\zbm \in \R^n}\left\{\xbm^\Tsf\zbm - \mu h(\zbm) - \frac{1}{2}\|\zbm\|^2\right\} \\ &= \frac{1}{2}\|\xbm\|^2 - \phi^\star(\xbm) \quad\text{with}\quad \phi(\zbm) \defn \frac{1}{2}\|\zbm\|^2 + \mu h(\zbm),\end{aligned}$$ where $\phi^\star$ denotes the conjugate of $\phi$. The function $\phi$ is closed and $1$-strongly convex. 
Hence, we know that $\phi^\star$ is defined for all $\xbm \in \R^n$ and is differentiable with gradient [@Boyd.Vandenberghe2004] $$\nabla \phi^\star(\xbm) = \argmax_{\zbm \in \R^n} \left\{\xbm^\Tsf\zbm - \mu h(\zbm) - \frac{1}{2}\|\zbm\|^2\right\} = \prox_{\mu h}(\xbm).$$ Hence, we conclude that $$\nabla h_\mu(\xbm) = \xbm - \nabla \phi^\star(\xbm) = \xbm - \prox_{\mu h}(\xbm).$$ Note that since the proximal operator is firmly nonexpansive, $\nabla h_\mu$ is also firmly nonexpansive, which means that it is $1$-Lipschitz. The next result shows that the Moreau envelope can serve as a smooth approximation to a nonsmooth function. \[Prop:UniformBoundMoreau\] Consider a proper, closed, and convex function $h$ on $\R^n$ and its Moreau envelope $h_\mu(\xbm)$ for $\mu > 0$. Then, $$0 \leq h(\xbm) - \frac{1}{\mu}h_\mu(\xbm) \leq \frac{\mu}{2}G_\xbm^2\quad\text{with}\quad G_\xbm^2 \defn \min_{\gbm \in \partial h(\xbm)} \|\gbm\|^2, \quad \xbm \in \R^n.$$ First note that $$\frac{1}{\mu}h_\mu(\xbm) = \min_{\zbm \in \R^n}\left\{\frac{1}{2\mu}\|\zbm-\xbm\|^2 + h(\zbm)\right\} \leq h(\xbm), \quad \xbm \in \R^n,$$ which is due to the fact that $\zbm = \xbm$ is potentially suboptimal. We additionally have for any $\gbm \in \partial h(\xbm)$ $$\begin{aligned} h_\mu(\xbm) - \mu h(\xbm) &= \min_{\zbm \in \R^n}\left\{\mu h(\zbm) - \mu h(\xbm) + \frac{1}{2}\|\zbm-\xbm\|^2\right\} \\ &\geq \min_{\zbm \in \R^n}\left\{\mu \gbm^\Tsf(\zbm-\xbm) + \frac{1}{2}\|\zbm-\xbm\|^2\right\} \\ &= \min_{\zbm \in \R^n} \left\{\frac{1}{2}\|\zbm-(\xbm-\mu\gbm)\|^2 - \frac{\mu^2}{2}\|\gbm\|^2\right\}\\ &= -\frac{\mu^2}{2}\|\gbm\|^2.\end{aligned}$$ This directly leads to the conclusion.

Additional Technical Details {#Sec:TechnicalDetails}
============================

In this section, we discuss several technical details that we omitted from the main paper for space. Section \[Sec:ComputationalComplexity\] discusses issues related to implementation and computational complexity of BC-RED. Section \[Sec:ArchitectureTraining\] discusses the architecture of our own CNN denoiser $\DnCNNast$ and provides details on its training. Section \[Sec:InfluenceLipschitz\] discusses the influence of the Lipschitz constant of the CNN denoiser on its performance as a denoising prior.

Computational Complexity and a Coordinate-Friendly Implementation {#Sec:ComputationalComplexity}
------------------------------------------------------------------

The theoretical analysis in Section \[Sec:TheoretcalResults\] of the main paper suggests that, if $b$ updates of BC-RED (each modifying a single block) are counted as a single iteration, the worst-case convergence rate of BC-RED is expected to be better than that of the full-gradient RED. This fact was empirically validated in Section \[Sec:Simulations\], where we showed that in practice BC-RED needs far fewer iterations to converge. However, the overall computational complexity of the two methods depends on their per-iteration complexities. In particular, the overall complexity of BC-RED is favorable when the reduction in the total number of iterations required for convergence offsets the cost of solving the problem in a block-coordinate fashion. As for traditional coordinate descent methods [@Peng.etal2016; @Niu.etal2011], in many problems of interest, the computational complexity of a single update of BC-RED will be roughly $b$ times lower than that of the full-gradient method. The computational complexity of each block-update will depend on the specifics of the data-fidelity term $g$ and the denoiser $\Dsf$ used in the estimation problem.
For example, consider the problem where $g(\xbm) = \frac{1}{2}\|\Abm\xbm-\ybm\|_2^2$. Additionally, suppose that $\xbm$ is such that it is sufficient to represent its prior with a block-wise denoiser on each $\xbm_i$, rather than on the full $\xbm$. This situation is very common in image processing, where many popular denoisers are applied block-wise [@Zoran.Weiss2011]. Then, one can obtain a very efficient implementation of BC-RED, illustrated in Algorithm \[Alg:BCRED2\]. The worst-case complexity of applying $\Abm_i$ and $\Abm_i^\Tsf$ is $O(m n_i)$, which means that the cost of $b$ such updates, one for each $i \in \{1, \dots, b\}$, is $O(mn)$. Additionally, if the complexity of $b$ block-wise denoising operations is equivalent to or less than the complexity of denoising the full vector (which is generally true for advanced denoisers), then the complexity of $b$ updates of BC-RED will be equivalent to or better than that of a single iteration of the full-gradient RED. Some of our simulations were conducted using denoisers applied on the full image and others using block-wise denoisers. In particular, the convergence simulations in Fig. \[Fig:convergenceCT\] and Fig. \[Fig:ConvergencePlots\] relied on the full-image denoisers, in order to use identical denoisers for both RED and BC-RED and be fully compatible with the theoretical analysis. On the other hand, the SNR results in Table \[Tab:SNR\], Table \[Tab:LipschitzDiscuss\], Fig. \[Fig:MoreExamples\], and Fig. \[Fig:imageFlow\] rely on block-wise denoisers, where the denoiser input includes an additional 40 pixel padding around the block and the output has the exact size of the block. The padding size was determined empirically in order to have a close match between BC-RED and RED. We have observed that having even larger paddings does not influence the results of BC-RED. Finally, the size of the denoiser input and output for the galaxy simulations in Fig. \[Fig:galaxyImages\] and Fig. \[Fig:MoreGalaxies\] exactly matches the block size, with no additional padding.

**input:** initial value $\xbm^0 \in \R^n$, parameter $\tau > 0$, and step-size $\gamma > 0$.
**initialize:** $\rbm^0 \leftarrow \Abm\xbm^0-\ybm$
Choose an index $i_k \in \{1, \dots, b\}$
$\xbm^k \leftarrow \xbm^{k-1} - \gamma \Usf_{i_k} \Gsf_{i_k}(\xbm^{k-1})$ with $\Gsf_{i_k}(\xbm^{k-1}) = \Abm_{i_k}^\Tsf \rbm^{k-1} + \tau (\xbm_{i_k} - \Dsf(\xbm_{i_k}))$.
$\rbm^k \leftarrow \rbm^{k-1} - \gamma \Abm_{i_k} \Gsf_{i_k}(\xbm^{k-1})$

Architecture and Training of $\DnCNNast$ {#Sec:ArchitectureTraining}
-----------------------------------------

![The architecture of three variants of $\DnCNNast$ used in our simulations. Each neural net is trained to remove AWGN from noisy input images. **Residual $\DnCNNast$** is trained to predict the noise from the input. The final desired denoiser $\Dsf$ is obtained by simply subtracting the predicted noise from the input $\Dsf(\zbm) = \zbm - \mathsf{DnCNN}^\ast(\zbm)$. **Direct $\DnCNNast$** is trained to directly output a clean image from a noisy input $\Dsf(\zbm) = \mathsf{DnCNN}^\ast(\zbm)$. **Galaxy $\DnCNNast$** is a further simplification of the Residual DnCNN to only 4 convolutional layers specifically designed for large-scale image recovery. In most experiments, we further constrain the Lipschitz constant (LC) of the direct denoiser to be LC = 1 and of the residual denoiser to LC = 2 by using spectral normalization [@Sedghi.etal2019]. LC = 1 means that $\Dsf$ is a nonexpansive denoiser.
A residual $\Rsf = \Isf - \Dsf$ with LC = 2 provides a necessary (but not sufficient) condition for $\Dsf$ to be a nonexpansive denoiser.[]{data-label="Fig:DnCNNstar"}](figures/DnCNNstar){width="45.00000%"}

We designed $\DnCNNast$ based on the DnCNN architecture. The network contains three parts. The first part is a composite convolutional layer, consisting of a normal convolutional layer and a rectified linear units (ReLU) layer. It convolves the $n_1 \times n_2$ input to $n_1 \times n_2 \times 64$ feature maps by using 64 filters of size $3 \times 3$. The second part is a sequence of 5 composite convolutional layers, each having 64 filters of size $3 \times 3 \times 64$. These composite layers further process the feature maps generated by the first part. The third part of the network, a single convolutional layer, generates the final output image by convolving the feature maps with a $3 \times 3 \times 64$ filter. Every convolution is performed with a stride $=1$, so that the intermediate feature maps share the same spatial size as the input image. Fig. \[Fig:DnCNNstar\] visualizes the architectural details. We generated 52000 training examples by adding AWGN to 13000 images ($320 \times 320$) from the NYU fastMRI dataset [@Zbontar.etal2018] and cropping them into 4 sub-images of size $160 \times 160$ pixels. We trained $\DnCNNast$ to optimize the *mean squared error* by using the Adam optimizer.

Influence of the Lipschitz Constant on Performance {#Sec:InfluenceLipschitz}
--------------------------------------------------

\[Tab:LipschitzDiscuss\]

                                    **30 dB**   **40 dB**     **30 dB**   **40 dB**     **30 dB**   **40 dB**
  **Direct**       Unconstrained    21.67       24.74         Diverges    Diverges      29.40       30.35
                   LC = 1           19.33       22.98         19.89       20.26         25.06       25.40
  **Residual**     Unconstrained    20.88       24.68         26.49       27.60         29.39       30.31
                   LC = 2           20.88       24.42         26.60       28.12         29.40       30.39

Our theoretical analysis in Theorem \[Thm:ConvThm1\] assumes that each block denoiser $\Dsf_i$ of $\Dsf$ is block nonexpansive. It is relatively straightforward to control the global Lipschitz constants of CNN denoisers via spectral normalization [@Miyato.etal2018; @Sedghi.etal2019; @Gouk.etal2018] and we have empirically tested the influence of nonexpansiveness on the quality of the final image recovery. Table \[Tab:LipschitzDiscuss\] summarizes the SNR performance of BC-RED for two common variants of $\DnCNNast$. The first variant is trained to learn the *direct* mapping from a noisy input to a clean image, while the second variant relies on *residual learning* to map its input to noise (shown in Fig. \[Fig:DnCNNstar\]). To gain insight into the influence of the *Lipschitz constant (LC)* of a denoiser on its performance as a prior, we trained denoisers with both globally constrained and unconstrained LCs via the spectral-normalization technique from [@Sedghi.etal2019]. For the direct network, we trained $\DnCNNast$ with ${\text{LC} = 1}$, which corresponds to a nonexpansive denoiser. For the residual network, we considered $\text{LC} = 2$, which is a necessary (but not sufficient) condition for nonexpansiveness. In our simulations, BC-RED converged for all the variants of $\DnCNNast$, except for the direct and unconstrained $\DnCNNast$, which confirms that our theoretical analysis provides only sufficient conditions for convergence.
Nonetheless, our simulations reveal a performance loss of the algorithm for the direct, nonexpansive (LC = 1) $\DnCNNast$. On the other hand, the performance of the residual $\DnCNNast$ with $\text{LC} = 2$ nearly matches the performance of fully unconstrained networks in all experiments.

Additional Numerical Validation {#Sec:AdditionalSimulations}
===============================

![image](figures/testImages){width="90.00000%"}

Fig. \[Fig:TestImages\] shows ten randomly selected test images used for numerical validation. The simulations in this paper were performed on a machine equipped with an Intel Xeon Gold 6130 Processor that has 16 cores at 2.1 GHz and 192 GB of DDR memory. We trained all neural nets using NVIDIA RTX 2080 GPUs. Fig. \[Fig:ConvergencePlots\] presents the convergence plots for the *direct* and *residual* $\DnCNNast$ with the Radon matrix. In order to ensure nonexpansiveness, the LC of the direct $\DnCNNast$ is constrained to 1. On the other hand, the LC of the residual $\DnCNNast$ is constrained to 2, which is a necessary condition for ensuring its nonexpansiveness. We compare two variants of BC-RED, one with *i.i.d.* block selection and an alternative that proceeds in *epochs* of $b$ consecutive iterations, where at the start of each epoch the set $\{1, \dots, b\}$ is reshuffled, and $i_k$ is then selected consecutively from this ordered set. The figure first confirms our observation of the convergence of BC-RED under different $\DnCNNast$, and further highlights the faster convergence speed of BC-RED due to its ability to select a larger step-size and immediately reuse each block update. Among the two block selection rules, *BC-RED (epoch)* clearly outperforms *BC-RED (i.i.d.)* in all our simulations, which has also been observed in traditional coordinate descent methods [@Wright2015]. However, the theoretical understanding of this gap in performance between *epoch* and *i.i.d.* block selection remains elusive. Fig. \[Fig:MoreExamples\] visually compares the images recovered by BC-RED and RED and two baseline methods. First, the images visually illustrate the excellent agreement between BC-RED and RED. Second, leveraging advanced denoisers in BC-RED considerably improves the reconstruction quality over PGM with the traditional TV prior. For instance, BC-RED under $\DnCNNast$ outperforms PGM under TV by 1 dB for the Fourier matrix. Finally, we note the stability of BC-RED using the CNN denoiser versus the deteriorating performance of U-Net, which is trained end-to-end for the Radon matrix with 30 dB noise. This fact highlights one key merit of the RED framework, namely that the CNN denoiser, trained only once, can be directly applied in different scenarios and for different tasks with no degradation.

![image](figures/convergenceCT){width="80.00000%"}

![image](figures/moreExamples){width="80.00000%"}

In BC-RED, the parameter $\tau$ controls the tradeoff between $\zer(\nabla g)$ and $\fix(\Dsf)$. Fig. \[Fig:imageFlow\] illustrates the evolution of images reconstructed by BC-RED for different $\tau$. The first row corresponds to the reconstruction from the Fourier measurements with 30 dB noise, while the second row corresponds to the Radon measurements with 40 dB noise. The figure clearly shows how $\tau$ explicitly adjusts the balance between the data-fit and the denoiser. In particular, small $\tau$, corresponding to weak denoising, results in unwanted artifacts in the reconstructed images, while large $\tau$ promotes denoising strength but smooths out desired features and details. The leftmost images in Fig.
\[Fig:imageFlow\] show the optimal balance introduced by $\tau^\ast$. To conclude, we present the experimental details of the galaxy image recovery task. In the simulation, we used the dataset from [@Farrens.etal2017]. The dataset contains 10’000 galaxy survey images from the GREAT3 Challenge [@Mandelbaum.etal2014], and each image is cropped to a $41 \times 41$ pixel size. The dataset also includes 597 simulated space variant point spread functions (PSF) corresponding to 597 physical positions across four $4096 \times 4132$ pixel CCDs [@Cropper.etal2012; @Kuntzer.etal2016]. In order to synthesize the $8292 \times 8364$ pixel image, we first selected 597 galaxy images from the dataset and degraded each of them by a different PSF, and then placed the degraded images back at the corresponding positions in the full image. Note that we also contaminated each degraded image with AWGN of 5 dB. Figure \[Fig:DnCNNstar\] shows the architecture of the 4-layer $\DnCNNast$ used as the denoiser for the galaxy image recovery. We generated 72000 training examples by rotating and flipping the remaining 9000 images, and trained the neural network to learn the noise residual with LC$=2$. Since the locations of galaxies were known in this case, we optimized the speed of BC-RED by only updating the blocks containing galaxies. In practice, such block selection strategies can be efficiently implemented by applying a threshold on image intensities to separate blocks with galaxies from the ones that have only noise. As illustrated in Fig. \[Fig:ConvergenceGalaxy\], BC-RED converged to a relative accuracy of about $4.78 \times 10^{-5}$ within 120 seconds, which corresponds to 100 iterations of the algorithm, with $b$ BC-RED updates grouped as a single iteration. Fig. \[Fig:MoreGalaxies\] illustrates the performance of BC-RED under $\DnCNNast$ for 4 example galaxies selected from the $1316 \times 1245$ pixel sub-image. The first row on the left shows the same galaxy as in Fig. \[Fig:galaxyImages\] in the main paper. We obtained the reconstruction for the low-rank matrix prior by running the corresponding algorithm with its default parameter values. This experiment demonstrates that BC-RED can indeed be applied to a realistic, nontrivial image recovery task on a large image.

![image](figures/imageFlow){width="80.00000%"}

![Illustration of the convergence of BC-RED under $\DnCNNast$ in the realistic, large-scale image recovery task. BC-RED is run for 100 iterations, which leads to an accuracy of $4.78 \times 10^{-5}$ within 120 seconds. The efficiency of the algorithm is due to the sparsity of the recovery problem.[]{data-label="Fig:ConvergenceGalaxy"}](figures/convergenceGalaxy){width="45.00000%"}

![image](figures/galaxyMore){width="80.00000%"}

S. V. Venkatakrishnan, C. A. Bouman, and B. Wohlberg, “Plug-and-play priors for model based reconstruction,” in *Proc. IEEE Global Conf. Signal Process. and Inf. Process. ([GlobalSIP]{})*, 2013. N. Parikh and S. Boyd, “Proximal algorithms,” *Foundations and Trends in Optimization*, vol. 1, no. 3, pp. 123–231, 2014. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse [3-D]{} transform-domain collaborative filtering,” *IEEE Trans. Image Process.*, vol. 16, no. 16, pp. 2080–2095, August 2007. K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang, “Beyond a [G]{}aussian denoiser: [R]{}esidual learning of deep [CNN]{} for image denoising,” *IEEE Trans. Image Process.*, vol. 26, no. 7, pp. 3142–3155, July 2017.
A. Danielyan, V. Katkovnik, and K. Egiazarian, “[BM3D]{} frames and variational image deblurring,” *IEEE Trans. Image Process.*, vol. 21, no. 4, pp. 1715–1728, April 2012. S. H. Chan, X. Wang, and O. A. Elgendy, “Plug-and-play [ADMM]{} for image restoration: Fixed-point convergence and applications,” *IEEE Trans. Comp. Imag.*, vol. 3, no. 1, pp. 84–98, March 2017. S. [*et al.*]{}, “Plug-and-play priors for bright field electron tomography and sparse interpolation,” *IEEE Trans. Comp. Imag.*, vol. 2, no. 4, pp. 408–423, December 2016. S. Ono, “Primal-dual plug-and-play image restoration,” *IEEE Signal Process. Lett.*, vol. 24, no. 8, pp. 1108–1112, 2017. U. S. Kamilov, H. Mansour, and B. Wohlberg, “A plug-and-play priors approach for solving nonlinear imaging inverse problems,” *IEEE Signal Process. Lett.*, vol. 24, no. 12, pp. 1872–1876, December 2017. T. Meinhardt, M. Moeller, C. Hazirbas, and D. Cremers, “Learning proximal operators: [U]{}sing denoising networks for regularizing inverse imaging problems,” in *Proc. IEEE Int. Conf. Comp. Vis. (ICCV)*, 2017. K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep [CNN]{} denoiser prior for image restoration,” in *Proc. [IEEE]{} Conf. Computer Vision and Pattern Recognition ([CVPR]{})*, 2017. G. T. Buzzard, S. H. Chan, S. Sreehari, and C. A. Bouman, “Plug-and-play unplugged: [O]{}ptimization free reconstruction using consensus equilibrium,” *SIAM J. Imaging Sci.*, vol. 11, no. 3, pp. 2001–2020, 2018. Y. Sun, B. Wohlberg, and U. S. Kamilov, “An online plug-and-play algorithm for regularized image reconstruction,” *IEEE Trans. Comput. Imaging*, 2019. A. M. Teodoro, J. M. Bioucas-Dias, and M. Figueiredo, “A convergent image fusion algorithm using scene-adapted [G]{}aussian-mixture-based denoising,” *IEEE Trans. Image Process.*, vol. 28, no. 1, pp. 451–463, Jan. 2019. E. K. Ryu, J. Liu, S. Wang, X. Chen, Z. Wang, and W. Yin, “[P]{}lug-and-play methods provably converge with properly trained denoisers,” in *Proc. 36th Int. Conf. Machine Learning ([ICML]{})*, 2019. J. Tan, Y. Ma, and D. Baron, “Compressive imaging via approximate message passing with image denoising,” *IEEE Trans. Signal Process.*, vol. 63, no. 8, pp. 2085–2092, Apr. 2015. C. A. Metzler, A. Maleki, and R. G. Baraniuk, “From denoising to compressed sensing,” *IEEE Trans. Inf. Theory*, vol. 62, no. 9, pp. 5117–5144, September 2016. C. A. Metzler, A. Maleki, and R. Baraniuk, “[BM3D]{}-[PRGAMP]{}: [C]{}ompressive phase retrieval based on [BM3D]{} denoising,” in *Proc. [IEEE]{} Int. Conf. Image Proc.*, 2016. A. Fletcher, S. Rangan, S. Sarkar, and P. Schniter, “Plug-in estimation in high-dimensional linear inverse problems: [A]{} rigorous analysis,” in *Proc. Advances in Neural Information Processing Systems 32*, 2018. Y. Romano, M. Elad, and P. Milanfar, “The little engine that could: [R]{}egularization by denoising ([RED]{}),” *SIAM J. Imaging Sci.*, vol. 10, no. 4, pp. 1804–1844, 2017. S. A. Bigdeli, M. Jin, P. Favaro, and M. Zwicker, “Deep mean-shift priors for image restoration,” in *Proc. Advances in Neural Information Processing Systems 31*, 2017. E. T. Reehorst and P. Schniter, “Regularization by denoising: [C]{}larifications and new interpretations,” *IEEE Trans. Comput. Imag.*, vol. 5, no. 1, pp. 52–67, Mar. 2019. C. A. Metzler, P. Schniter, A. Veeraraghavan, and R. G. Baraniuk, “pr[D]{}eep: [R]{}obust phase retrieval with a flexible deep network,” in *Proc. 35th Int. Conf. Machine Learning ([ICML]{})*, 2018. G. Mataev, M. Elad, and P. 
Milanfar, “[DeepRED]{}: [D]{}eep image prior powered by [RED]{},” in *Proc. [IEEE]{} Int. Conf. Comp. Vis. Workshops ([ICCVW]{})*, 2019. P. Tseng, “Convergence of a block coordinate descent method for nondifferentiable minimization,” *J. Optimiz. Theory App.*, vol. 109, no. 3, pp. 475–494, June 2001. Y. Nesterov, “Efficiency of coordinate descent methods on huge-scale optimization problems,” *SIAM J. Optim.*, vol. 22, no. 2, pp. 341–362, 2012. A. Beck and L. Tetruashvili, “On the convergence of block coordinate descent type methods,” *SIAM J. Optim.*, vol. 23, no. 4, pp. 2037–2060, Oct. 2013. S. J. Wright, “Coordinate descent algorithms,” *Math. Program.*, vol. 151, no. 1, pp. 3–34, Jun. 2015. O. Fercoq and A. Gramfort, “Coordinate descent methods,” Lecture notes *[O]{}ptimization for [D]{}ata [S]{}cience*, École polytechnique, 2018. Y. Sun, J. Liu, and U. S. Kamilov, “Block coordinate regularization by denoising,” in *Proc. Advances in Neural Information Processing Systems 33*, Vancouver, BC, Canada, December 8-14, 2019. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” *Physica D*, vol. 60, no. 1–4, pp. 259–268, November 1992. R. Tibshirani, “Regression and selection via the lasso,” *J. R. Stat. Soc. Series B (Methodological)*, vol. 58, no. 1, pp. 267–288, 1996. E. J. Cand[è]{}s, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” *IEEE Trans. Inf. Theory*, vol. 52, no. 2, pp. 489–509, February 2006. D. L. Donoho, “Compressed sensing,” *IEEE Trans. Inf. Theory*, vol. 52, no. 4, pp. 1289–1306, April 2006. M. A. T. Figueiredo and R. D. Nowak, “An [EM]{} algorithm for wavelet-based image restoration,” *IEEE Trans. Image Process.*, vol. 12, no. 8, pp. 906–916, August 2003. I. Daubechies, M. Defrise, and C. D. Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” *Commun. Pure Appl. Math.*, vol. 57, no. 11, pp. 1413–1457, November 2004. J. Bect, L. Blanc-Feraud, G. Aubert, and A. Chambolle, “A $\ell_1$-unified variational framework for image restoration,” in *Proc. [ECCV]{}*, Springer, Ed., vol. 3024, New York, 2004, pp. 1–13. A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” *SIAM J. Imaging Sciences*, vol. 2, no. 1, pp. 183–202, 2009. J. Eckstein and D. P. Bertsekas, “On the [D]{}ouglas-[R]{}achford splitting method and the proximal point algorithm for maximal monotone operators,” *Mathematical Programming*, vol. 55, pp. 293–318, 1992. M. V. Afonso, J. M.Bioucas-Dias, and M. A. T. Figueiredo, “Fast image recovery using variable splitting and constrained optimization,” *IEEE Trans. Image Process.*, vol. 19, no. 9, pp. 2345–2356, September 2010. M. K. Ng, P. Weiss, and X. Yuan, “Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods,” *SIAM J. Sci. Comput.*, vol. 32, no. 5, pp. 2710–2736, August 2010. S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” *Foundations and Trends in Machine Learning*, vol. 3, no. 1, pp. 1–122, 2011. Z. Peng, T. Wu, Y. Xu, M. Yan, and W. Yin, “Coordinate-friendly structures, algorithms and applications,” *Adv. Math. Sci. Appl.*, vol. 1, no. 1, pp. 57–119, Apr. 2016. M. Elad and M. 
Aharon, “Image denoising via sparse and redundant representations over learned dictionaries,” *IEEE Trans. Image Process.*, vol. 15, no. 12, pp. 3736–3745, December 2006. A. Buades, B. Coll, and J. M. Morel, “Image denoising methods. [A]{} new nonlocal principle,” *SIAM Rev*, vol. 52, no. 1, pp. 113–147, 2010. D. Zoran and Y. Weiss, “From learning models of natural image patches to whole image restoration,” in *Proc. IEEE Int. Conf. Comp. Vis. (ICCV)*, 2011. H. H. Bauschke and P. L. Combettes, *Convex Analysis and Monotone Operator Theory in Hilbert Spaces*, 2nd ed.1em plus 0.5em minus 0.4em Springer, 2017. E. K. Ryu and S. Boyd, “A primer on monotone operator methods,” *Appl. Comput. Math.*, vol. 15, no. 1, pp. 3–43, 2016. Z. Peng, Y. Xu, M. Yan, and W. Yin, “[ARock]{}: [A]{}n algorithmic framework for asynchronous parallel coordinate updates,” *SIAM J. Sci. Comput.*, vol. 38, no. 5, pp. A2851–A2879, 2016. Y. T. Chow, T. Wu, and W. Yin, “Cyclic coordinate-update algorithms for fixed-point problems: Analysis and applications,” *SIAM J. Sci. Comput.*, vol. 39, no. 4, pp. A1280–A1300, 2017. T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral normalization for generative adversarial networks,” in *International Conference on Learning Representations ([ICLR]{})*, 2018. H. Sedghi, V. Gupta, and P. M. Long, “The singular values of convolutional layers,” in *International Conference on Learning Representations ([ICLR]{})*, 2019. H. Gouk, E. Frank, B. Pfahringer, and M. Cree, “Regularisation of neural networks by enforcing [L]{}ipschitz continuity,” 2018, arXiv:1804.04368. J. J. Moreau, “Proximit[é]{} et dualit[é]{} dans un espace hilbertien,” *Bull. Soc. Math. France*, vol. 93, pp. 273–299, 1965. R. T. Rockafellar and R. Wets, *Variational Analysis*.1em plus 0.5em minus 0.4emSpringer, 1998. Y.-L. Yu, “Better approximation and faster algorithm using the proximal average,” in *Proc. Advances in Neural Information Processing Systems 26*, 2013. A. C. Kak and M. Slaney, *Principles of Computerized Tomographic Imaging*.1em plus 0.5em minus 0.4em, 1988. F. Knoll, K. Brendies, T. Pock, and R. Stollberger, “Second order total generalized variation ([TGV]{}) for [MRI]{},” *Magn. Reson. Med.*, vol. 65, no. 2, pp. 480–491, February 2011. , “[fastMRI]{}: [A]{}n open dataset and benchmarks for accelerated [MRI]{},” 2018, arXiv:1811.08839. \[Online\]. Available: <http://arxiv.org/abs/1811.08839> A. Beck and M. Teboulle, “Fast gradient-based algorithm for constrained total variation image denoising and deblurring problems,” *IEEE Trans. Image Process.*, vol. 18, no. 11, pp. 2419–2434, November 2009. K. H. Jin, M. T. McCann, E. Froustey, and M. Unser, “Deep convolutional neural network for inverse problems in imaging,” *IEEE Trans. Image Process.*, vol. 26, no. 9, pp. 4509–4522, Sep. 2017. Y. S. Han, J. Yoo, and J. C. Ye, “Deep learning with domain adaptation for accelerated projection reconstruction [MR]{},” *Magn. Reson. Med.*, vol. 80, no. 3, pp. 1189–1205, Sep. 2017. O. Ronneberger, P. Fischer, and T. Brox, “[U-Net]{}: [C]{}onvolutional networks for biomedical image segmentation,” in *Medical Image Computing and Computer-Assisted Intervention ([MICCAI]{})*, 2015. F. Niu, B. Recht, C. R[é]{}, and S. J. Wright, “[Hogwild!]{}: [A]{} lock-free approach to parallelizing stochastic gradient descent,” in *Proc. Advances in Neural Information Processing Systems 24*, 2011. S. Farrens, F. M. Ngol[è]{} Mboula, and J.-L. Starck, “Space variant deconvolution of galaxy survey images,” *A&A*, vol. 601, p. A66, 2017. 
\[Online\]. Available: <https://doi.org/10.1051/0004-6361/201629709> S. Boyd and L. Vandenberghe, *Convex Optimization*.1em plus 0.5em minus 0.4emCambridge Univ. Press, 2004. Y. Nesterov, *Introductory Lectures on Convex Optimization: A Basic Course*.1em plus 0.5em minus 0.4emKluwer Academic Publishers, 2004. M. [*et al.*]{}, “The third gravitational lensing accuracy testing ([GREAT3]{}) challenge handbook,” *Astrophys. J. Suppl. S.*, vol. 212, no. 1, p. 5, Aug. 2014. \[Online\]. Available: <https://doi.org/10.1088%2F0067-0049%2F212%2F1%2F5> C. [*et al.*]{}, “[VIS]{}: [T]{}he visible imager for [E]{}uclid,” in *Proc. SPIE*, vol. 8442, 2012. \[Online\]. Available: <https://doi.org/10.1117/12.927241> T. Kuntzer, M. Tewes, and F. Courbin, “Stellar classification from single-band imaging using machine learning,” *A&A*, vol. 591, p. A54, 2016. \[Online\]. Available: <https://doi.org/10.1051/0004-6361/201628660> [^1]: This material is based upon work supported in part by NSF award CCF-1813910 and by NVIDIA Corporation with the donation of the Titan Xp GPU for research. This paper was presented at the 2019 33th Annual Conference on Neural Information Processing Systems (NeurIPS). [^2]: Y. Sun is with the Department of Computer Science & Engineering, Washington University in St. Louis, MO 63130, USA. [^3]: J. Liu is with the Department of Electrical & Systems Engineering, Washington University in St. Louis, MO 63130, USA. [^4]: U. S. Kamilov (email: [email protected]) is with the Department of Computer Science & Engineering and the Department of Electrical & Systems Engineering, Washington University in St. Louis, MO 63130, USA. [^5]: $^\ast$ indicates equal contribution [^6]: <https://github.com/wustl-cig/bcred>
--- author: - | M. E. Cates$^1$, O. Henrich$^1$, D. Marenduzzo$^1$, K. Stratford$^2$\ $^1$SUPA, School of Physics and Astronomy, and $^2$EPCC,\ University of Edinburgh, JCMB Kings Buildings,\ Mayfield Road, Edinburgh EH9 3JZ, Scotland title: 'Lattice Boltzmann simulations of liquid crystalline fluids: active gels and blue phases' --- Lattice Boltzmann simulations have become a method of choice to solve the hydrodynamic equations of motion of a number of complex fluids. Here we review some recent applications of lattice Boltzmann to study the hydrodynamics of liquid crystalline materials. In particular, we focus on the study of (a) the exotic blue phases of cholesteric liquid crystals, and (b) active gels – a model system for actin plus myosin solutions or bacterial suspensions. In both cases lattice Boltzmann studies have proved useful to provide new insights into these complex materials. Introduction ============ In recent years, the lattice Boltzmann (LB) algorithm [@Succi] has emerged as a powerful method to study fluid dynamics. Due to its conceptual simplicity and codability (particularly on parallel computers), LB provides an attractive alternative to other methods such as finite elements algorithms. In the last couple of decades, in particular, the LB method has increasingly been applied to the hydrodynamics of complex fluids, such as binary fluids, colloidal suspensions and liquid crystals [@bijel_review; @Swift96; @Gonnella97; @Denniston04; @Marenduzzo07a; @Cates08; @Stratford]. In applying LB to complex fluids, one often aims at solving two coupled sets of partial differential equations. One set describes the evolution of the order parameter (e.g., composition for binary mixtures, or orientational order in liquid crystals), whereas the other set describes conservation of mass and momentum (via the continuity and Navier-Stokes equations for the velocity field). In liquid crystals, which are the focus of this article, one typically considers the Beris-Edwards model [@Beris], which is defined starting from a free energy expressed in terms of a tensorial order parameter, ${\bf Q}$, whose largest eigenvalue describes locally the strength of local molecular alignment (nematic order), and whose corresponding eigenvector defines the director $\mathbf{n}$ along which this alignment prevails. (See the Appendix for the precise form of the free energy adopted.) Note that simpler descriptions, in which the magnitude of the ordering is assumed constant and only the director varies, are unsuitable for describing blue phases. This is because such phases contain defect lines at which the ordering drops locally [@Wright89]. The equation of motion for $\mathbf{Q}$ is then [@Beris]: $$\label{Q_motion} D_t \mathbf{Q} = \Gamma \mathbf{H}.$$ The left member of equation \[Q\_motion\] is a “material derivative” describing the time evolution of the order parameter advected with the velocity $\mathbf{u}$ of a fluid element. For fluids of rod-like particles such as liquid crystals, flow gradients may lead to local rotations and these are also taken into account by the material derivative. In Eq. \[Q\_motion\], $\Gamma$ is a collective rotational diffusion constant, which sets the time scale for the relaxation of orientational order (usually in the millisecond range for small-molecule liquid crystals), and $\mathbf{H}$ is the “molecular field”, which provides the force for this relaxation motion. 
The molecular field involves the derivative of the free energy with respect to the order parameter (see the Appendix for the specific form of $\mathbf{H}$). As stated previously, the fluid velocity obeys the continuity equation, $\partial_t\rho = -\nabla\cdot(\rho\mathbf{u}) \equiv -\partial_\alpha (\rho u_\alpha)$, where $\rho$ is the fluid density, which in practical cases can be taken as constant (so that the fluid is incompressible). However, a feature of LB is that slight fluid compressibility is maintained in the algorithm; this makes all the dynamics fully local (the sound speed is finite) rather than having to solve each timestep for a pressure field that responds instantaneously to distant events. (Locality is important for efficient parallelization strategies, and maintains near-linear scaling of the computational cost with the size of the system under investigation [@Kevinscaling].) The fluid velocity also obeys the Navier-Stokes equation, which for effectively incompressible fluids reads $$\begin{aligned} \label{Navier-Stokes} \rho\left[\partial_t + u_\beta\partial_\beta\right] u_\alpha & = & \partial_\beta \Pi_{\alpha\beta} \\ \nonumber & + & \eta\partial_{\beta}\left(\partial_\alpha u_\beta+ \partial_\beta u_\alpha\right),\end{aligned}$$ where $\eta$ is the viscosity, Cartesian components are denoted by Greek indices, and $\Pi_{\alpha\beta}$ is a thermodynamic stress tensor. This tensor, like $\mathbf{H}$, is found by differentiation of the free energy, and its divergence $\partial_\beta\Pi_{\alpha\beta}$ represents an effective body force, acting on the fluid, arising from the response to deformation of the order parameter field. (The Appendix gives the explicit form.) In the case of active fluids, the forces do not solely stem from a free energy but include terms arising from the dissipative conversion of chemical energy into motion. This creates an additional contribution to $\Pi_{\alpha\beta}$. Eqs. \[Q\_motion\] and \[Navier-Stokes\] are very difficult to solve due to their inherent nonlinearities, and only limited progress is possible with analytical techniques. In contrast, LB offers an ideal method to solve these equations numerically, allowing their full dynamics to be addressed not only in one dimension but also in 2D and 3D flows. Such studies can not only test approximate analytic solutions where these exist, but also lead to improved insight into the nonlinear physics contained in the underlying models. For purely Newtonian fluids, the LB algorithm proceeds by the introduction of “mesoscopic” velocity distribution functions $f(\mathbf{c}_i;\mathbf{x})$, proportional to the density of (notional) fluid particles sitting at a lattice node at $\mathbf{x}$ having a certain velocity $\mathbf{c}_i$ chosen from a discrete set [@Succi]. (These discrete velocities correspond to moving by one lattice site in a timestep.) The distribution functions evolve according to an appropriate local dynamics, which recovers the Navier-Stokes and continuity equations upon coarse graining. (The density $\rho$ then equates to $\sum_i f(\mathbf{c}_i)$ and the momentum density $\rho\mathbf{u}$ is the first moment, $\sum_i f(\mathbf{c}_i)\mathbf{c}_i$, of the distribution.) Historically, the first LB approach to study complex fluids [@Swift96; @Denniston04] consisted of one extra set of distribution functions for each new order parameter component entering the equations of motion.
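Before turning to the coupling with order-parameter dynamics, here is a minimal sketch of a single Newtonian LB update of the kind just described, on a D2Q9 lattice with a single-relaxation-time (BGK) collision step. The grid size, relaxation time and initial condition are illustrative choices, and the thermodynamic forcing term is deliberately left out; this is a toy version, not the production algorithm of the works cited above.

```python
# A minimal sketch of the Newtonian D2Q9 lattice Boltzmann update described
# above, with a single-relaxation-time (BGK) collision. Grid size, relaxation
# time and the initial perturbation are illustrative choices; forcing by the
# order-parameter stress is not included here.
import numpy as np

nx, ny, tau = 64, 64, 0.8
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 velocity set
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # lattice weights

def equilibrium(rho, u):
    """Second-order equilibrium distributions (lattice units, c_s^2 = 1/3)."""
    cu = np.einsum('ia,xya->ixy', c, u)
    usq = np.einsum('xya,xya->xy', u, u)
    return w[:, None, None] * rho * (1.0 + 3.0*cu + 4.5*cu**2 - 1.5*usq)

rho = 1.0 + 0.01 * np.random.rand(nx, ny)               # small density perturbation
u = np.zeros((nx, ny, 2))
f = equilibrium(rho, u)

for step in range(100):
    rho = f.sum(axis=0)                                  # density: sum_i f_i
    u = np.einsum('ixy,ia->xya', f, c) / rho[..., None]  # velocity from the first moment
    f += (equilibrium(rho, u) - f) / tau                 # local BGK collision
    for i in range(9):                                   # streaming to neighbouring nodes
        f[i] = np.roll(f[i], shift=tuple(c[i]), axis=(0, 1))
```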
More recently, however, it has become apparent that this “full LB” approach had the drawback of requiring a large memory to store all the distribution functions for the whole lattice, on top of being rather cumbersome from a theoretical viewpoint. At the same time, the new generation of LB studies for complex fluids has shown the need to address ever larger systems, to make best use of the potentially very good scalability of parallel LB codes. As a result, new hybrid algorithms have been coded and deployed, for both binary fluids and liquid crystals [@Tiribocchi09; @Marenduzzo07b], where the LB algorithm is used to solve the Navier-Stokes equation (Eq. \[Navier-Stokes\]), and is coupled to a standard finite-difference solver for the order parameter dynamics (Eq. \[Q\_motion\]). At each timestep, the fluid velocity found by LB is used to calculate the advective derivative in Eq. \[Q\_motion\], while the order parameter found from that equation is used to compute the forcing term, $\partial_\beta\Pi_{\alpha\beta}$, in Eq. \[Navier-Stokes\]. By this division of labour, LB is *only* used to handle the momentum and mass transport – the problem for which it was originally devised. In what follows, we review recent applications of full and hybrid LB algorithms to the study of active liquid crystalline fluids and the blue phases of cholesteric liquid crystals. In both cases, as we shall see, numerical simulations have proved extremely helpful in providing a link between theoretical predictions and experimental observations. While in this work we focus on a “hydrodynamic” description of liquid crystalline fluids via continuum models, it is important to note that there is a variety of other coarse grained models and methods to study liquid crystals. Most relevant to our topics here, liquid crystalline molecules may be individually modelled via e.g. soft spherocylinders, interacting with Gay-Berne potentials (see e.g. [@PCCP09; @Wilson08] for a series of recent examples) and a related approach has also managed to stabilise blue phases [@Memmer00], although the length scale accessible with this more detailed approach is typically significantly smaller than the ones which can be studied with an LB continuum description. Active fluid simulations ======================== Active fluids have become a highly topical research area at the interface of soft matter and biological physics. Generally speaking, an active particle “absorbs energy from its surroundings or from an internal fuel tank and dissipates it in the process of carrying out internal movements”. (The quote is from a well-known paper of the Bangalore group, which helped pioneer the use of continuum models of active matter [@Hatwalne04].) This definition applies to bacteria, swimming algae and other microorganisms, as well as to living cells or cell extracts. However active materials may also be non-biological, and a synthetic example is a shaken granular fluid [@Narayan07]. As the definition clearly implies, active materials may remain far from thermodynamic equilibrium even in steady state, due to their continuous energy intake, and this renders their properties of particular interest to physicists. We consider active fluids comprising a concentrated suspension of such active particles. Paradigmatic examples of active fluids are bacterial suspensions and solutions of cytoskeletal gel components (actin fibers or microtubules) with molecular motors (myosin or kinesin). 
The term “active gels” is also widely used for these materials; but while all are non-Newtonian, not all of them are strongly viscoelastic. Experimental studies with active fluids have uncovered a wide range of intriguing and non-trivial physical properties. For instance, microscopy studies of droplets of [*Salmonella*]{} [@Harshey] and of [*B. subtilis*]{} [@Goldstein; @Aranson; @Cisneros07] reveal that when the bacteria are concentrated enough (more than about 20-30% in volume fraction), long range correlations arise, creating eye-catching patterns of flow involving long-lived vortices (see Fig. 1). These resemble the turbulent flow of fluids at high Reynolds numbers, although remarkably in this case, the Reynolds number is very small – effectively zero [@Bray]. (In this respect, the resulting “bacterial turbulence” resembles elastic turbulence in polymer solutions, which is attained past a critical value of the Weissenberg number [@Morozov; @Larson].) The equations of motion of an active liquid crystalline fluid have been written down, either on the basis of symmetry [@Hatwalne04; @Kruse04], or via a coarse graining of an underlying microscopic model of stiff cytoskeletal gels and molecular motors [@Liverpool]. There are three main differences between these equations and the equations of motion of a passive liquid crystal. The first one is the presence of an active term in the stress tensor, whose divergence acts as a force in the Navier-Stokes equation (Eq. \[Navier-Stokes\]). This extra active term was first shown in [@Simha02] to be proportional to an activity constant, $\zeta$, which to lowest order is linear in the energy uptake of the fluid (e.g., via ATP hydrolysis), times the local order parameter $\mathbf{Q}$. A second term arises from activity also, but the form of this can be absorbed into the free energy (see Appendix) and we set this to zero. A third and final difference can arise in cases where an oriented swarm of swimming particles has a collective mean velocity relative to the surrounding fluid, causing a ‘self-advection’ effect. Such materials are called ‘polar’ and are distinct from the ‘apolar’ case which describes either a swarm in which equal numbers of particles swim forward and backward along the director axis, or particles which are non-motile but nonetheless exert active forces on the fluid. (Motile and non-motile active particles are sometimes called “movers” and “shakers” respectively.) More details on the exact forms of these active terms, and of the equations of motion used to describe polar systems, are given in the Appendix. The sign of $\zeta$ – hence of the active apolar contribution to the stress tensor – is of vital importance for the hydrodynamics and the rheology of active fluids [@Marenduzzo07a; @Cates08; @Hatwalne04]. A positive $\zeta$ corresponds to a suspension of extensile active particles, or “pushers” [@Ishikawa08], which exert forces along the molecular axis away from the centre of mass and towards the surrounding fluid. A negative $\zeta$ corresponds to a fluid of contractile active particles, or “pullers” [@Ishikawa08], for which the force dipole is exerted axially towards the centre of mass. Examples of contractile fluids are suspensions of the biflagellated alga [*Chlamydomonas*]{} and actomyosin gels (or, more generally, suspensions of non-permanently cross-linked cytoskeletal gels and molecular motors). On the other hand, the majority of bacteria are thought to be extensile [@Ishikawa08].
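As a concrete illustration of how this activity parameter enters the momentum balance, the short sketch below evaluates the leading-order active stress $\Pi^{\rm active}_{\alpha\beta}=-\zeta Q_{\alpha\beta}$ (its explicit form is given in the Appendix) and its divergence, which acts as the body force in Eq. \[Navier-Stokes\], using central finite differences on a periodic 2D grid. The $\mathbf{Q}$ field used here is random and purely illustrative, standing in for the solution of the order-parameter equation.

```python
# A small sketch of how the leading-order active stress enters the momentum
# balance: Pi^active_{alpha beta} = -zeta * Q_{alpha beta} (see the Appendix),
# whose divergence acts as a body force in the Navier-Stokes equation. Central
# finite differences on a periodic 2D grid; the Q field below is random and
# purely illustrative.
import numpy as np

def active_force(Q, zeta, dx=1.0):
    """Q: array of shape (nx, ny, 2, 2); returns f_alpha = d_beta Pi^active_{alpha beta}."""
    force = np.zeros(Q.shape[:2] + (2,))
    for alpha in range(2):
        for beta in range(2):
            dQ = (np.roll(Q[..., alpha, beta], -1, axis=beta)
                  - np.roll(Q[..., alpha, beta], +1, axis=beta)) / (2.0 * dx)
            force[..., alpha] -= zeta * dQ       # f_alpha = -zeta * d_beta Q_{alpha beta}
    return force

Q = 0.01 * np.random.randn(32, 32, 2, 2)
print(active_force(Q, zeta=0.002).shape)          # (32, 32, 2)
```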
However, we are currently lacking quantitative experiments measuring the velocity field around active particles, which could lead to estimates of the values of $\zeta$ in these various cases. Alongside the two distinctions already made (apolar versus polar, and contractile versus extensile), all rodlike molecules, including passive ones, fall into two further categories, known as “flow aligning” and “flow tumbling”. Without activity, the former exhibit stable flow in which the director is inclined to the flow direction at a certain angle (the Leslie angle), whereas the latter undergo continuous director evolution, which is frequently chaotic [@Chakrabarti04]. To avoid the double complexity created by the flow tumbling instability on top of activity, we address here only the flow-aligning case. Early theoretical work determined the linear stability of these systems and it was found that an infinite sample of active material with nonzero $\mathbf{Q}$ (whether polar or apolar) is hydrodynamically unstable to order parameter fluctuations, and that this instability is connected to the generation of a spontaneous fluid flow [@Hatwalne04; @Kruse04; @Simha02]. (For extensile – but not contractile – particles, this instability is present even if the flow velocity is constrained to be a function of one spatial coordinate only.) It was also realised that the introduction of boundaries together with suitable “anchoring conditions” (fixing the molecular orientation) would lead to the stabilisation of the non-flowing ordered phase of uniform $\mathbf{Q}$. (Stability is restored for small values of $\zeta$, lying below a threshold $\zeta_c$ which decreases with system size, $L$, as $1/L^2$.) However, within these analytical approaches, it was not possible to determine the ultimate flow pattern resulting from this hydrodynamic instability. The simulations reported in [@Marenduzzo07a; @Cates08; @Marenduzzo07b] therefore gave the first quantitative predictions for the resulting spontaneous flow patterns in unstable active fluids. We focus first on a quasi-1D slab geometry with planar anchoring along the boundary. Here it was found that for an apolar fluid, upon increasing the activity, the system organises into a spontaneous Poiseuille flow (with a smoothly varying flow velocity, maximal at the centre of the slab). This spontaneously breaks symmetry and causes a net mass flux along the slab axis. For stronger activity and/or larger system sizes, one can also find spontaneously “shear-banded” flows in which successive layers of material have very different shear rates. In a 2D (thin film) flow geometry, the patterns differed significantly from the 1D case and were in good qualitative agreement with observations of flow patterns in, for instance, concentrated [*B. subtilis*]{} and [*Salmonella*]{} suspensions [@Harshey; @Goldstein; @Aranson; @Cisneros07]. Fig. 2 shows some of the patterns which were found in 2D. In the apolar case, for values of $\zeta$ just above the threshold, the instability first evolves into steady state “convective” rolls (Fig. 2a,b) [@Marenduzzo07b]. In some cases we also observed active bands or a succession of rolls (Fig. 2c), whereas deeper in the active phase we found that an initial array of rolls breaks up into what looks like chaotic flow (“bacterial turbulence”) at low Reynolds number (Fig. 2d-f). The equations of motion for polar active suspensions (see Appendix) can be treated with the same method.
The polar case is somewhat similar, in that above a threshold there is again a transition to a spontaneously flowing state. In this case, however, we do not observe rolls or bands and the system jumps directly into the “turbulent” flowing state (see Fig. 2g-i, with 2i showing trajectories of tracer particles which highlight the chaotic appearance of the flow). LB simulations have also proved helpful to characterise the rheology of active fluids. In Ref. [@Liverpool] it was suggested that the viscosity of a contractile active fluid should diverge at the passive isotropic-nematic transition in 2D. Simulations have confirmed and generalised this to the case of a 3D order parameter (albeit constrained to undergo a 1D flow), where it was clarified that the divergence there occurs at the spinodal point (which unlike the 2D case is different from the isotropic-nematic transition point). LB simulations also showed that upon increasing the density towards this spinodal point, one should observe a [*decrease*]{} in the viscosity for extensile fluids. The non-linear rheology should show even more striking behaviours [@Cates08]. Contractile fluids should strongly shear thicken for small forcing, and the extent of thickening should depend on the distance from the isotropic-nematic transition. Within a bulk nematic phase, a formal yield stress (a stress threshold below which there is no flow in steady state) is predicted, whereas for larger forcing, these materials exhibit shear thinning and approach the unenhanced (passive) viscosity at very large shear rates. In contrast, isotropic suspensions of concentrated extensile particles (bacteria) should start from a low viscosity and thicken to again approach the passive behaviour upon increasing the shear rate. Strikingly, a bulk oriented (nematic) phase of extensile active particles should show a zero effective viscosity for shear rates below some critical value. This is because the 3D ordered system in the absence of stress (but constrained to have 1D flow) spontaneously organizes into two shear bands with flow in opposite directions, so that there is zero net velocity of the fluid. A finite relative velocity of the confining walls can now be accommodated, still at zero stress, by adjusting the relative amounts of the two bands. (Since there is now a finite shear rate and no shear stress, the viscosity is formally zero.) Finally, our studies suggested that when subjected to shear flow, extensile fluids should form shear bands more readily than passive ones close to the isotropic-nematic transition, while contractile fluids should shear-band less readily. These rheological predictions remain provisional, assuming as they do a 1D flow profile, and the results may well be modified when this assumption is relaxed. (Such work is now underway in our group.) Nonetheless, the recent advances in experimental techniques, which have made it possible to grow thin concentrated films of, e.g. [*B. subtilis*]{}, should render several of our results, such as the 2D predictions of Fig. 2, testable in the near future. Blue phase simulations ====================== The “blue phases” (BPs) of chiral nematogenic molecules offer spectacular examples of functional soft matter; each comprises a self-sustained network of disclinations [@Wright89] embedded within a nematic matrix. (A disclination is a topological defect line, defined such that the nematic director rotates through a half-turn on traversing any circuit that encloses this line.)
At high temperature, a fluid of chiral nematogenic molecules remains in the “isotropic phase”, with no preferred orientation of the molecular axes. Upon cooling down the sample, the molecules become oriented ($\mathbf{Q}$ is finite), but due to molecular chirality the director field $\mathbf{n}$ rotates with spatial position, describing everywhere a helix with a well-defined axis. (This is called the cholesteric phase.) However, very close to the transition, it is more advantageous locally for the director field to rotate in a helical fashion about any axis perpendicular to a straight line – this complicated pattern was named a “double twist cylinder” [@Wright89]. Mathematically, it is impossible to patch together such double twist cylinders without creating defects in between, and this frustration gives rise to the disclination network observed in the blue phases, and responsible for many of their remarkable physical properties. Most striking among these are their optical properties (such phases can be made in all colours, not just blue) which stem from the presence of a lattice of disclinations with a unit cell whose size is comparable to the wavelength of visible light. In BP I and II (the two most common) this lattice has long range cubic order; BP III however is certainly not cubic, and probably not ordered. BPs have a fascinating scientific history. They were first reported in the late 19th century, by a scientist named Reinitzer, and then long forgotten, until some new experimental interest arose in the 1960s and 1970s. Initial theories of BPs only came out in the 1980s (see e.g. [@Meiboom81]), when the concept of double twist was first proposed. At this stage BPs were widely considered to be of purely academic interest, mainly because they were only stable in a very narrow temperature range (about 1 K) close to the isotropic-cholesteric transition. In 1983, a world-leading expert on liquid crystals, F. C. Frank, said: “They \[Blue phases\] are totally useless, I think, except for one important intellectual use, that of providing tangible examples of topological oddities, and so helping to bring topology into the public domain of science, from being the private preserve of a few abstract mathematicians and particle theorists.” [@Wright89]. In the first decade of the 21st century, this view rather suddenly changed, following advances in fabrication that enormously increased the stability range of BPs, up to about 50 K [@Kikuchi02; @Coles05]. In May 2008 Samsung presented the first blue-phase based liquid crystal display at the annual SID International Symposium. This new display is able to operate at a high frame frequency (240 Hertz), does not require costly alignment treatment at the boundaries of the liquid crystal, and may one day supersede current LC (twisted nematic) display technologies. Traditional theories of BPs (see e.g. [@Grebel84]) were based on semi-analytical approximations. While they were extremely useful to gain a qualitative understanding of the physics of BPs, these approaches had to rely on severe assumptions for progress to be possible. For instance, when estimating the phase diagram, the [**Q**]{} tensor was approximated by a truncated Fourier series. Computational constraints only allowed very few harmonics to be considered. 
As a result, when important experimental observations were mispredicted by the theory, it was not clear whether this was a drawback of the underlying free energy functional (the Landau-de Gennes free energy, see Appendix), or simply an artifact of the simplifications which were employed. These early theoretical works left unexplained the detailed shape of the phase diagram – BPI and BPII appeared in the wrong order on increasing the chirality $\kappa$. (This parameter is proportional to the inverse helical pitch of the cholesteric; see Appendix.) With an electric field applied, the analytic theories were also unable to account for the anomalous field-induced distortion (electrostriction) of BPI, or to explain why a new phase, named BPX, should be stable at all. Finally, most theories assume cubic symmetry, so cannot describe BPIII, or the “blue fog”, which is thought to comprise a network of disclination lines without long-range order. In recent years, LB simulations have been remarkably useful in filling most of these conceptual theoretical gaps, and have significantly extended our quantitative understanding of the physics of blue phases. Firstly, in Refs. [@Dupuis05a; @Alexander08], Eq. \[Q\_motion\] was solved by non-hybrid LB in the absence of fluid flow ($\mathbf{u}= \mathbf{0}$) with a set of initial conditions suggested by analytical expressions for the infinite chirality limit of BPI, BPII and $O_5$. ($O_5$ is another disclination lattice, which was proposed as stable by early theories, but not observed in experiments.) This procedure amounts to a free energy minimisation with a topological constraint; that is, the purely relaxational dynamics of Eq. \[Q\_motion\] was shown to maintain the point group of the disclination network chosen initially. Therefore, this approach can be used to map out the full phase diagram as a function of chirality, $\kappa$, and a reduced temperature parameter $\tau$ (see Appendix for the mathematical definition of these parameters). This approach has the advantage of not making any approximations beyond those implicit in the selected (Landau-de Gennes) free energy. These LB simulations showed that the phase diagram predicted by the continuum theory is actually in [*good*]{} qualitative agreement with the experiments: BPI and BPII show up in the right order on increasing chirality, and $O_5$ is relegated to unphysical regions in the phase diagram [@Dupuis05a]. The structure of these phases is shown in Fig. 3a,b where a surface is drawn around each defect line at a certain contour of the ordering strength. (This creates a rendering of each disclination as a fattened tube.) The numerical phase diagram (Fig. 3c) is in good agreement with the experimental one (Fig. 3d). An interesting calculation in [@Alexander06] has shown that even including an extra set of spherical harmonics in the analytical scheme of Ref. [@Grebel84] is unable to reproduce the details shown by the numerics. This is an example in which simulations are essential to establish accurately what the predictions of a given theory actually are. Similarly, LB simulations were performed in the presence of an electric field in Ref. [@Alexander08]. It was found that under a small electric field the unit cell of BPII tended to elongate along the field direction, and shrink perpendicularly to it, whereas BPI displayed the opposite behaviour, once more in agreement with experiments.
An intermediate field also turned the disclination network of BPI into a different structure (which it is tempting to identify with the experimentally observed BPX). Thus, even when an electric field is present, the Landau-de Gennes free energy works remarkably well, although analytically tractable approximations of the resulting equations are not adequate to capture its predictions. As well as allowing one to find equilibrium states under different fields and thermodynamic conditions, LB of course comes into its own for dynamical problems in which fluid flow cannot be ignored. For instance, the existence of a disclination network affects the response of a BP to an imposed Poiseuille flow [@Dupuis05b]. Here, LB work has shown that flow can lead to the unzipping of disclinations of integer topological charge. (On a circuit around such a defect line, the director rotates through a whole number of turns, rather than the half-turn around a standard disclination; these higher-order lines can form metastable networks in the absence of an applied flow.) Flow can also cause the bending and twisting of the BPI and BPII disclination networks. This bending and twisting lead to an elastic component in the rheological behaviour, and as a result the simulations predict “permeative” flows, in which the molecules comprising the blue phase flow through a static disclination pattern whose geometry is hardly perturbed by the underlying molecular transport. The same kind of flow also occurs when cholesterics are sheared by small forcing along the direction of their helical axis [@permeation]. Typically, BPs also display significant shear-thinning behaviour, as a strong enough flow disrupts the disclination network in a manner that reduces the elastic stress. Until very recently, LB work on BPs was limited to one unit cell of the disclination lattice, within which several disclination cores are present and require a fine enough discretisation to be correctly resolved. However, supercomputers now allow supra-unit cell simulations of BPs, the first account of which we have recently given in Ref. [@Henrich09a], where we studied the domain growth dynamics of a BPII domain inside a cholesteric or isotropic “slab”, in a parameter region in which the BP is the thermodynamically stable state. The simulations give evidence of intriguing domain growth kinetics. For small values of the chirality, the growth is slow and the resulting blue phase has no or few defects. When the chirality exceeds a certain threshold, however, the advancing disclination network changes its symmetry and reconstructs into a new hexagonal phase, which is so far undocumented in experiments. (This process is shown in Fig. 3e,f.) It would be of interest to determine whether this new BP is a metastable structure found due to the geometry we have focussed on (we considered just one planar slice of unit cells) or has a wider physical meaning. In all cases, these simulations are encouraging as they suggest that large scale simulations of BPs are within reach. This is of course of interest to the modelling of real devices, which are manufactured on the micron scale that we can now address computationally. Conclusions and future prospects ================================ We hope that this selection of results has shown that LB simulations of liquid crystals are potentially extremely powerful in gaining new insights into the physics which is contained in the hydrodynamic equations of motion of liquid crystalline fluids.
For both active fluids and blue phases, it would have been very difficult to compare theory and experiments – even at a qualitative level – without using these simulations. Although such comparisons remain in their infancy for active nematics, the spontaneous flow patterns in (for instance) concentrated bacterial suspensions, simulated via the continuum equations of motion of an active liquid crystal, are qualitatively comparable with the patterns observed in the experiments. In the case of blue phases, such comparisons are more clear cut. Here it was thought that the classic theory based on a Landau-de Gennes free energy was missing some physics because (for example) the phase diagram was poorly predicted and anomalous electrostriction in BPI was not found. Remarkably, LB has shown that this was a drawback of the approximations used to make analytical progress, and not of the original theory, which is qualitatively and semi-quantitatively accurate. The fields we have covered in this short review are, of course, still full of open questions, and we hope that future LB simulations will play an important role in clarifying some of these. In active fluids, it will be important to characterise the flow patterns in fully three-dimensional active suspensions, and also to extend the treatment we have covered here to the case in which there are density fluctuations or inhomogeneity in the fluid. Another related issue would be to study “active emulsions”, in which droplets of active gels are suspended in an aqueous passive medium, possibly enclosed by an elastic membrane. Ultimately, it would be very exciting if continuum theories like the one we solved numerically may be applied to, for instance, suspensions of cell extracts in an extracellular matrix. From the theory point of view, it appears that an urgent issue is to clarify to what extent active fluids faithfully represent concentrated suspensions of motile particles, or swimmers, by, for instance, comparing the results of continuum simulations to those of more microscopic models with fully resolved swimmers, which can also be treated via LB (though of a different kind than the one presented here) [@Ishikawa08; @Llopis06; @Nash08]. Large scale simulations of blue phases will also be likely to be important in the future. From an application point of view, the exciting potential of BP devices can ultimately be fully exploited if we manage to reach a quantitative understanding of their thermodynamics, their switching dynamics, and the role of flow. Supra-unit cell simulations are needed to this end, because the field leads to unit cell deformations and may cause full scale reconstruction of the disclination network. From a more fundamental point of view, we do not have a satisfactory understanding of non-cubic blue phases. Most notably, the structure of BPIII – the “blue fog” – is still not understood to date, and we hope that large scale simulations of amorphous disclination networks may shed some light on this elusive problem. We are grateful to G. P. Alexander, S. M. Fielding, A. N. Morozov, E. Orlandini and J. M. Yeomans for useful discussions. We acknowledge EPSRC grants EP/E045316/1 and EP/E030173/1 for funding, and computer time on Hector funded by EP/F054750/1. MEC holds a Royal Society Research Professorship. [99]{} S. Succi, [*The Lattice Boltzmann Equation for Fluid Dynamics and Beyond*]{}, Oxford University Press (2001). M. E. Cates and P. S. Clegg, [*Soft Matter*]{} [**4**]{}, 2132 (2008). M. R. Swift, E. Orlandini, W. R. 
Osborn and J. M. Yeomans, [*Phys. Rev. E*]{} [**54**]{}, 5041 (1996). G. Gonnella, E. Orlandini and J. M. Yeomans, [*Phys. Rev. Lett.*]{} [**78**]{}, 1695 (1997). C. Denniston, D. Marenduzzo, E. Orlandini and J. M. Yeomans, [*Phil. Trans. R. Soc. Lond. A*]{} [**362**]{}, 1745 (2004). D. Marenduzzo, E. Orlandini, Y. M. Yeomans, [*Phys. Rev. Lett.*]{} (2007). M. E. Cates, S. M. Fielding, D. Marenduzzo, E. Orlandini and J. M. Yeomans, [*Phys. Rev. Lett.*]{} [**101**]{}, 068102 (2008). K. Stratford, R. Adhikari, I. Pagonabarraga, J.-C. Desplat and M. E. Cates, [*Science*]{} [**309**]{}, 2198 (2005). A.N. Beris and B.J. Edwards, [*Thermodynamics of Flowing Systems*]{}, Oxford University Press, Oxford, (1994). D. C. Wright and N. D. Mermin, [*Rev. Mod. Phys.*]{} [**61**]{}, 385 (1989). M. E. Cates, J. C. Desplat, P. Stansell, A. J. Wagner, K. Stratford, R. Adhikari and I. Pagonabarraga, [*Phil. Trans. A*]{} [**363**]{}, 1917 (2005). A. Tiribocchi, N. Stella, A. Lamura, G. Gonnella, arXiv:0902.3921. D. Marenduzzo, E. Orlandini, M. E. Cates and J. M. Yeomans, [*Phys. Rev. E*]{} [**76**]{}, 031921 (2007). R. Faller, [*Phys. Chem. Chem. Phys.*]{} [**11**]{}, 1867 (2009). Z. E. Hughes, L. M. Stimson, H. Slim, J. S. Lintuvuori, J. M. Ilnytskyi, and M. R. Wilson, [*Comp. Phys. Comm.*]{} [**178**]{}, 724 (2008). R. Memmer, [*Liq. Cryst.*]{} [**27**]{}, 533 (2000). Y. Hatwalne, S. Ramaswamy, M. Rao and R. A. Simha, [*Phys. Rev. Lett.*]{} [**92**]{}, 118101 (2004). V. Narayan, S. Ramaswamy, N. Menon, [*Science*]{} [**317**]{}, 105 (2007). R. M. Harshey, [*Mol. Microbiol.*]{} [**13**]{}, 389 (1994). C. Dombrowski, L. Cisneros, S. Chatkaew, R. E. Goldstein and J. O. Kessler, [*Phys. Rev. Lett.*]{} [**93**]{}, 098103 (2004). A. Sokolov A, I. S. Aranson, J. O. Kessler and R. E. Goldstein, [*Phys. Rev. Lett.*]{} [**98**]{}, 158102 (2007). L. H. Cisneros, R. Cortez, C. Dombrowski, R. E. Goldstein, J. O. Kessler, [*Exp. Fluids*]{} [**43**]{}, 737 (2007). D. Bray, [*Cell movements: from molecules to motility*]{}, Garland Publishing, New York (2000). A. N. Morozov and W. van Saarloos, [*Phys. Rev. Lett.*]{} [**95**]{}, 024501 (2005). R. G. Larson, [*Nature*]{} [**405**]{}, 27 (2000). K. Kruse, J. F. Joanny, F. Julicher, J. Prost and K. Sekimoto, [*Phys. Rev. Lett.*]{} [**92**]{}, 078101 (2004). T. B. Liverpool and M. C. Marchetti, [*Europhys. Lett.*]{} [**69**]{}, 846 (2005). R. A. Simha and S. Ramaswamy [*Phys. Rev. Lett.*]{} [**89**]{}, 058101 (2002). T. Ishikawa and T. J. Pedley, [*Phys. Rev. Lett.*]{} [**100**]{}, 088103 (2008). B. Chakrabarti, M. Das, C. Dasgupta, S. Ramaswamy and A. K. Sood, [*Phys. Rev. Lett.*]{} [**92**]{}, 055501 (2004). S. Meiboom, J. P. Sethna, P. W. Anderson and W. F. Brinkman, [*Phys. Rev. Lett.*]{} [**46**]{}, 1216 (1981). H. Kikuchi, M. Yokota, Y. Hisakado, H. Yang and T. Kajiyama, [*Nat. Mat.*]{} [**1**]{}, 64 (2002). H. J. Coles and M. N. Pivnenko, [*Nature*]{} [**436**]{}, 977 (2005). H. Grebel, R. M. Hornreich and S. Shtrickman, [*Phys. Rev. A*]{} [**30**]{}, 3264 (1984). A. Dupuis, D. Marenduzzo, and J. M. Yeomans, [*Phys. Rev. E*]{} [**71**]{}, 011703 (2005). G. P. Alexander and D. Marenduzzo, [*Europhys. Lett.*]{} [**81**]{}, 66004 (2008). D. K. Yang and P. P. Crooker, [*Phys. Rev. A*]{} [**35**]{}, 4419 (1987). G. P. Alexander and J. M. Yeomans, [*Phys. Rev. E*]{} [**74**]{}, 061706 (2006). A. Dupuis, D. Marenduzzo, E. Orlandini, and J. M. Yeomans, [*Phys. Rev. Lett.*]{} [**95**]{}, 097801 (2005). D. Marenduzzo, E. Orlandini and J. M. Yeomans, [*Phys. Rev. 
Lett.*]{} [**92**]{}, 188301 (2004). O. Henrich, D. Marenduzzo, K. Stratford and M. E. Cates, arXiv:0901.3293; [*Comput. Math. with Appl.*]{}, accepted for publication (2009). I. Llopis and I. Pagonabarraga, [*Europhys. Lett.*]{} [**75**]{}, 999 (2006). R. W. Nash, R. Adhikari and M. E. Cates, [*Phys. Rev. E*]{} [**77**]{}, 026709 (2008). L. Giomi, T. B. Liverpool and M. C. Marchetti, [*Phys. Rev. Lett.*]{} [**101**]{}, 198101 (2008).

![(a) Turbulence in a sessile droplet of [*B. subtilis*]{}, viewed from below a petri dish. The horizontal line is the edge of the droplet (picture taken from Fig. 3 of Ref. [@Goldstein]). The scale bar is 35 $\mu$m. (b) Flow pattern in a similar bacterial droplet – the arrow at the right stands for a speed of 35 $\mu$m/s (picture taken from Fig. 4 of Ref. [@Goldstein]).](figure_B_subtilis.eps){width="100.00000%"}

![Selected results from active fluid simulations. We only plot the velocity field, resulting from LB solutions of Eq. \[Navier-Stokes\]. The top row (a-c) shows stationary states obtained for apolar extensile fluids with moderate activity: it can be seen that the spontaneous flow has the shape of rolls (a,b) or of bands, in general tilted (c). The middle row shows non-stationary “turbulent” solutions for larger values of the activity. The bottom row shows solutions of the equations of motion of polar active gels. In (g) there is no self-advection term, so that the fluid is equivalent to an apolar gel, whereas in (h-i) this term is switched on. In (i) we plot the trajectories of 3 tracer particles, which show the “turbulent” nature of the flow. Parameters common to the apolar runs in (a)-(f) are: $\gamma=3$ (ensuring that we work in the ordered phase), $\xi=0.7$ (which together with our choice of $\gamma$ selects flow-aligning liquid crystals), $\Gamma=0.33775$, $K=0.08$, $\eta=0.57$. The activity parameter $\zeta$ was 0.001 (a), 0.002 (b), 0.01 (c), and 0.04 (d-f). Parameters for the polar runs in (g-i) are $\zeta=0.02$, $\lambda=1.1$, $K=0.04$, $\Gamma=0.3$, $\eta=1.67$, and $w=0$ (g), or $w=0.01$ (h,i).](figure2.eps){width="100.00000%"}

![The top row shows the disclination lattices of BPI (a) and BPII (b), as obtained from LB simulations – 2 unit cells in each direction are shown. The second row shows a computational (c, within the one elastic constant approximation), and an experimental (d) typical phase diagram for blue phases (picture taken from Fig. 3 of Ref. [@Crooker89]). The key feature is that LB simulations predict the correct order of appearance of BPI and BPII upon increasing the chirality. Note that BPIII is not a cubic phase and hence is not included in the theoretical phase diagram. The bottom two rows (e-f) show dynamical states obtained when a BPII domain grows inside an initially cholesteric matrix (see Ref. [@Henrich09a]). The reduced temperature was $\tau=0$ and the chirality was $\kappa=2$ (e) or $\kappa=1$ (f).](figure3.ps){width="70.00000%"}

[**Appendix: Hydrodynamic equations of motion for active and passive liquid crystalline fluids**]{}\
In this Appendix we review the equations of motion for (active and passive) liquid crystalline fluids, which we solve by lattice Boltzmann simulations. These are the equations used to generate the results reviewed in our work. We first describe the thermodynamics of a liquid crystalline fluid in the absence of active stresses. This covers cholesterics and blue phases (and also active gels within a passive phase).
We employ a Landau-de Gennes free energy ${\cal F}$, whose density we indicate by $f$. The free energy density can be written as a sum of two contributions, $f_1$ and $f_2$. The first is a bulk contribution, $$\begin{aligned} \nonumber f_1 & = & \frac{A_0}{2}(1 - \frac {\gamma} {3}) Q_{\alpha \beta}^2 - \frac {A_0 \gamma}{3} Q_{\alpha \beta}Q_{\beta \gamma}Q_{\gamma \alpha} \\ \nonumber & + & \frac {A_0 \gamma}{4} (Q_{\alpha \beta}^2)^2, \label{eqBulkFree}\end{aligned}$$ while the second is a distortion term. For nonchiral liquid crystals, we take the (standard) one elastic constant approximation [@Beris] $$f_2=\frac{K}{2} \left(\partial_\gamma Q_{\alpha \beta}\right)^2.$$ Where $A_0$ is a constant, $\gamma$ controls the magnitude of order (it may be viewed as an effective temperature or concentration for thermotropic and lyotropic liquid crystals respectively), while $K$ is an elastic constant. To describe cholesterics, we employ the slightly generalised distortion free energy, which is again standard [@Wright89]: $$f_2=\frac{K}{2} \left[ \bigl(\partial_{\beta}Q_{\alpha\beta}\bigr)^2+ \bigl( \epsilon_{\alpha \gamma \delta} \partial_{\gamma} Q_{\delta \beta} + 2q_0 Q_{\alpha \beta} \bigr)^2\right].$$ Here and in what follows Greek indices denote cartesian components and summation over repeated indices is implied. For blue phases, it is customary to identify the position in thermodynamic parameter space via the chirality, $\kappa$, and the reduced temperature, $\tau$. These may be defined in terms of previous quantities via [@Alexander06; @Alexander08]: $$\begin{aligned} \nonumber \kappa & = & \sqrt{\frac{108 K q_0^2}{A_0\gamma}} \\ \nonumber \tau & = & 27 \left(\frac{1-\gamma/3}{\gamma}\right).\end{aligned}$$ (Note that the reduced temperature was defined in older literature as $\tau = 27 \frac{1-\gamma/3}{\gamma}+\kappa^2$.) When needed, the anchoring of the director field on the boundary surfaces (Fig. 2) to a chosen unit vector ${\bf n}^0$ is ensured by adding a surface term in the free energy density $$\begin{aligned} \nonumber f_s & = & \frac{1}{2}W_0 (Q_{\alpha \beta}-Q_{\alpha \beta}^0)^2\\ \nonumber Q_{\alpha \beta}^0 & = & S_0 (n_{\alpha}^0n_{\beta}^0-\delta_{\alpha\beta}/3)\end{aligned}$$ The parameter $W_0$ controls the strength of the anchoring, while $S_0$ determines the degree of the surface order. If the surface order is to equal the bulk order, $S_0$ should be set equal to $q$, the order parameter in the bulk ($3/2$ times the largest eigenvalue of the [**Q**]{} tensor). $W_0$ is large (strong anchoring) in what follows. The equation of motion for [**Q**]{} is taken to be [@Beris] $$(\partial_t+{\vec u}\cdot{\bf \nabla}){\bf Q}-{\bf S}({\bf W},{\bf Q})= \Gamma {\bf H}+\tilde\lambda {\bf Q}$$ where $\Gamma$ is a collective rotational diffusion constant, and $\tilde\lambda$ is an activity parameter which for simplicity we set to zero in our simulations. (The resulting term can anyway be absorbed into a shift of $A_0$ and/or $\gamma$ in the free energy.) The first term on the left-hand side of the equation above is the material derivative describing the usual time dependence of a scalar quantity advected by a fluid with velocity ${\vec u}$. 
This is modified for rod-like molecules by a second term $$\begin{aligned} %\label{S_definition} \nonumber {\bf S}({\bf W},{\bf Q}) & = &(\xi{\bf D}+{\bf \omega})({\bf Q}+{\bf I}/3) \\ \nonumber & + & ({\bf Q}+ {\bf I}/3)(\xi{\bf D}-{\bf \omega}) \\ \nonumber & - & 2\xi({\bf Q}+{\bf I}/3){\mbox{Tr}}({\bf Q}{\bf W})\end{aligned}$$ where Tr denotes the tensorial trace, while ${\bf D}=({\bf W}+{\bf W}^T)/2$ and ${\bf \omega}=({\bf W}-{\bf W}^T)/2$ are the symmetric part and the anti-symmetric part respectively of the velocity gradient tensor $W_{\alpha\beta}=\partial_\beta u_\alpha$. The constant $\xi$ depends on the molecular details of a given liquid crystal, and determines, together with $\gamma$, whether a liquid crystal is flow aligning or flow tumbling (we restrict to the former case in this work). The first term on the right-hand side of the order parameter evolution equation describes the relaxation of the order parameter towards the minimum of the free energy. The molecular field ${\bf H}$ which provides the driving motion is given by $$%\begin{equation} {\bf H}= -{\delta {\cal F} \over \delta {\bf Q}}+({\bf I}/3) {\mbox{Tr}}{\delta {\cal F} \over \delta {\bf Q}}. \label{molecularfield}$$ The fluid velocity, $\vec u$, obeys the continuity equation and the Navier-Stokes equation, whose incompressible limit is Eq. \[Navier-Stokes\] in which $\Pi_{\alpha\beta}=\Pi^{\rm passive}_{\alpha\beta}+ \Pi^{\rm active}_{\alpha\beta}$. The stress tensor $\Pi^{\rm passive}_{\alpha\beta}$ necessary to describe ordinary LC hydrodynamics is (up to an isotropic pressure term) given by: $$\begin{aligned} \nonumber \Pi^{\rm passive}_{\alpha\beta}& = & 2\xi (Q_{\alpha\beta}+{1\over 3}\delta_{\alpha\beta})Q_{\gamma\epsilon} H_{\gamma\epsilon}\\\nonumber &-&\xi H_{\alpha\gamma}(Q_{\gamma\beta}+{1\over 3}\delta_{\gamma\beta})\\ \nonumber &-&\xi (Q_{\alpha\gamma}+{1\over 3}\delta_{\alpha\gamma})H_{\gamma\beta}\\ \nonumber &-&\partial_\alpha Q_{\gamma\nu} {\delta {\cal F}\over \delta\partial_\beta Q_{\gamma\nu}} \\ \nonumber & + & Q_{\alpha \gamma} H_{\gamma \beta} -H_{\alpha \gamma}Q_{\gamma \beta}. \label{BEstress}\end{aligned}$$ whereas the active term is given by, in leading order $$\Pi^{\rm active}_{\alpha\beta}=-\zeta Q_{\alpha\beta}$$ where $\zeta$ is an activity constant [@Hatwalne04]. Note that with the sign convention chosen here $\zeta>0$ corresponds to extensile rods and $\zeta<0$ to contractile ones [@Hatwalne04]. We can also use a variant of these equations to study polar active gels. The order parameter is this time a vector $P_\alpha$ (with variable magnitude) as there is no longer head-tail symmetry in the system. The equations of motion we used in our LB simulations (reported in Fig. 2, bottom row), are a simplified version of those presented in [@Giomi08]. The equation governing the evolution of the vectorial order parameter is $$\begin{aligned} \nonumber \left[\partial_t+\left(u_{\beta} +wP_{\beta}\right)\partial_{\beta})\right] P_{\alpha}= \\ \nonumber \lambda D_{\alpha\beta} P_{\beta} -\omega_{\alpha\beta}P_{\beta} +\Gamma' h_{\alpha}.\end{aligned}$$ In this equation, $w$ is another active term, due to swimming, which causes self-advection of the order parameter, while $\lambda$ is a material dependent constant – positive for rod-like molecules. If $|\lambda|>1$ the liquid crystalline passive phase is flow-aligning, otherwise it is flow-tumbling. 
The “molecular field” is now given by $h_{\alpha}=-\delta {\cal F}_{\rm pol}/\delta p_{\alpha}$ where ${\cal F}_{\rm pol}$ is the free energy for a polar active nematic, whose density is (see also Ref. [@Giomi08] where a more general form is used): $$f =\frac{a}{2} |{\mathbf P}|^2+\frac{b}{4}|{\mathbf P}|^4+ K\left(\partial_\alpha P_{\beta}\right)^2$$ where $K$ is an elastic constant, $b>0$, and we chose $a=-b$ to ensure that the minimum of the free energy is with $|{\mathbf P}|=1$. (Note that in this model ${\mathbf P}$ this time denotes a vector rather than a tensor). The Navier-Stokes equation is as in the tensorial model, but the stress tensor this time is $$%\Pi_{\alpha\beta}& = & \frac{1}{2}\left(P_{\alpha}h_{\beta}- P_{\beta}h_{\alpha}\right)-\frac{\lambda}{2}\left(P_{\alpha}h_{\beta}+ P_{\beta}h_{\alpha}\right) %\\ \nonumber %& - & \zeta P_{\alpha}P_{\beta}. - \zeta P_{\alpha}P_{\beta}.$$ As mentioned in the text, the active term in the stress tensor, proportional to $\zeta$, has therefore the same form for apolar and polar active gels. A few limits of the two theories considered above are worth noting. The tensorial model we have written down is equal to, for $\zeta=0$, the Beris-Edwards model for liquid crystal hydrodynamics. Analogously, for $\zeta=w=0$ the polar model reduces to the Leslie-Ericksen model of nematodynamics. For $w=0$, and a sample of uniaxial active liquid crystals, with a spatially uniform degree of orientational order, the tensorial model may be mapped onto the vectorial one (see [@Marenduzzo07b] for a proof of this).
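To make the appendix formulas concrete, the sketch below shows (i) the mapping from the model parameters to the chirality $\kappa$ and reduced temperature $\tau$, and (ii) a purely relaxational update $\partial_t\mathbf{Q}=\Gamma\mathbf{H}$ with the bulk free energy $f_1$ and a one-elastic-constant Laplacian term. The chiral gradient terms, the anchoring term and the coupling to flow are deliberately omitted for brevity, so this toy code will not stabilise a blue phase as such, and all parameter values are illustrative.

```python
# A minimal sketch (not the production code) of (i) the mapping from model
# parameters to the chirality kappa and reduced temperature tau, and (ii) a
# purely relaxational update dQ/dt = Gamma*H on a periodic grid. Only the bulk
# terms of f_1 and a one-constant Laplacian are retained; chiral gradients,
# anchoring and flow are deliberately left out.
import numpy as np

def kappa_tau(A0, gamma, K, q0):
    kappa = np.sqrt(108.0 * K * q0**2 / (A0 * gamma))
    tau = 27.0 * (1.0 - gamma / 3.0) / gamma
    return kappa, tau

def molecular_field(Q, A0, gamma, K):
    """Q: field of shape (nx, ny, nz, 3, 3).
    Returns H = -dF/dQ + (I/3) Tr(dF/dQ) for f = f_1(bulk) + (K/2)(grad Q)^2."""
    Q2 = np.einsum('...ab,...bc->...ac', Q, Q)
    trQ2 = np.einsum('...aa->...', Q2)[..., None, None]
    dFdQ = A0*(1.0 - gamma/3.0)*Q - A0*gamma*Q2 + A0*gamma*trQ2*Q   # bulk derivative
    lap = sum(np.roll(Q, 1, axis=a) + np.roll(Q, -1, axis=a) for a in range(3)) - 6.0*Q
    dFdQ -= K * lap                                                  # one-constant elastic term
    trace = np.einsum('...aa->...', dFdQ)[..., None, None]
    return -dFdQ + np.eye(3)/3.0 * trace                             # keep Q traceless

def relax(Q, steps, dt=0.1, Gamma=0.3, A0=0.01, gamma=3.0, K=0.01):
    """Simple forward-Euler relaxation of Q towards a free-energy minimum."""
    for _ in range(steps):
        Q = Q + dt * Gamma * molecular_field(Q, A0, gamma, K)
    return Q

print(kappa_tau(A0=0.01, gamma=3.0, K=0.01, q0=0.02))   # example numbers only
```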
--- abstract: 'We study the level-spacing statistics of Instantaneous Normal Modes in a supercooled liquid. A detailed analysis allows us to determine the mobility edge separating extended and localized modes in the negative tail of the density of states. We find that at temperatures below the mode-coupling temperature only a very small fraction of negative eigenmodes are localized.' author: - Stefano Ciliberti - 'Tomás S. Grigera' title: 'Localization threshold of Instantaneous Normal Modes from level-spacing statistics' --- Introduction ============ The Instantaneous Normal Modes (INM) of a liquid are the eigenvectors of the Hessian (second derivative) matrix of the potential energy, evaluated at an instantaneous configuration. The interest in the equilibrium-average properties of the INM originates in the proposal [@keyes] to use them to study liquid dynamical properties, especially diffusion, which is considered to be linked to unstable modes (a subset of the modes with negative eigenvalue) [@BeLa; @ScioTa97; @donati00; @lanave01; @deo; @lanave02]. They have been naturally applied to address the problem of the glass transition: the glass phase is viewed as that where free ([*i.e.*]{} non-activated) diffusion is absent, and the disappearance of diffusion should be linked to that of the unstable modes [@BeLa]. The identification of these unstable modes presents some problems [@critique; @response], and the localization properties of the INM are of interest. This is the problem we address here. Localization is an interesting and difficult problem in its own right. Given an $N\times N$ random matrix defined by the probability distribution of its elements in some (typically site) basis, the problem is to determine whether an eigenmode projects onto an extensive number of basis vectors (extended state) or not (localized state) in the large $N$ limit. Only in a few cases is there a theoretical solution for this problem [@Lee; @anderson; @bouchaud]. From the point of view of random matrix theory, the INM are the eigenvectors of a Euclidean random matrix (ERM) [@mepaze], and the problem of localization in ERMs has recently been addressed analytically in the dilute limit [@noi]. Clearly, the problem is also hard from the numerical point of view, involving a question about the thermodynamic limit. Quantities such as the participation ratio [@bede], which can distinguish between localized and extended states but require knowledge of the eigenvectors, are problematic numerically because computation of eigenvectors is very expensive for large systems. Here we explore an approach [@carpena] relying only on eigenvalues, based on the fact that the statistics of level spacing is strongly correlated with the nature of the eigenmodes (see sec. II). We apply it for the first time to a model glass-forming liquid at a temperature below the mode-coupling temperature $T_c$ [@goetze]. Our work can be regarded as an extension of the results of ref. , in that we perform a detailed analysis of the level spacing. Our emphasis is on exploring the usefulness of level-spacing statistics as a means to obtain a localization threshold in off-lattice systems and the possible limitations of this technique. Theoretical background ====================== The spectral function of the eigenvalues of a random matrix is $S(\lambda) = \sum_i \delta(\lambda-\lambda_i)/N$. Its (disorder-) average is the density of states $g(\lambda)$ (DOS).
The cumulative spectral function for this particular realization of the disorder, $$\eta(\lambda) = \int_{-\infty}^{\lambda} d\lambda' \, S(\lambda') = \frac 1N \sum_{i=1}^N \theta(\lambda-\lambda_i) , \label{step}$$ can be decomposed into a smooth part plus a fluctuating term $\eta_\mathrm{fluct}(\lambda)$, whose average is zero. The smooth part is then $$\zeta(\lambda)\equiv\langle \eta(\lambda)\rangle = \int_{-\infty}^\lambda d\lambda' \,g(\lambda') \ . \label{zeta}$$ The level spacing is not studied directly on the $\lambda_i$, because it depends on the mean level density. To eliminate this dependence, one “unfolds” the spectrum, which means mapping the original sequence $\{\lambda_i\}$ onto a new one, $\{\zeta_i \equiv \zeta(\lambda_i)\}$, according to Eq. (\[zeta\]) (see e.g. [@guhr] for a detailed explanation of this point). The cumulative spectral function can be expressed in terms of these new variables: $$\hat{\eta}(\zeta) \equiv \eta\big(\lambda(\zeta)\big) = \zeta + \hat\eta_\mathrm{fluct}(\zeta) .$$ The distribution of the variable $\zeta$ is uniform in the interval $[0,1]$ regardless of the form of $g(\lambda)$. The nearest-neighbor spacing distribution $P(s)$ gives the probability that two *neighboring* unfolded eigenvalues $\zeta_i$ and $\zeta_{i+1}$ are separated by $s$. It is one of the most commonly used observables in random matrix theory. It is different from the two-level correlation function and it involves all the $k$-level correlation functions with $k\ge 2$ [@mehta]. It displays a high degree of universality, exhibiting common properties in systems with very different spectra. Although no general proof has been given, its shape is thought to depend only on the localization properties of the states [@guhr]. In the case of the Gaussian Orthogonal Ensemble (GOE), where all states are extended in the thermodynamic limit, it is known [@mehta] that $P(s)$, normalized such that $\langle s\rangle = 1$, follows the so-called *Wigner surmise* (also known as Wigner-Dyson statistics), namely $$P_\mathrm{WD}(s) = \frac {\pi s}{2} \exp{\left(-\pi s^2/4\right)} . \label{wigner}$$ The linear behavior for small $s$ is an expression of the level repulsion. This form actually characterizes many different systems with extended eigenstates (see *e.g.* ref.  and references therein). In the case of INM, it has been shown [@deo] that it describes the level spacing better and better as the fraction of localized states decreases. On the other hand, a system whose states are all localized will have completely uncorrelated eigenvalues. This corresponds to a Poisson process, and the statistics of two adjacent levels is given by $$P_\mathrm{P}(s) = \exp{(-s)} . \label{poisson}$$ If one deals with a set of levels which includes both localized and extended states, one expects some distribution interpolating between those two. A natural ansatz is the simple linear combination $$P_\mathrm{LC}(s;\pi) = (1-\pi) P_\mathrm{P}(s) + \pi P_\mathrm{WD}(s) , \label{linear}$$ which holds under the hypothesis that contributions coming from localized and extended modes simply add linearly. Another possibility comes from a statistical argument due to Wigner (see for example [@mehta]), which leads to the heuristic function $$P(s) = \mu(s) \exp \left\{ -\int_{0}^{s} ds'\mu(s') \right\} , \label{W}$$ where $\mu(s)$ is called the *level repulsion function*.
Taking $\mu(s)=c_qs^q$, with $q\in[0,1]$, one obtains the Brody distribution [@brody] $$P_\mathrm{B}(s;q) = c_qs^q\exp{\left( -\frac {c_q s^{q+1}}{q+1}\right)} , \quad c_q = \frac {\Gamma^{q+1}[1/(q+1)]}{q+1} , \label{brody}$$ which interpolates between the Poisson ($q=0$) and Wigner-Dyson ($q=1$) distributions. However, this is just another phenomenological interpolation scheme, since there is no theoretical argument supporting a level repulsion function increasing as a power law with an exponent smaller than one. Method ====== The practical difficulty in performing the unfolding lies in finding a good approximation to the smooth (averaged) part of the cumulative spectral function, $\zeta(\lambda)$. We have first obtained a cumulative function averaged over many samples of the Hessian (computed from a corresponding number of equilibrium configurations) and then taken $\zeta(\lambda)$ as the function defined by a cubic spline interpolation of the resulting staircase. Once this function is defined, the spacings of each sample can be evaluated by taking that sample's eigenvalues $\lambda_i$ (which are distributed according to $g(\lambda)$) and computing $s=\zeta(\lambda_{i+1}) -\zeta(\lambda_i)$; the histogram of these values is an estimate of the $P(s)$. We have also tried digital filtering (Savitzky-Golay [@nr]) of the staircase, but the results were not satisfactory. The procedure is illustrated in Fig. \[F-unfolding\].

![Evaluation of the level-spacing statistics. Top: INM spectrum of a unit-density soft-sphere (pair potential $1/r^{12}$) system at $T=0.68$, as obtained from the numerical diagonalization of 100 thermalized configurations. Middle: the cumulative function of the same system and its decomposition into *smooth* and *fluctuating* parts (inset). Bottom: the level-spacing distribution of this system, normalized to have $\langle s \rangle = 1$. Poisson and Wigner-Dyson distributions are also shown for comparison.[]{data-label="F-unfolding"}](figs/spettro_2.ps){width=".8\columnwidth"} ![](figs/staircase_2.ps){width=".8\columnwidth"} ![](figs/ps_g11mono_2.ps){width=".8\columnwidth"}
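A compact numerical sketch of this unfolding step might look as follows; it illustrates the procedure rather than reproducing the code actually used, and the spline smoothing parameter is an arbitrary choice.

```python
# Sketch of the unfolding described above: a cubic smoothing spline through the
# pooled, sample-averaged staircase provides the smooth cumulative function
# zeta(lambda); each sample is then mapped to zeta(lambda_i) and the spacings
# are differences of consecutive unfolded levels, normalised to unit mean.
# The spline is assumed to be monotone over the range of interest.
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_cumulative(samples, smoothing=1e-3):
    """samples: list of 1D arrays of eigenvalues, one per configuration."""
    pooled = np.sort(np.concatenate(samples))
    frac = np.arange(1, pooled.size + 1) / pooled.size   # empirical staircase in [0,1]
    return UnivariateSpline(pooled, frac, k=3, s=smoothing)

def unfolded_spacings(samples, zeta):
    spacings = [np.diff(zeta(np.sort(lam))) for lam in samples]
    s = np.concatenate(spacings)
    return s / s.mean()                                  # normalise so that <s> = 1
```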
To estimate the localization threshold $\lambda_\mathrm{L}$, we proceed as follows. We divide the full spectrum into two parts at an arbitrary threshold $\lambda_{th}$ and study (after proper unfolding) the restricted level-spacing distributions $P_1(s) \equiv P(s | \lambda<\lambda_{th})$ and $P_2(s) \equiv P(s | \lambda>\lambda_{th})$. The localization threshold (where it exists) should correspond to the value of $\lambda_{th}$ that leads to $P_1(s) = P_\mathrm{WD}(s)$ (extended eigenstates) and $P_2(s) = P_\mathrm{P}(s)$ (localized eigenstates). On the other hand, if $\lambda_{th} \neq \lambda_\mathrm{L}$, $P_1(s)$ and $P_2(s)$ will bear more similarity to each other, since one of them will include spacings from both localized and extended levels. A qualitative feeling of what happens as $\lambda_{th}$ moves through the spectrum can be gathered from Fig. \[F-evol\_g11\], where it can be clearly seen how $P_2(s)$ evolves from a nearly Wigner to a Poisson distribution. We remark that these probability distributions are universal since no fitting parameters are required once the plot is versus $s/\langle s \rangle$, where $\langle s \rangle = \int \!\! P(s) s \, ds$.

![Level spacing distributions $P_1(s/\langle s \rangle)$ and $P_2(s/\langle s \rangle)$ obtained from the positive part of the spectrum of Fig. \[F-unfolding\] for several values of $\lambda_\mathrm{th}$. Wigner-Dyson (left) and Poisson (right) distributions are also plotted.[]{data-label="F-evol_g11"}](figs/ps_g11mono_pos.ps){width="1.\columnwidth"}

Since one cannot say, based on a finite sample, when one of the distributions becomes “exactly” Poisson or Wigner, one looks for the value of $\lambda_{th}$ that makes both distributions “as different as possible from each other.” To do this we use (following ref. ) the Jensen-Shannon (JS) divergence as a measure of the distance between two distributions. It is defined as $$D_\mathrm{JS}[P_1,P_2] = H[a_1 P_1+a_2 P_2] -a_1 H[P_1] -a_2 H[P_2] . \label{djs}$$ $H[P]= -\sum_i P(s_i) \log P(s_i)$ is the Shannon entropy of the distribution $P$, and $a_1,a_2=1-a_1$ are positive weights of each distribution. In what follows, we shall choose the weights as proportional to the support of the section of the (unfolded) spectrum considered to evaluate the level-spacing. This ensures that the JS divergence is not affected by differences in size [@JS]. The problem of finding the threshold is then reduced to finding the maximum of $D_\mathrm{JS}[P_1,P_2]$ as a function of $\lambda_{th}$. We stress that the ideas behind the method are justified only in the large $N$ limit, and a study of finite-size effects is thus crucial in this context. Results ======= A case study: the GOE --------------------- We have applied this procedure to the GOE, as a test and illustration of the method. We generated ensembles of $N\times N$ random matrices for $N=10,20,50,500$ with i.i.d. elements (taken from a Gaussian distribution with zero mean and variance $1/\sqrt{N}$) and computed the DOS by numerical diagonalization (Fig. \[F-dosgoe\]). The JS divergence has a maximum that tends to the band edge as $N$ grows (Fig. \[F-goeJS\], top), indicating that there is no localization threshold in this system, as is known theoretically.
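In practice, the divergence maximisation just described can be sketched as follows. For simplicity the weights are taken equal here, rather than proportional to the spectral support as in our analysis, and the bin count is arbitrary; the threshold scan reuses the unfolding helper sketched in the previous section.

```python
# Sketch of the Jensen-Shannon comparison between the spacing distributions on
# the two sides of a trial threshold. Equal weights a1 = a2 = 1/2 and the bin
# count are simplifying, illustrative choices.
import numpy as np

def shannon_entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def js_divergence(s1, s2, bins=40, a1=0.5):
    edges = np.linspace(0.0, max(s1.max(), s2.max()), bins + 1)
    p1 = np.histogram(s1, bins=edges)[0].astype(float)
    p2 = np.histogram(s2, bins=edges)[0].astype(float)
    p1 /= p1.sum()
    p2 /= p2.sum()
    a2 = 1.0 - a1
    return shannon_entropy(a1*p1 + a2*p2) - a1*shannon_entropy(p1) - a2*shannon_entropy(p2)

def scan_threshold(samples, zeta, thresholds):
    """samples: per-configuration eigenvalue arrays; zeta: smooth cumulative
    spectral function (e.g. the spline from the previous sketch); the estimated
    mobility edge is the threshold maximising the returned divergence."""
    out = []
    for lam_th in thresholds:
        s1 = unfolded_spacings([lam[lam < lam_th] for lam in samples], zeta)
        s2 = unfolded_spacings([lam[lam >= lam_th] for lam in samples], zeta)
        out.append(js_divergence(s1, s2))
    return np.array(out)

# sanity check: Poisson vs Poisson gives a smaller divergence than Poisson vs
# Wigner-surmise spacings (if X is exponential with mean 4/pi, sqrt(X) follows
# the Wigner surmise)
rng = np.random.default_rng(0)
poi1, poi2 = rng.exponential(1.0, 20000), rng.exponential(1.0, 20000)
wig = np.sqrt(rng.exponential(4/np.pi, 20000))
print(js_divergence(poi1, poi2), js_divergence(poi1, wig))
```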
To gain further insight into the workings of the method, we have also tried fitting the level-spacing distribution restricted to eigenvalues lower than $\lambda_{th}$ with the functions interpolating between Poisson and Wigner-Dyson, thus defining a kind of order parameter for localization ($\pi$ in the case of the linear combination, Eq. \[linear\], and $q$ in the case of the Brody distribution, Eq. \[brody\]). Both $\pi$ and $q$ should be zero if $\lambda_{th}\leq \lambda_\text{\tiny L}$ and non-zero otherwise. As Fig. \[F-goeJS\] shows, both order parameters begin to deviate from their minimum at a value of $\lambda_{th}$ which roughly corresponds to the maximum of the JS divergence. However, the minimum value is not zero, most likely due to finite-size problems. Unfortunately, it is not possible to verify this in the present case, because increasing $N$ decreases the fraction of localized states in such a way that their number remains finite even when $N\to\infty$ [@mehta]. ![DOS of GOE matrices at several $N$. The solid line is the semicircular law predicted for $N\to\infty$. Inset: zoom on the left tail.[]{data-label="F-dosgoe"}](figs/dos.ps){width=".8\columnwidth"} ![Top: JS divergence for GOE matrices at different $N$. Middle: The Brody parameter (see text) for the same values of $N$. Bottom: The order parameter $\pi$ from the linear approximation (\[linear\]).[]{data-label="F-goeJS"}](figs/JS_N.ps){width=".8\columnwidth"} ![Top: JS divergence for GOE matrices at different $N$. Middle: The Brody parameter (see text) for the same values of $N$. Bottom: The order parameter $\pi$ from the linear approximation (\[linear\]).[]{data-label="F-goeJS"}](figs/q_N.ps){width=".8\columnwidth"} ![Top: JS divergence for GOE matrices at different $N$. Middle: The Brody parameter (see text) for the same values of $N$. Bottom: The order parameter $\pi$ from the linear approximation (\[linear\]).[]{data-label="F-goeJS"}](figs/lineare.ps){width=".8\columnwidth"} The INM spectrum ---------------- We have studied the soft-sphere binary mixture of ref.  at unit density and $T=0.2029$ (to be compared with the mode-coupling critical temperature $T_c\approx 0.2262$). Equilibration of the supercooled liquid at this temperature has been possible thanks to the fast Monte Carlo algorithm of ref. . From the physical point of view, we are interested in studying the nature of the negative modes (which represent about $4.3\%$ of the total modes for this system). At the temperature considered, the dynamics is highly arrested, and diffusion events are rare (indeed, at the mean-field level, e.g. in mode-coupling theory, diffusion is completely suppressed below $T_c$). Accordingly, one expects that all or most of the negative modes correspond to localized eigenvectors (*i.e.*, local rearrangement of a non-extensive number of particles). We find that this is not the case. ![INM spectrum of the binary mixture of soft spheres at $\Gamma=1.49$ as obtained from 300 equilibrium configurations ($N=2048$).[]{data-label="F-dosg149"}](figs/spettro_bin_g149.ps){width="\columnwidth"} ![The Jensen-Shannon divergence for the negative tail of the spectrum of Fig. \[F-dosg149\]. 
The DOS $g(\lambda)$ is also plotted (here it is normalized such that $\int_{-\infty}^0 g(\lambda)\,d\lambda = 1$). []{data-label="F-js149"}](figs/JS_bin_g149.ps){width="\columnwidth"} ![The parameter $\pi$ is found by fitting the level-spacing distribution $P(s|\lambda<\lambda_{th})$ to the form in (\[linear\]).[]{data-label="F-pi149"}](figs/lin_comb_g149.ps){width="\columnwidth"} In Fig. \[F-dosg149\] we show the INM spectrum for a system of 2048 soft spheres. The spectrum is expected to have two localization thresholds (on the positive and negative tails), so to apply the scheme above we first need to separate the positive and negative modes. We focus on the negative modes. The JS divergence for the negative part, evaluated as explained above, is shown in Fig. \[F-js149\] for $N=400,800$ and $2048$ particles. As $N$ increases, the maximum of these curves does not shift as in the GOE example but becomes sharper, pointing to a localization threshold. A quadratic fit of the peak for the largest system size leads to $\lambda_\mathrm{L} = -16.8 \pm 1.4$. We also verify that the two distributions $P_1(s)$ and $P_2(s)$ are indeed Poisson and Wigner (respectively) for this threshold value. We next try to fit with the linear interpolation: in Fig. \[F-pi149\] we plot the fitting parameter $\pi$ (cf. Eq. \[linear\]) for $P(s|\lambda<\lambda_{th})$. As in the GOE case, the parameter goes to a non-zero value. At the values of $N$ available to us, there is no clear evidence that larger system sizes will make $\pi$ go to zero for $\lambda_{th} \le \lambda_\mathrm{L}$. To check whether this behavior is an indication of some non-linear effect, we have performed the following test. Assuming that the threshold is actually at $\lambda_\mathrm{L}$, for each of the values of $\lambda_{th}$ of Fig. \[F-pi149\] we generated random spacings distributed according to the linear combination of Poisson and Wigner-Dyson. The weight $\pi$ was taken as proportional to the number of actual levels between $\lambda_{L}$ and $\lambda_{th}$, i.e. $$\pi \propto \int_{\lambda_\mathrm{L}}^{\lambda_{th}} g(\lambda) d\lambda \ .$$ We then applied the same fitting procedure we used for the INM spectrum, to see whether it would recover the value of $\pi$ that went into the synthetic sample. We found that samples of at least $\approx 90000$ levels were needed in order to recover the correct weight with the fit (more than ten times the number of levels available from the INM spectrum of the simulated liquid). Hence we attribute the finite value of $\pi$ (Fig. \[F-pi149\]) to finite-size effects. In the fit with the Brody distribution, the finite-size effects seem to be less pronounced (see Fig. \[F-qq149\]). The Brody parameter is consistent with a localized phase for $\lambda \lesssim -16$: here one can see that in the large-$N$ limit the order parameter goes to zero for $\lambda_{th} \le \lambda_\mathrm{L}$. So the results from the fits and from the JS divergence are consistent with the existence of a localization threshold. Though a more accurate determination of the threshold requires larger system sizes, this result shows that most (more than 96%) of the negative modes in this system are of an extended nature. 
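The sample-size requirement quoted above is easy to reproduce with a purely synthetic test of the same kind. The sketch below is illustrative only: the mixture weight is fixed at $\pi=0.2$, maximum likelihood is used as one simple choice of fitting procedure (not necessarily the one used here), and `numpy`/`scipy` are assumed.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def sample_mixture(pi_true, n):
    """Spacings drawn from pi * Poisson + (1 - pi) * Wigner-Dyson, both with <s> = 1."""
    poisson = rng.random(n) < pi_true
    s = np.empty(n)
    s[poisson] = rng.exponential(1.0, poisson.sum())
    u = rng.random(n - poisson.sum())
    s[~poisson] = np.sqrt(-4.0 * np.log(1.0 - u) / np.pi)  # inverse CDF of the Wigner surmise
    return s

def fit_pi(s):
    """Maximum-likelihood estimate of the mixture weight."""
    def nll(pi):
        p = pi * np.exp(-s) + (1.0 - pi) * (np.pi / 2.0) * s * np.exp(-np.pi * s**2 / 4.0)
        return -np.sum(np.log(p))
    return minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded").x

for n_levels in (1_000, 10_000, 100_000):
    print(n_levels, round(fit_pi(sample_mixture(0.2, n_levels)), 3))
```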
![The Brody order parameter for the supercooled liquid at different system sizes.[]{data-label="F-qq149"}](figs/qq_bin_g149.ps){width="\columnwidth"} Conclusions =========== The study we have presented of the level-spacing distribution of the INM spectrum of a glass-forming liquid in the supercooled regime shows that it is possible to locate the mobility edge for the negative tail of this spectrum with reasonable precision (other techniques, like the inverse participation ratio, are usually less precise). The INM level-spacing distribution is reasonably described in terms of Wigner and Poisson distributions, and this information can be used to determine the mobility edge. We have applied the technique to the soft-sphere binary liquid below $T_c$. Our result can be summarized by stating that at this temperature only $3.4\%$ of the negative modes are localized. This adds to the evidence (see e.g. the critique by Gezelter et al. [@critique]) that not all extended imaginary modes can be regarded as leading to free diffusion. It has been argued [@BeLa; @donati00; @lanave01; @lanave02] that not all extended negative modes should be considered unstable in the sense of this approach \[one should exclude false saddles (also called shoulder modes) and saddles that do not connect different minima, an analysis we have not done here\]. Our result implies not only that many negative modes cannot be regarded as diffusive (not surprising in view of earlier results, e.g. refs. ), but also that the vast majority of these non-diffusive negative modes are extended, even below $T_c$. It would be interesting to extend these results to study the temperature dependence of the localization properties of the INM across the mode-coupling temperature. We expect to do this in the near future. Acknowledgments {#acknowledgments .unnumbered} =============== We thank G. Biroli, O. Bohigas, N. Deo, S. Franz, V. Martín-Mayor, G. Parisi and P. Verrocchio for useful discussions and comments. S.C. was supported by the ECHP programme under contract HPRN-CT-2002-00307, [*DYGLAGEMEM*]{}. T.S.G. is a career scientist of the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET, Argentina). [10]{} T. Keyes, J. Chem. Phys. [**101**]{}, 5081 (1994); J. Phys. Chem. A [**101,**]{} 2921 (1997). S. Bembenek and B. Laird, Phys. Rev. Lett. [**74**]{}, 936 (1995); J. Chem. Phys. [**104**]{}, 5199 (1996). F. Sciortino and P. Tartaglia, Phys. Rev. Lett. [**78,**]{} 2385 (1997). C. Donati, F. Sciortino, and P. Tartaglia, Phys. Rev. Lett. [**85,**]{} 1464 (2000). E. La Nave, A. Scala, F. W. Starr, F. Sciortino, and H. E. Stanley, Phys. Rev. Lett. [**84,**]{} 4605 (2000); E. La Nave, A. Scala, F. W. Starr, H. E. Stanley, and F. Sciortino, Phys. Rev. E [**64,**]{} 036102 (2001). S. Sastry, N. Deo, and S. Franz, Phys. Rev. E [**64**]{}, 016305 (2001). E. La Nave, H. E. Stanley, and F. Sciortino, Phys. Rev. Lett. [**88,**]{} 035501 (2002). J. D. Gezelter, E. Rabani, and B. J. Berne, J. Chem. Phys. [**107,**]{} 4618 (1997). T. Keyes, W.-X. Li, and U. Zurcher, J. Chem. Phys. [**109,**]{} 4693 (1998). P. A. Lee, T. V. Ramakrishnan, Rev. Mod. Phys. [**57**]{}, 287 (1985). P. W. Anderson, Phys. Rev. [**109**]{}, 1492 (1958); D. J. Thouless, Phys. Rep. [**13**]{}, 93 (1974); R. Abou-Chacra, D. J. Thouless, and P. W. Anderson, J. Phys. C [**6**]{}, 1734 (1973). P. Cizeau, J. P. Bouchaud, Phys. Rev. 
E [**50**]{}, 1810 (1994). M. Mézard, G. Parisi, and A. Zee, Nucl. Phys. B [**559**]{}, 689 (1999). S. Ciliberti, T. S. Grigera, V. Martin-Mayor, G. Parisi, and P. Verrocchio, cond-mat/0403122. R. J. Bell, P. Dean, Discuss. Faraday Soc. [**50**]{}, 55 (1970). P. Carpena and P. Bernaola-Galván, Phys. Rev. B [**60**]{}, 201 (1999). W. Götze and L. Sjogren, Rep. Prog. Phys. [**55,**]{} 241 (1992). T. Guhr, A. Müller-Groling, and H. A. Weidenmüller, Phys. Rep. [**299**]{}, 189 (1998). M. L. Mehta, [*Random Matrices*]{} (Academic Press, New York, 1991). O. Bohigas, *Random Matrix Theory and Chaotic Dynamic*, in *Chaos and Quantum Physics*, M.J. Giannoni, A. Voros and J. Zinn-Justin eds, Elsevier Science Publisher B.V., (1991). See for example T.A. Brody, J. Flores, J.B. French, P.A. Mello, A. Pandey, and S.S.M. Wong, Rev. Mod. Phys. [**53**]{}, 385 (1983). P. Bernaola-Galván, R. Román-Roldán, and J. L Oliver, Phys. Rev. E [**53**]{}, 5181 (1996). W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, [*Numerical Recipes*]{}, Second Edition (Cambridge University Press, Cambridge, 1992), also at [http://www.library.cornell.edu/nr/bookcpdf.html]{}. B. Bernu, J.-P. Hansen, Y. Hiwatari, and G. Pastore, Phys. Rev. A [**36,**]{} 4891 (1987); J.-L. Barrat, N. Roux, and J.-P. Hansen, Chem. Phys. [**149,**]{} 197 (1990). T. S. Grigera and G. Parisi, Phys. Rev. E [**63**]{}, 045102(R) (2001).
--- abstract: 'We analyze the [*ASCA*]{} spectrum of the X-ray binary system in eclipse using atomic models appropriate to recombination-dominated level population kinetics in an overionized plasma. In order to estimate the wind characteristics, we first fit the eclipse spectrum to a single-zone photoionized plasma model. We then fit spectra from a range of orbital phases using global models of photoionized winds from the companion star and the accretion disk that account for the continuous distribution of density and ionization state. We find that the spectrum can be reproduced by a density distribution of the form derived by @cas75 for radiation-driven winds with $\dot{M}/v_\infty$ consistent with values for isolated stars of the same stellar type. This is surprising because the neutron star is very luminous ($\sim10^{38}$[ergs$^{-1}$]{}) and the X-rays from the neutron star should ionize the wind and destroy the ions that provide the opacity for the radiation-driven wind. Using the same functional form for the density profile, we also fit the spectrum to a spherically symmetric wind centered on the neutron star, a configuration chosen to represent a disk wind. We argue that the relatively modest orbital variation of the discrete spectrum rules out a disk wind hypothesis.' author: - 'Patrick S. Wojdowski, Duane A. Liedahl' - Masao Sako bibliography: - 'ms.bib' title: 'The X-Ray Photoionized Wind in /' --- Introduction {#intro} ============ Since the early observations of the High Mass X-ray Binary (HMXB) Cen X-3, it has been known that the system exhibits a residual X-ray flux in eclipse, indicating that the X-rays are scattered or otherwise reprocessed, as might occur in a wind [@sch72]. A similar phenomenon has been observed in other HMXBs — for example, @bec78 observed a residual eclipse flux from Vela X-1. It has been realized since the far ultraviolet became accessible with rocket and satellite-borne instruments that isolated hot stars expel material at rates of order $10^{-6}\,M_\sun$yr$^{-1}$ [@mor67], and so the existence of winds in HMXBs might have been expected. It was shown that in isolated hot stars, strong winds are driven by transfer of the outward momentum of the ultraviolet stellar radiation to the matter through line absorption [@luc70; @cas75]. However, it was pointed out by @hat77 that in an X-ray binary, X-rays from the compact object would ionize a part of the wind around the X-ray source to such a high degree that the ions with line transitions which enable radiative driving would not be present. Presumably, if an X-ray source were luminous enough, it could completely shut off the radiation driven wind on the X-ray illuminated side of the companion star. Calculations by @mgr82 showed that radiative driving is disabled for X-ray luminosities larger than $\sim5{\times10^{34}}$[ergs$^{-1}$]{}. As HMXBs typically have luminosities in the range $10^{36}$–$10^{38}$[ergs$^{-1}$]{}, this would indicate that radiative driving could not function at all in X-ray binaries. However, this calculation assumed that the wind is optically thin to the ionizing X-rays. The effects of the wind’s opacity to X-rays has been explored by @mas84 and by @ste91. These calculations considered the formation of a He$^{++}$/He$^+$ Strömgren surface in the wind. On the He$^+$ side of this boundary, all elements have lower ionization, and more of the ions necessary for radiative driving are present. 
@mas84 concluded that most wind-fed X-ray binaries must contain Strömgren boundaries and hence radiative winds. This conclusion is limited to wind-fed systems because of the coupling of the wind parameters to the accretion luminosity in these systems. An increase in the luminosity of the X-ray source tends to increase the volume of the He$^{++}$ region; however, to increase the luminosity it is necessary to increase the mass loss rate of the companion star or decrease the wind velocity, both of which tend to decrease the volume of the He$^{++}$ region. The investigation of @ste91 was not limited to wind-fed systems. It showed that X-ray luminosities as large as $\sim10^{36}$[ergs$^{-1}$]{} diminished the wind velocity and mass loss rate but did not completely shut off the wind. However, for luminosities larger than $4{\times10^{36}}$[ergs$^{-1}$]{} @ste91 was “unable to find dynamical solutions” for the wind. [@ho87] explored the behavior of high luminosity (exclusively) wind-fed X-ray binaries. However, they did not explicitly consider optical depth effects and their condition for X-ray ionization turning off the wind is questionable, even for the optically thin case [cf. @ste91]. We regard the nature of winds in X-ray binaries outside of the X-ray shadow of the companion as unresolved, and especially problematic in the case of high luminosity ($L_{\rm X}\sim10^{38}$[ergs$^{-1}$]{}) systems. A wind driving mechanism which does not depend on the presence of ions with UV resonance lines is thermal pressure due to X-ray heating of the exposed face of the companion star (also referred to as evaporative winds). This mechanism was invoked by @bas73 and by @aro73 to explain the mass transfer in , though @alm74 and @mcr75 found that a thermal wind alone could not power the X-ray source. @day93a showed that an X-ray excited wind could account for the mass transfer in , as well as explain the extended eclipse transitions seen in the source. At least one simulation of the disk-fed high-mass X-ray binary , in which the effects of X-ray heating were included, showed that a stronger wind was driven from the accretion disk, and that the structure of the wind was dominated by the disk wind [@owe97]. Before the launch of [*ASCA*]{} in 1993, the energy resolution of most cosmic X-ray detectors was rather poor ($\Delta E/E\sim$10–20%), and therefore these instruments were unable to detect the X-ray emission lines which are a signature of optically thin, highly ionized gas. Nor is there any other waveband in which discrete emission from very highly ionized gas could be detected. Most observational X-ray examinations of winds in HMXBs focused on absorption in low-ionization material, iron fluorescence, which is produced primarily in low-ionization material, and the Compton scattering continuum, which is insensitive to the ionization state [@kal82b; @nag86; @sat86; @cla88; @hab89; @woo95]. The lack of a regular low-ionization wind on the X-ray illuminated side of HMXBs, and therefore the presence of a high-ionization wind there, has been inferred from decreases in equivalent widths and velocities of the P Cygni profiles of ultraviolet resonance lines away from X-ray eclipse [@dup80; @vdk82; @ham84; @vrt97]. However, direct study of the highly ionized material requires X-ray spectroscopy. The Solid-state Imaging Spectrometers [SIS, @gen95] on board [*ASCA*]{} have an energy resolution of a few per cent, which allows identification and study of many previously undetectable X-ray spectral features. 
With the SIS detectors, recombination and fluorescence emission features were seen from several ions from the HMXBs Vela X-1 [@nag94], Cen X-3 [@ebi96], and Cyg X-3 [@kit94; @lie96; @kaw96]. When a compact X-ray source is occulted by its companion, the emission spectrum from an extended wind can be studied without the confusion from the more intense, generally featureless, spectrum of X-rays from the neutron star. @sak99 studied the eclipse spectrum of the low luminosity HMXB Vela X-1 obtained by [*ASCA*]{} and estimated the rate of mass loss. Though a highly ionized wind exists in Vela X-1, they found that most of the mass is inside dense clumps, which are not highly ionized. This allows for the possibility that radiation imparts its outward momentum to the clumps which then drag the hot, diffuse wind outward. Presumably, these clumps could be destroyed (or inhibited from forming) by a more luminous X-ray source such as Cen X-3 or SMC X-1. In fact, @ebi96 showed that the equivalent width of the 6.4keV iron fluorescence line in Cen X-3 was nearly constant with orbital phase, indicating that most of the low-ionization material in the system is located near the neutron star, and that very little low-ionization material is found in the extended wind. This is also confirmed by pulsations in the 6.4keV line [@day93b; @aud98]. @woj00 used hydrodynamic simulations of the wind in the most luminous persistent HMXB SMC X-1 by @blo95 to predict the X-ray eclipse spectrum of that system and compared it to a spectrum obtained with [*ASCA*]{}. In the simulation, a tenuous, very highly ionized wind formed on the X-ray illuminated side and a denser wind developed on the X-ray shadowed side. However, dense finger-like structures protruding from the shadowed side of the companion were swept into the X-ray illuminated region by the Coriolis force. The calculations showed that the reprocessed radiation from the tenuous gas was dominated by Compton scattering and the denser gas emitted copious amounts of recombination radiation. The observed spectrum was nearly featureless, however, and @woj00 concluded that the dense fingers that appeared in the simulation could not be present in the wind of SMC X-1. Cen X-3 is one of the most luminous persistent HMXBs known in the Galaxy. It consists of a 4.8 second pulsar in a 2.08 day eclipsing orbit [@sch72] with its O6–8 III type [@con78; @hut79] companion V779 Cen. The high X-ray luminosity of Cen X-3 makes it an excellent candidate for the study of X-ray photoionized winds. It was observed by [*ASCA*]{} over approximately half an orbit, which included an eclipse. @ebi96 found several emission lines in this data set, which were mostly from hydrogen-like ions. From the intensities of these lines, they made estimates of the scale and physical conditions of the wind. We re-analyze the data set of @ebi96 using the observed spectra to test physically motivated wind models with the goal of providing constraints on the wind driving mechanism. In [§]{} \[reduction\] we describe our reduction of the data. In [§]{} \[single-zone\], we calculate emission spectra for photoionized plasmas using a list of $\sim$3000 lines and emission features, fit the observed spectra using single-zone emission spectra, and then use the results to estimate wind parameters. In [§]{} \[global\_models\] we calculate spectra using explicit parameterized models of the wind density distribution, and fit the observed spectra to determine wind parameters more accurately. 
We test two explicit global wind models: 1) a stellar wind from the companion with the velocity profile of a radiatively driven wind and 2) motivated by the accretion disk wind in the simulation of @owe97, a wind with the same velocity profile but centered on the neutron star. In [§]{} \[optdep\] we justify an assumption that the wind is optically thin to X-rays which is used in previous sections. In [§]{} \[discussion\], we discuss the implications of our results. Data Reduction {#reduction} ============== We obtained the screened REV2 [*ASCA*]{} event data from the 1993 June 24-25 observation of Cen X-3 across an eclipse from the HEASARC archive. All manipulation of the data was done with FTOOLS v4.2 [@ft42] and all of the programs mentioned in this section are from that package. We divided the data into the same four time segments as @ebi96, corresponding to phase ranges $-0.31$ to $-0.29$, $-0.23$ to $-0.08$, $-0.08$ to $0.13$, and 0.14 to 0.20. The data was taken in a mix of FAINT and BRIGHT modes. For the data taken in FAINT mode, we used the files which had been converted to BRIGHT mode on the ground in order to have homogeneous data. For each of the time segments, we extracted all counts from inside a circle of radius 191. We extracted a background spectrum from an annulus of inner radius 191  and outer radius 382. During the observation, the center of the image was placed so that the source counts were distributed over all four of the detector chips. During the eclipse phase, all four chips were on but during the rest of the observation only one of the four chips was on, resulting in a $\sim$45% collection efficiency due to the placement of the source near the chip boundary. The spectra were extracted with the standard channel binning applied by XSELECT resulting in 512 energy channels. The spectra from the different chips of each detector and from the two SIS detectors from the same time interval were added using ADDASCASPEC. The energy channels in the range 2.9–8.0keV were binned additionally by a factor of two. Additional binning was applied to the energy channels in the range 8.0–10.0keV so that each channel in the eclipse spectrum had at least 50 counts. The analysis tools we used to compute the detector response include the reduction in effective area due to the fact the source areas we have chosen do not include all of the photons focussed by the mirrors. However, the background regions also contain some source photons which are subtracted from the source spectrum in computing the background selection region. Our source region (for all chips on) contains approximately 80% of the photons for a point source at the center and our background region contains approximately 15% [@ser95]. The background spectrum is multiplied by the area of the source region and divided by the area of the background region and then subtracted from the source spectrum. Because the background region is larger than the source region by a factor of 3, approximately 5% of the source flux is subtracted from the source spectrum. This effect is not accounted for by our analysis. This problem is compounded by the fact that the image of Cen X-3 is further smeared, in a way which depends on energy, due to an X-ray halo and this halo is delayed in time relative to the direct photons. 
However, the radius of the halo is approximately the size of the source region and the surface brightness due to the halo is generally no larger than that due to the image of the direct photons [@woo94] and therefore leads to a similar subtraction of source photons. Because of the complexity of these effects, we do not try to account for them explicitly but note that fluxes (and quantities proportional to flux) which we derive are too small by approximately a factor 10%. This error does not qualitatively affect any of the results we derive in this work. Single-Zone Spectral Models {#single-zone} =========================== Emission spectra from X-ray binaries are generally interpreted with the assumption that photoionization from the compact X-ray source is the dominant source of ionization in the circumstellar plasma. This is justified by the relative values of the luminosities of the X-ray sources and the densities and linear scales of the system. While it is straightforward to measure X-ray luminosities and, where orbital parameters are available from pulsar timing and optical spectroscopy, linear scales in X-ray binaries, determinations of the matter density that do not depend on the assumption of photoionization equilibrium are less reliable. Photoionization equilibrium has been inferred directly from observed X-ray recombination spectra [e.g., @lie96]. In the case of Cen X-3 however, there are no obvious spectral signatures of recombination dominance. The observed spectrum consists mainly of Ly$\alpha$ transitions from hydrogen-like ions which, in principle, can be produced in plasmas where collisions dominate the ionization. We therefore attempt to fit the eclipse spectrum of Cen X-3 with spectral models of emission from collisionally ionized (coronal) plasma as well as photoionized plasmas. The X-ray spectrum observed from Cen X-3 includes at least the following components: direct emission from the neutron star, X-rays from the neutron star which have been scattered in the wind, continuum and line emission from the wind and other circumstellar material, and X-rays scattered from interstellar dust grains. All of these components contribute to the observed continuum, but only the wind and circumstellar material can emit lines. We found that, in general, it was possible to fit the continuum using two power laws, each absorbed by a different column density. The more highly absorbed power-law may correspond approximately to the X-rays from the neutron star viewed through some dense component of circumstellar material and the less absorbed power-law to the neutron star continuum scattered in the extended wind and by dust. However, since our primary goal is to extract and interpret the emission line spectrum, we do not attempt to constrain the parameters of these power laws in a manner that would force them to correspond to physical sources of continuum emission [c.f., @ebi96]. We interpret the lines, except for the 6.4keV Fe K$\alpha$ line as emission from the extended wind. These plasma emission models are described in detail below. 
The spectral model is $${\cal F}(\epsilon) = \mathrm{e}^{-\sigma(\epsilon)N_{\rm H1}} \left\{f_{\rm pl1}(\epsilon)+f_{\rm plasma}(\epsilon) + \mathrm{e}^{-\sigma(\epsilon)N_{\rm H2}} [I_{\rm line}\delta(\epsilon-\epsilon_{\rm line}) + f_{{\rm pl}2}(\epsilon)]\right\} \label{spec_exp}$$ where the power laws are given by $$f_{{\rm pl}i}(\epsilon)=K_{{\rm pl}i}\left(\frac{\epsilon}{1\,{\rm keV}}\right)^{-\alpha_i},$$ $f_{\rm plasma}$ is the plasma emission model, $\sigma(\epsilon)$ is the interstellar absorption cross-section of @mor83, $N_{{\rm H}i}$ are the absorption column densities, $I_{\rm line}$ is the line photon flux, and $\delta$ is the Dirac delta function, used for the 6.4keV Fe K$\alpha$ line complex. The energy of the Fe K$\alpha$ line, $\epsilon_{\rm line}$, was constrained to be in the range 6.3–6.5keV. The Fe K$\alpha$ line was the only fluorescent feature required to fit the data in any of our fits. @ebi96 identified an emission feature at energy 1.25$\pm$0.04keV with the 1.25keV K$\alpha$ fluorescent line. However, in all of our spectral models described here, this emission feature is fully accounted for by Ly$\beta$ (1.21keV). When the neutron star is eclipsed by the companion star, only the X-rays from the extended wind are observed. Therefore, for the purpose of testing basic plasma emission models, we fit only the spectrum from eclipse. For our spectral fits, we used the XSPEC spectral fitting program [v10.0, @arn96], importing our own models for emission from photoionized plasmas. Collisional Ionization Equilibrium {#cie} ---------------------------------- We fit the eclipse spectrum, using for $f_{\rm plasma}$ the emission spectrum of an isothermal plasma in collisional ionization equilibrium (CIE). Also referred to as coronal equilibrium, this describes a situation where recombination is balanced by collisional ionization by electrons, and ionization by radiation is negligible. We use the MEKAL model [@mew95] contained in XSPEC for the CIE plasma emission. The three parameters of the MEKAL model are the temperature, metal abundance, and the normalization. The emission processes in CIE are determined by two-body interaction rates, so the normalization of the flux is proportional to the emission measure ($E\equiv\int n_{\rm e}^2{d}V$) divided by the square of the distance to the source. The best fit parameters are shown in Table \[io\_mod\_fit\], and the spectrum is plotted in Figure \[ecl\_mekal\]. With its best-fit parameters, the MEKAL model reproduces most of the observed emission lines; for any parameters that reproduce the observed line emission, however, the model also accounts for all of the continuum emission below $\sim$4keV through bremsstrahlung radiation from the same gas. However, collisionally ionized gas must also scatter X-rays from the neutron star. In the [*ASCA*]{} band, optically thin electron scattering reproduces the continuum shape of the X-ray source with a fractional luminosity relative to the source spectrum equal to: $$\frac{L_{\rm scat}}{L}=\frac{\sigma_{\rm T}}{4\pi}\int\frac{n_{\rm e}}{r^2}{d}V,$$ where $\sigma_{\rm T}$ is the Thomson cross-section and $r$ is the distance from the compact source. We can estimate this fraction by taking the orbital separation $a$ as the linear scale of the system and setting: $$\begin{aligned} r = a, & n_{\rm e}=(E/V)^{1/2}, & V =\frac{4\pi}{3}a^3 . 
\label{fiduc_dims}\end{aligned}$$ Then, $$\label{eq:scat} \frac{L_{\rm scat}}{L}\approx(12\pi)^{-1/2}\sigma_{\rm T} E^{1/2}a^{-1/2} =1.6{\times10^{-2}}$$ If the neutron star has an intrinsic spectrum in the 2–10keV band which is a power law with photon index 1.5, and luminosity $\sim10^{38}$[ergs$^{-1}$]{}, then, using the system parameters of Table \[syspars\], the Compton scattered luminosity in the 2–10keV band should be $\sim10^{36}$[ergs$^{-1}$]{}. This corresponds to a power-law normalization of $I_{\rm pl}=1.6{\times10^{-2}}$s$^{-1}$cm$^{-2}$keV$^{-1}$. However, the upper limit on $I_{\rm pl1}$ from the fit, $1.1{\times10^{-4}}$s$^{-1}$cm$^{-2}$keV$^{-1}$, corresponds to a scattered luminosity of $7{\times10^{33}}$[ergs$^{-1}$]{}. In principle, it is possible that the Compton scattered continuum could be reduced relative to the thermal emission due to clumping of the wind. If the wind is clumped with volume filling factor $f$, then the density would have to be increased by a factor $f^{-1/2}$ to preserve the emission measure. The new Compton scattered flux would then be changed by a factor $f^{1/2}$ since its magnitude is proportional to $nV$. A very small filling factor ($f\approx10^{-4}$) would be necessary, however, and we regard this as unlikely. We therefore reject this model. Increasing the metal abundance in the gas increases the flux of the lines relative to the bremsstrahlung continuum, thereby providing “room” for a scattered continuum component. Therefore, we tried another fit in which we allowed the metal abundance to be free. For this fit, we tied the normalization of the first power law (pl1) to the emission measure of the MEKAL component as described above (Eq. \[eq:scat\]). XSPEC does not allow the user to set one parameter to the square root of another parameter, so we fixed the power-law normalization according to a trial value of the emission measure, performed the fit, and iterated to find a best fit emission measure with a consistent power-law normalization. This procedure resulted in a statistically acceptable fit (Table \[io\_mod\_fit\], Figure \[ecl\_mekalv\]). The lower limit on the abundance derived from this procedure, however, is 25 times solar, which we believe to be implausibly high, and so we reject this model as well. Photoionization Equilibrium {#photoio} --------------------------- Having rejected the collisional ionization spectral models, we proceed to fit the observed spectra using model emission spectra from photoionized plasmas. The radiation spectrum due to recombination consists of radiative recombination continua (RRC) from free electrons recombining directly to bound states and lines which are produced through radiative cascades. The volume emissivity of the recombination feature $k$ is $j_k= n_{\rm e} n_{i+1} \alpha_k(T)$, where $n_{i+1}$ is the density of the recombining ion. The function $\alpha_k(T)$ is an effective recombination rate coefficient (the recombination coefficient for recombinations that produce transition $k$) that depends on atomic parameters and the level population kinetics. In a gas which is optically thin and has density low enough such that all ions may be assumed to be in the ground state (i.e., the density is low compared to the critical density for all important transitions), the ion fractions and temperature are, for a given spectrum of ionizing radiation, functions only of the ionization parameter [$\xi=L/nr^2$, @tar69]. 
Therefore, for a photoionized plasma, the emissivity of any feature may be written $j_k=n_{\rm e}^2f_k(\xi)$, and, for the entire spectrum, $j_\nu/n_{\rm e}^2$ is a function only of $\xi$. To calculate the temperature and ion fractions as a function of $\xi$, we need to know the X-ray spectrum of the neutron star. The best way to do this is to observe the X-ray spectrum from the neutron star directly while outside of eclipse. This [*ASCA*]{} data set includes data outside of eclipse. However, the observed spectra in these phases outside of eclipse differ greatly from the intrinsic spectrum of the neutron star because at the time of the observation the line of sight to the neutron star contained circumstellar material that substantially absorbed the direct X-rays. When circumstellar material passes in front of the neutron star, the direct X-rays are absorbed preferentially at low energies. Even when direct low energy X-rays are absorbed completely, low energy X-rays are still observed from scattering and emission from the extended wind, just as during eclipse by the companion star. The residual X-rays from scattering are generally not pulsed. The X-rays from Cen X-3 in this data set show a spectrum which is deficient in low-energy X-rays compared to other observations [@nag92; @san98] and, in addition, are not pulsed at low energy [@ebi96]. @nag92 used similar evidence to demonstrate that a “pre-eclipse dip” observed by [*Ginga*]{} was due to absorption. In these out-of-eclipse intervals, the large absorption of the direct X-rays makes the strength of the direct flux comparable to and difficult to distinguish from the residual flux from scattering and wind emission, and it is impossible to isolate the flux from the neutron star in the observed spectrum. For the intrinsic spectrum of the neutron star, we instead use a spectrum obtained by @san98 with [*BeppoSAX*]{}. @bur00 have shown that in this observation, the X-ray flux is pulsed down to the energy band below 1.8keV, indicating that it originates in the proximity of the neutron star. There is an additional advantage to using [*BeppoSAX*]{} in that it can measure the spectrum up to $\sim 200$keV. For $\xi\gtrsim 10^4$, the ions are nearly fully stripped, and the temperature is determined by Comptonization. The Compton temperature is highly sensitive to the presence of high energy photons. We used the “Lorenzian” model of @san98, extending the power-law continuum to low energy by setting the absorption column to zero, and excluding the cyclotron absorption feature, since a value for the depth of this feature is not provided. We used the XSTAR program [v1.43, @kal99] to compute ion fractions and temperatures for 100 uniformly spaced values of $\log\xi$ from $-2$ to 6 using the @san98 spectrum. In the XSTAR calculation, we used a luminosity of 10$^{30}$[ergs$^{-1}$]{} and a constant density 10$^{-2}$cm$^{-3}$ — again, for an optically thin gas, the absolute values of the luminosity and the density are not important. We then calculated the recombination spectrum at each value of $\xi$ using the ion fractions and temperatures from XSTAR, and line and RRC powers calculated using the Hebrew University/Lawrence Livermore Atomic Code [HULLAC, @kla77] and the Photoionization Cross-Section code [PIC, @sal88] for recombination cross-sections. These atomic models have been used to analyze the recombination spectrum of Cyg X-3 [@lie96] and Vela X-1 [@sak99] and are described further there. 
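To illustrate the bookkeeping, the fragment below sketches how a single-zone model flux can be assembled from such a precomputed grid. The grid shapes and feature names are placeholders rather than the actual XSTAR/HULLAC output, and the numbers in the example call are purely illustrative; the point is only the interpolation in $\log\xi$ and the scaling of each feature flux as $E f_k(\xi)/4\pi d^2$, which follows directly from $j_k=n_{\rm e}^2f_k(\xi)$.

```python
import numpy as np

# Placeholder grid: in the analysis described above this would come from XSTAR
# (temperatures, ion fractions) and HULLAC/PIC (line and RRC powers per n_e^2).
log_xi_grid = np.linspace(-2.0, 6.0, 100)
feature_power = {  # f_k(log xi) in erg cm^3 s^-1 -- purely illustrative shapes
    "Fe XXVI Ly-alpha": 1e-24 * np.exp(-0.5 * ((log_xi_grid - 3.5) / 0.6) ** 2),
    "Si XIV Ly-alpha":  2e-24 * np.exp(-0.5 * ((log_xi_grid - 2.5) / 0.5) ** 2),
}
KPC = 3.086e21  # cm

def single_zone_fluxes(log_xi, emission_measure, distance_kpc):
    """Energy flux of each feature for a single-zone photoionized plasma,
    F_k = (E / 4 pi d^2) * f_k(xi), interpolated on the precomputed grid."""
    dilution = emission_measure / (4.0 * np.pi * (distance_kpc * KPC) ** 2)
    return {name: dilution * np.interp(log_xi, log_xi_grid, fk)
            for name, fk in feature_power.items()}

# Illustrative numbers only (not the fitted values from the tables).
print(single_zone_fluxes(log_xi=3.2, emission_measure=3e58, distance_kpc=8.0))
```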
We fit the eclipse spectrum using for $f_{\rm plasma}$ our recombination model spectra plus a corresponding bremsstrahlung component. The bremsstrahlung component is the BREMSS model from XSPEC. We tied the normalization of the bremsstrahlung component to that of the recombination component such that the emission measures would be equal. We set the temperature of the bremsstrahlung model to be a locally linear approximation to the function $T(\log\xi)$ computed with XSTAR in the neighborhood of the best fit value of $\log\xi$. The best fit spectrum is shown in Figure \[ecl\_xi\], and the best fit parameters are shown in Table \[io\_mod\_fit\]. The best fit value of the ionization parameter ($\log\xi=3.19$) corresponds to an electron temperature of 0.41keV. At this temperature, the radiative recombination continua are broad, and so it is not surprising that they do not appear prominently in the spectrum. It is because of this that @ebi96 were able to fit the spectrum using only lines and not RRC. This demonstrates that though the presence of prominent narrow RRC in emission spectra indicates photoionization [@lie96], the lack of narrow RRC does not necessarily indicate some other type of equilibrium. It can be seen that the bremsstrahlung component is relatively faint, thus allowing a power-law continuum at low energies. The luminosity implied for the power-law component \#1 is $4{\times10^{35}}$[ergs$^{-1}$]{}, which is much closer to the expected Compton scattered luminosity of $\sim10^{36}$[ergs$^{-1}$]{}. A quasi-continuum of blended lines and RRC accounts for 21% of the photon flux in the 1–3keV band. This differs significantly from the individual line fits of @ebi96 in which the lines constitute only 5% of the flux in this band. From this single-zone fit, it is possible to make a crude estimate of the X-ray luminosity and parameters of the wind in Cen X-3, if we take the orbital separation to be the characteristic linear scale of the system, and use our best fit value of the emission measure as in Equation \[fiduc\_dims\]. We then find a characteristic wind density $n\sim5{\times10^{10}}\,{\rm cm}^{-3}(d/10\,{\rm kpc})^{-1}$. From the definition of $\xi$, we then have $$L=\xi nr^2\sim 1.4{\times10^{38}}\,{\rm erg\,s^{-1}}(d/10\,{\rm kpc})^{-1}.$$ Note that this value of the luminosity is derived only from the flux of the recombination spectral features and the known system dimensions, yet it comes very close to the luminosity derived for Cen X-3 by measuring the continuum directly during the high state [@nag92; @san98]. @ebi96 derived similar, though somewhat larger, estimates for the characteristic density and luminosity ($1.6{\times10^{11}}$cm$^{-3}$ and $1.8{\times10^{38}}$[ergs$^{-1}$]{} for $d$=10kpc) using an emission measure derived from the fluxes of the 6.7keV and 6.9keV lines of, respectively, helium-like and hydrogen-like iron in the eclipse and egress spectra and by assuming a value of $\log\xi=3.4$ from their ratio. Our values differ because we use slightly different values for the linear scale and also because our values of $\xi$ and $E$ are derived from fitting all of the lines and not just the hydrogen- and helium-like iron lines. We now estimate the characteristics of the stellar wind from this derived value of the density. 
In a spherically symmetric steady-state wind, continuity demands that: $$\frac{\dot{M}}{v(R)}={4\pi R^2 \mu n(R)} \label{mvfr}$$ where $\dot{M}$ is the mass-loss rate, $v$ is the velocity, $R$ is the distance from the center of the star, $n$ is density of hydrogen (neutral and ionized), and the quantity $\mu$ is the gas mass per hydrogen atom which we take to be 1.4$m_{\rm H}$. Again using the fiducial dimensions, $\dot{M}/v\sim4{\times10^{-9}}M_\sun$yr$^{-1}$(kms$^{-1}$)$^{-1}$($d$/10kpc)$^{-1}$ which, as will be discussed in [§]{} \[discussion\] below, is near values for isolated stars with spectral type similar to V779 Cen. While this model provides a statistically acceptable fit to the data, it is unlikely that the entire emission line region can be characterized by a single value of $\xi$. Therefore, we suggest that this model provides no more than a semi-quantitative description of the data. In the next section we use explicit wind models and spectra of all of the full phases of the observation to derive these wind parameters more accurately and to explore the wind geometry. Global Photoionized Wind Models {#global_models} =============================== We use the recombination spectra described in the previous section to derive spectra for explicit wind density distributions. In order to calculate the global recombination spectrum of photoionized gas, we use the differential emission measure formalism [DEM, @sak99]. For an arbitrary distribution of gas, the total spectrum of recombination radiation is given by $$\label{spec_int} L_\nu=\int \frac{j_\nu(\xi)}{n_{\rm e}^2}\left[\frac{{d}E}{{d}\log\xi}\right]{d}\log\xi ,$$ where the quantity in brackets is the DEM distribution, which hereafter will be referred to as $D(\xi)$. In practice, we integrate only the volume of gas that is visible to the observer and refer to this as the apparent DEM. To calculate the DEM distribution, it is necessary to know only the density distribution of the gas and the luminosity of the radiation source. To calculate $j_\nu(\xi)/n_{\rm e}^2$, it is necessary to know only the spectral shape of the ionizing radiation and the elemental composition of the gas. In this work, we calculate the spectrum of recombination emission from the wind of Cen X-3 for different models of the wind using different luminosities for the neutron star, but always using the same spectral shape for the neutron star X-ray emission and always assuming solar abundances [@and89]. The DEM formalism requires us to evaluate $j_\nu(\xi)/n_{\rm e}^2$ only once for a number of values $\xi$, and then, for different matter distributions and luminosities, to calculate the DEM distribution and evaluate the integral in Equation \[spec\_int\]. To calculate the DEM distribution for a given density distribution and luminosity, we divide the binary system into spatial cells. We then calculate $\xi$ and the emission measure for each spatial cell. The emission measure for each cell is then added to a running sum of the emission measure for a $\xi$ bin. Dividing the emission measure in each bin by the width of that bin in $\log\xi$ gives the differential emission measure. If the density distribution is described by parameters such that the density everywhere scales linearly with one parameter, which we will call $\eta$, then to calculate the DEM distribution for new values of $\eta$ and the luminosity $L$ it is not necessary to recalculate the DEM distribution by summation over the spatial cells. 
For new values $L^\prime$ and $\eta^\prime$, the new DEM distribution $D^\prime$ is related to the old DEM distribution $D$ by $$\label{dem_scal} D^\prime\left(\frac{L^\prime \eta}{L \eta^\prime}\xi\right)= \left(\frac{\eta^\prime}{\eta}\right)^2 D(\xi).$$ This identity is derived in Appendix \[app:dem\]. For a change in the luminosity and density parameter such that $L^\prime/\eta^\prime=L/\eta$, the DEM distribution is changed only by a constant factor $(\eta^\prime/\eta)^2=(L^\prime/L)^2$ and therefore so is the total emission spectrum. DEM Distributions for Model Winds {#dem_mods} --------------------------------- As alluded to in [§]{} \[intro\], the structure of stellar winds in X-ray binaries is likely to be rather complicated. In addition to the effects of X-ray photoionization on the radiation driving, the gravity of the compact object and orbital motion may have a significant effect on the wind density distribution [e.g., @fri82]. Though numerical simulations may account for many of these effects, we choose instead to use a simple, spherically symmetric density distribution with free parameters such that the density is easily recalculated for a change in parameters. We approximated the geometry of the X-ray binary system as a spherical star in orbit with a point-like neutron star emitting X-rays isotropically. For the dimensions of the system, we assumed the values in Table \[syspars\]. We used density distributions for spherically symmetric, radiation-driven winds [@cas75; @kud89] from the companion star. Modeling accretion disk winds introduces a new set of complications. They may be driven thermally [e.g., @woo96], radiatively [e.g., @pro99], or flung out along rotating magnetic field lines [magnetohydrodynamically, e.g., @bla82; @pro00]. While stellar winds may be symmetrical in two dimensions (azimuth and altitude), disk winds are necessarily symmetric in no more than one dimension (azimuth). However, in order to derive some observational constraints on a disk wind hypothesis, we use the same spherically symmetric radiation-driven velocity profile as for the stellar wind but center the wind on the neutron star. The explicit form of the wind velocity profile is given by: $$\label{vofr} v(R)=v_0 + (v_\infty-v_0)(1-R_{\rm in}/R)^\beta$$ where $v_0$ and $v_\infty$ are the wind velocities at the stellar surface and at infinity and the parameter $\beta$ describes the acceleration of the wind. With $\beta=0$, this equation describes a wind which is immediately accelerated to its terminal velocity. The variable $R$ is the distance from the center of the wind (the center of the companion for the stellar wind or the neutron star for the disk wind). For the stellar wind, the wind begins at the surface of the companion, so $R_{\rm in}=R_\star$. For a disk wind, however, no such natural inner radius exists, and we therefore make $R_{\rm in}$ an extra parameter of the model and assume that the volume for which $R<R_{\rm in}$ is empty. The value of $R_{\rm in}$ may correspond, for example, to a characteristic radius on the accretion disk from which the wind arises. Because an accretion disk may be no larger than the Roche lobe of the accreting object, the Roche lobe radius should provide an approximate upper limit on $R_{\rm in}$. A rearrangement of Equation \[mvfr\] gives the density distribution. 
$$n(R)=\frac{\dot{M}}{4\pi R^2 \mu v(R)}.$$ In order to make use of the scaling relation of Equation \[dem\_scal\], we include the velocity profile explicitly (Equation \[vofr\]) and reexpress the density as follows: $$n(R)=\left(\frac{\dot{M}}{v_\infty}\right)(4\pi\mu)^{-1} R^{-2}\left[{\frac{v_0}{v_\infty}}+\left(1-{\frac{v_0}{v_\infty}}\right)\left(1-\frac{R_{\rm in}}{R}\right)^\beta \right]^{-1}.$$ According to this parameterization, the parameter [$\dot{M}/v_\infty$]{} plays the role of $\eta$. The shape of the DEM distribution is then a function of the parameters [$Lv_\infty/\dot{M}$]{}, $\beta$, $v_0/v_\infty$, and, for the disk wind, $R_{\rm in}$. For a given choice of those parameters, the magnitude of the DEM distribution is proportional to $(\dot{M}/v_\infty)^2$ or, equivalently, $L^2$. Calculation of DEM Distributions and Global Spectra {#calcdem} --------------------------------------------------- The wind distributions we use are symmetric to rotation around the line containing the neutron star and the center of the companion star. This allowed us to conserve computational resources by using rings around this axis as the spatial cells to calculate the DEM distributions. The exclusion of the region occulted by the companion star from the apparent DEM breaks this rotational symmetry, but we were still able to conserve computation, using rings as the spatial cells, by computing the fraction of each ring not occulted by the companion star and multiplying the emission measure for each ring by that factor. We used the binary system parameters from Table \[syspars\], and fixed $v_0/v_\infty$ at 0.015, which makes $v_0$ approximately the photospheric thermal velocity of an O-type star if $v_\infty$ is of order 1000kms$^{-1}$. For the spectra which were accumulated during an orbital phase interval longer than 0.02, we averaged the apparent DEM over phases separated by no more than that interval. Our parameter grid contains the values $\beta=(0.0, 0.1,0.2,..., 1.5)$, $\log(Lv_\infty/\dot{M})=(-0.5, -0.33, -0.17, 0, 0.17,..., 2.5)$ in units of $10^{37}$[ergs$^{-1}$]{} for $L$, 1000kms$^{-1}$ for $v_\infty$ and $10^{-6}\,M_\sun$yr$^{-1}$ for $\dot{M}$. For the disk wind, the grid also contained $R_{\rm in}=(3.16, 5.62, 10, 17.8,31.6)\,R_\sun$. The resultant spectra were stored in a FITS format XSPEC ATABLE file [@arn95]. In order to demonstrate the appearance and behavior of the DEM distributions, we plot DEM distributions and contour maps of $\log\xi$ using the best fit parameters found in the following section ([§]{} \[fitting\], Table \[fitdata\]). In Figure \[spider\] we show a map of $\log\xi$ for the companion star wind and in Figure \[star\_dems\] we plot the apparent DEM distributions for orbital phases representative of the observed phases. In Figures \[spider\_disk\] & \[disk\_dems\] we show the same plots for the disk wind model. However, as mentioned in [§]{} \[dem\_mods\], we expect the Roche lobe radius to be an upper limit on $R_{\rm in}$ and so we use $R_{\rm in}=3.4R_\sun$, the Roche lobe radius as determined from the parameters of Table \[syspars\] and the formula of @egg83, for these plots instead of a best fit value of $R_{\rm in}$. In Figure \[disk\_dems\], it can be seen that the apparent emission measure is dramatically reduced during eclipse for a disk wind inner radius significantly smaller than the companion star radius. 
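The ring summation described above is simple enough to sketch schematically. The fragment below is illustrative only: the input values are round numbers rather than the parameters of Table \[syspars\] or the fitted values, occultation by the companion and phase averaging are omitted, and $n_{\rm e}$ is approximated by the hydrogen density. It shows the bookkeeping that turns the parameterized wind into an apparent DEM distribution; by the scaling relation of Equation \[dem\_scal\], changing $L$ and $\dot{M}/v_\infty$ together only rescales the result.

```python
import numpy as np

M_SUN_YR = 1.989e33 / 3.156e7          # g s^-1
MU = 1.4 * 1.673e-24                   # gas mass per hydrogen atom (g)
R_SUN = 6.96e10                        # cm

def wind_density(R, mdot_over_vinf, r_in, beta, v0_over_vinf=0.015):
    """n(R) for the parameterized wind; mdot_over_vinf in g cm^-1, lengths in cm."""
    w = v0_over_vinf + (1.0 - v0_over_vinf) * np.clip(1.0 - r_in / R, 0.0, None) ** beta
    return mdot_over_vinf / (4.0 * np.pi * MU * R**2 * w)

def apparent_dem(L_x, mdot_over_vinf, r_star, a, beta,
                 n_cells=400, extent=10.0, edges=np.linspace(-2.0, 6.0, 61)):
    """dE/dlog(xi) for a wind centred on the companion, summed over rings
    around the line of centres (occultation is ignored in this sketch)."""
    x = np.linspace(-extent * a, extent * a, n_cells)        # along the line of centres
    rho = np.linspace(1e-3 * a, extent * a, n_cells)         # cylindrical radius
    X, RHO = np.meshgrid(x, rho, indexing="ij")
    dV = 2.0 * np.pi * RHO * (x[1] - x[0]) * (rho[1] - rho[0])
    R = np.hypot(X, RHO)                                     # distance from companion centre
    r_ns = np.hypot(X - a, RHO)                              # distance from the neutron star
    n = wind_density(R, mdot_over_vinf, r_star, beta)
    n[R < r_star] = 0.0                                      # exclude the stellar interior
    log_xi = np.log10(L_x / (np.maximum(n, 1e-30) * r_ns**2))
    idx = np.digitize(log_xi, edges) - 1
    dem = np.zeros(len(edges) - 1)
    ok = (idx >= 0) & (idx < len(dem)) & (n > 0)
    np.add.at(dem, idx[ok], (n**2 * dV)[ok])                 # n_e ~ n assumed
    return 0.5 * (edges[:-1] + edges[1:]), dem / np.diff(edges)

# Round, illustrative inputs: L ~ 1e38 erg/s, Mdot/v_inf ~ 1.5e-9 Msun/yr per km/s,
# R_star ~ 12 R_sun, a ~ 19 R_sun, beta ~ 0.6.
centers, dem = apparent_dem(1e38, 1.5e-9 * M_SUN_YR / 1e5, 12 * R_SUN, 19 * R_SUN, 0.6)
```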
Spectral Fitting {#fitting} ---------------- For both the disk and the stellar wind models, we simultaneously fit the spectra at all four phases, using for $f_{\rm plasma}$ the recombination spectrum for the observable wind at each phase as discussed above. We allowed the parameters of all of the spectral components except for $f_{\rm plasma}$ to vary independently for the four observed phases. We show the best fit values and associated errors in Table \[fitdata\]. We show the spectral fits for the stellar wind in Figure \[four\_fit\]. The best fit model spectrum for the eclipse phase is shown at high resolution in Figure \[ecl\_unconvolved\]. Since our focus is on the line spectrum, we let the continuum parameters vary to fit the continuum shape without constraining them to physically meaningful bounds. Some strange results were obtained, such as the very large columns, normalizations, and photon indices for the second power law in the egress phase (for both the stellar and disk wind models). Though the normalizations are very large, the fluxes of this component are comparable to those in the other phases. We note that our best fit value of the neutron star luminosity from the stellar wind model is very near the value obtained by direct measurement of the unocculted continuum. We emphasize that this luminosity value is obtained only from our recombination spectra for wind models, and is determined almost entirely by the observed line fluxes, not from any direct measurement of the broad band flux. Both wind models provide acceptable values of $\chi^2$, though for the disk wind the fits favor values of the inner radius which are at least comparable to the radius of the companion star. In Figure \[diskchi\], we plot the value of $\chi^2$ for the fits as a function of the assumed inner radius. As mentioned in [§]{} \[calcdem\], values of $R_{\rm in}$ much smaller than the companion star radius lead to dramatic variations in the DEM distribution, and therefore in the emission line flux, across the eclipse. The observed emission line spectrum, however, varies only moderately across the eclipse. As the preferred radii are very large compared to the neutron star Roche-lobe radius, we consider the possibility that the wind could be dominated by matter arising from the accretion disk unlikely. We compare the line fluxes predicted by our best fit stellar wind model with line fluxes measured by @ebi96 in Figure \[fig:line\_fluxes\]. According to our model, the lines from elements with lower values of $Z$ vary somewhat more as a function of orbital phase. This is because the lines from low-$Z$ ions are produced more efficiently in the region near the surface of the companion star, where the ionization parameter is lower, and which is occulted more during eclipse. Unfortunately, the quality of the current data is not sufficient to test this prediction. @ebi96 noted that the H-like and He-like lines of iron were undetectable during the first phase (“pre-eclipse”), from which they inferred that the overall ionization parameter was lower at that phase. Our analysis presumes that we observe the same symmetric wind, only occulted differently, at every phase. Therefore, in our model, the line fluxes must be the same at symmetric phases. However, our predicted line fluxes exceed their $1\sigma$ limits by only a factor of a few, and so our assumption that the wind density distribution and luminosity are constant is not excluded. 
Optical Depth of the Stellar Wind {#optdep} ================================= Until now, we have assumed that the stellar wind is optically thin to X-rays from the neutron star. For the densities and luminosities of our best fit model, the state of X-ray photoionized gas is affected little by optical depth inside an ionized Strömgren-type zone, where the charge state distribution of helium is dominated by He$^{++}$; outside this zone, where the charge state distribution of helium becomes dominated by He$^+$, the effects of optical depth quickly become important [e.g., @kal82a]. The size of a Strömgren zone is usually derived for spherically symmetric nebulae [e.g., @ost89] with the condition that the rate at which the central source emits photons capable of ionizing helium is equal to the total rate of recombinations which do not produce ionizing photons (i.e., recombinations to excited states). However, the Strömgren radius may also be estimated, for any direction from the ionizing source, in a nebula which is not spherically symmetric, using the condition that the rate of helium ionizing photons emitted per solid angle is equal to the total rate of recombinations per solid angle: $$\frac{1}{4\pi}\int_{\nu_0}^\infty\frac{L_\nu}{h\nu}{d}\nu= \int_0^{R_{\rm S}}\alpha_{\rm B}n_{\rm e} n_{\rm He^{++}} r^2 {d}r \label{strom}$$ where $\alpha_{\rm B}$ is the recombination rate coefficient to excited states and $\nu_0$ is the ionization threshold of He$^{+}$ ($h\nu_0=54.4$eV). If we assume that the spectrum of Cen X-3 is a power-law with photon index 1, cut off at energies above $E_{\rm cut}=15$keV (approximately the @san98 spectrum), then the integral on the left hand side is $$L E_{\rm cut}^{-1}\ln(E_{\rm cut}/h\nu_0)=\frac{L}{2.67\,{\rm keV}}=2.3{\times10^{46}}\,{\rm s}^{-1}$$ for $L=10^{38}$[ergs$^{-1}$]{}. The computation of the integral on the right-hand side is complicated by the fact that the recombination coefficient, $\alpha_{\rm B}$, is a function of temperature. However, the temperature dependence of $\alpha_{\rm B}$ is only $\sim T^{-1/2}$ and so we can get a good estimate by assuming that the nebula is isothermal. In the calculations of [@kal82a], the temperature just inside of the He$^{++}$/He$^{+}$ boundary is approximately $10^5$K and so we use for $\alpha_{\rm B}$ the constant value $2{\times10^{-13}}$cm$^3$s$^{-1}$ [@ost89 Table 2.8]. In the ionized zone, nearly all of the helium is in the form He$^{++}$ and so $n_{\rm He^{++}}=8.3{\times10^{-2}}n_{\rm e}$ almost exactly. The condition that the nebula is optically thin becomes: $$\int_0^R n_e^2 r^2 dr<1.1{\times10^{59}}\,{\rm cm}^{-3}.$$ We note that the quantity on the left hand side is the emission measure per solid angle. For our wind, from the neutron star to the face of the companion along the line of centers, this integral is $2.72{\times10^{57}}$cm$^{-3}$ and in the direction away from the companion to infinity, it is $2.98{\times10^{56}}$cm$^{-3}$. Therefore, it is clear that the assumption that our model wind is optically thin, which we have used to calculate emission spectra, is self-consistent. Discussion ========== We note that our results for the global wind models can be generalized beyond the homogeneous, spherically symmetric distributions we have used. If the wind is clumped on a scale which is small compared to the system such that the ionization parameter inside the clumps and the total emission measure are the same as in a corresponding homogeneous model, then the observed spectrum will be unchanged. 
To construct such a model where the clumps have a filling factor $f$, the density must be increased by a factor $f^{-1/2}$ to keep $\int n_{\rm e}^2 {d}V$ unchanged, and so, to keep $\xi$ unchanged, $L$ must also be increased by a factor $f^{-1/2}$. Because our best fit luminosity is already at the high end of the range of luminosities observed for Cen X-3, the wind cannot be clumped with a filling factor significantly smaller than unity. A similar, though no longer exact, extension can be made to winds that, instead of being confined in small clumps, are confined in solid angle. An example of this scenario is a wind which exists on the surface of a cone. Such a geometry is believed to exist in broad absorption line quasars and may be due to preferential launching of the wind along rays from the central object which graze the surface of the accretion disk [e.g., @mur95; @pro00]. For such a geometry, the factor $\Omega/4\pi$, where $\Omega$ is the solid angle to which the wind is confined, plays the same role as $f$ above. For the same reasons then, we can exclude these types of winds. Both the stellar wind and the disk wind models give statistically satisfactory fits to the data. However, the best fit values of the inner radius in the disk model are very large compared to the size of the accretion disk. If the extended reprocessing material originates primarily from an accretion disk wind, a geometry in which the wind is prevented from producing recombination radiation inside a radius which is approximately equal to the companion star radius (which is $\sim$4 times the maximum size of the accretion disk) is required. We believe that it would require fine-tuned conditions to produce such a geometry, and we consider the possibility that a disk wind dominates the extended circumstellar material unlikely. As we have noted, in our analysis, the mass loss rate and terminal velocity can be obtained only in the combination $\dot{M}/v_\infty$. If Doppler shifts of lines due to motion of the wind could be measured, these two parameters could be determined independently. The errors in the line energies quoted by @ebi96 correspond to velocity upper limits of $\sim$2000[kms$^{-1}$]{} for the iron lines and $\sim$6000[kms$^{-1}$]{} for the lower energy lines. Therefore, only an upper limit on the mass loss rate of the highly ionized wind of $\sim3{\times10^{-6}}\,M_\sun$yr$^{-1}$ can be obtained. The fact that the value of the luminosity that we obtain from our spectral line fits is very close to the value determined from direct measurements of the continuum gives us confidence that the parameters of our model correspond to unique physical values that accurately describe the stellar wind. For the stellar wind, the best fit values of $\dot{M}/v_\infty$ and $\beta$ are similar to those found in isolated stars. For example, for the O6.5Iaf star and the O6.5III(f) star, the values of $\dot{M}$ are $6{\times10^{-6}}$ and $1.5{\times10^{-6}}$ $M_\sun$yr$^{-1}$ and the values of $v_\infty$ are 2200 and 2500 kms$^{-1}$ [@lam99], which correspond to values of $\dot{M}/v_\infty$ of 2.7${\times10^{-9}}\,M_\sun$yr$^{-1}$([kms$^{-1}$]{})$^{-1}$ and 0.6${\times10^{-9}}\,M_\sun$yr$^{-1}$([kms$^{-1}$]{})$^{-1}$, as compared with 1.56$\pm0.12{\times10^{-9}}\,M_\sun$yr$^{-1}$([kms$^{-1}$]{})$^{-1}$ (Table \[fitdata\]) for our fits to the Cen X-3 spectrum. The values of $\beta$ for these two isolated O stars are 0.7 and 0.8 [@lam99], which are also near our derived value of 0.57[$^{+0.06}_{-0.07}$]{} (Table \[fitdata\]).
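As a quick arithmetic cross-check of the comparison just quoted, the snippet below (illustrative only, with all numbers taken from the text and from Table \[fitdata\]) recomputes the $\dot{M}/v_\infty$ ratios for the two isolated O stars and prints them next to our fitted value.

```python
# Mass-loss rates (Msun/yr) and terminal velocities (km/s) quoted above for the
# two isolated O stars, and our best fit ratio for Cen X-3 (Table [fitdata]).
isolated = {"O6.5Iaf": (6e-6, 2200.0), "O6.5III(f)": (1.5e-6, 2500.0)}
cen_x3_fit = 1.56e-9    # Msun/yr per (km/s)

for name, (mdot, v_inf) in isolated.items():
    print(f"{name:12s} Mdot/v_inf = {mdot / v_inf:.2e} Msun/yr/(km/s)")
print(f"{'Cen X-3 fit':12s} Mdot/v_inf = {cen_x3_fit:.2e} Msun/yr/(km/s)")
```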
Therefore, in our model, the best fit wind parameters are roughly consistent with those of the normal radiatively driven winds in isolated massive stars. Considering that the wind is highly ionized and the radiation driving mechanism which governs the structure of the wind in isolated massive stars cannot function, this result is very surprising. To within an order of magnitude or so, the total emission measure in a smooth wind is given by $E \sim (\dot{M}/v_{\infty})^2/(m_{\rm p}^2 R_*)$. Therefore, to the extent that the companion stars in HMXBs are roughly the same size, lose mass at roughly the same rate, and produce smooth winds with roughly the same terminal velocities, the total emission measures should be approximately equal. Numerically, the above estimate gives, for typical parameters, $E \sim 10^{59}~\rm{cm^{-3}}$. (Note that this estimate applies equally well to isolated early-type stars.) Clumping in the wind, however, will tend to increase the emission measure. On the other hand, clumping also tends to decrease the local value of $\xi$, in effect, removing gas from the high-$\xi$ end of the DEM distribution and adding gas to the lower-$\xi$ end. Thus clumping reduces the wind’s X-ray recombination line emission, and increases its fluorescence emission. We have argued that the Cen X-3 wind is smooth. @sak99 have shown that the wind in Vela X-1 is highly clumped. In Figure \[vc\_logdem\], we compare the DEM distribution derived here for Cen X-3 with that derived, also using X-ray recombination, by @sak99 for Vela X-1. The value of $\dot{M}/v_{\infty}$ we have derived here for the wind of Cen X-3 is comparable to the value for Vela X-1 derived by a number of methods [@dup80; @sat86; @sak99]. As one caveat, we point out that for Vela X-1 the contribution from fluorescing material, which would cause an upturn in its DEM curve below $\log \xi \approx 1.5$, is not included in the plot.[^1] Above $\log \xi=2.0$, however, the differences are real, and are quite striking. The relatively small DEM magnitudes in Vela X-1 are a consequence of the fact that most of the mass in the wind is “locked up” in clumps of high density, hence, low $\xi$. For example, near $\log \xi=3.0$, where lines from He-like and H-like iron are produced, the DEM magnitudes vary by $\sim100$, which means that these lines are $\sim100$ times more luminous in Cen X-3 than in Vela X-1. This does not translate into excessively large line equivalent widths in Cen X-3, however, since its X-ray luminosity is also roughly 100 times higher. With this comparison arises the question as to what conditions are required in order to produce and/or destroy clumps in X-ray irradiated winds, a subject beyond the scope of this paper, but possibly one that bears on the nature of the wind driving force. The absence of substantial clumping in Cen X-3, as inferred from the modest phase variations of the iron K$\alpha$ equivalent width, is demonstrated by, and is consistent with, the large DEM magnitudes for $\log \xi > 2$. It has been suggested [@sak99] that the existence of a clumped wind component in Vela X-1 allows normal radiative driving by the UV field of the companion. Clearly, that is disallowed in the case of Cen X-3. We therefore conclude that the wind is most likely driven by X-ray heating of the illuminated surface of the companion star as proposed by @day93a. We look forward to [*Chandra*]{} observations of this system. 
High resolution spectroscopic data will allow us to measure independently the lines of the helium-like 2$\rightarrow$1 triplets and therefore determine the ionization mechanism directly [@lie99]. We will also be able to measure much smaller velocities (of order 100kms$^{-1}$) and therefore be able to set much tighter constraints on the mass-loss rate of the companion and the dynamics of the wind. Differential Emission Measure {#app:dem} ============================= We have defined the emission measure as $\int n_{\rm e}^2 {d}V$. We now define the ionization parameter limited emission measure as: $$\label{deflem} E(\xi)\equiv\int_{V(\xi^\prime\leq\xi)} n_{\rm e}^2 {d}V,$$ i.e., the emission measure in the volume in which the ionization parameter is less than some value. The linear differential emission measure is then $$\label{defdem} \frac{{d}E(\xi)}{{d}\xi}=\lim_{\delta\xi\to 0} \frac{E(\xi +\delta \xi)-E(\xi)}{\delta\xi} =\frac{1}{\delta \xi}\int_{V(\xi\leq\xi^\prime\leq\xi+\delta\xi)} n_{\rm e}^2 {d}V.$$ This infinitesimal volume is the space between two surfaces, each of which is defined by a single value of the ionization parameter. The infinitesimal distance between the surfaces is $\delta\xi/|\nabla\xi|$. Therefore, $$\frac{{d}E(\xi)}{{d}\xi}=\int_{S(\xi)}\frac{n_{\rm e}^2}{|\nabla \xi|}\,{d}S.$$ where $S(\xi)$ and $dS$ specify the integral over the surface specified by the ionization parameter $\xi$. We define the differential emission measure $$\label{dems} D(\xi)\equiv\frac{{d}E(\xi)}{{d}\log\xi}= \int_{S(\xi)}\frac{n_{\rm e}^2}{|\nabla \log\xi|}\,{d}S.$$ Suppose we have a model for the density distribution such that the density scales linearly with the parameter $\eta$. Consider a change of parameters $\eta\to \eta^\prime$ and $L\to L^\prime$. The surface defined by $\xi$ for the unprimed parameters is the same surface defined by $$\xi^\prime=\frac{L^\prime \eta}{L\eta^\prime}\xi.$$ Therefore, $D^\prime(\xi^\prime)$ can be related to $D(\xi)$ by transforming the integrand in Equation \[dems\]. Under the parameter change, $\log\xi$ differs from $\log\xi^\prime$ by an additive constant and so $|\nabla\log\xi|=|\nabla\log\xi^\prime|$. However, $n_{\rm e}^2$ is modified by the factor $(\eta^\prime/\eta)^2$. Therefore, $$D^\prime(\xi^\prime)=\left(\frac{\eta^\prime}{\eta}\right)^2D(\xi).$$ This is Equation \[dem\_scal\]. We thank Chris Mauche and Daniel Proga for helpful discussions and careful reading of the manuscript. We also thank John Blondin for helpful discussions. This research has made use of data obtained through the High Energy Astrophysics Science Archive Research Center Online Service, provided by the NASA/Goddard Space Flight Center. This research has made use of NASA’s Astrophysics Data System Abstract Service. D. A. L. was supported in part by NASA Long Term Space Astrophysics grant S-92654-F. Work at LLNL was performed under the auspices of the U. S. Department of Energy by University of California Lawrence Livermore National Laboratory under Contract W-7405-Eng-48. M. S. was supported under NASA grant NAG 5-7737. 
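As a numerical illustration of the scaling relation derived in the appendix above (Equation \[dem\_scal\]), the following sketch computes the DEM of a toy spherically symmetric wind for two parameter choices $(\eta, L)$ and $(\eta^\prime, L^\prime)$ and checks that the two distributions are related by the factor $(\eta^\prime/\eta)^2$ after the corresponding shift in $\log\xi$. The density profile used here is purely illustrative (it loosely mimics a $\beta=0.5$ velocity law) and is not our wind model.

```python
import numpy as np

def dem(eta, L, n_bins=60):
    # Toy profile n_e(r) = eta * (1 - R_star/r)^(-1/2) / r^2 in arbitrary units.
    R_star = 1.0
    r = np.linspace(1.01 * R_star, 50.0 * R_star, 200000)
    dr = r[1] - r[0]
    n_e = eta * (1.0 - R_star / r) ** -0.5 / r**2
    xi = L / (n_e * r**2)
    dE = n_e**2 * 4.0 * np.pi * r**2 * dr                # emission measure per shell
    edges = np.linspace(np.log10(xi.min()), np.log10(xi.max()), n_bins + 1)
    hist, edges = np.histogram(np.log10(xi), bins=edges, weights=dE)
    return 0.5 * (edges[1:] + edges[:-1]), hist / np.diff(edges)   # D(xi) = dE/dlog(xi)

log_xi, D = dem(eta=1.0, L=1.0)
log_xi2, D2 = dem(eta=3.0, L=2.0)

# Prediction: D'(xi') = (eta'/eta)^2 D(xi), with log10(xi') = log10(xi) + log10(L'*eta/(L*eta'))
shift = np.log10(2.0 * 1.0 / (1.0 * 3.0))
D2_at_shifted = np.interp(log_xi + shift, log_xi2, D2)
print(np.allclose(D2_at_shifted[2:-2], 9.0 * D[2:-2], rtol=1e-2))  # True
```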
[lccc]{} $N_{\rm H1}$ ($10^{22}$cm$^{-2}$) & 0.96[$^{+0.03}_{-0.02}$]{} & 0.91$\pm$0.02 & 1.03$\pm$0.04\ $E$ (10$^{58}$cm$^{-3}$)($d$/10 kpc)$^{-2}$ & 3.62[$^{+0.06}_{-0.08}$]{} & 0.17[$^{+0.09}_{-0.04}$]{} & 2.58[$^{+0.22}_{-0.27}$]{}\ $\log\xi$ & & & 3.19[$^{+0.05}_{-0.07}$]{}\ $kT$ (keV) & 7.5[$^{+0.4}_{-0.5}$]{} & 8.9$\pm$0.5 & 0.41$\pm$0.01\ $Z/Z_\sun$ & 1(fixed) & 39[$^{+112}_{-\phn16}$]{} & 1(fixed)\ $\alpha_1$ & 1.5(fixed) & 1.5(fixed) & 1.50$\pm$0.09\ $I_{\rm pl1}$ (10$^{-3}$ s$^{-1}$cm$^{-2}$keV$^{-1}$) & $<0.11$ & 5.6(fixed) & 6.7$\pm$0.6\ $n_{\rm H2}$ ($10^{22}$cm$^{-2}$) & 71[$^{+10}_{-11}$]{} & 74$\pm$14 & 50[$^{+11}_{-20}$]{}\ $\epsilon_{\rm line}$ (keV) & 6.41[$^{+0.018}_{-0.020}$]{} & 6.393[$^{+0.015}_{-0.017}$]{} & 6.388[$^{+0.020}_{-0.015}$]{}\ $I_{\rm line}$(10$^{-4}$phcm$^{-2}$ s$^{-1}$) & 3.8[$^{+0.9}_{-0.8}$]{} & 4.8[$^{+1.4}_{-0.9}$]{} & 3.3[$^{+1.2}_{-0.6}$]{}\ $\alpha_2$ & 1.5$\pm$0.4 & 1.6[$^{+0.5}_{-0.6}$]{} & 1.5[$^{+0.7}_{-0.4}$]{}\ $I_{\rm pl2}$ (10$^{-3}$ph s$^{-1}$cm$^{-2}$) & 39[$^{+57}_{-25}$]{} & 11[$^{+18}_{-\phn7}$]{} & 17[$^{+79}_{-10}$]{}\ $\chi^2$/d.o.f. & 176/170 & 146/170 & 176/169\ probability & 35% & 90% & 33%\ \[io\_mod\_fit\] [llc]{} $R_\star$(companion radius) & 11.8 $R_\sun$ & a\ $a$(orbital separation) & 19.2 $R_\sun$ & a\ $i$(inclination angle) & 70.2& a\ Distance & 10 kpc & b\ \[syspars\] [lccccccccc]{} $N_{\rm H1}$ ($10^{22}$cm$^{-2}$) & 0.95$\pm$0.08 & 0.97[$^{+0.02}_{-0.04}$]{} & 1.00[$^{+0.02}_{-0.04}$]{} & 0.94[$^{+0.07}_{-0.08}$]{} & & 0.86$\pm0.08$ & 0.94[$^{+0.05}_{-0.04}$]{} & 1.03$\pm{0.04}$ & 0.87[$^{+0.07}_{-0.08}$]{}\ $\alpha_1$ & 0.59[$^{+0.17}_{-0.15}$]{} & 0.96[$^{+0.09}_{-0.06}$]{} & 1.42[$^{+0.11}_{-0.04}$]{} & -0.18[$^{+0.07}_{-0.04}$]{} & & 0.56[$^{+0.16}_{-0.14}$]{} & 0.96[$^{+0.10}_{-0.06}$]{} & 1.41[$^{+0.12}_{-0.05}$]{} & -0.19[$^{+0.03}_{-0.07}$]{}\ $I_{\rm pl1}$ (10$^{-3}$phcm$^{-2}$ s$^{-1}$keV$^{-1}$) & 17.4[$^{+3.0}_{-2.4}$]{} & 7.3[$^{+0.8}_{-0.5}$]{} & 5.8[$^{+0.7}_{-0.5}$]{} & 3.1[$^{+0.4}_{-0.3}$]{} & & 17.2[$^{+2.5}_{-2.3}$]{} & 7.3[$^{+0.9}_{-0.2}$]{} & 5.5[$^{+0.7}_{-0.2}$]{} & 3.09[$^{+0.16}_{-0.27}$]{}\ $N_{\rm H2}$ ($10^{22}$cm$^{-2}$) & 35[$^{+8}_{-6}$]{} & 140[$^{+27}_{-32}$]{} & 50[$^{+\phn8}_{-14}$]{} & 183[$^{+35}_{-45}$]{} & & 38[$^{+9}_{-7}$]{} & 154[$^{+28}_{-36}$]{} & 46[$^{+15}_{-13}$]{} & 197[$^{+21}_{-15}$]{}\ $\epsilon_{\rm line}$ (keV) & 6.374[$^{+0.023}_{-0.027}$]{} & 6.371[$^{+0.016}_{-0.022}$]{} & 6.391[$^{+0.017}_{-0.015}$]{} & 6.390$\pm$0.020 & & 6.376[$^{+0.024}_{-0.028}$]{} & 6.371[$^{+0.017}_{-0.022}$]{} & 6.392[$^{+0.016}_{-0.015}$]{} & 6.391[$^{+0.020}_{-0.021}$]{}\ $I_{\rm line}$(10$^{-4}$phcm$^{-2}$ s$^{-1}$) & 23$\pm$5 & 40[$^{+25}_{-13}$]{} & 3.3[$^{+0.7}_{-0.6}$]{} & 2.2[$^{+1.8}_{-0.5}$]{}${\times10^{2}}$ & & 24$\pm$5 & 49[$^{+18}_{-15}$]{} & 3.2[$^{+0.8}_{-0.6}$]{} & 27[$^{+12}_{-\phn6}$]{}\ $\alpha_2$ & 1.5[$^{+0.5}_{-0.3}$]{} & 3.3[$^{+0.6}_{-0.9}$]{} & 1.5[$^{+0.4}_{-0.5}$]{} & 7.5[$^{+0.9}_{-0.4}$]{} & & 1.6[$^{+0.5}_{-0.4}$]{} & 3.6[$^{+0.9}_{-1.0}$]{} & 1.5[$^{+0.5}_{-0.3}$]{} & 8.0[$^{+0.1}_{-1.2}$]{}\ $I_{\rm pl2}$ (10$^{-3}$phcm$^{-2}$s$^{-1}$keV$^{-1}$) & 2.7[$^{+4.3}_{-1.3}$]{}${\times10^{2}}$ & 6.1[$^{+46}_{-\phn5.4}$]{}${\times10^{3}}$ & 20[$^{+35}_{-14}$]{} & 8.3[$^{+620}_{-\phn\phn0.9}$]{}${\times10^{7}}$ & & 3.3[$^{+5.8}_{-1.9}$]{}${\times10^{2}}$ & 1.4[$^{+6.1}_{-1.3}$]{}${\times10^{3}}$ & 18[$^{+39}_{-13}$]{} & 2.6[$^{+190}_{-\phn\phn1.5}$]{}${\times10^{8}}$\ $R_{\rm in} (R_\sun)$ & & &\ $\beta$ & & &\ $Lv_\infty/\dot{M}$ & & &\ 
$\dot{M}/v_\infty$\[$d$/(10 kpc)\]$^{-1}$ & & &\ $L$\[$d$/(10 kpc)\]$^{-1}$ & & &\ $\chi^2$ & & &\ \[fitdata\] [lccccc]{} & 1.022 & 4.7 & 2.7 & 1.9 & 3.0\ & 1.472 & 1.7 & 1.0 & 0.7 & 1.0\ & 2.006 & 2.0 & 1.3 & 1.0 & 1.4\ & 2.621 & 1.7 & 1.2 & 0.9 & 1.2\ & 6.667 & 4.9 & 3.6 & 3.2 & 3.6\ & 6.966 & 2.8 & 2.2 & 2.0 & 2.3\ \[tab:line\_fluxes\] ![image](f1.ps){width="2.5in"} [Figure \[ecl\_mekal\]]{} ![image](f2.ps){width="2.5in"} [Figure \[ecl\_mekalv\]]{} ![image](f3.ps){width="2.5in"} [Figure \[ecl\_xi\]]{} ![image](f4.ps){width="3.0in"} [Figure \[spider\]]{} ![image](f5.ps){width="3.0in"} [Figure \[star\_dems\]]{} ![image](f6.ps){width="3.0in"} [Figure \[spider\_disk\]]{} ![image](f7.ps){width="3.0in"} [Figure \[disk\_dems\]]{} --------------------------------- --------------------------------- ![image](f8a.ps){width="2.5in"} ![image](f8b.ps){width="2.5in"} ![image](f8c.ps){width="2.5in"} ![image](f8d.ps){width="2.5in"} --------------------------------- --------------------------------- [Figure \[four\_fit\]a,b,c,d]{} ![image](f9.ps){width="3.0in"} [Figure \[ecl\_unconvolved\]]{} ![image](f10.ps){width="3.0in"} [Figure \[diskchi\]]{} ![image](f11.ps){width="4.0in"} [Figure \[fig:line\_fluxes\]]{} ![image](f12.ps){width="3.0in"} [Figure \[vc\_logdem\]]{} [^1]: Mapping the DEM distribution for low-$\xi$ material based upon X-ray spectra requires much higher spectral resolution, so that fluorescent line complexes can be resolved into their respective charge states.
--- abstract: 'We present nonlinear photonic circuit models for constructing programmable linear transformations and use these to realize a coherent Perceptron, i.e., an all-optical linear classifier capable of learning the classification boundary iteratively from training data through a coherent feedback rule. Through extensive semi-classical stochastic simulations we demonstrate that the device nearly attains the theoretical error bound for a model classification problem.' author: - | Nikolas Tezak[^1], Hideo Mabuchi\ Edward L. Ginzton Laboratory, Stanford University\ Stanford, CA 94305, USA bibliography: - 'Remote.bib' title: 'A Coherent Perceptron for All-Optical Learning' --- [**[Keywords:]{}** Optical information processing, Coherent Feedback, Machine Learning, Photonic Circuits, Nonlinear optics, Perceptron]{} Introduction {#sec:introduction} ============ Recent progress in integrated nanophotonic engineering [@Kippenberg2004KerrNonlinearity; @Haye2007Optical; @Levy2005Nanomagnetic; @Razzari2009CmosCompatible; @Englund2007Controlling; @Fushman2008Controlled; @Nozaki2010SubFemtojoule; @Cohen2014Phonon; @Vandoorne2014Experimental; @Santori2014Quantum] has motivated follow-up proposals [@Mabuchi2011Nonlinear; @Pavlichin2013Photonic] of nanophotonic circuits for all-optical information processing. While most of these focus on implementations of digital logic, we present here an approach to all-optical analog, *neuromorphic* computation and propose design schemes for a set of devices to be used as building blocks for large scale circuits. Optical computation has been a long-time goal [@Abraham1982Optical; @Smith1984Optical], with research interest surging regularly after new engineering capabilities are attained [@Miller1997Physical; @Miller2010Are], but so far the parallel progress and momentum of CMOS based integrated electronics has outperformed all-optical devices. In recent years we have seen rapid progress in the domain of machine learning, and artificial intelligence in general. Although most current ‘big data’-applications are realized on digital computing architectures, there is now an increasing amount of computation done in specialized hardware such as GPUs. Specialized analog computational devices for solving specific subproblems more efficiently than possible with either GPUs or general purpose computers are being considered or already implemented by companies such as IBM, Google and HP and in academia, as well. [@Ananthanarayanan2009Cat; @Neven2014Hardware; @Strukov2008Missing; @Wang2013Coherent] Specifically in the field of neuromorphic computation, there has been impressive progress on CMOS based analog computation platforms [@Choudhary2012Silicon; @Cassidy2014RealTime]. Several neuromorphic approaches to use complex nonlinear optical systems for machine learning applications have recently been proposed [@Duport2012AllOptical; @Vandoorne2011Parallel; @Vaerenbergh2012Cascadable; @Dejonckheere2014AllOptical] and some initial schemes have been implemented [@Larger2012Photonic; @Vandoorne2014Experimental]. So far, however, all of these ‘optical reservoir computers’ have still required digital computers to prepare the inputs and process the output of these devices with the optical systems only being employed as static nonlinear mappings for dimensional lifting to a high dimensional feature space [@Cover1965Geometrical], in which one then applies straightforward linear regression or classification for learning an input-output map. 
[@Verstraeten2010Reservoir] In this work, we address how the final stage of such a system, i.e., the linear classifier could be realized all-optically. We provide a universal scheme, i.e., independent of which particular kind of optical nonlinearity is employed, for constructing *tunable* all-optical, phase-sensitive amplifiers and then outline how these can be combined with self-oscillating systems to realize an optical amplifier with *programmable* gain, i.e., where the gain can be set once and is then fixed subsequently. Using these as building blocks we construct an all-optical *perceptron* [@Rosenblatt1957PerceptronA; @Rosenblatt1958Perceptron], a system that can classify multi-dimensional input data and, using pre-classified training data learn the correct classification boundary ‘on-line’, i.e., incrementally. The perceptron can be seen as a highly simplified model of a neuron. While the idea of all-optical neural networks has been proposed before [@Miller1993Novel] and an impressive scheme using electronic, measurement-based feedback for spiking optical signals has been realized [@Fok2013Pulse], to our knowledge, we offer the first complete description for how the synaptic weights can be stored in an optical memory and programmed via feedback. The physical models underlying the employed circuit components are high intrinsic-$Q$ optical resonators with strong optical nonlinearities. For theoretical simplicity we assume resonators with either a $\chi_2$ or a $\chi_3$ nonlinearity, but the design can be adapted to depend on only one of these two or alternative nonlinearities such as those based on free carrier effects or optomechanical interactions. The strength of the optical nonlinearity and the achievable $Q$-factors of the optical resonators determine the overall power scale and rate at which a real physical device could operate. Both a stronger nonlinearity and higher $Q$ allow operating at lower overall power. We present numerical simulations of the system dynamics based on the semi-classical Wigner-approximation to the full coherent quantum dynamics presented in [@Santori2014Quantum]. For photon numbers as low as ($\sim 10-20$) this approximation allows us to accurately model the effect of optical quantum shot noise even in large-scale circuits. In the limit of both very high $Q$ and very strong nonlinearity, we expect quantum effects to become significant as entanglement can arise between the field modes of physically separated resonators. In the appendix, we provide full quantum models for all basic components of our circuit. The possibility of a quantum speedup is being addressed in ongoing work. Recently, D-Wave Systems has generated a lot of interest in their own superconducting qubit based quantum annealer. Although the exact benefits of quantum dynamics in their machines has not been conclusively established [@Boixo2014Evidence], recent results analyzing the role of tunneling in a quantum annealer [@Boixo2014Computational] are intriguing and suggest that quantum effects can be harnessed in computational devices that are not unitary quantum computers. The Perceptron algorithm {#sub:review_of_the_perceptron_algorithm} ------------------------ The perceptron is a machine learning algorithm that maps an input $x\in{\mathbb{R}}^n$ to a single binary class label $\hat{y}_w[x]\in\{0, 1\}$. Binary classifiers generally operate by dividing the input space into two disjoint sets and identifying these with the class labels. 
The perceptron is a linear classifier, meaning that the surface separating the two class label sets is a linear space, a hyperplane, and its output is computed simply by applying a step function $\theta(u):=\mathbbm{1}_{u \ge 0}$ to the inner product of a single data point $x$ with a fixed *weight vector* $w$: $$\begin{aligned} \hat{y}_w[x] := \theta(w^Tx) = \begin{cases} 1 \text{ for } w^Tx \ge 0, \\ 0 \text{ otherwise.}\end{cases}\end{aligned}$$ Geometrically, the weight vector $w$ parametrizes the hyperplane $\{z\in{\mathbb{R}}^n:\; w^T z=0\}$ that forms the decision boundary. In the above parametrization the decision boundary always contains the origin $z=0$, but the more general case of an affine decision boundary $\{\tilde{z}\in{\mathbb{R}}^n:\; \tilde{w}^T \tilde{z} = b\}$ can be obtained by extending the input vector by a constant $z = (\tilde{z}^T, 1)^T\in{\mathbb{R}}^{n+1}$ and similarly defining an extended weight vector $w=(\tilde{w}^T, -b)^T$. The perceptron converges in a finite number of steps for all linearly separable problems [@Rosenblatt1957PerceptronA] by randomly iterating over a set of pre-classified training data $\{(y^{(j)},x^{(j)}) \in \{0, 1\} \otimes {\mathbb{R}}^n,\; j=1, 2,\dots, M\}$ and imparting a small weight correction $w\to w + \Delta w$ for each falsely classified training example $x^{(j)}$ $$\begin{aligned} \Delta w = \tilde{\alpha} \left(y^{(j)}-\hat{y}_w[x^{(j)}]\right) x^{(j)}.\label{eq:perceptron_discrete}\end{aligned}$$ The *learning rate* $\tilde{\alpha}>0$ determines the magnitude of the correction applied for each training example. The expression in parentheses can only take on the values $\{ 0, -1, 1\}$ with the zero corresponding to a correctly classified example and the non-zero values corresponding to the two different possible classification errors. Usually there exist many separating hyperplanes for a given linear binary classification problem. The standard perceptron is only guaranteed to find one that works for the training set. It is possible to introduce a notion of optimality to this problem by considering the minimal distance (“margin”) of the training data to the found separating hyperplane. Maximization of this margin naturally leads to the “support vector machine” (SVM) algorithm [@Cortes1995SupportVector]. Although the SVM outperforms the perceptron in many classification tasks it does not lend itself to a hardware implementation as readily because it cannot be trained incrementally. It is this that makes the perceptron algorithm especially suited for a hardware implementation: We can convert the discrete update rule to a differential equation $$\begin{aligned} \dot{w}(t) = \alpha \left\{y(t)-\hat{y}_{w(t)}(t)\right\} x(t), \label{eq:perceptron_continuous}\end{aligned}$$ and then construct a physical system that realizes these dynamics. In this continuous-time version the inputs are piece-wise constant $x(t) = x^{(j_t)},$ $y(t) = y^{(j_t)}$ and take on the same discrete values as above indexed by $j_t := \lceil \frac{t}{\Delta t} \rceil \in \{1,2,\dots, M = \frac{T}{\Delta t}\}.$ The circuit modeling framework {#sub:the_model} ------------------------------ Circuits are fully described via Quantum Hardware Description Language (QHDL) [@Tezak2012Specification] based on Gough and James’ SLH-framework [@Gough2009Series; @Gough2008Quantum]. To carry out numerical simulations for large scale networks, we derive a system of semi-classical Langevin equations based on the Wigner-transformation as described in [@Santori2014Quantum]. 
Note that there is a perfect one-to-one correspondence between nonlinear cavity models expressed via SLH and the Wigner method as long as the nonlinearities involve only oscillator degrees of freedom. There is ongoing research in our group to establish similar results for more general nonlinearities [@Hamerly2015FCM]. Both the Wigner method and the more general SLH framework can be used to model networks of quantum systems where the interconnections are realized through bosonic quantum fields. The SLH framework describes a system interacting with $n$ independent input fields in terms of a unitary scattering matrix $S$ parametrizing direct field scattering, a coupling vector $L=(L_1, L_2, \dots, L_n)^T$ parametrizing how external fields couple into the system and how the system variables couple to the output and a Hamilton operator inducing the internal dynamics. We summarize these objects in a triplet $(S, L, H).$ $L$ and $H$ are sufficient to parametrize any Schrödinger picture simulation of the quantum dynamics, e.g., the master equation for a mixed system state $\rho$ is given by $$\begin{aligned} \label{eq:SLH_master} \dot \rho = - i[H, \rho] + \sum_{j=1}^n \left(L_j \rho L_j^\dagger - \frac12 \{L_j^\dagger L_j, \rho\}\right).\end{aligned}$$ The scattering matrix $S$ is important when composing components into a network. In particular, the input-output relation in the SLH framework is given by $$\begin{aligned} \label{eq:SLH_inout} dA_{\rm out} = S\, dA_{\rm in} + L\,dt,\end{aligned}$$ where the $dA_{\rm in/out,j},\, j=1,2,\dots, n$ are to be understood as quantum stochastic processes whose differentials can be manipulated via a quantum Ito calculus [@Gough2009Series]. The Wigner method provides a simplified, approximate description which is valid when all non-linear resonator modes are in strongly displaced states $\cite{Santori2014Quantum}.$ The simulations presented here were carried out exclusively at energy scales for which the Wigner method is valid, allowing us to scale to much larger system sizes than we could in a full SLH-based quantum simulation. This is because the computational complexity of the Wigner method scales at most quadratically (and in sparsely interconnected systems nearly linearly) with the number of components as opposed to the exponential state space scaling of a quantum mechanical Hilbert space. We nonetheless provide our models in both Wigner-method form and SLH form in anticipation that our component models will also be extremely useful in the full quantum regime. In the Wigner-based formalism, a system is described in terms of time-dependent complex coherent amplitudes ${\alpha}(t)=({\alpha_{1}}(t), {\alpha_{2}}(t),\dots, {\alpha_{m}}(t))^T$ for the internal cavity modes and external inputs ${\beta_{\rm in}}(t) = ({\beta_{{\rm in},{1}}}(t), {\beta_{{\rm in},{2}}}(t), \dots, {\beta_{{\rm in},{n}}}(t))^T$. These amplitudes relate to quantum mechanical expectations as $\langle {\alpha_{j}} \rangle \approx \langle a_j\rangle_{\rm QM},$ where $\langle \cdot \rangle$ denotes the expectation with respect to the Wigner quasi distribution and $\langle \cdot \rangle_{\rm QM}$ a quantum mechanical expectation value. See [@Santori2014Quantum] for the corresponding relations of higher order moments. To simplify the analysis, we exclusively work in a rotating frame with respect to all driving fields. As in the SLH case we define output modes ${\beta_{\rm out}}(t)$ that are algebraically related to the inputs and the internal modes. 
The full dynamics of the internal and external modes are then governed by a multi-dimensional Langevin equation $$\begin{aligned} \dot{{\alpha}}(t) &= \left[\mathbf{A} {\alpha}(t) + \mathbf{a} + A_{\rm NL}({\alpha},t)\right] + \mathbf{B} {\beta_{\rm in}}(t), \label{eq:abnl}\end{aligned}$$ as well as a purely algebraic, linear input-output relationship $$\begin{aligned} {\beta_{\rm out}}(t) &= \left[\mathbf{C} {\alpha}(t) + \mathbf{c}\right] + \mathbf{D}{\beta_{\rm in}}(t) \label{eq:cd-io}.\end{aligned}$$ The complex matrices $\mathbf{A}, \mathbf{B}, \mathbf{C}, \mathbf{D}$ as well as the constant bias input vectors $\mathbf{a}$ and $\mathbf{c}$ parametrize the linear dynamics, whereas the function $A_{\rm NL}({\alpha},t)$ gives the nonlinear contribution to the dynamics of the internal cavity modes. Each input consists of a coherent, deterministic part and a stochastic contribution ${\beta_{{\rm in},{j}}}(t)={\bar{\beta}_{{\rm in},j}}(t) + \eta_j(t)$. The stochastic terms $\eta_j(t) = \eta_{j,1}(t) + i \eta_{j, 2}(t)$ are assumed to be independent complex Gaussian white noise processes with correlation function $\langle \eta_{j,s}(t)\eta_{k,r}(t')\rangle = \frac{1}{4}\delta_{jk}\delta_{sr}\delta(t-t').$ The linearity of the input-output relationship in either framework and in the external degrees of freedom leads to algebraic rules for deriving reduced models for whole circuits of nonlinear optical resonators by concatenating component models and algebraically solving for their interconnections. [@Gough2008Quantum; @Santori2014Quantum] To see the basic component models used in this work see Appendix \[sec:component\_models\]. Netlists for composite components and the whole circuit will be made available at [@Tezak2014PerceptronFiles]. The Coherent Perceptron Circuit {#sec:the_all_optical_perceptron_circuit} =============================== The full perceptron’s circuit is visualized in Figure \[fig:Perceptron\]. ![image](figures/PerceptronHor.pdf){width="12cm"} The input data $x$ to the perceptron circuit is encoded in the real quadrature of $N$ coherent optical inputs. Equation  informs us what circuit elements are required for a hardware implementation by decomposing the necessary operations: 1. Each input $x_j$ is multiplied by a weight $w_j$. 2. The weighted inputs are coherently added. 3. The sum drives a thresholding element to generate the estimated class label $\hat{y}$. 4. In the training phase (input $T=1$) the estimated class label $\hat{y}$ is compared with the true class label (input $Y$) and based on the outcome, feedback is applied to modify the weights $\{w_j\}$. The most crucial element for this circuit is the system that multiplies an input $x_j$ with a programmable weight $w_j$. This not only requires having a linear amplifier with tunable gain, but also a way to encode and store the continuous weights $w_j$. In the following we outline one way how such systems can be constructed from basic nonlinear optical cavity models: Section \[sec:variable\_gain\_amplifiers\] presents an elegant way to construct a phase sensitive linear optical amplifier where the gain can be tuned by changing the amplitude of a bias input. In Section \[sec:encoding\_and\_storing\_the\_gain\] we propose using an above threshold non-degenerate optical parametric amplifier to store a continuous variable in the output phase of the signal (or idler) mode. 
In Section \[sec:programmable\_gain\_amplifier\] these systems are combined to realize an optical amplifier with *programmable* gain, i.e., a control input can program its gain, which then stays constant even after the control has been turned off. Finally, we present a simple model for all-optical switches based on a cavity with two modes that interact via a cross-Kerr-effect in Section \[sec:optical\_switches\]. This element is used both for the feedback logic as well as the thresholding function to generate the class label $\hat{y}$.

Tunable Gain Kerr-amplifier {#sec:variable_gain_amplifiers}
---------------------------

A single mode Kerr-nonlinear resonator driven by an appropriately detuned coherent drive $\epsilon$ can have a strongly nonlinear dependence of the intra-cavity energy on the drive power. When the drive of a single resonator is given by the sum of a constant large bias amplitude and a small signal $\epsilon=\frac{1}{\sqrt{2}} (\epsilon_0 +\delta\epsilon)$, the steady state reflected amplitude is $\epsilon'=\frac{1}{\sqrt{2}} (\eta\epsilon_0 + g_-(\epsilon_0) \delta\epsilon + g_+(\epsilon_0) \delta\epsilon^\ast) +O(\delta\epsilon^2)$, where $|\eta|\le 1$ with equality for the ideal case of negligible intrinsic cavity losses. The small signal thus experiences phase sensitive gain dependent on the bias amplitude and phase. We provide analytic expressions for the gain in Appendix \[ssub:single\_mode\_kerr\]. Placing two identical resonators in the arms of an interferometer allows the signal and bias outputs to be isolated, even if their amplitudes vary, by canceling the scattered bias in one output and the scattered signal in the other (cf. Figure \[fig:figures\_Amplifier\]). This highly symmetric construction, which generalizes to any other optical nonlinearity, ensures that the signal output is linear in $\delta \epsilon$ up to third order[^2]. If the system parameters are well-chosen, the amplifier gain depends very strongly on small variations of the bias amplitude. This allows the gain to be tuned from close to unity to its maximum value, which, for a given waveguide coupling $\kappa$ and Kerr coefficient $\chi$, depends on the drive detuning from the cavity resonance. For Kerr-nonlinear resonators there exists a critical detuning beyond which the system becomes bi-stable and exhibits hysteresis. This can be used for thresholding-type behavior, though, as shown in [@Tait2013Dream], in this case it may be advantageous to reduce the symmetry of the circuit. It is convenient to engineer the relative propagation phases such that at maximum gain, a real quadrature input signal $x\in {\mathbb{R}}$ leads to an amplified output signal $x' = g_{rr}^{\rm{max}}x$ with no imaginary quadrature component (other than noise and higher order contributions). However, for different bias input amplitudes and consequently lower gain values the output will generally feature a linear imaginary quadrature component $x' = \left[g_{rr}(\epsilon_0) + i g_{ir}(\epsilon_0)\right]x$ as well. Figure \[fig:amp\_gain\_bias\] demonstrates this for a particular choice of maximal gain. We note that there exist previous proposals of using nonlinear resonator pairs inside interferometers to achieve desirable input-output behavior [@Tait2013Dream], but to our knowledge, no one has proposed using these for signal/bias isolation and tunable gain. To first order the linearized Kerr model is actually identical to a sub-threshold degenerate OPO model.
This implies that it can be used to generate squeezed light and also that one could replace the Kerr-model by an OPO model. An almost identical circuit, but featuring resonators with additional internal loss equal to the wave-guide coupling[^3] and constantly biased to *dynamic resonance* $\langle |\alpha|^2 \rangle_{\rm{ss}} = -\Delta/\chi$ can be used to realize a *quadrature filter*, i.e., an element that has unity gain for the real quadrature and zero for the imaginary one. Now the quadrature filtered signal still has an imaginary component, but to linear order this only consists of transmitted noise from the additional internal loss. While it would be possible to add one of these downstream of every tunable Kerr amplifier, in our specific application it is more efficient to add just a single one downstream of where the individual amplifier outputs are summed (cf. Section \[sec:thresholder\]). This also reduces the total amount of additional noise introduced to the system. Encoding and Storing the Gain {#sec:encoding_and_storing_the_gain} ----------------------------- In the preceding section we have seen how to realize a tunable gain amplifier, but for programming and *storing* this gain (or equivalently its bias amplitude) an additional component is needed. Although it is straightforward to design a multi-stable system capable of outputting a discrete set of different output powers to be used as the amplifier bias, such schemes would likely require multiple nonlinear resonators and it would be more cumbersome to drive transitions between the output states. An alternative to such schemes is given by systems that have a continuous set of stable states. Recent analysis of continuous time recurrent neural network models trained for complex temporal information processing tasks has revealed multi-dimensional stable attractors in the internal network dynamics that are used to store information over time. [@Sussillo2013Opening] A simple semi-classical nonlinear resonator model to exhibit this is given by a non-degenerate optical parametric oscillator (NOPO) pumped above threshold; for low pump input powers this system allows for parametric amplification of a weak coherent signal (or idler) input. In this case vacuum inputs for the signal and idler lead to outputs with zero expected photon number. Above a critical threshold pump power, however, the system down-converts pump photons into pairs of signal and idler photons. Due to an internal $U(1)$ symmetry of the underlying Hamiltonian (cf. Appendix \[ssub:nopo\_model\]), the signal and idler modes spontaneously select phases that are dependent on each other but independent of the pump phase. This implies that there exists a whole manifold of fix-points related to each other via the symmetry transformation $(\alpha_s, \alpha_i)\to(\alpha_s e^{i\phi}, \alpha_i e^{-i\phi})$, where $\alpha_s$ and $\alpha_i$ are the rotating frame signal and idler mode amplitudes, respectively. Consequently the signal output of an above threshold NOPO lives on a circular manifold (cf Figure \[fig:figures\_PhaseMemory\]). Vacuum shot noise on the inputs leads to phase diffusion with a rate of $\gamma_\Phi = \frac{\kappa}{8n_0}$, where $\kappa$ is the signal and idler line width and $n_0$ is the steady state intra cavity photon number in either mode. We point out that this diffusion rate does not directly depend on the strength of the nonlinearity which only determines how strongly the system must be pumped to achieve a given intra cavity photon number $n_0$. 
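To get a feeling for the storage times this implies, the short sketch below evaluates $\gamma_\Phi$ for a few representative intracavity photon numbers. The assumption that the mean-square phase drift grows as $\langle\Delta\Phi^2(t)\rangle \approx \gamma_\Phi t$ (ignoring the exact $O(1)$ prefactor, which depends on convention) is made here only for the purpose of a rough estimate.

```python
# Rough estimate of how long the NOPO phase memory holds a programmed value.
# Assumes <DeltaPhi^2(t)> ~ gamma_phi * t with gamma_phi = kappa / (8 * n0);
# the order-unity prefactor is a convention-dependent assumption of this sketch.
kappa = 1.0                         # signal/idler linewidth, sets the time unit
for n0 in (10, 100, 1000):          # steady-state intracavity photon number
    gamma_phi = kappa / (8 * n0)
    t_drift = 0.1**2 / gamma_phi    # time to accumulate ~0.1 rad rms phase drift
    print(f"n0 = {n0:5d}: gamma_phi = {gamma_phi:.2e} kappa, t(0.1 rad) ~ {t_drift:.1f} / kappa")
```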
A weak external signal input breaks the symmetry and biases the signal output phase towards the external signal's phase. This allows for changing the programmed phase value. Finally, we note that parametric oscillators can also be realized in materials with vanishing $\chi_2$ nonlinearity. They have been successfully realized via four-wave mixing (i.e., exploiting a $\chi_3$ nonlinearity) in [@Kippenberg2004KerrNonlinearity; @Savchenkov2004Low; @Haye2007Optical] and even in opto-mechanical systems [@Cohen2014Phonon] in which case the idler mode is given by a mechanical degree of freedom. In principle any nonlinear optical system that has a stable limit cycle could be used to store and encode a continuous value in its oscillation phase. Non-degenerate parametric oscillators stand out because of their theoretical simplicity allowing for a ‘static’ analysis inside a rotating frame.

Programmable Gain Amplifier {#sec:programmable_gain_amplifier}
---------------------------

Combining the circuits described in the preceding sections allows us to construct a fully programmable phase sensitive amplifier. In Figure \[fig:amp\_gain\_bias\] we see that there exists a particular bias amplitude at which the real to real quadrature gain vanishes $g_{rr}(\epsilon_{0}^{\rm{min}}) = 0$. We combine the NOPO signal output $\xi=r e^{i\Phi}$ with a constant phase bias input $\xi_0$ (cf. Figure \[fig:phase\_memory\_bias\]) on a beamsplitter such that the outputs vary between zero gain and the maximal gain bias values $\left|\frac{\xi_0 \pm r e^{i\Phi}}{\sqrt{2}}\right| \in [\epsilon_{0}^{\rm{min}}, \epsilon_{0}^{\rm{max}}]$. To realize both positive and negative gain, we use the second output of that beamsplitter to bias another tunable amplifier. The two amplifiers are always biased oppositely, meaning that one will have maximal gain when the other's gain vanishes and vice versa. The overall input signal is split and sent through both amplifiers and then re-combined with a relative $\pi$ phase shift. This complementary setup leads to an overall effective gain tunable within $G_{rr}(\Phi) \in [-\frac{g_{rr}^{\rm max}}{2}, \frac{g_{rr}^{\rm max}}{2}]$ (cf. Figure \[fig:gain\_vs\_phase\]). In Figure \[fig:synapse\] we present both the complementary pair of amplifiers and the NOPO used for storing the bias as well as some logic elements (described in Section \[sec:optical\_switches\]) used for implementing conditional training feedback. We call the full circuit a synapse because it features programmable gain and implements the perceptron's conditional weight update rule.

![Synapse circuit composed of a programmable amplifier and feedback logic (cf. Section \[sec:optical\_switches\]) that implements the perceptron learning feedback for a single weight. The upper amplifier, when biased optimally, leads to positive gain, whereas the lower amplifier leads to negative gain due to the additional $\pi$ phase shift.[]{data-label="fig:synapse"}](figures/Synapse.pdf){width="10cm"}

The resulting synapse model is quite complex and certainly not optimized for a minimal component number, but rather for ease of theoretical analysis. A more resource efficient programmable amplifier could easily be implemented using just two or three nonlinear resonators. E.g., inspecting the real to imaginary quadrature gain $g_{ir}(\epsilon_0)$ in Figure \[fig:amp\_gain\_bias\] we see that close to $\epsilon_0^{\rm{max}}$ it passes through zero fairly linearly and with an almost symmetric range.
This indicates that we could use a single tunable amplifier to realize both positive and negative gain. Using only a single resonator for the tunable amplifier could work as well, but it would require careful interferometric bias cancellation and more tedious upfront analysis. We do not think it is feasible to use just a single resonator for both the parametric oscillator and the amplifier because any amplified input signal would have an undesirable back-action on the oscillator phase. Optical Switches {#sec:optical_switches} ---------------- The feedback to the perceptron weights (cf. Equation ) is conditional on the binary values of the given and estimated class labels $y$ and $\hat{y}$, respectively. The logic necessary for implementing this can be realized by means of all-optical switches. There have been various proposals and demonstrations [@Poustie2000Demonstration; @Nozaki2010SubFemtojoule] of all-optical gates/switches and quantum optical switches [@Milburn1989Quantum]. The model that we assume here (cf. Figure \[fig:figures\_Fredkin\]) is to use two different modes of a resonator that interact via a cross-Kerr-effect, i.e., power in the control mode leads to a refractive index shift (or detuning) for the signal mode. The index shift translates to a control mode dependent phase shift of a scattered signal field yielding a controlled optical phase modulator. Wrapping this phase modulator in a Mach-Zehnder interferometer then realizes a controlled switch: If the control mode input is in one of two different states $|\xi| \in {0, \xi_0}$, the signal inputs are either passed through or switched. This operation is often referred to as a *controlled swap* or Fredkin gate [@Fredkin1982Conservative] which was originally proposed for realizing reversible computation. This dispersive model has the advantage that the control input signal can be reused. Note that at control input amplitudes significantly different from the two control levels the outputs are coherent mixtures of the inputs, i.e., the switch then realizes a tunable beamsplitter. Finally, we point out that using two different (frequency non-degenerate) resonator modes has the advantage that the interaction between control and signal inputs is phase insensitive which greatly simplifies the design and analysis of cascaded networks of such switches. Generation of the Estimated Label {#sec:thresholder} --------------------------------- The estimated classifier label $\hat{y}$ should be a step function applied to the inner product of the weight vector and the input. In the preceding sections we have shown how individual inputs $x_j$ can be amplified with programmable gain to give $\tilde{s}_j = \tilde{G}(\Phi_j)x_j$, thus realizing the individual contributions to the inner product. These are then summed on an $n$-port beamsplitter that has an output which gives the uniformly weighted sum $\tilde{s} := \frac{1}{\sqrt{N}}\sum_{k=1}^N \tilde{G}(\Phi_k)x_k$. The gain factors $\tilde{G}(\Phi_k) = G_{rr}(\Phi_k) + i G_{ir}(\Phi_k)$ generally have an unwanted imaginary part which we subtract by passing the summed output through a *quadrature filter* circuit (cf. the last paragraph of Section \[sec:variable\_gain\_amplifiers\]), which has unit gain for the real quadrature and zero gain for the imaginary quadrature leading to an overall output $s = {\rm Re} \, \tilde{s} = \frac{1}{\sqrt{N}}\sum_{k=1}^N G_{rr}(\Phi_k)x_k$. The thresholding circuit should now produce a high output if $s>0$ and a zero output if $s \le 0$. 
It turns out that the optical Fredkin gate described in the previous section already works almost as a two mode thresholder, where the control input leads to a step-like response in the signal outputs: A constant signal input amplitude which encodes the logical ‘1’ state is applied to one of the signal inputs. When the control input amplitude is varied from zero to $\xi_0$, the signal output turns on fairly abruptly at some threshold $\xi_{\rm th} < \xi_0$. To make the thresholding phase sensitive, the control input is given by the sum of $s$ and a constant offset $s_0$ that provides a phase reference: $c = \frac{1}{\sqrt{2}}(s + s_0)$. For a Fredkin gate operated with continuous control inputs, the signal output is almost zero for a considerable range of small control inputs. However, for very high control inputs, i.e., significantly above $\xi_0$, the signal output decreases instead of staying constant as would be desirable for a step-function-like profile. We found that this issue can be addressed by transmitting the control input through a single mode Kerr-nonlinear cavity, with resonance frequency chosen such that the transmission gain $|c'/c|$ is peaked close to $c'=\xi_0$. For larger input amplitudes, the transmission gain is lower (although $|c'|$ still grows monotonically with $|c|$), which extends the input range over which the subsequent Fredkin gate stays in the on-state.

Results {#sec:results}
=======

The perceptron's SDEs were simulated using a newly developed custom software package named QHDLJ [@Tezak2014Qhdlj] implemented in Julia [@Bezanson2014Julia], which allows for dynamic compilation of circuit models to LLVM [@Lattner2004Llvm] bytecode that runs at speeds comparable to C/C++. All individual simulations can be carried out on a laptop, but the results in Figure \[fig:error\_rate\_gda\_opt\] were obtained by averaging over the results of 100 stochastic simulations run on an HP ProLiant server with 80 cores. The current version of QHDLJ uses one process per trajectory, but the code could easily be vectorized. In Figure \[fig:classification\_trajectory\] we present an example of a single application of an $N=8$ perceptron including both a learning stage with pre-labeled training data and a classification testing stage in which the perceptron's estimated class labels are compared with their correct values. The data to be classified here are sampled from a different $8$-dimensional Gaussian distribution for each class label with their mean vectors separated by a distance $ \| \mu_1 - \mu_0 \|_2 / \sigma = 2$ relative to the standard deviation of both individual clusters. For each sample the input was held constant for a duration $\Delta t = 2 \kappa^{-1}$ where $\kappa$ is the NOPO signal and idler line width. The perceptron was first trained with $M_{\rm train}=100$ training examples and subsequently tested on $M_{\rm test}=100$ test examples with the learning feedback turned off. ![Single trajectory divided into a training interval $0 \le t \le M_{\rm train}\Delta t$ during which the learning feedback is active and a test interval $M_{\rm train}\Delta t < t \le (M_{\rm train}+M_{\rm test})\Delta t$. During training and testing, respectively, the system is driven by $M_{\rm train} = M_{\rm test} = 100$ separate input states which are held constant for an interval $\Delta t = 2 \kappa^{-1}$.
The estimated class label is discretized by averaging the output intensity over each input interval, dividing the result by the intensity $|\zeta|^2$ corresponding to the logical ‘1’ output state and rounding. The upper panel compares the correct class label $y$ (green) with the estimated class label $\hat{y}$ (black) during training and testing, respectively. The area between them indicates errors or at least lag of the estimator and is shaded in light red. The second panel shows occurrences of classification errors (red vertical bars). The slight shading near the beginning and the end of the trajectory in the second panel visualizes the segments corresponding to the upper left and right panel, respectively. The third panel shows the learned linear amplitude gains for each synapse. After the learning feedback is turned off at $t=M_{\rm train}\Delta t$, they diffuse slightly due to optical shot noise.[]{data-label="fig:classification_trajectory"}](figures/classification_trajectory.pdf){width="95.00000%"} In Figure \[fig:classification\_boundaries\] we visualize linear projections of the testing data as well as the estimated classification boundaries. We can see that the classifier performs very well far away from the decision boundary. Close to the decision boundary there are some misclassified examples. ![Projection of training data and classification boundaries. The data has been rotated such that the $s_1$ coordinate lines up with the learned normal vector of the separating hyperplane. Incorrectly classified data are plotted in red. The faint blue (red) lines visualize the evolution of the classifier boundary during training (testing).[]{data-label="fig:classification_boundaries"}](figures/class_boundaries.pdf){width="\textwidth"} We proceed to compare the performance of the classifier to the theoretically optimal performance achievable by any classifier and with the optimal classifier for this scenario, Gaussian Discriminant Analysis (GDA) [@Fisher1936Use; @Mclachlan1992Discriminant], implemented in software. Using the identical perceptron model as above and an identical training/testing procedure, we estimate the error rate $p_{\rm err} = \mathbb{P}[y\ne \hat{y}]$ of the trained perceptron as a function of the cluster separation $ \| \mu_1 - \mu_0 \|_2 / \sigma$. The results are presented in Figure \[fig:error\_rate\_GDA\]. Identically distributed training and testing data was used to evaluate the performance of the GDA algorithm and both results are compared to the theoretically optimal error rate for this discrimination task, which can be computed analytically to be $p_{\rm err,\, optim.} = \frac{1}{2}{\rm erfc}\left(\frac{\| \mu_1 - \mu_0 \|_2}{\sqrt{8}\sigma}\right),$ where ${\rm erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-u^2} {du}$ is the complementary error function. We see that the all-optical perceptron’s performance is comparable to GDA’s performance for this problem and both algorithms attain performance close to the theoretical optimum. The learning rate of the perceptron is determined by two things, the overall strength of the learning feedback as well as the time for which each example is presented to the circuit. In Figure \[fig:error\_rate\_learning\_rate\] we plot the estimated error rate for varying feedback strength and duration. As can be expected intuitively, we find that there are trade-offs between speed (smaller $\Delta t$ preferable) and energy consumption (smaller $\alpha$ preferable). 
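For a purely software point of comparison, the sketch below trains a textbook perceptron on the same kind of Gaussian-cluster task, fits a Gaussian discriminant (shared covariance) on the same training set, and prints both test errors next to the theoretical optimum quoted above. It makes no attempt to model the optical circuit, its shot noise, or the continuous-time feedback; the learning rate, sample sizes, and random seed are arbitrary choices.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(0)
n_dim, sigma, d = 8, 1.0, 2.0                    # cluster separation |mu1 - mu0| / sigma = 2
mu0 = np.zeros(n_dim)
mu1 = mu0 + d * sigma / np.sqrt(n_dim)           # separation spread evenly over dimensions

def sample(m):
    y = rng.integers(0, 2, m)
    x = rng.normal(0.0, sigma, (m, n_dim)) + np.where(y[:, None] == 1, mu1, mu0)
    return np.hstack([x, np.ones((m, 1))]), y    # constant column for the bias weight

Xtr, ytr = sample(100)
Xte, yte = sample(100000)

# One pass of the perceptron update rule over the training data
w = np.zeros(n_dim + 1)
for j in rng.permutation(len(ytr)):
    w += 0.1 * (ytr[j] - float(w @ Xtr[j] >= 0)) * Xtr[j]
p_perc = np.mean((Xte @ w >= 0) != yte)

# Gaussian discriminant analysis with a shared covariance estimate
m0, m1 = Xtr[ytr == 0, :n_dim].mean(0), Xtr[ytr == 1, :n_dim].mean(0)
S = np.cov(np.vstack([Xtr[ytr == 0, :n_dim] - m0, Xtr[ytr == 1, :n_dim] - m1]).T)
w_gda = np.linalg.solve(S, m1 - m0)
p_gda = np.mean(((Xte[:, :n_dim] @ w_gda - 0.5 * (m0 + m1) @ w_gda) >= 0) != yte)

p_opt = 0.5 * erfc(d / np.sqrt(8.0))             # theoretical optimum for this task
print(f"perceptron: {p_perc:.3f}  GDA: {p_gda:.3f}  optimum: {p_opt:.3f}")
```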
Time scales and power budget {#sec:power_budget_and_energy_scalings}
----------------------------

Here we roughly estimate the power consumption of the whole device and discuss how to scale it up to a higher input dimension. Any real-world implementation will depend strongly on the engineering paradigm, i.e., the choice of material/nonlinearity as well as the engineering precision, but based on recently achieved progress in nonlinear optics we will estimate an order-of-magnitude range for the input power. The signal and feedback input power to the circuit will scale linearly with the number of synapses $N$. The bias inputs for the amplifiers have to be larger than the signal to ensure linear operation, but it should be expected that some of the scattered bias amplitudes can be reused to power multiple synapses. In our models we have defined all rates relative to the line width of the signal and idler mode of the NOPO, because this is the component that should necessarily have the smallest decay rate to ensure a long lifetime for the memory. All other resonators are employed as nonlinear input-output transformation devices and therefore a high bandwidth (corresponding to a much lower loaded quality factor) is necessary for achieving a high bit rate. For our simulations we typically assumed quality factors that were lower than the NOPO's by 1-2 orders of magnitude. Based on self-oscillation threshold powers reported in [@Kippenberg2004KerrNonlinearity; @Haye2007Optical; @Levy2009CmosCompatible; @Razzari2009CmosCompatible] and the switching powers of [@Nozaki2010SubFemtojoule], we estimate the necessary power per synapse to be in the range of $\sim 10-100\,\mu$W. By re-using the scattered pump and bias fields it should be possible to reduce the power consumption per amplifier even further. Even for the continuous wave signal paradigm we have assumed (as opposed to pulsed/spiking signals such as those considered in [@Vaerenbergh2012Cascadable]), the devices proposed here could be competitive with the current state-of-the-art CMOS-based neuromorphic electrical circuits [@Cassidy2014RealTime]. In the simulations for the $8$-dimensional perceptron our input rate for training data was set to $\Delta t^{-1} = \frac{\kappa}{2}$. The corresponding input interval $\Delta t$ is roughly ten times the average feedback delay time between the arrival of an input pattern and the conditional switching of the feedback logic upon arrival of the generated estimated class label $\hat{y}$. This time can be estimated as $\tau_{fb}(n) \approx G_{\rm max}\kappa_A^{-1} + \kappa_{QF}^{-1} + \kappa_{\rm thresh}^{-1} + n \kappa_F^{-1}$, where $n$ is the index of the synaptic weight, $G_{\rm max}$ is the amplifier gain range and $\kappa_A, \kappa_{QF}, \kappa_{\rm thresh}$ and $\kappa_F$ are the line widths of the amplifier, quadrature filter, the combined thresholding circuit (cf. Figure \[fig:figures\_Fredkin\]) and the feedback Fredkin gates. There is a contribution scaling with $n$ because the feedback traverses the individual weights sequentially to save power. When scaling up the perceptron to a higher dimension while retaining approximately the same input signal powers, it is intuitively clear that the combined ‘inner product’ signal amplitude $s$ scales as $s\propto \sqrt{N}s_1$, where $s_1$ is the signal amplitude for a single input. This allows the amplitude $\zeta_0$ of the signal encoding the generated estimated class label $\hat{y}$, and consequently the bandwidth of the feedback Fredkin gates that it drives, to be scaled up accordingly.
A detailed analysis reveals that the Fredkin gate threshold scales as $\sqrt{N}$, in particular we find that $ \sqrt{|\chi|}\zeta_0 \propto \kappa_F \propto \sqrt{|\chi|}\xi_0 \propto \kappa_{\rm thresh}\propto \sqrt{|\chi|} s\propto \sqrt{N|\chi|}s_1$. The first two scaling relationships are due to the constraints on the Fredkin gate construction (cf. Appendix \[ssub:two\_mode\_kerr\]), the next two scaling relationships follow from demanding that the additional thresholding resonator be approximately dynamically resonant at the highest input level (cf. Appendices \[ssub:single\_mode\_kerr\] and \[ssub:two\_mode\_kerr\]). The last proportionality is simply due to the amplitude summation at the $N$-port beamsplitter. This reveals that when increasing $N$ the perceptron as constructed here would have to be driven at a lower input bit rate scaling as $\Delta t^{-1} \propto N^{-\frac12}$ or alternatively be driven with higher signal input powers. A possible solution that could greatly reduce the difference in arrival time $\sim \kappa_F^{-1}$ at each synapse could be to increase the waveguide-coupling to the control signal and thus decrease the delay per synapse. The resulting increase in the required control amplitude $\zeta_0$ can be counter-acted with feedback, i.e., by effectively creating a large cavity around the control loop. When even this strategy fails one could add fan-out stages for $\hat{y}$ which introduce a delay that grows only logarithmically with $N$. Finally, we note that the bias power of all the Kerr-effect based models considered here scales inversely with the respective nonlinear coefficient $\{|\zeta_0|^2, |s|^2\} \times |\chi| \sim {\rm const}$ when keeping the bandwidth fixed. This implies that improvements in the non-linear coefficient translate to lower power requirements or alternatively a faster speed of operation. Conclusion and Outlook {#sec:conclusion_and_outlook} ====================== In conclusion we have shown how to design an all-optical device that is capable of supervised learning from input data, by describing how tunable gain amplifiers with signal/bias isolation can be constructed from nonlinear resonators and subsequently combined with self-oscillating resonators to encode the programmed amplifier gain in their oscillation phase. By considering a few additional nonlinear devices for thresholding and all-optical switching we then show how to construct a perceptron, including the perceptron feedback rule. To our knowledge this is the first end-to-end description of an all-optical circuit capable of learning from data. We have furthermore demonstrated that despite optical shot-noise it nearly attains the performance of the optimal software algorithm for the classification task that we considered. Finally, we have discussed the relevant time-scales and pointed out how to scale the circuit up to large input dimensions while retaining the signal processing bandwidth and a low power consumption per input. Possible applications of an all-optical perceptron are as the trainable output filter of an optical reservoir computer or as a building block in a multi-layer all-optical neural network. The programmable amplifier could be used as a building block to construct other learning models that rely on continuously tunable gain such as Boltzmann machines and hardware implementations of message passing algorithms. An interesting next step would be to design a perceptron that can handle inputs at different carrier frequencies. 
In this case wavelength division multiplexing (WDM) might allow a significant reduction of the physical footprint of the device. A simple modification of the perceptron circuit could autonomously learn to invert linear transformations that were applied to its input signals. This could be used for implementing a circuit capable of solving linear regression problems. In combination with multi-mode optical fibers such a device could also have applications for all-optical sensing. Finally, an extremely interesting question is whether harnessing quantum dynamics could lead to a performance increase. We hope to address these ideas in future work. Competing interests {#competing-interests .unnumbered} =================== The authors declare that they have no competing interests. Acknowledgements {#sub:acknowledgements .unnumbered} ================ This work is supported by DARPA-MTO under award no. N66001-11-1-4106. N.T. acknowledges support from a Stanford Graduate Fellowship. We would also like to thank Ryan Hamerly, Jeff Hill, Peter McMahon and Amir Safavi-Naeini for helpful discussions. Basic Component Models {#sec:component_models} ====================== Here we present the component models used to build the perceptron circuit. We will first describe the static components such as beamsplitters, phase shifters and coherent displacements, then proceed to describe the different Kerr-nonlinear models and finally the NOPO model. Static, Linear Circuit Components {#sub:static_circuit_components} --------------------------------- All of these components have in common that they have no internal dynamics, implying that the $A, B$ and $C$ matrices and the $a$-vector have zero elements, and $A_{\rm NL}$ is not defined. ### Constant Laser Source {#ssub:constant_laser_source} The simplest possible static component is given by a single input/output coherent displacement with coherent amplitude $\eta$. This model is employed to realize static coherent input amplitudes. The $D$ matrix is trivially given by $D=(1)$ and the coherent amplitude is encoded in $c=(\eta)$. This leads to the desired input-output relationship $\beta_{\rm out} = \eta + \beta_{\rm in}$. For completeness we also provide the SLH [@Gough2009Series] model $((1), (\eta), 0)$. ### Static Phase Shifter {#ssub:static_phase_shifter} The static single input/output phase shifter has $D=(e^{i\phi})$ and $c = (0)$, leading to an input-output relationship of $\beta_{\rm out} = e^{i\phi} \beta_{\rm in}$. Its SLH model is $((e^{i\phi}), (0), 0)$. ### Beamsplitter {#ssub:beamsplitter} The static beamsplitter mixes (at least) two input fields and can be parametrized by a mixing angle $\theta$. It has $D = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos \theta \end{pmatrix}$ and $c = (0,0)^T$. This leads to an input-output relationship $$\begin{aligned} \begin{pmatrix} \beta_{out,1}\\\beta_{out,2}\end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos \theta \end{pmatrix} \begin{pmatrix} \beta_{in,1}\\\beta_{in,2}\end{pmatrix}\end{aligned}$$ Its SLH model is $\left(\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos \theta \end{pmatrix}, \begin{pmatrix} 0\\0 \end{pmatrix}, 0 \right)$. Resonator Models {#sub:resonator_models} ---------------- We consider resonator models with $m$ internal modes and $n$ external inputs and outputs. We assume for simplicity that $a = \mathbf{0}$ and $c = \mathbf{0}$ meaning that we will model all coherent displacements explicitly in the fashion described above.
We also assume that their scattering matrices are trivially given by $D = \mathbf{1}_n$ which means that far off-resonant input fields are simply reflected without a phase shift. Furthermore, none of our assumed models feature *linear* coupling between the internal cavity modes. This implies that the $A$-matrix is always diagonal. We are always working in a rotating frame. ### Single mode Kerr-nonlinear Resonator {#ssub:single_mode_kerr} A Kerr-nonlinearity is modeled by the nonlinear term $A_{\rm NL}^{\rm Kerr}(\alpha) = -i \chi |\alpha|^2 \alpha$ which can be understood as an intensity dependent detuning. The $A$-matrix is given by $(-\frac{\kappa_T}{2}-i\Delta),$ its $B$-matrix is $-(\sqrt{\kappa_1}, \sqrt{\kappa_2}, \dots, \sqrt{\kappa_n})$, where the total line width is given by $\sum_{j=1}^n\kappa_j = \kappa_T$ and the cavity detuning from any external drive is given by $\Delta$. The $C$-matrix is given by $C=-B^T$. The corresponding SLH model is $$\begin{aligned} \left(\mathbf{1}_n, \begin{pmatrix} \sqrt{\kappa_1} a \\ \vdots \\ \sqrt{\kappa_n} a\end{pmatrix}, \tilde{\Delta} a^\dagger a + \frac{\chi}{2} a^{2\dagger} a^2\right),\end{aligned}$$ where the detuning differs slightly, $\tilde{\Delta} = \Delta + \chi$, as can be shown in the derivation of the Wigner formalism [@Santori2014Quantum]. The special case of a single mirror with coupling rate $\kappa$ and negligible internal losses is of interest for constructing the phase sensitive amplifier described in Section \[sec:variable\_gain\_amplifiers\]. Considering again an input given by a large static bias and a small signal $\epsilon=\frac{1}{\sqrt{2}} (\epsilon_0 +\delta\epsilon)$, the steady state reflected amplitude is to first order $$\begin{aligned} \epsilon'\approx\frac{1}{\sqrt{2}} \left[\eta\epsilon_0 + g_-(\epsilon_0) \delta\epsilon + g_+(\epsilon_0) \delta\epsilon^\ast\right]. \end{aligned}$$ For negligible internal losses we can provide exact expressions for $\eta, g_+$ and $g_-$. Rather than parametrizing these by the bias $\epsilon_0$ we parametrize them by the mean coherent intra-cavity amplitude $\alpha_0$. When the system is not bi-stable (see below), this relationship defines a one-to-one map between $\epsilon_0$ and $\alpha_0.$ $$\begin{aligned} \eta & = -\frac{\kappa/2 - i(\Delta + \chi |\alpha_0|^2)}{\kappa/2 + i(\Delta + \chi |\alpha_0|^2)} \quad \Rightarrow |\eta| = 1,\\ g_- &= 1+\frac{\kappa\left[-\frac{\kappa}{2}+i\Delta+2i\chi |\alpha_0|^{2}\right]}{\left(\frac{\kappa}{2}\right)^2 + \left(\Delta + 2\chi |\alpha_0|^2\right)^2 - |\chi|^2|\alpha_0|^4}, \\ g_+ &= \frac{i\kappa \chi \alpha_0^2}{\left(\frac{\kappa}{2}\right)^2 + \left(\Delta + 2\chi |\alpha_0|^2\right)^2 - |\chi|^2|\alpha_0|^4},\\ \epsilon_0 & = - \frac{1}{\sqrt \kappa}\left[\frac{\kappa}{2} + i(\Delta + i\chi |\alpha_0|^2)\right]\alpha_0. \label{eq:ssamp_bias}\end{aligned}$$ The Kerr cavity exhibits bistability for a particular interval of bias amplitudes if and only if $\Delta/\chi < 0$ and $|\Delta| \ge \frac{\sqrt{3}\kappa}{2}=\Delta_{\rm th}.$ At any fixed bias amplitude and corresponding internal steady state mode amplitude the maximal gain experienced by a small signal is given by $g^{\rm max} = |g_-|+|g_+|$. Here maximal means that we maximize over all possible signal input phases relative to the bias input.
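The expressions for $\eta$, $g_-$ and $g_+$ are easily evaluated numerically. The sketch below does so for a few illustrative intra-cavity amplitudes (all parameter values are arbitrary choices for demonstration, not simulation parameters), computes the maximal gain $g^{\rm max}=|g_-|+|g_+|$, and checks $|\eta|=1$ together with the relation $|g_-|^2-|g_+|^2=1$ that is used in the squeezing argument below.

```python
import numpy as np

def kerr_reflection_gains(kappa, Delta, chi, alpha0):
    """eta, g_minus, g_plus of a lossless single-port Kerr cavity,
    parametrized by the steady-state intra-cavity amplitude alpha0."""
    n = abs(alpha0) ** 2
    denom = (kappa / 2) ** 2 + (Delta + 2 * chi * n) ** 2 - (abs(chi) * n) ** 2
    eta = -(kappa / 2 - 1j * (Delta + chi * n)) / (kappa / 2 + 1j * (Delta + chi * n))
    g_minus = 1 + kappa * (-kappa / 2 + 1j * Delta + 2j * chi * n) / denom
    g_plus = 1j * kappa * chi * alpha0 ** 2 / denom
    return eta, g_minus, g_plus

# Illustrative, monostable parameters (Delta/chi > 0); not simulation values.
kappa, chi, Delta = 1.0, 0.05, 0.4
for alpha0 in (0.5, 1.0, 2.0):
    eta, gm, gp = kerr_reflection_gains(kappa, Delta, chi, alpha0)
    g_max = abs(gm) + abs(gp)          # gain of the amplified quadrature
    assert np.isclose(abs(eta), 1.0)
    assert np.isclose(abs(gm) ** 2 - abs(gp) ** 2, 1.0)
    print(f"|alpha0|^2 = {abs(alpha0)**2:4.2f}   g_max = {g_max:6.3f}")
```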
To experience this gain, the signal has to be in an appropriate quadrature defined by $\arg{\delta \epsilon} = \frac{\arg g_- - \arg g_+}{2}.$ The orthogonal quadrature is then maximally de-amplified by a gain of $||g_-|-|g_+||$ and it is possible to show that for negligible losses the perfect squeezing relationship $\left(|g_-|+|g_+|\right)||g_-|-|g_+|| = \left| |g_-|^2 - |g_+|^2\right| = 1$ holds for any bias amplitude. Furthermore, for fixed cavity parameters $g^{\rm max}$ is maximized at a particular non-zero intra-cavity photon amplitude $$\begin{aligned} \label{eq:n_of_Delta} |\alpha_0^{\rm max}|^2 &= \sqrt{\frac{\Delta^2 + \frac{\kappa^2}{4}}{3\chi^2}} \\ \Rightarrow g^{\rm max}& = \sqrt{\frac{\sqrt{f} + \kappa}{\sqrt{f} - \kappa}}, \text{ with } f = 28\Delta^2 + 4\kappa^2 - 8 \Delta \sqrt{12 \Delta^2 + 3 \kappa^2}.\end{aligned}$$ Note that the maximal gain does not depend on the strength of the non-linearity. The relationship between $g^{\rm max}$ and $\Delta$ can be inverted: $$\begin{aligned} \label{eq:delta_of_g} \Delta = \frac{\sqrt{3}\kappa}{2} \frac{\left(g^{\rm max}-\sqrt{3}\right)\left(g^{\rm max}-\frac{1}{\sqrt{3}}\right)}{{g^{\rm max}}^2-1}\end{aligned}$$ Using all this it is straightforward to construct a tunable Kerr-amplifier. The symmetric construction proposed in Section \[sec:variable\_gain\_amplifiers\] provides the additional advantage that one does not have to cancel the scattered bias. It is also convenient to prepend and append phase shifters to the signal input and output that ensure $g_-=g_+ = g^{\rm max}/2$ at maximum gain. The quadrature filter construction relies on the presence of additional cavity losses that are equal to the input coupler $\kappa_2 = \kappa_1 = \kappa.$ In this case the gain coefficients for reflection of the first port are given by $$\begin{aligned} g_- &= 1+\frac{\kappa\left[-\kappa+i\Delta+2i\chi |\alpha_0|^{2}\right]}{\kappa^2 + \left(\Delta + 2\chi |\alpha_0|^2\right)^2 - |\chi|^2|\alpha_0|^4}, \\ g_+ &= \frac{i\kappa \chi \alpha_0^2}{\kappa^2 + \left(\Delta + 2\chi |\alpha_0|^2\right)^2 - |\chi|^2|\alpha_0|^4},\\ \epsilon_0 & = - \frac{1}{\sqrt \kappa}\left[{\kappa} + i(\Delta + i\chi |\alpha_0|^2)\right]\alpha_0. \label{eq:ssamp_bias}\end{aligned}$$ and one may easily verify that for dynamic resonance, i.e., $\chi|\alpha_0|^2 = -\Delta$, the gain coefficients are equal in magnitude $|g_-|=|g_+|$ which implies that there exists an input phase for which the reflected signal vanishes. ### Two mode Kerr-nonlinear resonator {#ssub:two_mode_kerr} We label the mode amplitudes as $\alpha_1$ and $\alpha_2$. 
In this case the nonlinearity includes a cross-mode induced detuning $$\begin{aligned} A_{\rm NL}^{\rm Kerr2}(\alpha) = \begin{pmatrix} -i \chi_a |\alpha_1|^2 \alpha_1 - i \chi_{ab} |\alpha_2|^2 \alpha_1 \\ -i \chi_{ab} |\alpha_1|^2 \alpha_2 - i \chi_{b} |\alpha_2|^2 \alpha_2 \end{pmatrix}\end{aligned}$$ The model matrices are $$\begin{aligned} A & = \begin{pmatrix} -\frac{\kappa_{a,T}}{2}-i\Delta_a & 0 \\ 0 & -\frac{\kappa_{b,T}}{2}-i\Delta_b \end{pmatrix}, \\ B & = -\begin{pmatrix} \sqrt{\kappa_{a,1}}& \sqrt{\kappa_{a,2}}& \dots & \sqrt{\kappa_{a,n_a}} & 0 & \dots & 0 \\ 0 & 0& \dots & 0 & \sqrt{\kappa_{b,1}}& \sqrt{\kappa_{b,2}}& \dots & \sqrt{\kappa_{b,n_b}} \end{pmatrix},\\ C &= -B^T,\end{aligned}$$ and the corresponding SLH model is $$\begin{aligned} \left(\mathbf{1}_{n_a+n_b}, C \begin{pmatrix} a \\ b \end{pmatrix}, \tilde{\Delta}_a a^\dagger a + \tilde{\Delta}_b b^\dagger b + \frac{\chi_a}{2}a^{2\dagger} a^2 + \frac{\chi_b}{2}b^{2\dagger} b^2 + \chi_{ab}a^\dagger a b^\dagger b\right),\end{aligned}$$ with $\tilde{\Delta}_{a/b} = \Delta_{a/b} + \chi_{a/b} + \frac{\chi_{ab}}{2}$ and where the Wigner-correspondence[^4] is $\langle \alpha_1\rangle_{\rm W} = \langle a \rangle$, $\langle \alpha_2\rangle_{\rm W} = \langle b \rangle$. We briefly summarize how to construct a controlled phase shifter using an ideal two-mode Kerr cavity with a single input coupling to each mode and negligible additional internal losses. We exploit that in this case the reflected steady state signal amplitude $\zeta'$ is identical to the input amplitude $\zeta$ up to a power dependent phase shift $$\begin{aligned} \zeta' = -\frac{\frac{\kappa_a}{2} - i \left(\Delta_a + i\chi_{a} |\alpha_0|^2+ i\chi_{ab} |\beta_0|^2 \right)}{\frac{\kappa_a}{2} + i \left(\Delta_a + i\chi_{a} |\alpha_0|^2+ i\chi_{ab} |\beta_0|^2 \right)} \zeta\quad \Rightarrow |\zeta'| = |\zeta|.\end{aligned}$$ We assume that the control input amplitude takes on two discrete values $\xi = 0$ or $\xi = \xi_0$ and that variations of the signal input amplitude are small $|\zeta|\approx |\zeta_0|$. In this case a good choice of detunings and coupling rates is given by $$\begin{aligned} \label{eq:modulator_params1} \Delta_a &= \frac{\kappa_a}{2} - \frac{2\chi_a |\zeta_0|^2}{\kappa_a} \\ \Delta_b &= \frac{\kappa_a\chi_b}{\chi_{ab}} - \frac{2\chi_{ab}|\zeta_0|^2}{\kappa_a} \\ \xi_0 &= \frac{\sqrt{\kappa_a\kappa_b}}{2\sqrt{|\chi_{ab}|}} \label{eq:modulator_params3}\end{aligned}$$ in addition to two inequality constraints $$\begin{aligned} \label{eq:modulator_constraints} \Delta_a & \le \sqrt{3}\frac{\kappa_a}{2}\\ \Delta_b & \le \sqrt{3}\frac{\kappa_b}{2}\end{aligned}$$ that ensure that the system is stable. This construction ensures that $\frac{\left.\zeta'\right|_{\xi=\xi_0}}{\left.\zeta'\right|_{\xi=0}} = -1$ and in fact it can easily be generalized to the more realistic case of non-negligible internal losses. Finally note that the inequality constraints imply that the lower bounds for the input couplings scale as $\kappa_a^{\rm min}, \kappa_b^{\rm min} \propto |\zeta_0|$ which is important for our power analysis in Section \[sec:power\_budget\_and\_energy\_scalings\]. This, in turn implies that $\xi_0 \propto |\zeta_0|$ which is a fairly intuitive result. The controlled phase shifter can now be included in one arm of a Mach-Zehnder interferometer to create a Fredkin gate (cf. Section \[sec:optical\_switches\]). 
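The detunings and control amplitude of Eqs. (\[eq:modulator\_params1\])-(\[eq:modulator\_params3\]) can be computed directly from the cavity parameters. The following sketch evaluates them together with the stability constraints for one entirely arbitrary set of couplings, nonlinear coefficients and signal amplitude (illustration only).

```python
import numpy as np

def controlled_phase_shifter_params(kappa_a, kappa_b, chi_a, chi_b, chi_ab, zeta0):
    """Detunings, control amplitude and stability check for the two-mode
    Kerr controlled phase shifter, following the parameter choice above."""
    Delta_a = kappa_a / 2 - 2 * chi_a * abs(zeta0) ** 2 / kappa_a
    Delta_b = kappa_a * chi_b / chi_ab - 2 * chi_ab * abs(zeta0) ** 2 / kappa_a
    xi_0 = np.sqrt(kappa_a * kappa_b) / (2 * np.sqrt(abs(chi_ab)))
    stable = (Delta_a <= np.sqrt(3) * kappa_a / 2) and (Delta_b <= np.sqrt(3) * kappa_b / 2)
    return Delta_a, Delta_b, xi_0, stable

# Arbitrary demonstration values (not taken from the simulations).
kappa_a, kappa_b = 20.0, 20.0
chi_a, chi_b, chi_ab = 0.01, 0.01, 0.02
zeta0 = 10.0                         # nominal signal amplitude at the gate

Da, Db, xi0, ok = controlled_phase_shifter_params(kappa_a, kappa_b,
                                                  chi_a, chi_b, chi_ab, zeta0)
print(f"Delta_a = {Da:.2f}, Delta_b = {Db:.2f}, xi_0 = {xi0:.2f}, stable: {ok}")
```

Consistent with the scaling noted above, rescaling $|\zeta_0|$ together with $\kappa_a\propto|\zeta_0|$ and $\kappa_b\propto|\zeta_0|$ leaves the ratios $\Delta_{a/b}/\kappa_{a/b}$ and $\xi_0/|\zeta_0|$ unchanged.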
To realize a thresholder, the control mode input is prepended with a two-port Kerr cavity with parameters chosen such that it becomes dynamically resonant with maximal differential transmission gain close to where its output gives the correct high control input $\xi_0.$ Overall, we remark that even when we account for the prepended cavity, the relationship $c \propto |\zeta_0|$ still holds, where $c$ is the input to the thresholder. To see how the total decay rate of the thresholding cavity $\kappa_{\rm thresh}$ scales, consider first that to get maximum differential gain or contrast, we ought to pick a detuning right at or below the Kerr stability threshold $\Delta \approx \Delta_{\rm th} = \sqrt{3}\kappa_{\rm thresh}/2.$ We choose the maximum input amplitude such that it approximately achieves dynamic resonance within the prepended thresholding cavity. This occurs when $\Delta = -\chi |\alpha_0|^2$ (cf. Appendix \[ssub:single\_mode\_kerr\]) and at an input amplitude of $c \propto \sqrt{\kappa_{\rm thresh}\left|\frac{\Delta}{\chi}\right|} \propto \kappa_{\rm thresh}.$ ### NOPO model {#ssub:nopo_model} The NOPO model consists of three modes, the signal and idler modes $\alpha_s, \alpha_i$ and the pump mode $\alpha_p$. We assume a triply resonant model[^5] and that $\omega_s + \omega_i = \omega_p$, allowing for resonant conversion of pump photons into pairs of signal and idler photons and vice versa. The nonlinearity is given by $$\begin{aligned} A_{\rm NL}^{\rm NOPO}(\alpha) = \begin{pmatrix} \chi \alpha_i^\ast \alpha_p \\ \chi \alpha_s^\ast \alpha_p \\ - \chi \alpha_s \alpha_i \end{pmatrix}\end{aligned}$$ and the model matrices are $$\begin{aligned} A &= \begin{pmatrix} -\frac{\kappa}{2} & 0 & 0 \\ 0 & -\frac{\kappa}{2} & 0 \\ 0 & 0 & -\frac{\kappa_{p}}{2} \end{pmatrix},\quad B = -\begin{pmatrix} \sqrt{\kappa} & 0 & 0 \\ 0 & \sqrt{\kappa} & 0 \\ 0 & 0 & \sqrt{\kappa_{p}} \end{pmatrix},\\ C &= -B^T.\end{aligned}$$ Here, the SLH model is given by $$\begin{aligned} \left(\mathbf{1}_3, C \begin{pmatrix} a \\ b \\ c \end{pmatrix}, i\chi\left(abc^\dagger - a^\dagger b^\dagger c\right)\right)\end{aligned}$$ where now $a,b$ and $c$ correspond to $\alpha_s, \alpha_i$ and $\alpha_p$.
A steady state analysis of the system driven only by a pump input amplitude $\epsilon$ reveals that below a critical threshold $|\epsilon| < \epsilon_{\rm th} = \frac{\kappa \sqrt{\kappa_p}}{4 \chi}$ the system has a unique fixpoint with $\alpha_s=\alpha_i=0$ and $\alpha_p = -\frac{2\epsilon}{\sqrt{\kappa_p}}.$ Above threshold $|\epsilon| \ge \epsilon_{\rm th}$, the intra-cavity pump amplitude stays constant at the threshold value $\alpha_p = -\frac{2\epsilon_{\rm th}\epsilon/|\epsilon|}{\sqrt{\kappa_p}} = -\frac{\kappa\epsilon/|\epsilon|}{2\chi}$ and the signal and idler modes obtain a non-zero magnitude $$\begin{aligned} |\alpha_s| = |\alpha_i| = \sqrt{\frac{4\epsilon_{\rm th}}{\kappa} \left(|\epsilon| - \epsilon_{\rm th}\right)}.\end{aligned}$$ As an interesting consequence of the model’s symmetry there exists not a single above threshold state but a whole manifold of fixpoints parametrized by a correlated signal and idler phase $$\begin{aligned} \alpha_s & = \sqrt{\frac{4\epsilon_{\rm th}}{\kappa} \left(|\epsilon| - \epsilon_{\rm th}\right)} e^{i\phi + i \phi_0}\\ \alpha_i & = \sqrt{\frac{4\epsilon_{\rm th}}{\kappa} \left(|\epsilon| - \epsilon_{\rm th}\right)} e^{-i\phi+ i \phi_0}\end{aligned}$$ where the common phase $\phi_0$ is fixed by the pump input phase via $$\begin{aligned} \alpha_s\alpha_i = -\frac{4\epsilon_{\rm th}}{\kappa}\left(|\epsilon| - \epsilon_{\rm th}\right) \frac{\epsilon}{|\epsilon|}.\end{aligned}$$ In particular, for $\epsilon < 0$ we have $\alpha_i = \alpha_s^\ast.$ Above threshold the system will rapidly converge to a fixpoint of well-defined phase $\phi$. Without quantum shot noise $\phi$ would remain constant. With noise, however, the system can freely diffuse along the manifold. When the pump bias input is sufficiently large compared to threshold and consequently there are many signal and idler photons present in the cavity at any given time $(|\alpha_{s/i}|^2 \gg 1)$ one can analyze the dynamics along the manifold and of small orthogonal deviations from the manifold. In the symmetric case considered here where signal and idler have equal decay rates, the differential phase degree of freedom $\phi = \frac{\arg \alpha_i - \arg \alpha_s}{2}$ decouples from all other variables and approximately obeys the SDE $$\begin{aligned} d\phi & = \sqrt{\gamma_\phi} dW_t, \quad dW_t^2 = dt\\ \text{with } \gamma_\phi& = \frac{\kappa}{8|\alpha_s|^2} = \frac{\kappa^2}{32\epsilon_{\rm th} \left(|\epsilon| - \epsilon_{\rm th}\right)}.\end{aligned}$$ It is relatively straightforward to generalize these results to a less symmetric model with different signal and idler couplings and even non-zero detunings, but for a given nonlinearity the model considered here provides the smallest phase diffusion and thus the best analog memory. For a very thorough analysis of this model we refer to [@Graham1968QuantumFluctuations]. Composite component models {#sub:composite_component_models} -------------------------- Due to the scope of this article, we will refrain from including the full net lists for the composite component models and instead publish them online at [@Tezak2014PerceptronFiles]. We remark that composing a photonic circuit from the above described non-linear photonic models is often complicated by the fact that the steady state input-output relationships are hard or even impossible to invert analytically. A systematic approach to optimizing component parameters would be highly desirable.
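Returning to the NOPO expressions above, the threshold, the above-threshold amplitudes and the phase-diffusion rate can be evaluated in a few lines. The parameter values in the sketch below are illustrative assumptions (not simulation parameters), with the pump taken at twice threshold; the inverse of $\gamma_\phi$ sets the time scale over which the stored phase remains usable as an analog memory.

```python
import numpy as np

def nopo_steady_state(kappa, kappa_p, chi, eps):
    """Threshold, above-threshold signal/idler amplitude and phase-diffusion
    rate of the symmetric NOPO, following the expressions above."""
    eps_th = kappa * np.sqrt(kappa_p) / (4 * chi)
    if abs(eps) < eps_th:
        return eps_th, 0.0, np.inf               # below threshold
    amp = np.sqrt(4 * eps_th * (abs(eps) - eps_th) / kappa)  # |alpha_s| = |alpha_i|
    gamma_phi = kappa ** 2 / (32 * eps_th * (abs(eps) - eps_th))
    return eps_th, amp, gamma_phi

# Illustrative values (assumptions only); the pump is driven at twice threshold.
kappa, kappa_p, chi = 1.0, 20.0, 0.05
eps_th = kappa * np.sqrt(kappa_p) / (4 * chi)
_, amp, gamma_phi = nopo_steady_state(kappa, kappa_p, chi, 2 * eps_th)

print(f"threshold eps_th           = {eps_th:.1f}")
print(f"signal amplitude |alpha_s| = {amp:.1f}  ({amp**2:.0f} photons)")
print(f"phase diffusion gamma_phi  = {gamma_phi:.2e} kappa "
      f"(memory time ~ {1 / gamma_phi:.0f} / kappa)")
assert np.isclose(gamma_phi, kappa / (8 * amp ** 2))   # consistency check
```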
[^1]: [email protected] [^2]: One can easily convince oneself that all even order contributions are scattered into the bias output. [^3]: In the photonics community this is referred to as *critically coupled,* whereas the amplifier circuit would ideally be strongly *overcoupled* such that additional internal losses are negligible. [^4]: In this appendix we denote expectations with respect to the Wigner function as $\langle \cdot \rangle _{\rm W}$ and quantum mechanical expectations as $\langle \cdot \rangle$. [^5]: It is possible to drop this resonance assumption for the pump.
--- abstract: | Momenta and masses of heavy projectile fragments ($Z \geq 8$), produced in collisions of $^{197}$Au with C, Al, Cu and Pb targets at E/A = 600 MeV, were determined with the ALADIN magnetic spectrometer at SIS. Using these informations, an analysis of kinematic correlations between the two and three heaviest projectile fragments in their rest frame was performed. The sensitivity of these correlations to the conditions at breakup was verified within the schematic SOS-model. For a quantitative investigation, the data were compared to calculations with statistical multifragmentation models and to classical three-body calculations. With classical trajectory calculations, where the charges and masses of the fragments are taken from a Monte Carlo sampling of the experimental events, the dynamical observables can be reproduced. The deduced breakup parameters, however, differ considerably from those assumed in the statistical multifragmentation models which describe the charge correlations. If, on the other hand, the analysis of kinematic and charge correlations is performed for events with two and three heavy fragments produced by statistical multifragmentation codes, a good agreement with the data is found with the exception that the fluctuation widths of the intrinsic fragment energies are significantly underestimated. A new version of the multifragmentation code MCFRAG was therefore used to investigate the potential role of angular momentum at the breakup stage. If a mean angular momentum of 0.75$\hbar$/nucleon is added to the system, the energy fluctuations can be reproduced, but at the same time the charge partitions are modified and deviate from the data. address: | $^{1}$ Gesellschaft für Schwerionenforschung, D-64220 Darmstadt, Germany\ $^{2}$ MPI für Kernphysik Heidelberg, D-69029 Heidelberg, Germany\ $^{3}$ Centre de Recherches Nucléaires, F-67037 Strasbourg, France\ $^{4}$ Laboratoire National Saturne, CEN Saclay, F-91191 Gif-sur-Yvette, France\ $^{5}$ Dipartimento di Fisica dell’ Universitá and I.N.F.N., I-95129 Catania, Italy\ $^{6}$ Istituto di Scienze Fisiche dell’ Universitá and I.N.F.N., I-20133 Milano, Italy\ $^{7}$ Institut für Kernphysik, Universität Frankfurt, D-60486 Frankfurt, Germany\ $^{8}$ Forschungszentrum Rossendorf, D-01314 Dresden, Germany\ $^{9}$ Nuclear Science Division, LBL, Berkeley, CA 94720, USA\ $^{10}$ Dept. of Physics, Yale University, New Haven, CT 06512, USA\ $^{11}$ IHEP, Beijing 100039, P.R. China\ $^{12}$ Department of Physics, MIT, Cambridge, MA 02139, USA author: - 'M. Begemann-Blaich$^{1}$, V. Lindenstruth$^{1,9}$, J. Pochodzalla$^{1,2}$, J.C. Adloff$^{3}$, P. Bouissou$^{4}$, J. Hubele$^{1}$, G. Imme$^{5}$, I. Iori$^{6}$, P. Kreutz$^{7}$, G.J. Kunde$^{1,10}$, S. Leray$^{4}$, Z. Liu$^{1,11}$, U. Lynen$^{1}$, R.J. Mei$\!$jer$^{1}$, U. Milkau$^{1}$, A. Moroni$^{6}$, W.F.J. Müller$^{1}$, C. Ngô$^{4}$, C.A. Ogilvie$^{1,12}$, G. Raciti$^{5}$, G. Rudolf$^{3}$, H. Sann$^{1}$, M. Schnittker$^{1}$, A. Schüttauf$^{2}$, W. Seidel$^{8}$, L. Stuttge$^{3}$, W. Trautmann$^{1}$, A. Tucholski$^{1}$' title: Breakup Conditions of Projectile Spectators from Dynamical Observables --- Introduction {#SEC1} ============ In several experiments with the ALADIN spectrometer, the decay of excited projectile spectator matter at beam energies between 400 and 1000 MeV per nucleon was studied [@HUB91; @KRE93; @SCH96]. 
In these collisions, energy depositions are reached which cover the range from particle evaporation to multi-fragment emission and further to the total disassembly of the nuclear matter, the so-called ‘rise and fall of multifragment emission’ [@OGI92]. The most prominent feature of the multi-fragment decay is the universality that is obeyed by the fragment multiplicities and the fragment charge correlations. These observables are invariant with respect to the entrance channel – i.e. independent of the beam energy and the target – if plotted as a function of $Z_{bound}$, where $Z_{bound}$ is the sum of the atomic numbers $Z_{i}$ of all projectile fragments with $Z_{i} \geq 2$. For different projectiles, the dependence of the fragment multiplicity on $Z_{bound}$ follows a linear scaling law. These observations indicate that compressional effects are only of minor importance. In contrast to central collisions at lower energies, where large radial flow effects are observed, the quantitative interpretation of kinematic observables is therefore simplified. More importantly, these characteristics are an indication that chemical equilibrium is attained prior to the fragmentation stages of the reaction. In fact, statistical models were found to be quite successful in describing the experimental fragment yields and charge correlations, if the breakup of an expanded system was assumed [@BAO93; @BAR93; @BOT92; @BOT95; @BOT85; @BON85; @GRO86]. In addition, the temperature of the excited matter, extracted from double ratios of isotope yields, is reproduced. On the other hand, the kinetic energy spectra of particles and fragments are not equally well described within the statistical picture. The energy spectra of light charged particles ($A \leq 4$) can be explained by a thermal emission of the fragments, but their slopes correspond to temperatures approximately three times larger than those extracted from isotope ratios [@XI97]. While this may be an indication of pre-breakup emission, it remains to be understood whether the kinetic energies of intermediate mass fragments ($ 3 \leq Z \leq 30$) are consistent with the statistical approach. The dynamics of the multifragmentation process therefore has to be studied. It is well known that kinematic correlations, which are governed by the long range Coulomb repulsion, are sensitive to the disintegration process. Previous studies concentrated mostly on the two-fragment velocity correlation functions [@TRO87; @GRO89; @KIM91; @KIM92; @BOW93; @BAU93; @KAE93; @BAO94; @SCH94]. Only a few attempts were made to analyze higher order correlations. However, these studies were done either for heavy fragments at much lower beam energies [@GLA83; @PEL86; @BOU89; @BIZ92; @BRU94] or for light charged particles only [@LAU94]. In this paper, the results of a kinematic analysis of the fragmentation process of the projectile spectator are presented. Heavy projectile fragments produced in peripheral Au induced collisions at E/A = 600 MeV are studied without the influence of energy thresholds of the detectors. Moreover, the analysis is performed in the center of mass frame of the fragments, thus reducing the influence of directed collective motion of the emitting source. On the other hand, a limit of $Z \geq 8$ is imposed for kinematic observables by the lower detection threshold of the TP-MUSIC II tracking detector. The analysis is therefore restricted to the excitation energy range characterized by increasing fragment multiplicities.
The experiment {#SEC2} ============== Experimental setup {#SEC21} ------------------ The experiment was performed with the ALADIN forward spectrometer at the heavy-ion synchrotron SIS of the GSI Darmstadt, using a gold beam with an energy of 600 MeV per nucleon and a typical intensity of 2000 beam particles during a 500 ms spill. A schematic view of the experimental setup in the bending plane of the ALADIN magnet is shown in figure \[setup\]. The incoming beam entered the apparatus from the left and first hit the beam counters, where for each beam particle the position in a plane perpendicular to the beam direction and the arrival time were measured with resolutions $\delta_x \approx \delta_y \approx$ 0.5 mm FWHM and $\delta_t$ =100 ps FWHM, respectively. One meter downstream, the reaction target was positioned. Targets of C, Al, Cu and Pb with a thickness between 200 and 700 mg/cm$^2$ were used, corresponding to an interaction probability of up to 3%. Light charged particles from the mid-rapidity zone of the reaction were detected by a Si-CsI-array which was placed at angles between 7$^{o}$ and 40$^{o}$ with a solid angle coverage of approximately 30% in this angular range ( 50% between 7$^{o}$ and 25$^{o}$, 15% between 25$^{o}$ and 40$^{o}$ ). Fragments from the decay of the projectile spectator, emitted into a cone of approximately 5$^{o}$ around the direction of the incident beam, entered the magnetic field of the magnet. The magnet was operated at a bending power of 1.4 Tm which corresponded to a deflection of 7.2$^{o}$ for fragments with beam rigidity. The particles were detected in the TOF-wall, which was positioned 6 m behind the target. The time-of-flight of light particles with respect to the beam counter was measured with a resolution of 300 ps FWHM and with a resolution of 140 ps FWHM for particles with a charge of 15 and above. The TOF-wall provided the charge of all detected particles with single element resolution for charges up to eight. Charged particles with charges of eight and above were simultaneously identified and tracked by a time-projecting multiple-sampling ionization chamber TP-MUSIC II ( see section \[SEC22\] ), which was positioned outside the magnetic field between the magnet and the TOF-wall. To minimize the influence of scattering, of energy loss and of secondary nuclear reactions of the fragments after their production in the target, the spectrometer up to an entrance window in front of the ionization chamber was operated in vacuum. The components of the apparatus with the exception of the MUSIC detector have already been described in [@HUB91]. The MUSIC detector {#SEC22} ------------------ The TP-MUSIC detector is a time-projection multiple-sampling ionization chamber. If a charged particle passes through its active volume, an ionization track containing positive ions – which will drift to the cathode – and free electrons – which will move in the direction of the anodes – is produced. Due to the homogeneous electric field, the drift velocity of the electrons towards the anodes is independent of the position within the gas volume. Therefore, the distance of the primary particle track from the anode is proportional to the time the center of the electron cloud needs to reach the anode. The version TP-MUSIC II [@BAU97] which was used in this experiment is shown in figure \[music\] [^1]. 
It consists of three active volumes with the drift field in adjacent sections perpendicular to each other, two for the measurement of the horizontal and one for that of the vertical position and angle of the particle track. Each field cage has an active area of 100 cm (horizontal) times 60 cm (vertical) and a length of 50 cm. The horizontal field cages are both divided into two halves with a vertical cathode plane in the middle of the detector to reduce the maximum drift length and the high voltages necessary to provide the drift field. The chambers were operated at a drift field of 150 V/cm, i.e. with high voltages of 7.5 kV for the horizontal and 9 kV for the vertical field cages; P10 ( 90% argon, 10% methane ) at a pressure of 800 mbar served as the counting gas. To allow multiple sampling of the particle signals, each anode is subdivided into 16 stripes with a width of 3 cm each. The anode signals were recorded using flash ADCs with a sampling rate of 16 MHz. Together with a drift velocity of the electrons of approximately 5.3 cm/$\mu$s this corresponds to amplitude measurements at a step size of 3 mm in the direction of the drift. Since the drift time of the electron cloud is measured by each of the 16 anodes of a field cage, i.e. at 16 points along the beam direction (z-direction), the complete track information both in x- ( field cage 1 and 3 ) and y-direction ( field cage 2 ) of the primary charged particle inside the MUSIC volume is available. The detector is operated outside the magnetic field volume of the ALADIN magnet; therefore, the ionization track through the MUSIC gas is a straight line which is obtained by fitting the 16 track positions by three straight lines – one in each field cage. The position resolution has been estimated using the fact that the horizontal component of a track is determined with two separate field cages. The intersections of the measured track segments from the first and the third field cage with a virtual reference plane, positioned in the center of the vertical field cage and perpendicular to the z-direction, are calculated. The distance between these two points of intersection is a measure of the overall position and angle resolution of the detector. Its distribution is a Gaussian with a width of 2.4 mm FWHM for particles with a charge of 20 and above which increases to approximately 12 mm at the detection threshold of $Z=8$. These values are of the same order of magnitude as the effect of small-angle scattering of the fragments in the counting gas of the MUSIC. The amplitude of the primary signal produced by a particular fragment is proportional to $ q^2\beta^2$, where $\beta $ is the velocity of the particle and $q$ is its charge state. Fragments from the decay of the projectile spectator are moving approximately with beam velocity. In this case, all fragments with nuclear charges up to 50 are fully stripped after passing through the target matter. They remain fully stripped in the detector gas; the primary signal is therefore proportional to the square of the nuclear charge of the particle. For particles with nuclear charges between 50 and 79, the mean charge exchange length in the MUSIC gas ( 8 cm and 30 cm for $Z=50$ and 79, respectively ) is small compared to the path length of the particle within the MUSIC detector. They reach their equilibrium charge state within the detector volume and the primary signal is proportional to the square of the effective charge.
The amplitude of the primary signal decreases due to diffusion broadening ( proportional to the drift distance ) and due to impurities of the counting gas ( proportional to the square of the drift distance ). The amplitude measured at the anode is therefore dependent on the drift distance of the electron cloud. To determine the position correction, the incident beam – i.e. particles with known $Z$ and $\beta $ – is swept across the field cages by varying the field of the magnet. In addition, the signals are corrected for the deviations from the beam velocity. This is essential for the charge resolution of binary fission fragments which have the widest distribution of laboratory velocities of all heavy fragments ($Z \geq 8$) from the decay of the projectile spectator. A charge resolution of 0.5 charge units FWHM is reached. This is demonstrated in figure \[musicz\] where a charge spectrum of the MUSIC-detector is shown. Since both the differences in pulse height for two neighboring charges and their fluctuations are proportional to $Z$, the charge resolution is independent of the charge of the fragment. The lower threshold for particle identification reached in this experiment is $Z=8$. Momentum and mass reconstruction {#SEC23} -------------------------------- From the tracks of the charged particles measured behind the ALADIN magnet, the rigidity vector can be determined if the magnetic field is known. It was decided to fit the particle properties as a function of the measured track parameters rather than using a backtracing method because the latter is more time-consuming at the analysis stage. For particles with a rigidity vector $\vec{R}$, trajectories starting at the target position $z_{targ}$ and coordinates ($x_{targ}$, $y_{targ}$) within the beam spot are calculated using the routines provided by the program package GEANT [@GEA]. The starting conditions are chosen from a five-dimensional grid with equidistant spacing for the variables $x_{targ}$, $y_{targ}$, $1/R$, $R_{x}/R$ and $R_{y}/R$. For a given magnetic field strength of the ALADIN magnet, the intersection ($x_{music}$, $y_{music}$) of each track with the reference plane of the MUSIC detector and its angle ($m_x$, $m_y$) relative to this plane as well as the path length to this point are determined. The bending plane of the magnet is the horizontal x-z-plane, i.e. the main component of the magnetic field points in the direction of the y-axis, although the fringe fields cannot be neglected, especially if the full geometric acceptance is used. Since a large range in $N/Z$-ratios (0.7 - 1.5) and emission angles has to be covered, only 40% of the grid points correspond to trajectories which reach the reference plane behind the magnet; all others end at the wall of the magnet chamber where they are lost. For the successful tracks, the three components of the rigidity vector together with the path length are fitted as the product of one-dimensional functions of five variables: the position ($x_{music}$, $y_{music}$), the angle $m_x$ and the target position ($x_{targ}$, $y_{targ}$). The fit is done by means of an expansion in series of Chebychev polynomials for each variable. For a magnet which has virtually a dipole field, the most relevant terms are linear in $x_{music}$ and $m_x$ for $1/R$, $R_{x}/R$ and the path length, and linear in $y_{music}$ for $R_{y}/R$, but for an accuracy of the momentum reconstruction on the percent level, higher-order terms cannot be ignored.
Under the assumption of an expansion up to third order, approximately 1000 individual contributions have to be calculated, which is not feasible. However, a particular term can be estimated by the size of the related expansion coefficient, since Chebychev polynomials are orthogonal within the interval from -1 to 1, and at the same time all their minima and maxima within this interval have the values -1 and 1, respectively. ( Strictly mathematically speaking, this is not correct. Among other conditions, the orthogonality relations can only be used if the full parameter space is covered. This is not the case, since not all of the tracks reach the reference plane. ) In a second step, small terms are gradually suppressed until the $\chi ^2$ of the fit has increased by 10%, thus reducing the total number from between 400 and 1000 in the first step ( depending on the highest order taken into account ) to between 25 and 40 ( depending on the variable ). The fitting procedure is then repeated using only the remaining relevant terms which leads to slightly different expansion coefficients in the final result. Once this fitting procedure has been performed for each setting of the magnetic field used during the experiment, the reconstruction of the rigidity vector and of the path length is reduced to the evaluation of a set of polynomials. If a reasonable quality of the reconstruction can be achieved, this is a justification for the somewhat heuristic method to select the relevant contributions. The accuracy can easily be determined by calculating tracks with random start values – i.e. with starting coordinates at the target and rigidities not identical to the starting parameters used for the fitting procedure. The reconstruction is done for these tracks by evaluating the fit functions, and the input values are compared to the reconstructed ones. The mean deviations for rigidity and path length are a measure of the uncertainty caused by the reconstruction method itself. Clearly, the size of these deviations is on the one hand dependent on the mesh size of the grid of start values and on the other hand on the choice of the highest order taken into account for the expansion in Chebychev polynomials. Both quantities were optimized until the internal accuracy for all variables was better than 0.1% FWHM within the chosen range of rigidities between 1.2 and 3.6 GeV/c. The final set of coefficients was obtained by fitting $\approx$12000 tracks with a maximum order of 4 for each polynomial and a maximum of 6 for the sum of the orders within a term. A very similar procedure as described above can be used to estimate the expected errors due to the experimental resolution of the two position detectors in front and behind the magnet. A random offset of the order of the experimental uncertainties is added to the positions and slopes prior to the evaluation of the polynomials. Afterwards, the difference between the reconstructed values with and without random offsets is calculated. The mean value of these deviations is the resolution expected due to the experimental uncertainties. It was found that an uncertainty of 0.7 mrad and one of 3 mm each produce an error in the rigidity of 1%. With the time and position resolutions given in the previous section, rigidity resolutions of approximately 1.2% and 3% can be expected for beam particles and medium heavy fragments with charge $\approx $12, respectively.
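To make the reconstruction strategy concrete, the following stand-alone sketch mimics it on a toy problem: a smooth function of two (rescaled) track parameters plays the role of the GEANT-generated inverse rigidity, a truncated product expansion in Chebychev polynomials is fitted to it, small terms are pruned, and the internal accuracy is checked on tracks not used in the fit. The toy function, the pruning by a fixed coefficient threshold (instead of the 10% $\chi^2$ criterion used in the analysis) and all numerical values are illustrative only.

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb
from itertools import product

rng = np.random.default_rng(1)

# Toy stand-in for the GEANT tracking step: a smooth function mapping two
# rescaled track parameters (x_music, m_x in [-1, 1]) onto an inverse
# rigidity.  Purely illustrative, not the real spectrometer optics.
def inverse_rigidity(x, m):
    return 0.4 + 0.3 * x + 0.15 * m + 0.05 * x * m - 0.02 * x ** 3

x = rng.uniform(-1, 1, 4000)          # grid of 'successful tracks'
m = rng.uniform(-1, 1, 4000)
y = inverse_rigidity(x, m)

max_order, max_sum = 4, 6             # per-variable order and order-sum limits
terms = [(i, j) for i, j in product(range(max_order + 1), repeat=2) if i + j <= max_sum]

def design_matrix(x, m, terms):
    vx, vm = cheb.chebvander(x, max_order), cheb.chebvander(m, max_order)
    return np.column_stack([vx[:, i] * vm[:, j] for i, j in terms])

# First fit with all terms, then prune small coefficients and refit.
coef, *_ = np.linalg.lstsq(design_matrix(x, m, terms), y, rcond=None)
kept = [t for t, c in zip(terms, coef) if abs(c) > 1e-6]
coef_kept, *_ = np.linalg.lstsq(design_matrix(x, m, kept), y, rcond=None)

# Internal accuracy, evaluated on tracks not used in the fit.
xt, mt = rng.uniform(-1, 1, 1000), rng.uniform(-1, 1, 1000)
resid = design_matrix(xt, mt, kept) @ coef_kept - inverse_rigidity(xt, mt)
print(f"kept {len(kept)} of {len(terms)} terms, rms deviation = {resid.std():.1e}")
```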
The quality of the rigidity reconstruction can be demonstrated by the rigidity distribution of beam particles passing through the apparatus without any nuclear interaction. Within all targets used in this experiment, gold projectiles reach their equilibrium charge state, providing particles with identical momenta and charge states 77$^+$, 78$^+$, 79$^+$, i.e. with rigidities which differ by 1.3% per charge state. With the carbon target, the influence of angular straggling within the target is small and negligible compared to the experimental errors due to the position resolution. In figure \[beam\], the rigidity distribution of beam particles after passing through the carbon target is plotted versus their x-position in the MUSIC reference plane. In this representation of the data, the three charge states (  equilibrium charge state distribution of 600 MeV/u gold in carbon: 59% of the projectiles are fully stripped, 35% have a charge state of 78$^+$ and 6% of 77$^+$  [@STO91] ) are clearly visible, i.e. the rigidity resolution for heavy nuclei is approximately 1.3% FWHM which is in agreement with values expected from the resolutions of the individual detectors. Using the reconstructed values for the rigidity and path length, the charge of the particle measured by the MUSIC detector and the time of flight given by the TOF-wall, the velocity and the momentum vector can be calculated for each charged particle detected both in the MUSIC and the TOF-wall, i.e. for particles with a charge of eight and above. The knowledge of velocity and momentum allows the calculation of the particle’s mass. In figure \[masses\], the mass spectra for the reactions Au+Al and Au+Cu are shown. Single mass resolution for charges up to 12 is obtained, corresponding to a mass resolution $\Delta$A/A of approximately 4.0% FWHM for light fragments. The dominant contribution to the uncertainty of the mass measurement $$\frac{\Delta A}{A} = \sqrt{ \left( \frac{ \Delta R}{R} \right) ^{2} + \left( \gamma^{2} \cdot \frac{ \Delta TOF}{TOF} \right) ^{2} } \quad .$$ is caused by the mass dependent error of the time measurement which is amplified by the factor $\gamma^2$ ( $\gamma^2$=2.6 for 600 MeV/nucleon ). From this, a rigidity resolution of 2.4% FWHM can be deduced for light fragments. Data {#SEC3} ==== The breakup dynamics of multifragmenting spectator matter will be reflected in the momenta of the fragments produced. Especially observables combining the kinematic information of two or more particles – e.g. relative velocities – are governed by the long-range Coulomb repulsion and are therefore sensitive to time scales of the decay and spatial properties of the decaying source. Clearly, the breakup pattern will change with increasing excitation energy transferred to the spectator matter. From the analysis of reactions of gold projectiles with different targets and beam energies between 400 and 1000 MeV/nucleon it is well established [@HUB91; @SCH96; @POC95] that the quantity $Z_{bound}$ – defined as the sum of the charges of all particles with charge two and above, which are emitted from the projectile spectator and detected in the TOF-wall – reflects directly the size of the spectator as well as the excitation energy transferred to the spectator nucleus. It was furthermore shown that the mean number of fragments produced in a reaction as well as other observables characterizing the populated partition space were independent of the target used, if they were investigated as a function of the quantity $Z_{bound}$. 
$Z_{bound}$ is therefore used as a sorting parameter describing the violence of the reaction. As was discussed earlier, momenta and masses could only be reconstructed for particles with a charge $Z$ of eight and above. It will be shown in the next section that events with two and more large fragments with $Z \geq 8$ cover the $Z_{bound}$ range from 30 to 70. The maximum mean number of intermediate mass fragments – defined as fragments with charges between 3 and 30 – is observed for a $Z_{bound}$ value of approximately 40. The dataset available covers therefore the range from peripheral collisions up to the region of maximum fragment production. Characterization of two- and three-particle events {#SEC31} -------------------------------------------------- To show the characteristics of the event classes with two and three heavy particles with charge $Z \geq 8$, their reaction cross sections $d\sigma /d Z_{bound}$ are plotted in the upper panel of figure \[wqternary\] for the four different targets as a function of $Z_{bound}$. In the following, events with two (three) fragments with charge $Z \geq 8$ are called binary (ternary). Binary events attributed to binary fission were excluded by the condition that either the lighter fragment is of charge below 20 or the sum of the two charges is smaller than 60. In the $Z_1 Z_2$-plane, this region is well separated from the region of binary fission [@RUB96]. For comparison, the inclusive reaction cross sections – i.e. without conditions on fragment multiplicity and charge – are also shown: The binary and ternary events represent approximately 10% and 1% of the nuclear reaction cross section, respectively. In order to demonstrate that binary fission events as defined above populate an impact parameter region different from that of binary events without fission, the cross section for binary fission in the reaction Au+C is included. These events are obviously produced in very peripheral reactions. It had been shown earlier [@KRE93] that multifragment events evolve – with decreasing $Z_{bound}$ – from events with one heavy residue in the exit channel of the reaction and not from binary fission events. The sum of the charges of the two and three fragments is plotted in the lower panels. In ternary events, typically 80% of $Z_{bound}$ is contained in the charges of the three heavy fragments with average charges and masses of $\langle Z_i\rangle$ = 22, 13, 10 and $\langle A_i\rangle$ = 48, 29, 20, i=1,2,3. In binary events, the sum of the charges of the heavy fragments accounts on average for 75% of Z$_{bound}$ with a clear minimum at Z$_{bound}$=40, where the maximum mean number of intermediate mass fragments is observed. The average charges and masses for this event class are $\langle Z_i\rangle$ = 26, 13 and $\langle A_i\rangle$ = 57, 27, i=1,2. It will now be demonstrated that these two event classes are representative subsets of the experimental data, i.e. that for a given $Z_{bound}$ value no evidence for a strong dependence on the number of heavy fragments is found. This means that other quantities defining an event do not show a close correlation between their mean values and the multiplicity of the heavy fragments if analyzed according to $Z_{bound}$. Evidently, only observables can be used for this investigation which are not dominated by autocorrelations. The multiplicity of intermediate mass fragments for instance contains the number of all heavy fragments with a charge smaller than 30. 
The mean multiplicity of IMFs is therefore influenced by the selection criterion and will be significantly different for events with different numbers of heavy fragments in the exit channel. The mean number $\langle M_{lp}\rangle$ of light particles from the mid-rapidity zone of the reaction which were detected in the hodoscope is a quantity which is certainly dependent on the violence of the reaction but independent of the specific decay channels of the excited projectile spectator. In figure \[multlp\], the inclusive distributions of $\langle M_{lp}\rangle$ versus $Z_{bound}$ for the four targets are shown together with the distributions for events with two and three heavy particles. In agreement with the participant-spectator model, the size of the interaction zone – represented by the mean number of light particles – increases with decreasing size of the projectile spectator. The distributions are independent of the multiplicity of heavy projectile fragments with the exception of the most peripheral reactions ( $Z_{bound} \geq$ 65 ). In this range of largest impact parameters the inclusive data are dominated by spallation and not by multifragmentation events. There, the restriction to events with two or three heavy particles in the exit channel is synonymous with the selection of events with higher mean energy. The transversal deflection of the decaying projectile spectator is another quantity which is not influenced by autocorrelations with regard to the decay pattern. Since in events with two or three heavy particles in the exit channel the heavy particles contain typically 75-80% of $Z_{bound}$, the center of mass of these particles is in good approximation the center of mass of the decaying system. Thus, the transversal velocity $$\beta _{trans} = \sqrt{\beta _{x}^{2} + \beta _{y}^{2}}$$ of the center of mass of the two or three particles with respect to the beam frame was calculated. In figure \[betaparperp\], the mean values of this velocity as a function of $Z_{bound}$ are compared for events with two and three heavy fragments in the exit channel. In agreement with inclusive measurements at 400 MeV/nucleon [@KUN94], $ \beta _{trans}$ increases monotonically with decreasing $Z_{bound}$ and establishes the transversal deflection of the projectile spectator and therefore the transversal momentum transfer (bounce) as a measure of the deposition of excitation energy into the spectator matter. Pure Coulomb interaction during a grazing collision would lead to very small values for the bounce between $5 \cdot 10^{-4}$c and $ 4 \cdot 10^{-3}$c for C and Pb, respectively. But due to the trigger condition demanding at least one light particle detected in the hodoscope and therefore a nuclear reaction, the bounce does not vanish for $Z_{bound} = 80$. The increasing Coulomb repulsion with increasing charge of the target nucleus is nevertheless reflected in the small target dependence. Within the experimental errors, the transversal velocity at a given $Z_{bound}$ is independent of the two decay patterns studied. The two quantities $\langle M_{lp}\rangle$ and $ \beta _{trans}$ describe properties related to the initial reaction phase – the size of the fireball and the excitation energy transferred to the spectator matter.
The fact that these quantities are independent of a specific choice of the multiplicity of heavy fragments demonstrates that a restriction to the subset of events, defined by the detection threshold of the MUSIC detector, does not select a non-typical sample of the produced projectile spectators. Two- and three-particle observables {#SEC32} ----------------------------------- From the measured momenta of the heavy fragments the intrinsic momenta $\vec{p}_{cm}(i)$ and velocities $\vec{v}_{cm}(i)$ in the center of mass frame (CM-frame) of the binary or ternary heavy fragment system were determined. This has primarily the advantage of eliminating the projectile velocity from the analysis. Furthermore, it reduces the influence of directed collective motion on the momenta of the particles. This is especially important if the data are to be compared to calculations with models which do not include linear collective motions. By construction, these momenta are collinear in the case of two-particle and coplanar in the case of three-particle events. For the further analysis, a new coordinate system has been chosen such that for each event the momentum vectors lie in the same plane – the xy-plane – and that the direction of the heaviest particle coincides with the x-axis. This eliminates the three Euler angles which describe the spatial orientation of the momenta relative to the beam axis. The kinematics of the two and three heavy fragments is thus reduced to one ( $p_{x}(1)$ ) and three ( $p_{x}(1)$, $p_{y}(2)$, $p_{x}(2)-p_{x}(3)$ ) parameters, respectively. The relative kinematics of the fragments can thus uniquely be expressed in terms of one and three independent quantities which, for the analysis presented in this paper, are chosen as follows: (i) the total kinetic energy $E_3$ of the fragments in the CM-frame, (ii) the reduced relative velocity $v_{red}(2,3)$, and (iii) a quantity $\Omega_{\Delta}$ which describes the event shape in velocity space. In the case of only two heavy particles, the kinetic energy $E_2$ alone is sufficient to describe the decay dynamics. The sum $E_3$ of the kinetic energies of the three particles is calculated in their CM-frame $$E_3 = \sum_{i=1}^3 \frac{p^2_{cm}(i)}{2\cdot m_0 \cdot A_i} \quad , \label{EQ_2}$$ where $m_0$=931.5 MeV/c$^2$ is the atomic mass unit and $A_i$ the mass number of the fragment $i$. The kinetic energy of the particles is dominated by the Coulomb interaction which itself is strongly dependent on the charges involved. The mean value $\langle E_3 \rangle $ is therefore studied together with the standard deviation $\sigma_3$ of the $E_3$-distribution as a function of the nominal Coulomb-repulsion $E_c$ of the fragments at the time of the breakup, i.e. as a function of the Coulomb potential of three touching spheres with radii $R_i$ = 1.4 $\cdot$ $A_i^{1/3}$: $$E_{c} = e^2 \cdot \sum_{i<j} \frac{Z_i \cdot Z_j} {1.4\cdot (A_i^{1/3}+A_j^{1/3})} \quad . \label{EQ_3}$$ This is a generalization of the well-known Viola formula [@VIO63]. For events with only two heavy fragments the kinetic energy and the Coulomb repulsion are calculated accordingly. The experimental results are plotted in figure \[ekin3\] for the four targets used. Within the statistical uncertainties, no target dependence is apparent. In all further plots, mean values of the kinetic energy and of the width of the energy distribution for the combined data of all four targets will therefore be shown.
$\langle E_2 \rangle$, $\langle E_3 \rangle $, and $\sigma_2$, $\sigma_3$ depend linearly on $E_{c}$ and are parameterized in terms of straight line fits ($y=m \cdot x+b$) common to the data of all four targets. The slopes and intercepts of these fits are listed in the following table:

  --------------------   ----------------------   ----------------------
                          binary events            ternary events
  m$_E$                   0.43 $\pm$ 0.05          0.37 $\pm$ 0.04
  b$_E$ (MeV)             39.0 $\pm$ 4.0           76.0 $\pm$ 5.0
  m$_{\sigma}$            0.0 $\pm$ 0.05           -0.07 $\pm$ 0.01
  b$_{\sigma}$ (MeV)      28.0 $\pm$ 3.0           44.0 $\pm$ 4.0
  --------------------   ----------------------   ----------------------

The parameters $b_E$ and $b_{\sigma}$ describe the mean energies and their variations in the limit of $E_{c}$=0, i.e. without Coulomb interaction, both for binary and ternary events. Under the assumption of a purely thermal source with a temperature $T$ and without Coulomb interaction the mean values $\langle E_2 \rangle $ and $\langle E_3 \rangle $ of the kinetic energy distributions are $2T$ with a width $\sigma_2$ of $\sqrt{2} T$ and $4T$ with a width $\sigma_3$ of $2T$ in case of surface emission and $3/2 \cdot T$ with a width $\sigma_2$ of $\sqrt{3/2} T$ and $3T$ with a $\sigma_3$ of $\sqrt{3} T$ in case of volume emission of the fragments. For both breakup scenarios, the temperatures deduced from these relations are, within the experimental errors, identical for binary and ternary events. The assumption of volume emission leads to a temperature of 25 MeV whereas the value for surface emission is 20 MeV. Results obtained in the reaction Au + Au at 1000 MeV/nucleon where kinetic temperatures were extracted from the energy spectra of light charged particles up to $^4$He emitted from the target spectator [@XI97] and temperatures extracted from transverse momentum distributions at 600 MeV/u [@SCH96] are of similar size ( 15-20 MeV ). In line with previous studies [@KIM92], the reduced relative velocity is defined as $$v_{red}(i,j)=\frac{v_{rel}(i,j)}{\sqrt{Z_i+Z_j}} \quad , \label{EQ_1}$$ where $v_{rel}(i,j)$ is the relative velocity of particles $i$ and $j$ and $Z_i$ and $Z_j$ are the corresponding charges of the fragments. With this definition, the mutual Coulomb repulsion within a fragment pair is charge independent. For ternary events, the reduced relative velocity of the second and third largest fragment is calculated. Its mean experimental value, averaged over all targets, is 0.0206$\cdot$c $\pm$ 0.0005$\cdot$c. This value will be used later on to adjust the input parameters of model calculations. The third quantity $\Omega_{\Delta}$ characterizes the configuration of the three velocity vectors $$\Omega_{\Delta} = \frac{ \Delta_{123} }{ \Delta_0} \quad , \label{EQ_4}$$ where $\Delta_{123}$ denotes the area of the triangle with its three sides given by the three relative velocities $\vec{v}_{rel}(1,2)$, $\vec{v}_{rel}(2,3)$, and $\vec{v}_{rel}(1,3)$. The normalization $\Delta_0$ represents the area of an equilateral triangle with a circumference of $$u = |\vec{v}_{rel}(1,2)| + |\vec{v}_{rel}(2,3)| + |\vec{v}_{rel}(1,3)| \quad ,$$ which is the largest area possible for a given circumference. Thus, $\Omega_{\Delta}$ varies between 0 and 1, where $\Omega_{\Delta}$=0 corresponds to a stretched configuration with the three relative velocities being collinear and $\Omega_{\Delta}$=1 corresponds to a situation where the three CM-velocities point to the corners of an equilateral triangle. 
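To make the definitions of $v_{red}$ and $\Omega_{\Delta}$ concrete, a minimal Python sketch is given below; the triangle area is evaluated with Heron's formula, and the velocity vectors and charges are placeholders chosen only for illustration, not measured values.

```python
import numpy as np

def reduced_relative_velocity(v_i, v_j, z_i, z_j):
    """Reduced relative velocity v_red(i,j) = |v_i - v_j| / sqrt(Z_i + Z_j)."""
    return np.linalg.norm(np.subtract(v_i, v_j)) / np.sqrt(z_i + z_j)

def omega_delta(v1, v2, v3):
    """Event-shape variable Omega_Delta.

    The triangle sides are the three relative velocities; the area is
    normalized to that of an equilateral triangle with the same
    circumference, so 0 (collinear) <= Omega_Delta <= 1 (equilateral)."""
    a = np.linalg.norm(np.subtract(v1, v2))
    b = np.linalg.norm(np.subtract(v2, v3))
    c = np.linalg.norm(np.subtract(v1, v3))
    s = 0.5 * (a + b + c)                                       # half circumference
    area = np.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))   # Heron's formula
    area_equilateral = np.sqrt(3.0) / 4.0 * (2.0 * s / 3.0) ** 2
    return area / area_equilateral

# Placeholder CM velocities (units of c) of the three heavy fragments.
v = [np.array([0.045, 0.010, 0.0]),
     np.array([-0.035, 0.040, 0.0]),
     np.array([-0.010, -0.050, 0.0])]
print(f"Omega_Delta = {omega_delta(*v):.2f}")
print(f"v_red(2,3)  = {reduced_relative_velocity(v[1], v[2], 15, 10):.4f} c")
```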
The normalized experimental distributions of the reduced area $\Omega_{\Delta}$ are shown in figure \[omega\] for the four targets: the probability to find an equilateral velocity configuration is two orders of magnitude larger than that for a stretched one. Within the statistical errors, the distributions are independent of the target; therefore, the mean value averaged over all four targets was determined to increase the statistics, especially for small values of $\Omega_{\Delta}$. In order to address the question of possible correlations between the event shape and the charges of the fragments, the average charges $\langle Z_i \rangle $ ($i$=1,2,3) of the three fragments ordered according to their sizes are studied as a function of $\Omega_{\Delta}$ for the combined data of all targets. The results are shown in figure \[z123\]. Within the statistics, the average charges are independent of $\Omega_{\Delta}$, indicating that the probability distribution of $\Omega_{\Delta}$ is not a trivial consequence of the charge distribution or the spectator size. Sensitivity of the three-particle variables {#SEC33} ------------------------------------------- In order to illustrate the potential sensitivity of the chosen observables, calculations with the schematic SOS-code [@LOP92] were performed. This code was especially developed to study the influence of two extreme breakup mechanisms on experimentally observable kinetic quantities, using in both cases a nuclear system of a given size and excitation energy and identical multifragment channels. It produces multifragment events with two sets of momentum distributions, simulating for each event on the one hand a sequence of binary decays and on the other hand a simultaneous breakup using the final partition of the sequential decay chain and placing the fragments randomly but without overlap in a sphere. For this investigation, masses and excitation energies of the decaying spectator nuclei were chosen according to [@BAO93] where the authors adjusted the input parameters of a statistical fragmentation model (Berlin model) until the relation between $\langle M_{imf}\rangle$ and $Z_{bound}$ was well reproduced for the system Au + Cu at 600 MeV/nucleon. Since the main motivation of the calculations using the SOS code was to illustrate the potential usefulness of the presented observables and not to describe the dynamical aspects of the data, no further attempt was made to optimize the input parameters of the code. The standard built-in parameters [@LOP92] were used, especially a density for the simultaneous breakup scenario of one half of normal nuclear density, which is much larger than the values extracted from statistical multifragmentation models ( $\rho/ \rho_0$ = 0.3 in the Copenhagen and the Moscow code and 0.135 in the MCFRAG code ). If the sensitivity of the chosen observables is to be tested, it is – however – important that the simulations provide a sample of Monte Carlo data which matches – with respect to the fragment composition – the experimental data. This is demonstrated in figure \[z123\], where for both breakup scenarios the mean charges $\langle Z_i \rangle$ in ternary events, ordered according to their sizes, are compared to the experimental data. The large fluctuations for the simultaneous breakup scenario are due to the fact that only very few events with small $\Omega_{\Delta}$ values are produced ( see next figure ). 
In figure \[omegasos\], the probability distribution of the quantity $\Omega_{\Delta}$ is shown for both breakup scenarios and the experimental data. As a reference, the $\Omega_{\Delta}$-distribution for a thermal system containing three non-interacting fragments is included. For the simultaneous breakup, the probability of stretched velocity configurations – i.e. small $\Omega_{\Delta}$ – is significantly smaller than for a purely sequential decay process and for the limit of a thermal system. This difference was to be expected, since the repulsive mutual Coulomb interaction shifts initially stretched velocity configurations to larger values of $\Omega_{\Delta}$. The influence of the Coulomb interaction is especially strong for the relatively small radius used in this simulation, which is already an indication that smaller densities will lead to a better description of the experimental data. Only because of this repulsion is the velocity configuration an image of the breakup configuration in coordinate space. Any thermal motion – i.e. any motion which is independent of the relative positions of the fragments – reduces this correlation. For realistic input parameters of the decaying system (see section \[SEC4\]), the correlation coefficient $r(\Omega_{\Delta}, X_{\Delta})$ between $\Omega_{\Delta}$ and the equivalent quantity in the coordinate space $X_{\Delta}$ $$r(\Omega_{\Delta}, X_{\Delta}) = \frac{\langle \Omega_{\Delta} \cdot X_{\Delta} \rangle - \langle \Omega_{\Delta} \rangle \cdot \langle X_{\Delta} \rangle} {\sigma(\Omega_{\Delta}) \cdot \sigma(X_{\Delta})}$$ has values of approximately 0.1. ( Note that even in the case of $T=0$ and three identical charges this coefficient does not reach the value 1.0 since the relation between the distance of two charged particles and their relative momentum due to the Coulomb-repulsion is not linear. ) If – on the other hand – a self-similar radial flow dominates the momentum distribution, $r(\Omega_{\Delta}, X_{\Delta})$ can reach values around 0.3. In figure \[pvrel23sos\], the probability distribution of the reduced relative velocity $v_{red}(2,3)$ between the second and third largest fragments is shown, again both for the data and the SOS calculations. The two scenarios predict significantly different relative velocity distributions which in both cases differ clearly from the data. In particular, the sequential calculations (solid histogram) show a pronounced peak at $v_{red}(2,3) =0.012~c$. This structure originates from the direct splitting of an intermediate state into the observed fragments 2 and 3 at a rather late stage of the decay sequence. The absence of this structure in the data may therefore signal either a smearing of the relative velocity between the final fragments 2 and 3 by decays following the splitting into the primordial second and third largest fragments, or proximity effects caused by the presence of other particles, or a different decay mechanism which does not produce fragments 2 and 3 via a binary splitting. The results presented in figures \[omegasos\] and \[pvrel23sos\] suggest that the quantities chosen to describe the dynamics of the multifragment events are sensitive to important characteristics of the decay process. In the following chapter, the experimental results will be compared to calculations with statistical multifragmentation models and classical three-body calculations in order to limit the parameter space of the break-up scenario. 
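The correlation coefficient $r(\Omega_{\Delta}, X_{\Delta})$ quoted above is an ordinary Pearson coefficient; purely as an illustration, the following Python sketch evaluates it on synthetic, weakly correlated shape variables (the generated numbers are placeholders and are not related to the simulations discussed in the text).

```python
import numpy as np

def pearson_r(omega, x):
    """Correlation coefficient <omega*x> - <omega><x>, divided by the product
    of the standard deviations, as written in the text."""
    omega = np.asarray(omega, dtype=float)
    x = np.asarray(x, dtype=float)
    return ((omega * x).mean() - omega.mean() * x.mean()) / (omega.std() * x.std())

# Toy sample: a coordinate-space shape variable and a weakly correlated
# velocity-space shape variable (illustrative synthetic data only).
rng = np.random.default_rng(0)
x_delta = rng.uniform(0.0, 1.0, 10000)
omega_d = np.clip(0.1 * x_delta + rng.normal(0.55, 0.25, 10000), 0.0, 1.0)
print(f"r(Omega_Delta, X_Delta) = {pearson_r(omega_d, x_delta):.2f}")
```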
Comparison to model calculations {#SEC4} ================================ Statistical multifragmentation models {#SEC41} ------------------------------------- Since statistical multifragmentation models have been shown to describe the observables in the partition space of the multifragmentation process [@KRE93; @BAO93; @BAR93; @BOT92; @BOT95a; @XI97], it is the obvious next step to compare their predictions to the kinetic energy distribution obtained in the present experiment. ( It should be emphasized that the description of the partition space comprises the cross sections for binary and ternary events as defined in \[SEC31\]. ) Results are shown for the Berlin code (MCFRAG) as well as the Copenhagen and the Moscow code. A detailed description of the differences between the three models can be found in reference [@GRO94]. An extensive and detailed investigation of all dynamical observables as defined in sections \[SEC31\] and \[SEC32\] was only performed using the statistical multifragmentation code MCFRAG [@BAO93]. All three models assume an equilibrated source with a given number of nucleons $A$ at a density $\rho$ with an excitation energy $E^*$ per nucleon. This source is non-homogeneous; it consists of regions of liquid with normal nuclear density and regions of gas. To compare the calculations to the experimentally observed decay of the projectile spectator, the global parameters $A$ and $E^*$ have to be provided as a function of the impact parameter $b$. To do so, the number of nucleons of the projectile spectator was calculated within a geometrical abrasion picture for the collision Au+Au using a radius parameter of 1.3 fm. The excitation energy for a given spectator size within the three codes was then chosen according to [@BAO93; @BAR93; @BOT92]. For the nuclear density at freeze out the standard values of the models were taken, i.e. $\rho/ \rho_0$ = 0.3 for the Copenhagen and the Moscow code and 0.135 for the MCFRAG code. In figure \[input\], the size of the projectile spectator and its excitation energy are shown versus the impact parameter. It should be noted that for all three models the excitation energy necessary to describe the partition space of the multifragmentation is significantly smaller than the experimental results obtained for the reaction Au + Au at 600 MeV/nucleon using a total energy balance [@POC95]. The number of events to be produced for each interval in $b$ was chosen according to the geometrical cross section for the interval, $dP(b) \sim b\,db$. The impact parameter was varied between 0.5 and 12.0 fm in steps of 0.5 fm. For the MCFRAG code, the calculation of the observables was done twice: first, the output of the simulations was used directly; then, random errors on the order of the experimental uncertainties for light particles were added to the masses and momenta of the fragments before the same analysis was performed. In this way, an upper estimate of the uncertainties produced by the experimental resolution was obtained. In figure \[e3\_ec\] the mean kinetic energies $\langle E_2 \rangle$ and $\langle E_3 \rangle$ for binary and ternary events ( as defined in \[SEC31\] ) in the center of mass frame of the two or three particles and the widths of these distributions $\sigma_2$ and $\sigma_3$ are plotted versus the nominal Coulomb energy. 
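A minimal Monte Carlo sketch of the geometrical ingredients described above – the impact-parameter weighting $dP(b)\sim b\,db$ and a straight-line abrasion estimate of the spectator size – is given in Python below. It is an illustration only: the sample sizes, random seed and the sharp sphere-cylinder overlap criterion are simplifying assumptions and not the prescription used for the actual calculations.

```python
import numpy as np

rng = np.random.default_rng(42)
A_AU, R0 = 197, 1.3                       # radius parameter quoted in the text
R = R0 * A_AU ** (1.0 / 3.0)              # Au radius in fm

def spectator_mass(b, n_sample=20000):
    """Crude geometric abrasion estimate for Au+Au: a projectile nucleon is a
    spectator if, in the transverse plane, it lies outside the cylinder swept
    by the target nucleus (beam along z, target offset by b along x)."""
    # Uniform points in the projectile sphere (rejection sampling from a cube).
    pts = rng.uniform(-R, R, size=(3 * n_sample, 3))
    pts = pts[np.einsum('ij,ij->i', pts, pts) < R * R][:n_sample]
    d_perp = np.hypot(pts[:, 0] - b, pts[:, 1])
    return A_AU * np.mean(d_perp > R)

# Impact parameters drawn with dP(b) ~ b db up to b_max = 12 fm.
b_values = 12.0 * np.sqrt(rng.uniform(size=5))
for b in sorted(b_values):
    print(f"b = {b:5.2f} fm  ->  A_spectator ~ {spectator_mass(b):6.1f}")
```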
In the case of the MCFRAG code, the results including the experimental resolution are shown; for the two other sets of simulations, the uncertainties due to the experimental errors were added quadratically to the intrinsic widths of the energy distributions. The mean kinetic energy $\langle E_3 \rangle$ is reasonably well described by all models, although small differences arise: over the whole range of $E_{c}$, the calculation using the Copenhagen model is steeper than the experimental distribution, and the agreement is therefore worse than for the two other models. The overall agreement of data and the three sets of calculations – independent of internal details in the theoretical treatment of the fragmentation process – and the simultaneous description of $\langle E_2 \rangle$ and $\langle E_3 \rangle$ by the MCFRAG code are nevertheless a confirmation of the expansion of the nuclear matter prior to its decay. The width of the energy distributions, on the other hand, is underestimated by almost a factor of two, both for events with two and with three heavy fragments in the exit channel. In spite of deviations between the three sets of calculations, the inadequate description of $\sigma_3$ is a generic problem of all three statistical multifragmentation models. Using the MCFRAG code, it was verified that this underprediction of $\sigma_3$ cannot be compensated for by reasonable fluctuations of the initial excitation energy of a given spectator: combining the events from three sets of calculations with 0.9, 1.0 and 1.1 times $E^*(A_0)$ does not change the width of the energy distribution. This variation of the excitation energy corresponds, within the relevant range of spectator sizes, approximately to the width of the energy distribution used in [@BOT95a] to describe the experimental charge distributions. In figure \[omegastat\] the probability distribution for the quantity $\Omega_{\Delta}$ is plotted both for the data and the calculations with the MCFRAG code. The calculated distribution is significantly steeper than the experimental one. On the other hand, it is less steep than the result of the SOS-calculation for a simultaneous breakup presented in figure \[omegasos\]. Since in both cases the excitation energy transferred to the spectator matter of a given size is identical and the breakup pattern is on average very similar, any differences in the velocity distributions are caused by the different radii of the breakup volume. This will result in different contributions from the Coulomb interaction and – more importantly – in different spatial breakup configurations. On average, an elongated structure will result in a smaller value of $\Omega_{\Delta}$ than a more compact one. If, however, the volume is very small, as in the case of the SOS-calculations, elongated configurations are less likely. The probability distribution of $\Omega_{\Delta}$ is therefore expected to be steeper than for the more dilute system used for the MCFRAG-calculations. The influence of angular momentum {#SEC43} --------------------------------- The simulations presented in the previous section showed that the experimental energy distributions cannot be explained within a purely thermal description of the nuclear matter if the temperature is adjusted to reproduce the charge distributions. It was shown earlier that the coupling of random and collective motion increases the fluctuations of the kinetic energy [@MIL93]. As an additional degree of freedom, angular momentum was therefore taken into account. 
It is well known from the study of fission and compound nuclei at lower energies that in heavy-ion reactions very large angular momenta can be transferred, causing a collective rotation of the excited matter. INC-calculations at 100 and 200 MeV/nucleon show that the mean angular momentum per nucleon transferred can be as large as 0.75$\hbar$, but even more important than the mean values are the huge angular momentum fluctuations which may reach 0.5$\hbar$ per nucleon FWHM [@BLA92]. The influence of angular momentum on the decay pattern of nuclear matter within the framework of statistical fragmentation models has only barely been studied so far. Calculations with the MCFRAG model were done using a version of the code where the treatment of angular momentum was implemented in a fully microcanonical way [@BOT95]. The impact parameter was again varied between 3.0 and 12.0 fm in steps of 0.5 fm ( below 5 fm, no events with three heavy fragments are produced ) and a total number of 570000 events for each set of simulations was produced. In this implementation, the rotational degrees of freedom are assumed to be completely thermalized and the contribution of the intrinsic rotation of the produced fragments to the total angular momentum is neglected. This is supposed to be a good approximation for expanded systems at the time of freeze out, since the main part of the angular momentum is contained in the orbital motion of the fragments around the common center of mass. Calculations were performed for three nuclear densities, 0.055$\rho_0$, 0.080$\rho_0$, and 0.135$\rho_0$, using the relations between impact parameter, system size and excitation energy which were already shown in figure \[input\], and a mean angular momentum $\langle L \rangle$ of 0.75$\hbar$A. The angular momentum transfer was distributed according to $$P(L) = \frac{L}{0.5 \cdot \langle L \rangle} \cdot \exp \left( \frac{-L}{0.5 \cdot \langle L \rangle} \right) \quad .$$ In reference [@BOT95], it was already shown that simulations with this angular momentum distribution together with a nuclear density of 0.08$\rho_0$ describe simultaneously the quantities $\langle E_3 \rangle$ and $\sigma_3$. The results, again including the influence of the experimental uncertainties, are shown in figure \[ekin3ang1\] for the three densities listed above. As expected, the mean kinetic energy as well as the width of the energy distribution increases with increasing nuclear density. Due to the fact that the mean rotational energy is not very large, the incorporation of angular momentum does not change $\langle E_3 \rangle$ very much, as a comparison to figure \[e3\_ec\] demonstrates, but the large variation of angular momenta produces nonthermal fluctuations which increase the value of $\sigma_3$ significantly, resulting in a good description of both $\langle E_3 \rangle$ and $\sigma_3$ for densities between 0.055$\rho_0$ and 0.080$\rho_0$. At the same time, the quantity $\Omega_{\Delta}$ is much better described, as is shown in figure \[omegaang1\] where the probability distribution of $\Omega_{\Delta}$ is plotted for the three densities. Independent of the nuclear density chosen, the probability for the occurrence of stretched configurations of the three velocity vectors is enhanced. 
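Since $P(L)$ as written is, up to normalization, a Gamma distribution with shape parameter 2 and scale $0.5\langle L \rangle$ (so that its mean is indeed $\langle L \rangle$), it can be sampled directly. The short Python sketch below does this for an illustrative spectator of 150 nucleons; the spectator size, event number and seed are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_angular_momentum(a_spec, l_per_nucleon=0.75, n_events=100000):
    """Draw spectator angular momenta (in units of hbar) from
    P(L) ~ L * exp(-L / (0.5 <L>)), i.e. Gamma(shape=2, scale=0.5 <L>),
    with <L> = 0.75 hbar per nucleon times the spectator mass number."""
    l_mean = l_per_nucleon * a_spec
    return rng.gamma(shape=2.0, scale=0.5 * l_mean, size=n_events)

L = sample_angular_momentum(a_spec=150)
print(f"<L>  = {L.mean():6.1f} hbar   (expected {0.75 * 150:.1f} hbar)")
print(f"sigma_L per nucleon = {L.std() / 150:.2f} hbar")
```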
To check whether the decay pattern is changed by the angular momentum, observables which were used in earlier papers [@KRE93; @SCH96] to describe the charge partition space of the reaction were investigated: The mean values of the asymmetries $$a_{12} =\frac{Z_1 - Z_2}{Z_1 + Z_2} \qquad \mbox{and} \qquad a_{23} =\frac{Z_2 - Z_3}{Z_2 + Z_3} \quad ,$$ the mean number of intermediate mass fragments $M_{imf}$ and the average charge of the largest fragment $Z_{max}$ are calculated as a function of $Z_{bound}$. In figure \[partition1\], the results are shown for simulations with and without angular momentum together with the experimental data. Whereas the mean number of intermediate mass fragments $\langle M_{imf}\rangle$ does not change very much under the influence of angular momentum, this is not true for the details of the decay pattern of the spectator: The mean asymmetry $\langle a_{12}\rangle$ between the charges of the largest and the second largest fragment decreases dramatically for values of $Z_{bound}$ above 50, which means that the two fragments become more comparable in size. As a consequence, the mean charge of the largest fragment $\langle Z_{max}\rangle$ within an event also decreases. At the same time, the mean asymmetry between the charges of the second and the third largest fragment $\langle a_{23}\rangle$ increases, which means that in the presence of angular momentum the charge of the spectator is more evenly divided between the two largest fragments. The changes in the breakup pattern are more pronounced for a small freeze out density. These results are in qualitative agreement with the investigations presented by Botvina and Gross [@BOT95], where the size of the largest fragment and the relative size of the two largest fragments was studied under the influence of different amounts of angular momentum. From the calculations presented above it is obvious that large angular momenta per nucleon destroy the agreement between the results of the statistical multifragmentation code and the data as far as the partition pattern of the spectator matter is concerned. This is especially true for large values of $Z_{bound}$, i.e. for peripheral collisions. On the other hand, it was shown that the additional degree of freedom increased the fluctuations of the kinetic energy by a substantial amount. The question therefore arises whether a better overall agreement can be achieved if the transfer of angular momentum per nucleon to the system is reduced for large impact parameters. If the peripheral reactions are treated in the abrasion-ablation picture applying the formalism described in [@GAI91], values for the angular momentum transfer are obtained which are smaller than the value of 0.75$\hbar$/nucleon by a factor of 5 to 10. These numbers, together with a density of 0.135$\rho_0$, result in a reasonable description of the partition, but the energy fluctuations are again underestimated. The mean values of the asymmetries $a_{12}$ and $a_{23}$ might suggest that this can be compensated by an increase of the nuclear density at breakup. Unfortunately, this is in contradiction with the description of the quantity $\langle M_{imf}\rangle$. The probability of finding large values of $\langle M_{imf}\rangle$ for the $Z_{bound}$ range between 40 and 70 decreases with increasing density. As the mean multiplicity of IMFs is already too small, a further increase of the density would make the deviations even worse. 
This leaves no room for a parametrization of angular momentum transfer and density which fits both aspects of the experimental data. The charge partition space and the dynamics of multifragmentation events cannot be described simultaneously by the statistical multifragmentation model even if angular momentum as an additional degree of freedom and therefore as a potential source for fluctuations is taken into account. The conclusions drawn in this section are valid only for a nuclear system where all degrees of freedom are completely thermalized. If this is not true, i.e. if the time scale for the equilibration of the rotational degrees of freedom is large compared to that of the thermalization of the excitation energy, the process of fragmentation is decoupled from the angular momentum transfer. In this case, the amount of angular momentum transferred to the spectator does not influence the partition space of the reaction; it only contributes to the final momentum distribution of the fragments. Therefore, density and excitation energy on the one hand and angular momentum on the other hand can be adjusted independently and a reasonable agreement with the experimental data can be achieved. This approach has been adopted by the Multics/Miniball group [@DAG97]. It has to be stated, though, that with this modification the fragmentation process is not treated in a purely microcanonical picture any longer. Classical three-body calculations {#SEC42} --------------------------------- A collective radial motion of all constituents of the spectator is another conceivable source of fluctuations of the kinetic energy. If the nuclear matter is compressed in the initial stage of the reaction, an additional nonequilibrated collective contribution to the motion of the nuclear matter will be present [@HOF76; @POG95]. Even though this effect is expected to be small in the peripheral collisions discussed in this paper, values for the radial flow energy up to 1.5 MeV cannot be ruled out [@SCH96]. First attempts have been made to include collective radial flow in statistical models [@SUB96], but a consistent implementation is not yet available. Therefore, classical three-body calculations were performed to get a quantitative estimate for the influence of collective flow. The simultaneous emission out of a given volume is modeled in the following way: The centers of three non-overlapping fragments with a radius of $ 1.2 \cdot A^{1/3}$ are distributed randomly within a sphere of radius $R$. To each fragment, an isotropically distributed initial velocity is assigned. Constrained by momentum conservation, these velocities were selected according to a probability distribution for the relative kinetic energy $$P(E) \sim E^{\alpha} \cdot \exp \left( - \frac{E}{T} \right)$$ with $\alpha$ equal to 0.5 or 1.0, corresponding to a volume or a surface emission of the fragments. In addition to this random motion, an initial radial flow velocity $$\vec{v}_{f,i} = \sqrt{ \frac{ 2 \cdot \epsilon_f}{m_0}} \cdot \frac{\vec{d}_{i}}{R}$$ was added to the random velocities of the thermal motion. Here, $\vec{d}_{i}$ is the position of the center of fragment $i$ with respect to the center of mass, and $\epsilon_f$ is the flow energy per nucleon for fragments located at $d_{i}$ = $R$. The charges and masses of the fragments were obtained by a Monte Carlo sampling of the experimental events, thus reducing significantly the uncertainties associated with the fragment distribution. 
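A compact Python sketch of these initial conditions is given below. It is not the code used for the trajectory calculations: the thermal part is drawn from simple Maxwell-Boltzmann momenta and recentred to zero total momentum, which only approximates the $E^{\alpha}\exp(-E/T)$ prescription, and the propagation in the mutual Coulomb and proximity fields is omitted. All numerical inputs in the example are placeholders.

```python
import numpy as np

M0 = 931.5                     # MeV/c^2 per nucleon
rng = np.random.default_rng(7)

def place_fragments(A, R, r0=1.2, max_tries=10000):
    """Random non-overlapping placement of fragment centers inside a sphere of radius R (fm)."""
    radii = r0 * np.asarray(A, dtype=float) ** (1.0 / 3.0)
    for _ in range(max_tries):
        pos = rng.uniform(-R, R, size=(len(A), 3))
        inside = np.linalg.norm(pos, axis=1) < R - radii          # fragments fully inside
        dists = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        overlap = dists < (radii[:, None] + radii[None, :])
        np.fill_diagonal(overlap, False)
        if inside.all() and not overlap.any():
            return pos
    raise RuntimeError("no non-overlapping configuration found")

def initial_velocities(A, pos, R, T, eps_flow):
    """Thermal velocities (Maxwell-Boltzmann approximation, units of c) recentred to
    zero total momentum, plus the self-similar radial flow v_f = sqrt(2*eps_f/m0)*d/R."""
    A = np.asarray(A, dtype=float)
    sigma = np.sqrt(T / (M0 * A))                        # per-component thermal spread
    v = rng.normal(size=(len(A), 3)) * sigma[:, None]
    v -= (A[:, None] * v).sum(axis=0) / A.sum()          # enforce momentum conservation
    v += np.sqrt(2.0 * eps_flow / M0) * pos / R          # add radial flow
    return v

A = [70, 34, 22]                                         # placeholder fragment masses
pos = place_fragments(A, R=22.0)
vel = initial_velocities(A, pos, R=22.0, T=10.0, eps_flow=0.5)
print("fragment speeds (units of c):", np.round(np.linalg.norm(vel, axis=1), 4))
```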
In order to account for the recoil from light particles emitted sequentially from the initial fragments ($Z_{i}^{\prime},A_{i}^{\prime}$), the measured charges $Z_i$ and masses $A_i$ were multiplied by a factor $(1-T^2/(a \cdot \Delta))^{-1}$. For this correction, a level density parameter of $a$=10 MeV was used. The quantity $\Delta$ represents the average energy removed by the emission of a nucleon. For simplicity, $ \Delta = 2T+E_s+E_b$ was assumed, where $E_s$=8 MeV and $E_b$=4 MeV are the typical separation energy and barrier height, respectively. After the interaction of the primordial fragments ($Z_{i}^{\prime},A_{i}^{\prime}$) has ceased, the sequential emission of light particles leading to the observed masses and charges ($Z_{i},A_{i}$) was assumed to take place. For each event, the temperature parameter $T$ was chosen according to the experimental value of $Z_{bound}$ from the relation $$T = f_T \cdot \sqrt{2\cdot(79-Z_{bound})} \quad ,$$ where $f_T$ is a free parameter. For $f_T$ = 1, the relation describes reasonably well, within the relevant range of $Z_{bound}$, the temperatures of the initial projectile spectators as predicted by microscopic transport calculations [@BAU88; @HUB92; @KRE93]. A value of 0.75 is in agreement with experimental results obtained by the He-Li isotope thermometer [@POC95; @XI97]. The paths of the fragments were calculated under the influence of their mutual Coulomb field and two-fragment proximity forces according to Ref. [@LOP89]. Since for the further analysis those trajectories were rejected for which the fragments overlapped during the propagation, the influence of the proximity force turned out to be rather small. In a first step, these schematic trajectory calculations were performed with input parameters corresponding on average to those of the statistical model MCFRAG, i.e. $\alpha = 0.5$, $f_T \approx$ 0.6 - 0.8, $R \approx$ 7 - 9 fm and $\epsilon_f = 0$. The results for $\langle E_3 \rangle$ and $\sigma_3$ are comparable to those of the statistical model calculations; in particular, the width $\sigma_3$ is again significantly underpredicted in this case. In order to demonstrate this, the schematic calculations for $f_T$ = 0.7, $R$ = 8 fm and $\epsilon_f$ = 0 are included in figure \[e3\_ec\]. The agreement of the classical calculations and the statistical model calculations for a similar set of external parameters is a consistency check and shows in addition that the neglect of the influence of the lighter particles produced in the reaction, i.e. the restriction of the experimental investigation to the two or three heaviest fragments, does not change the results significantly. The probability distribution of the quantity $\Omega_{\Delta}$ is also compared to the results of the statistical model calculation ( see figure \[omegastat\] ). In the next step, the quantities $R$, $\epsilon_f$ and $f_T$ were varied to fit the experimental data. In order to quantify the agreement between the simulations and the experimental observations, a reduced $\chi^2$ was calculated for each parameter set: $$\chi^2 = \frac{1}{5} \sum_{i=1}^5 \frac{ (\omega_i - \mu_i)^2} { \delta_i^2} \quad . \label{EQ_6}$$ Here, the $\omega_i$ are the four coefficients characterizing the fits to the three-particle data in figure \[ekin3\] together with, as a fifth quantity, the mean reduced velocity between the two lighter fragments shown in figure \[pvrel23sos\]. $\delta_i$ and $\mu_i$ denote the experimental uncertainties of these quantities and the corresponding model predictions, respectively. 
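The recoil correction and the temperature parametrization quoted above translate into a few lines of Python; the sketch below simply restates the numbers given in the text ($a$ = 10 MeV, $E_s$ = 8 MeV, $E_b$ = 4 MeV) for a few illustrative values of $Z_{bound}$.

```python
import numpy as np

def temperature(z_bound, f_t=1.0):
    """Temperature parameter T = f_T * sqrt(2 * (79 - Z_bound)) in MeV."""
    return f_t * np.sqrt(2.0 * (79.0 - z_bound))

def recoil_scale_factor(T, a=10.0, e_s=8.0, e_b=4.0):
    """Factor (1 - T^2 / (a * Delta))^(-1), with Delta = 2T + E_s + E_b, applied to the
    measured charges and masses to recover the primordial fragments."""
    delta = 2.0 * T + e_s + e_b
    return 1.0 / (1.0 - T**2 / (a * delta))

for zb in (40, 55, 70):
    T = temperature(zb, f_t=0.75)
    print(f"Z_bound = {zb}:  T = {T:4.1f} MeV,  primordial mass scale = {recoil_scale_factor(T):.2f}")
```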
The result is shown in figure \[classical3\]. A clear minimum of $\chi^2$ can be determined for each given flow parameter $\epsilon_f$ by varying independently the other two model parameters $R$ and $f_T$. The left part of figure \[classical3\] shows in a $R$ - $f_T$ plane the contour lines with $\chi^2$ = 2 for $\epsilon_f$ = 0 ($R \approx$ 15 fm), 0.5 ($R \approx$ 22 fm) and 1 MeV ($R \approx$ 26 fm) and for the two values of the exponent $\alpha$. The corresponding minima of the $\chi^2$-distribution are displayed in the right panel of figure \[classical3\] as a function of $\epsilon_f$. Both for volume emission and surface emission, values for the flow parameter $\epsilon_f$ larger than 1 MeV are ruled out, whereas the results obtained with values between 0 and 1 MeV show no significant difference in $\chi^2_{min}$. To demonstrate the quality of the parameter adjustment, the quantities $\langle E_3 \rangle $, $\sigma_3$ and v$_{red}(2,3)$ are shown in figure \[classic\] for the parameter set $R$ = 22 fm, $\epsilon_f$ = 0.5 MeV and $f_T$ = 1.2. In the lower right part of figure \[classic\], $\Omega_{\Delta}$ – which was not used in the fitting procedure – is compared to the experimental values. As expected from the results shown in figures \[omegasos\] and \[omegastat\], the probability for the existence of stretched velocity configurations increases with increasing radius of the decaying system. The $\Omega_{\Delta}$-distribution is nevertheless not directly comparable to those shown in figures \[omegasos\] and \[omegastat\]: they were obtained assuming a fixed breakup density for all decaying systems, whereas the three-body calculations assume a fixed breakup volume. This set of simulations suggests the disintegration of a highly excited and rather extended nuclear system and very low values of the flow parameter. For $Z_{bound}$=55 – the mean value for events with three heavy particles in the exit channel of the reaction – the fit values correspond to a temperature parameter of approximately 10 MeV and a density below 0.05$\rho_0$ which is much smaller than the values used for the MCFRAG calculations in order to reproduce the partition space of the reaction. In the framework of these schematic calculations the large freeze out radius is due to the balance between Coulomb energy and temperature: if a higher nuclear density is assumed, the Coulomb repulsion is much stronger and therefore requires compensation by a lower temperature parameter and a vanishing flow to describe the energy spectra. The fluctuations $\sigma_3$ of the kinetic energy, on the other hand, reflect, in addition to thermal fluctuations, also fluctuations due to the position sampling within the breakup volume. Thus, lower temperatures and especially smaller radii lead to a significant reduction of $\sigma_3$ which cannot be compensated for by the small values of radial flow consistent with the energy spectra. Conclusions {#SEC5} =========== Kinematic correlations between two and three heavy projectile fragments produced in Au induced reactions at E/A = 600 MeV have been studied. A comparison of the observables to the results of the schematic SOS-model confirms their sensitivity to the disassembly configuration. Classical trajectory calculations sampling the experimental charge distribution limit significantly the possible parameter space of the breakup scenario. Taken at their face values these simulations require highly excited and rather extended nuclear systems at the time of the breakup. 
These source parameters differ significantly from breakup parameters needed by statistical multifragmentation models in order to describe the observed fragment distributions and mean values of the kinetic energy distributions. On the other hand, these models are not able to reproduce the fluctuations of the energy distribution. Binary events not attributed to binary fission also show fluctuations of the relative kinetic energy which can only be described by the same rather high – and probably unrealistic – thermal contribution. The introduction of angular momentum into the statistical model improves the description of the energy fluctuations, but no longer allows the charge partition to be reproduced simultaneously. For any further attempt to reconcile the kinetic observables and the partition pattern of the spectator matter, two possible approaches seem conceivable: either the assumption of a global equilibrium established prior to the fragmentation process is oversimplified and has to be given up, or the statistical models have to be refined. The nuclear interaction during the breakup process, for example, is so far ignored; the interaction between the fragments is limited to the Coulomb repulsion. ( In the classical three-body trajectory calculations presented in this work, a nuclear proximity potential is included, but its influence is strongly suppressed by the requirement that the fragments do not overlap. ) One might speculate that in the case of a stronger overlap of the fragments in an earlier stage of the breakup, the nuclear attractive force between the fragments may partially compensate the Coulomb repulsion. Thus, smaller radii would not necessarily lead to an overestimation of the kinetic energies. First steps to add the nuclear interaction between the fragments to statistical decay models in a consistent manner have already been undertaken [@SAT90; @DAS93]. A recent publication suggests that the nuclear interaction is indeed relevant for excitation energies up to approximately 10 MeV/u [@DAS97]. At the same time, large fluctuations – similar to dissipative phenomena and shape fluctuations known to be important in binary fission [@GOE91] – may arise. In the long term, a quantitative understanding of fluctuations and their development during the disassembly phase clearly requires dynamical transport models which include a realistic treatment of fluctuations on a microscopic level. Significant progress in the development of microscopic transport models has been achieved during the last decade [@BUUQMD], but only recently were the first microscopic calculations published which reproduce for the ALADIN data both the multiplicity of the fragments and the slopes of their kinetic energy spectra [@GOS97]. In the framework of this model – and in line with previous studies [@BOA89; @BOA89a; @KUN91] – it is found that the decaying system is not in thermal equilibrium and that the breakup is dominated by dynamical processes. However, the fragment composition agrees with the experimental one only for a short time interval after the collision (60 fm/c) and is drastically altered during the further time evolution. Thus, a consistent description of the time evolution from the first stages of the collision via the formation of primordial excited fragments to their eventual deexcitation and formation of individual quantum states within one microscopic model is still not available. 
First attempts to take into account the quantal nature of the nuclear system are being pursued [@OHN97; @SCH97; @SCH97a] for which the present data may serve as a valuable testing ground. The authors thank D.H.E. Gross and A.S. Botvina for providing us with their statistical multifragmentation codes and for helpful discussions. This work was partly supported by the Bundesministerium für Forschung und Technologie. J.P. and M.B. acknowledge the financial support of the Deutsche Forschungsgemeinschaft under the Contract no. Po 256/2-1 and Be 1634/1-1. [1000]{} J. Hubele, P. Kreutz, J.C. Adloff, M. Begemann-Blaich, P. Bouissou, G. Imme, I. Iori, G.J. Kunde, S. Leray, V. Lindenstruth, Z. Liu, U. Lynen, R.J. Meijer, U. Milkau, A. Moroni, W.F.J. Müller, C. Ngô, C.A. Ogilvie, J. Pochodzalla, G. Raciti, G. Rudolf, H. Sann, A. Schüttauf, W. Seidel, L. Stuttge, W. Trautmann, A. Tucholski, Z. Phys. A [**340**]{}, 263 (1991). P. Kreutz, J.C. Adloff, M. Begemann-Blaich, P. Bouissou, J. Hubele, G. Imme, I. Iori, G.J. Kunde, S. Leray, V. Lindenstruth, Z. Liu, U. Lynen, R.J. Meijer, U. Milkau, A. Moroni, W.F.J. Müller, C. Ngô, C.A. Ogilvie, J. Pochodzalla, G. Raciti, G. Rudolf, H. Sann, A. Schüttauf, W. Seidel, L. Stuttge, W. Trautmann, A. Tucholski, Nucl. Phys. [**A556**]{}, 672 (1993). A. Schüttauf, W.D. Kunze, A. Wörner, M. Begemann-Blaich, Th. Blaich, D.R. Bowman, R.J. Charity, A. Cosmo, A. Ferrero, C.K. Gelbke, C. Groß, W.C. Hsi, J. Hubele, G. Immé, I. Iori, J. Kempter, P. Kreutz, G.J. Kunde, V. Lindenstruth, M.A. Lisa, W.G. Lynch, U. Lynen, M. Mang, T. Möhlenkamp, A. Moroni, W.F.J. Müller, M. Neumann, B. Ocker, C.A. Ogilvie, G.F. Peaslee, J. Pochodzalla, G. Raciti, F. Rosenberger, Th. Rubehn, H. Sann, C. Schwarz, W. Seidel, V. Serfling, L.G. Sobotka, J. Stroth, L. Stuttgé, S. Tomasevic, W. Trautmann, A. Trzcinski, M.B. Tsang, A. Tucholski, G. Verde, C.W. Williams, E. Zude, B. Zwieglinski, Nucl. Phys.  [**A607**]{}, 457 (1996). C.A. Ogilvie, J.C. Adloff, M. Begemann-Blaich, P. Bouissou, J. Hubele, G. Imme, I. Iori, P. Kreutz, G.J. Kunde, S. Leray, V. Lindenstruth, Z. Liu, U. Lynen, R.J. Mei$\!$jer, U. Milkau, W.F.J. Müller, C. Ngô, J. Pochodzalla, G. Raciti, G. Rudolf, H. Sann, A. Schüttauf, W. Seidel, L. Stuttge, W. Trautmann, A. Tucholski, Phys. Rev. Lett. [**67**]{}, 1214 (1991). Bao-An Li, A.R. DeAngelis, D.H.E. Gross, Phys. Lett. B [**303**]{}, 225 (1993). H.W. Barz, W. Bauer, J.P. Bondorf, A.S. Botvina, R. Donangelo, H. Schulz, K. Sneppen, Nucl. Phys. [**A561**]{}, 466 (1993). A.S. Botvina and I.N. Mishustin, Phys. Lett. B [**294**]{}, 23 (1992). A.S. Botvina, I.N. Mishustin, M. Begemann-Blaich, J. Hubele, G. Imme, I. Iori, P. Kreutz, G.J. Kunde, W.D. Kunze, V. Linddenstruth, U. Lynen, A. Moroni, W.F.J. Müller, C.A. Ogilvie, J. Pochodzalla, G. Raciti, Th. Rubehn, h. Sann, A. Schüttauf, W. Seidel, W. Trautmann, A. Wörner, Nucl. Phys. [**A584**]{}, 737 (1995). A.S. Botvina and D.H.E. Gross, Nucl. Phys. [**A592**]{}, 257 (1995). A.S. Botvina, A.S. Il’inov, I.N. Mishustin, Yad. Fiz. [**42**]{}, 1127 (1985) (Sov. J. Nucl. Phys. [**42**]{}, 712 (1985)). J. Bondorf, R. Donangelo, I.N. Mishustin, C.J. Pethick, H. Schulz, K. Sneppen, Nucl. Phys. [**A443**]{}, 321 (1985). D.H.E. Gross, Zhang Xiao-ze, Xu Shu-yan, Phys. Rev. Lett. [**56**]{}, 1544 (1986). Hongfei Xi, T. Odeh, R. Bassini, M. Begemann-Blaich, A.S. Botvina, S. Fritz, S.J. Gaff, C. Groß, G. Immé, I. Iori, U. Kleinevoß, G.J. Kunde, W.D. Kunze, U. Lynen, V. Maddalena, M. Mahi, T. Möhlenkamp, A. Moroni, W.F.J. Müller, C. Nociforo, B. Ocker, F. 
Petruzzelli, J. Pochodzalla, G. Raciti, G. Riccobene, F.P. Romano, Th. Rubehn, A. Saija, M. Schnittker, A. Schüttauf, C. Schwarz, W. Seidel, V. Serfling, C. Sfienti, W. Trautmann, A. Trzcinski, G. Verde, A. Wörner, B. Zwieglinski, accepted for publication in Z. Phys. A. R. Trockel, U. Lynen, J. Pochodzalla, W. Trautmann, N. Brummund, E. Eckert, R. Glasow, K.D. Hildenbrand, K.H. Kampert, W.F.J. Müller, D. Pelte, H.J. Rabe, H. Sann, R. Santo, H. Stelzer, R. Wada, Phys. Rev. Lett. [**59**]{}, 2844 (1987). D.H.E. Gross, G. Klotz-Engmann, H. Oeschler, Phys. Lett. B [**224**]{}, 29 (1989). Y.D. Kim, R.T. de Souza, D.R. Bowmann, N. Carlin, C.K. Gelbke, W.G. Gong, W.G. Lynch, L. Phair, M.B. Tsang, F. Zhu, S. Pratt, Phys. Rev. Lett. [**67**]{}, 14 (1991). Y.D. Kim, R.T. de Souza, D.R. Bowmann, N. Carlin, C.K. Gelbke, W.G. Gong, W.G. Lynch, L. Phair, M.B. Tsang, F. Zhu, Phys. Rev. C [**45**]{}, 338 (1992). D.R. Bowman, G.F. Peaslee, N. Carlin, R.T. de Souza, C.K. Gelbke, W.G. Gong, Y.D. Kim, M.A. Lisa, W.G. Lynch, L. Phair, M.B. Tsang, C. Williams, N. Colonna, K. Hanold, M.A. McMahan, G.J. Wozniak, L.G. Moretto, Phys. Rev. Lett. [**70**]{}, 3534 (1993). E. Bauge, A. Elmaani, R.A. Lacey, J. Lauret, N.N. Ajitanand, D. Craig, M. Cronqvist, E. Gualtieri, S. Hannuschke, T. Li, B. Llope, T. Reposeur, A. Vander Molen, G.D. Westfall, J.S. Winfield, J. Yee, S. Yennello, A. Nadasen, R.S. Tickle, E. Norbeck, Phys. Rev. Lett. [**70**]{}, 3705 (1993). B. Kämpfer, R. Kotte, J. Mösner, W. Neubert, D. Wohlfarth, J.P. Alard, Z. Basrak, N. Bastid, I.M. Belayev, Th. Blaich, A. Buta, R. Caplar, C. Cerruti, N. Cindro, J.P. Coffin, P. Dupieux, J. Erö, Z.G. Fan, P. Fintz, Z. Fodor, R. Freifelder, L. Fraysse, S. Frolov, A. Gobbi, Y. Grigorian, G. Guillaume, N. Herrmann, K.D. Hildenbrand, S. Hölbling, A. Houari, S.C. Jeong, M. Jorio, F. Jundt, J. Kecskemeti, P. Koncz, Y. Korchagin, M. Krämer, C. Kuhn, I. Legrand, A. Lebedev, C. Maguire, V. Manko, T. Matulewicz, G. Mgebrishvili, D. Moisa, G. Montaru, I. Montbel, P. Morel, D. Pelte, M. Petrovici, F. Rami, W. Reisdorf, A. Sadchikov, D. Schüll, Z. Seres, B. Sikora, V. Simion, S. Smolyankin, U. Sodan, K. Teh, R. Tezkratt, M. Trzaska, M.A. Vasileiv, P. Wagner, J.P. Wessels, T. Wienold, Z. Wilhelmi, A.L. Zhilin, Phys. Rev. C [**48**]{}, R955 (1993). Bao-An Li, D.H.E. Gross, V. Lips, H. Oeschler, Phys. Lett. B [**335**]{}, 1 (1994). O. Schapiro, D.H.E. Gross, Nucl. Phys. [**A573**]{}, 143 (1994). P. Glässel, D. v. Harrach, H.J. Specht, L. Grodzins, Z. Phys. A [**310**]{}, 189 (1983). D. Pelte, U. Winkler, M. Bühler, B. Weissmann, A. Gobbi, K.D. Hildenbrand, H. Stelzer, R. Novotny, Phys. Rev. C [**34**]{}, 1673 (1986). R. Bougault, J. Colin, F. Delaunay, A. Genoux-Lubain, A. Hajfani, C. Le Brun, J.F. Lecolley, M. Louvel J.C. Steckmeyer, Phys. Lett. B [**232**]{}, 291 (1989). G. Bizard, D. Durand, A. Genoux-Lubain, M. Louvel, R. Bougault, R. Brou, H. Doubre, Y. El-Masri, H. Fugiwara, K. Hagel, A. Hajfani, F. Hanappe, S. Jeong, G.M. Jin, S. Kato, J.L. Laville, C. Le Brun, J.F. Lecolley, S. Lee, T. Matsuse, T. Motobayashi, J.P. Patry, A. Péghaire, J. Péter, N. Prot, R. Regimbart, F. Saint-Laurent, J.C. Steckmeyer, B. Tamain, Phys. Lett. B [**276**]{}, 413 (1992). M. Bruno, M.D’Agostino, M.L. Fiandri, E. Fuschini, L. Manduci, P.F. Mastinu, P.M. Milazzo, F.Gramegna, A.M.J. Ferrero, F. Gulminelli, I. Iori, A. Moroni, R. Scardaoni, P. Buttazzo, G.V. Margagliotti, G. Vannini, G. Auger, E. Plagnol, Nucl. Phys. [**A576**]{}, 138 (1994). J. Lauret and R.A. Lacey, Phys. Lett. 
B [**327**]{}, 195 (1994). G. Bauer, F. Bieser, F.P. Brady, J.C. Chance, W.F. Christie, M. Gilkes, V. Lindenstruth, U. Lynen, W.F.J. Müller, J.L. Romero, H. Sann, C.E. Tull, P. Warren, Nucl. Instr. and Meth. [**A 386**]{}, 249 (1997). J. Pochodzalla, T. Möhlenkamp, T. Rubehn, A. Schüttauf, A. Wörner, E. Zude, M. Begemann-Blaich, Th. Blaich, H. Emling, A. Ferrero, C. Groß, G. Imme, I. Iori, G.J. Kunde, W.D. Kunze, V. Lindenstruth, U. Lynen, A. Moroni, W.F.J. Müller, B. Ocker, G. Raciti, H. Sann, C. Schwarz, W. Seidel, V. Serfling, J. Stroth, W. Trautmann, A. Trzcinski, A. Tucholski, G. Verde, B. Zwieglinski, Phys. Rev. Lett. [**75**]{}, 1040 (1995). V.E. Viola, T. Sikkeland, Phys. Rev. [**130**]{}, 2044 (1963). R. Brun, F. Bruyant, M. Maire, A.C. McPherson, P. Zanarini, GEANT3 Report, CERN/DD/ec/84-1, (1986). Th. Stöhlker, H. Geissel, H. Folger, C. Kozhuharov, P.H. Mokler, G. Münzenberg, D. Schardt, Th. Schwab, M. Steiner, H. Stelzer, K. Sümmerer, Nucl. Instr. and Meth. [**B 61**]{}, 408 (1991). T. Rubehn, R. Bassini, M. Begemann-Blaich, Th. Blaich, A. Ferrero, C. Groß, G. Imme, I. Iori, G.J. Kunde, W.D. Kunze, V. Lindenstruth, U. Lynen, T. Möhlenkamp, L.G. Moretto, W.F.J. Müller, B. Ocker, J. Pochodzalla, G. Raciti, S. Reito, H. Sann, A. Schüttauf, W. Seidel, V. Serfling, W. Trautmann, A. Trzcinski, G. Verde, A. Wörner, E. Zude, B. Zwieglinski, Phys. Rev. C [**53**]{}, 3143 (1996). G.J. Kunde, PhD thesis (University Frankfurt) 1994. J.A. López and J. Randrup, Comp. Phys. Communications [**70**]{}, 92 (1992). D.H.E. Gross and K. Sneppen, Nucl. Phys. [**A567**]{}, 317 (1994). U. Milkau, M. Begemann-Blaich, E.-M. Eckert, G. Imme, P. Kreutz, A. Kühmichel, M. Lattuada, U. Lynen, C. Mazur, W.F.J. Müller, J.B. Natowitz, C. Ngô, J. Pochodzalla, G. Raciti, M. Ribrag, H. Sann, W. Trautmann, R. Trockel, Z. Phys. A [**346**]{}, 227 (1993). Th. Blaich, M. Begemann-Blaich, M.M. Fowler, J.B. Wilhelmy, H.C. Britt, D.J. Fields, L.F. Hansen, M.N. Namboodiri, T.C. Sangster, Z. Fraenkel, Phys. Rev. C [**45**]{}, 689 (1992). J.-J. Gaimard, K.-H. Schmidt, Nucl. Phys. [**A531**]{}, 709 (1991). M. D’Agostino, M. Bruno, N. Colonna, A. Ferrero, M.L. Fiandri, E. Fuschini, F. Gramegna, I. Iori, L. Manduci, G.V. Margagliotti, P.F. Mastinu, P.M. Milazzo, A. Moroni, F. Petruzelli, R. Rui, G. Vannini, J.D. Dinius, C.K. Gelbke, T. Glasmacher, D.O. Handzy, W. Hsi, M. Huang, G.J. Kunde, M.A. Lisa, W.G. Lynch, C.P. Montoya, G.F. Peaslee, L. Phair, C. Schwarz, M.B. Tsang, C. Williams, A.S. Botvina, P. Desesquelles, I. Mishustin, Proceedings of the XXXV. International Winter Meeting on Nuclear Physics, Bormio, ed. I. Iori (Ricerca Scientifica ed Educatione Permanente, Milano), 276 (1997). J. Hofmann, W. Scheid, W. Greiner, Il Nuovo Cimento [**33**]{}, 343 (1976). G. Poggi for the FOPI-Collaboration, Nucl. Phys. [**A586**]{}, 755 (1995). Subrata Pal, S.K. Samaddar, J.N. De, Nucl. Phys. A [**49**]{}, 608 (1996). J. Hubele, P. Kreutz, V. Lindenstruth, J.C. Adloff, M. Begemann-Blaich, P. Bouissou, G. Imme, I. Iori, G.J. Kunde, S. Leray, Z. Liu, U. Lynen, R.J. Meijer, U. Milkau, A. Moroni, W.F.J. Müller, C. Ngô, C.A. Ogilvie, J. Pochodzalla, G. Raciti, G. Rudolf, H. Sann, A. Schüttauf, W. Seidel, L. Stuttge, W. Trautmann, A. Tucholski, R. Heck, A.R. DeAngelis, D.H.E. Gross, H.R. Jaqaman, H.W. Barz, H. Schulz, W.A. Friedman, R.J. Charity, Phys. Rev. C [**46**]{}, R1577 (1992). W. Bauer, Phys. Rev. C [**38**]{}, 1297 (1988). J.A. López and J. Randrup, Nucl. Phys. [**A503**]{}, 183 (1989). L. Satpathy, M. Mishra, A. Das, M. 
Satpathy, Phys. Lett. B [**237**]{}, 181 (1990). A. Das, M. Mishra, M. Satpathy, L. Satpathy, J. Phys. G [**19**]{}, 319 (1993). C.B. Das, A. Das, M. Satpathy, L. Satpathy, Phys. Rev. C [**56**]{}, 1444 (1997). For a recent review see F. Gönnenwein, [*The Nuclear Fission Process*]{}, Edt. C. Wagemans, CRC Press, Boca Raton, 287 (1991). See e.g. [*The Nuclear Equation of State, Part A and B*]{}, Edt. W. Greiner and H. Stöcker, Plenum Press, New York (1989). P.B. Gossiaux, R. Puri, Ch. Hartnack, J. Aichelin, Nucl. Phys. [**A619**]{}, 379 (1997). D.H. Boal, J.N. Gosli, C. Wicentowich, Phys. Rev. Lett. [**62**]{}, 737 (1989). D.H. Boal, J.N. Gosli, C. Wicentowich, Phys. Rev. C [**40**]{}, 601 (1989). G.J. Kunde, J. Pochodzalla, J. Aichelin, E. Berdermann, B. Bethier, C. Cerruti, C.K. Gelbke, J. Hubele, P. Kreutz, S. Leray, R. Lucas, U. Lynen, U. Milkau, W.F.J. Müller, C. Ngô, C.H. Pinkenburg, G. Raciti, H. Sann, W. Trautmann, Phys. Lett. B [**272**]{}, 202 (1991). A. Ohnishi and J. Randrup, Phys. Lett. B [**394**]{}, 260 (1997). J. Schnack and H. Feldmeier, Prog. Part. Nucl. Phys. [**39**]{}, 393 (1997). J. Schnack and H. Feldmeier, Phys. Lett. B [**409**]{}, 6 (1997). [^1]: The data published in [@SCH96; @POC95; @RUB96] were taken with the version III of the TP-MUSIC and a larger TOF-wall.
--- author: - 'Geusa de A. Marques[^1] and Valdir B. Bezerra [^2].' title: Hydrogen atom in the gravitational fields of topological defects --- [*[Departamento de Física, Universidade Federal da Paraíba,]{}*]{} [*[Caixa Postal 5008, 58059-970, João Pessoa, PB, Brazil.]{}*]{} We consider a hydrogen atom in the background spacetimes generated by an infinitely thin cosmic string and by a point-like global monopole. In both cases, we find the solutions of the corresponding Dirac equations and we determine the energy levels of the atom. We investigate how the geometric and topological features of these spacetimes lead to shifts in the energy levels as compared with the flat Minkowski spacetime. PACS numbers: 03.65.Ge, 03.65.Nk, 14.80.Hv **[I. Introduction]{}** The study of quantum systems in curved spacetimes goes back to the end of the 1920s and the beginning of the 1930s[@a], when the generalization of the Schrödinger and Dirac equations to curved spaces was first discussed, motivated by the idea of constructing a theory which combines quantum physics and general relativity. Spinor fields and particles interacting with gravitational fields have been the subject of many investigations. Along this line of research we can mention those concerning the determination of the renormalized vacuum expectation value of the energy-momentum tensor and the problem of creation of particles in expanding Universes[@a1], and those connected with quantum mechanics in different background spacetimes[@a2] and, in particular, the ones which consider the hydrogen atom\[4-8\] in an arbitrary curved spacetime. The study of the single-particle states which are exact solutions of the generalized Dirac equation in curved spacetimes constitutes an important element in constructing a theory that combines quantum physics and gravity. For this reason, the investigation of the behaviour of relativistic particles in this context is of considerable interest. It has been known that the energy levels of an atom placed in a gravitational field will be shifted as a result of the interaction of the atom with spacetime curvature\[4-8\]. These shifts in the energy levels, which would depend on the features of the spacetime, are different for each energy level, and thus are distinguishable from the Doppler effect and from the gravitational and cosmological redshifts, in which cases these shifts would be the same for all spectral lines. In fact, it was already shown that in the Schwarzschild geometry, the shift in the energy level due to gravitational effects is different from the Stark and Zeeman effects, and therefore, it would be possible, in principle, to separate the shifts in the energy levels caused by electromagnetic and by gravitational perturbations[@3]. Thus, in these situations the energy spectrum carries unambiguous information about the local features of the background spacetime in which the atomic system is located. The general theory of relativity, as a metric theory, predicts that gravitation is manifested as curvature of spacetime. This curvature is characterized by the Riemann tensor. On the other hand, we know that there are connections between topological properties of the space and local physical laws in such a way that the local intrinsic geometry of the space is not sufficient to describe completely the physics of a given system. 
As an example of a gravitational effect of topological origin, we can mention the fact that a cosmic string is noticed at all only when a particle is transported around it along a closed curve. This situation corresponds to the gravitational analogue[@14] of the electromagnetic Aharonov-Bohm effect[@10a], in which electrons are beamed past a solenoid containing a magnetic field. These effects are of topological origin rather than local. In fact, the nontrivial topology of spacetime, as well as its curvature, leads to a number of interesting gravitational effects. Thus, it is also important to investigate the role played by a nontrivial topology, for example, on a quantum system. As examples of these investigations we can mention the study of the topological scattering in the context of quantum mechanics on a cone [@5], and the investigations on the interaction of a quantum system with conical singularities[@6b; @gv] and on quantum mechanics in topological defect spacetimes [@6]. Therefore, taking into account that we have to consider the topology of spacetime in order to describe completely a given physical system, we want to address the question of how the nontrivial topology could affect the energy levels by shifting the atomic spectral lines. For the purpose of investigating this problem, a calculation of the energy level shifts of the hydrogen atom is carried out in the spacetimes of an infinitely thin cosmic string[@7] and of a point-like global monopole[@8]. Topological defects may arise in gauge models with spontaneous symmetry breaking. They can be of various types such as monopoles, domain walls, cosmic strings and their hybrids[@9]. They may have been formed during the expansion of the Universe, and their nature depends on the topology of the vacuum manifold of the theory under consideration[@kb]. The richness of the new ideas they brought along to general relativity seems to justify the interest in the study of these structures, and specifically the role played by their topological features at the atomic level. The gravitational field of a cosmic string is quite remarkable; a particle placed at rest around a straight, infinite, static cosmic string will not be attracted to it; there is no local gravity. The spacetime around a cosmic string is locally flat but not globally. The external gravitational field due to a cosmic string may be approximately described by what is commonly called a conical geometry. The nontrivial topology of this spacetime leads to a number of interesting effects like, for example, gravitational lensing [@10], emission of radiation by a freely moving particle[@11], electrostatic self-force[@12] on an electric charge at rest and the so-called gravitational Aharonov-Bohm effect[@14], among others. The spacetime of a point-like global monopole also has some unusual properties. It possesses a deficit solid angle $\Delta =32\pi ^{2}G\eta ^{2}$, $\eta $ being the energy scale of symmetry breaking. Test particles in this spacetime experience a topological scattering by an angle $\pi \Delta /2$ irrespective of their velocity and their impact parameter. Also in this case, the nontrivial topology of spacetime, as well as its curvature, which are due to the deficit solid angle, lead to a number of interesting effects[@gv1; @pec] which are not present in flat Minkowski spacetime. In this paper, we deal with the interesting problem concerning the modifications of the energy levels of a hydrogen atom placed in the gravitational fields of a cosmic string and of a global monopole. 
In order to investigate this problem further we determine the solutions of the corresponding Dirac equations and the energy levels of a hydrogen atom under the influence of these gravitational fields. To do these calculations we shall make the following assumptions: $(i)$ The atomic nucleus is not affected by the presence of the defect. $(ii)$ The atomic nucleus is located on the defect. These assumptions make the calculation tractable and afford an explicit demonstration of the effects of spacetime topology on the shifts in the atomic spectral lines of the hydrogen atom. A similar problem concerning the effects of gravitational fields at atomic level has been considered before. As examples of works on this topic, we can mention \[4-8\], which obtained the expressions for the shifts in the energy levels of an atom caused by its interaction with the curvature of spacetime, and also a recent paper[@b] which calculated the atomic energy level shifts of atoms placed in strong gravitational fields near collapsing spheroidal masses. The results obtained in this paper are related to the previous ones\[4-8\] connected with this topic, in the sense that we also study the effect of gravitational fields at the atomic level; however, our calculation provides an interesting new example of an effect at atomic scale which can be thought of as a consequence of the nontrivial topology of spacetime, and this aspect was not taken into account by previous works\[4-8\]. In the case of an infinitely thin cosmic string spacetime, the shifts in the energy levels depend on the angle deficit, and for the global monopole spacetime these shifts depend on the deficit solid angle. In both situations these effects vanish when these angle deficits vanish, as they should. This paper is organized as follows. In section II we obtain the solution of the Dirac equation and we calculate the energy shifts experienced by a hydrogen atom placed in the gravitational field of a cosmic string. In section III we also obtain the solutions of the Dirac equation and we calculate the modifications of the spectrum of a hydrogen atom in the gravitational field of a global monopole. Finally, in section IV, we draw some conclusions. [**II. Relativistic hydrogen atom in the spacetime of the cosmic string** ]{} In what follows we will study the behaviour of a hydrogen atom in the spacetime of a cosmic string. The line element corresponding to the cosmic string spacetime[@7] is given, in spherical coordinates, by $$ds^{2}=-c^{2}dt^{2}+dr^{2}+r^{2}d\theta ^{2}+\alpha ^{2}r^{2}\sin ^{2}\theta d\phi ^{2}. \label{100}$$ The parameter $\alpha =1-\frac{4G}{c^{2}}\bar{\mu}$ runs in the interval $% (0,1]$, with $\bar{\mu}$ being the linear mass density of the cosmic string. Let us consider the generally covariant form of the Dirac equation which is given by $$\left[ i\gamma ^{\mu }\left( x\right) \left( \partial _{\mu }+\Gamma _{\mu }\left( x\right) +i\frac{eA_{\mu }}{\hbar c}\right) -\frac{\mu c}{\hbar }% \right] \Psi \left( x\right) =0, \label{103}$$ where $\mu $ is the mass of the particle, $A_{\mu }$ is an external electromagnetic potential and $\Gamma _{\mu }\left( x\right) $ are the spinor affine connections which can be expressed in terms of the set of tetrad fields $e^{\mu}_{(a)}(x)$ and the standard flat spacetime $\gamma^{(a)}$ Dirac matrices as $$\begin{aligned} \Gamma_{\mu}=\frac{1}{4}\gamma^{(a)}\gamma^{(b)}e_{(a)}^{\nu} (\partial_{\mu}e_{(b)\nu} - \Gamma^{\sigma}_{\mu \nu}e_{(b)\sigma}). 
\label{fim} \end{aligned}$$ The generalized Dirac matrices $\gamma ^{\mu }\left( x\right) $ satisfy the anticommutation relations $$\left\{ \gamma ^{\mu }\left( x\right) ,\gamma ^{\nu }\left( x\right) \right\} =2g^{\mu \nu }\left( x\right) ,$$ and are defined by $$\gamma ^{\mu }\left( x\right) =e_{\left( a\right) }^{\mu }\left( x\right) \gamma ^{\left( a\right) }, \label{104}$$ where $e_{\left( a\right) }^{\mu }\left( x\right) $ obeys the relation $\eta ^{ab}e_{\left( a\right) }^{\mu }\left( x\right) e_{\left( b\right) }^{\nu }\left( x\right) =g^{\mu \nu };$ $\mu ,\,\nu =0,1,2,3$ are tensor indices and $a,\,b=0,1,2,3$ are tetrad indices. In this paper, the following explicit forms of the constant Dirac matrices will be taken $$\gamma ^{\left( 0\right) }=\left( \begin{array}{cc} {\bf 1} & 0 \\ 0 & {\bf -1} \end{array} \right) ;\text{ }\gamma ^{\left( i\right) }=\left( \begin{array}{cc} 0 & \sigma ^{i} \\ -\sigma ^{i} & 0 \end{array} \right) ;\text{ }i=1,2,3\,, \label{105}$$ where $\sigma ^{i}$ are the usual Pauli matrices. In order to write the Dirac equation in this spacetime, let us take the tetrads $e_{\left( a\right) }^{\mu }(x)$ as $$e_{\left( a\right) }^{\mu }(x)=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \sin \theta \cos \phi & \sin \theta \sin \phi & \cos \theta \\ 0 & \frac{\cos \theta \cos \phi }{r} & \frac{\cos \theta \sin \phi }{r} & -% \frac{\sin \theta }{r} \\ 0 & -\frac{\sin \phi }{\alpha r\sin \theta } & \frac{\cos \phi }{\alpha r\sin \theta } & 0 \end{array} \right) . \label{101}$$ Thus using (\[101\]), we obtain the following expressions for the generalized Dirac matrices $\gamma ^{\mu }\left( x\right) $ $$\begin{aligned} \gamma ^{0}\left( x\right) &=&\gamma ^{\left( 0\right) }, \nonumber \\ \gamma ^{1}\left( x\right) &=&\gamma ^{\left( r\right) }, \nonumber \\ \gamma ^{2}\left( x\right) &=&\frac{\gamma ^{\left( \theta \right) }}{r}, \nonumber \\ \gamma ^{3}\left( x\right) &=&\frac{\gamma ^{\left( \phi \right) }}{\alpha r\sin \theta }, \label{114}\end{aligned}$$ where $$\left( \begin{array}{c} \gamma ^{\left( r\right) } \\ \gamma ^{\left( \theta \right) } \\ \gamma ^{\left( \phi \right) } \end{array} \right) =\left( \begin{array}{ccc} \cos \phi \sin \theta & \sin \phi \sin \theta & \cos \theta \\ \cos \phi \cos \theta & \sin \phi \cos \theta & -\sin \theta \\ -\sin \phi & \cos \phi & 0 \end{array} \right) \left( \begin{array}{c} \gamma ^{\left( 1\right) } \\ \gamma ^{\left( 2\right) } \\ \gamma ^{\left( 3\right) } \end{array} \right) . \label{110}$$ The covariant Dirac Eq. (\[103\]), written in the spacetime of a cosmic string is then given by $$\begin{aligned} &&\left[ i\hbar %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits^{r}\partial _{r}+i\hbar \frac{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}^{\theta }}{r}\partial _{\theta }+i\hbar \frac{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}^{\phi }}{\alpha r\sin \theta }\partial _{\phi }\right. \nonumber \\ &&\left. 
+i\hbar \frac{1}{2r}\left( 1-\frac{1}{\alpha }\right) \left( %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits^{r}+\cot \theta %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits^{\theta }\right) -\frac{eA_{0}}{c}-\gamma ^{\left( 0\right) } \mu c +\frac{E}{c}% \right] \chi \left( \vec{r}\right) =0, \label{119}\end{aligned}$$ where $\sum^{r}$, $\sum^{\theta }$ and $\sum^{\phi }$ are defined by $$%TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits^{r}\equiv \gamma ^{\left( 0\right) }\gamma ^{\left( r\right) };% \text{ }% %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits^{\theta }\equiv \gamma ^{\left( 0\right) }\gamma ^{\left( \theta \right) };\text{ }% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}^{\phi }\equiv \gamma ^{\left( 0\right) }\gamma ^{\left( \phi \right) }, \label{118}$$ and we have chosen $\Psi \left( x\right) $ as $$\Psi \left( x\right) =e^{-i\frac{E}{\hbar }t}\chi \left( \vec{r}\right) , \label{116}$$ which comes from the fact that the spacetime under consideration is static. We must now turn our attention to the solution of the equation for $\chi (\vec{r}) $. Then, let us assume that the solutions of Eq. (\[119\]) are of the form $$\chi \left( \vec{r}\right) =r^{-\frac{1}{2}\left( 1-\frac{1}{\alpha }\right) }\left( \sin \theta \right) ^{-\frac{1}{2}\left( 1-\frac{1}{\alpha }\right) }R\left( r\right) \Theta \left( \theta \right) \Phi \left( \phi \right) . \label{120}$$ Thus, substituting Eq. (\[120\]) into (\[119\]), we obtain the following radial equation $$\left( c% %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits_{r}^{\prime }p_{r}+i\hbar c\frac{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}_{r}^{\prime }}{r} \gamma ^{\left( 0\right) }k_{\left( \alpha \right) }+eA_{0}+\mu c^{2}\gamma ^{\left( 0\right) }\right) R\left( r\right) =ER\left( r\right) . \label{136}$$ where $$k_{\left( \alpha \right) }=\pm \left( j_{\left( \alpha \right) }+\frac{1}{2}% \right) =\pm \left[ j+\frac{1}{2}+m\left( \frac{1}{\alpha }-1\right) \right] \label{135}$$ are the eigenvalues of the generalized spin-orbit operator $K_{\left( \alpha \right) }$ in the spacetime of a cosmic string and $j_{(\alpha)} $ corresponds to the eigenvalues of the generalized total angular momentum operator. The operator $K_{\alpha} $ is given by $$\hbar \gamma ^{\left( 0\right) } K_{\left( \alpha \right) }=\vec{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion }\cdot \vec{L}_{\left( \alpha \right) }+\hbar , \label{128b}$$ with $\vec{\Sigma}= (\Sigma^{r},\;\Sigma^{\theta},\;\Sigma^{\phi}) $ and $\vec{L}_{\left( \alpha \right) }$ is the generalized angular momentum operator[@gv] in the spacetime of the cosmic string, which is such that $\vec{L}_{\left( \alpha \right) }^{2}Y_{l_{\left( \alpha \right) }}^{m_{\left( \alpha \right) }}\left( \theta ,\phi \right) =\hbar ^{2}l_{\left( \alpha \right) }\left( l_{\left( \alpha \right) }+1\right) ,$ with $Y_{l_{\left( \alpha \right) }}^{m_{\left( \alpha \right) }}\left( \theta ,\phi \right) $ being the generalized spherical harmonics in the sense that $m_{\left( \alpha \right) }$ and $l_{\left( \alpha \right) }$ are not necessarily integers. 
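As an aside, the geometric input used so far can be verified independently. The following sympy sketch (added here for convenience; it is not part of the original derivation, and units with $c=1$ are assumed) checks that the tetrad (\[101\]) reproduces the inverse of the cosmic-string metric (\[100\]), i.e. that $\eta ^{ab}e_{\left( a\right) }^{\mu }e_{\left( b\right) }^{\nu }=g^{\mu \nu }$; the rows of the matrix below are labelled by the coordinate index $\mu=(t,r,\theta,\phi)$ and the columns by the tetrad index $a$, matching the layout of (\[101\]). The monopole tetrad of section III can be checked in exactly the same way.

```python
# Symbolic check (a sketch; c = 1 assumed) that the tetrad (101) gives the
# inverse metric of the cosmic-string line element (100).
import sympy as sp

r, theta, phi, alpha = sp.symbols('r theta phi alpha', positive=True)

# Inverse metric of ds^2 = -dt^2 + dr^2 + r^2 dtheta^2 + alpha^2 r^2 sin^2(theta) dphi^2
g_inv = sp.diag(-1, 1, 1/r**2, 1/(alpha**2 * r**2 * sp.sin(theta)**2))
eta = sp.diag(-1, 1, 1, 1)

# e[mu, a] = e_(a)^mu, rows labelled by mu = (t, r, theta, phi)
e = sp.Matrix([
    [1, 0, 0, 0],
    [0, sp.sin(theta)*sp.cos(phi), sp.sin(theta)*sp.sin(phi), sp.cos(theta)],
    [0, sp.cos(theta)*sp.cos(phi)/r, sp.cos(theta)*sp.sin(phi)/r, -sp.sin(theta)/r],
    [0, -sp.sin(phi)/(alpha*r*sp.sin(theta)), sp.cos(phi)/(alpha*r*sp.sin(theta)), 0],
])

# eta^{ab} e_(a)^mu e_(b)^nu = g^{mu nu}
diff = (e * eta * e.T - g_inv).applyfunc(sp.simplify)
assert diff == sp.zeros(4, 4)
print("tetrad (101) reproduces g^{mu nu} of the cosmic-string metric (100)")
```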
The parameters $m_{\left( \alpha \right) }$ and $% l_{\left( \alpha \right) }$ are given, respectively, by $m_{\left( \alpha \right) }\equiv \frac{m}{\alpha }$ and $l_{\left( \alpha \right) }\equiv n+m_{\left( \alpha \right) }=l+m\left( \frac{1}{\alpha }-1\right) ,$ $% l=0,1,2,...$ $n-1$, $l$ is the orbital angular momentum quantum number, $m$ is the magnetic quantum number and $n$ is the principal quantum number. Let us choose the following two-dimensional representation for $% %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits_{r}^{\prime }$ and $\gamma ^{\left( 0\right) }$$$%TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits_{r}^{\prime }\equiv \left( \begin{array}{cc} 0 & -i \\ i & 0 \end{array} \right) ;\text{ }\gamma ^{\left( 0\right) }\equiv \left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right) . \label{137}$$ Now, let us assume that the radial solution can be written as $$R(r)=\frac{1}{r}\left( \newline \begin{array}{c} -iF\left( r\right) \\ G\left( r\right) \end{array} \right) . \label{138}$$ Then, Eq. (\[136\]) decomposes into the coupled equations $$-i\left( \hbar c\right) ^{-1}\left[ E-E_{0}+\frac{e^{2}}{r}\right] F(r)+% \frac{dG(r)}{dr}+\frac{k_{(\alpha )}}{r}G(r)=0, \label{144}$$ and $$-i\left( \hbar c\right) ^{-1}\left[ E+E_{0}+\frac{e^{2}}{r}\right] G(r)+% \frac{dF(r)}{dr}-\frac{k_{(\alpha )}}{r}F(r)=0, \label{145}$$ where $E_{0}=\mu c^2 $ is the rest energy of the electron. Note that in obtaining these equations use was made of the fact that $A_{0}=-e/r$. The solutions of these equations are given in terms of the confluent hypergeometric function $ M(a,b;x)$ as $$\begin{aligned} F(r) &=&-i\sqrt{\frac{Q}{T}}\frac{e^{-rD}}{2}\left( rD\right) ^{\gamma _{\left( \alpha \right) }-1}\left[ M\left( \gamma _{\left( \alpha \right) }-1+\tilde{P},2\gamma _{\left( \alpha \right) }-1;2rD\right) \right. \nonumber \\ &&\left. +\frac{\left( \gamma _{\left( \alpha \right) }-1+\tilde{P}\right) }{% \left( k_{\left( \alpha \right) }+\tilde{Q}\right) }M\left( \gamma _{\left( \alpha \right) }+\tilde{P},2\gamma _{\left( \alpha \right) }-1;2rD\right) % \right] , \label{181}\end{aligned}$$ and $$\begin{aligned} G(r) &=&\frac{e^{-rD}}{2}\left( rD\right) ^{\gamma _{\left( \alpha \right) }-1}\left[ M\left( \gamma _{\left( \alpha \right) }-1+\tilde{P},2\gamma _{\left( \alpha \right) }-1;2rD\right) \right. \nonumber \\ &&\left. -\frac{\left( \gamma _{\left( \alpha \right) }-1+\tilde{P}\right) }{% \left( k_{\left( \alpha \right) }+\tilde{Q}\right) }M\left( \gamma _{\left( \alpha \right) }+\tilde{P},2\gamma _{\left( \alpha \right) }-1;2rD\right) % \right] , \label{180}\end{aligned}$$ where $T=\frac{E_{0}-E}{\hbar c};$ $Q=\frac{E_{0}+E}{\hbar c},$ $D=\sqrt{TQ}=% \frac{\sqrt{E_{0}^{2}-E^{2}}}{\hbar c};$ $\gamma _{\left( \alpha \right) }=1+% \sqrt{k_{\left( \alpha \right) }^{2}-\tilde{\alpha}^{2}};$ $\ \tilde{P}% \equiv \frac{\tilde{\alpha}}{2}\left( \sqrt{T/Q}-\sqrt{Q/T}\right) ;$ $% \tilde{Q}\equiv \frac{\tilde{\alpha}}{2}\left( \sqrt{T/Q}+\sqrt{Q/T}\right) , $ with $\tilde{\alpha}=\frac{e^{2}}{\hbar c}\approx\frac{1}{137}$ being the fine structure constant. The solutions given by (\[181\]) and (\[180\]) are divergent, unless the following condition is fulfilled $$\gamma _{\left( \alpha \right) }-1+\tilde{P}=-n;\text{ }n =0,1,2..., \label{b}$$ which means that $$\frac{1}{2}\tilde{\alpha}\left( \sqrt{\frac{T}{Q}}-\sqrt{\frac{Q}{T}}\right) =-\left( n +\gamma _{\left( \alpha \right) }-1\right) . 
\label{c}$$ From this equation we may infer that the energy eigenvalues are given by $$E=E_{0}\left[ 1+\tilde{\alpha}^{2}\left( n +\left| k_{\left( \alpha \right) }\right| \sqrt{1-\tilde{\alpha}^{2}k_{\left( \alpha \right) }^{-2}}% \right) ^{-2}\right] ^{-\frac{1}{2}}. \label{184}$$ This equation exhibits the angle deficit dependence of the energy levels. It is helpful to introduce the quantum number $n_{(\alpha)}$ that corresponds to the principal quantum number of the nonrelativistic theory when $ \alpha = 1$, $$n_{(\alpha)}=n+j_{\left( \alpha \right) }+\frac{1}{2}. \label{185}$$ Therefore, Eq. (\[184\]) may be cast in the form $$E_{n_{(\alpha)},j_{\left( \alpha \right) }}=E_{0}\left\{ 1+ \tilde{\alpha}^{2}\left[\left( n_{(\alpha)}-j_{\left( \alpha \right) }- \frac{1}{2}\right) + \left( j_{\left( \alpha \right) }+\frac{1}{2}\right) \sqrt{1-\tilde{\alpha}^{2}\left( j_{\left( \alpha \right) }+\frac{1}{2}\right) ^{-2}}\right] ^{-2}\right\} ^{-% \frac{1}{2}}. \label{186}$$ This equation can be written in a way which is better suited to physical interpretation. Thus, as $\tilde{\alpha}\ll 1,$ we can expand Eq. (\[186\]) in powers of $\tilde{\alpha}$, and as a result we get the following leading terms $$E_{n_{(\alpha)},j_{\left( \alpha \right) }}=E_{0}- E_{0}\frac{\tilde{\alpha}^{2}}{2n_{(\alpha)}^{2}}% +E_{0}\frac{\tilde{\alpha}^{4}}{2n_{(\alpha)}^{4}} \left( \frac{3}{4}-\frac{n_{(\alpha)}}{j_{\left( \alpha \right) }+\frac{1}{2}}\right) \text{.} \label{190}$$ The first term corresponds to the rest energy of the electron; the second one gives the energy of the bound states in the non-relativistic approximation and the third one corresponds to the relativistic correction. Note that these last two terms depend on the deficit angle. The further terms can be neglected in comparison with these first three terms. Now, let us consider the total shift in the energy between the states with $j=n-\frac{1}{2}$ and $j=\frac{1}{2}$, for a given $n$. This shift is given by $$\begin{aligned} \Delta E_{n_{(\alpha)},j_{\left( \alpha \right) }} &=&\frac{\mu e^{8}}{\hbar ^{4}c^{2}n_{(\alpha)}^{3}}\left( \frac{n_{(\alpha)}- 1}{2\left[ n_{(\alpha)}+m\left( \frac{1}{\alpha }-1\right) \right] \left[ 1+m\left( \frac{1}{\alpha }% -1\right) \right] }\right) . \label{191}\end{aligned}$$ One important characteristic of Eq. (\[186\]) is that it contains a dependence on $n$, $j$ and $\alpha $. The dependence on $ \alpha $ corresponds to an analogue of the electromagnetic Aharonov-Bohm effect for bound states, but now in the gravitational context. Therefore, the interaction with the topology (conical singularity) causes the energy levels to change. Note that the presence of the cosmic string destroys the degeneracy of all the levels corresponding to $l=0$ and $l=1$, and partially destroys this degeneracy for the other sublevels. Therefore, as the occurrence of degeneracy can often be ascribed to some symmetry property of the physical system, the fact that the presence of the cosmic string destroys the degeneracy means that there is a breaking of the original symmetry. Observe that for $\alpha =1$, the results reduce to the flat Minkowski spacetime case, as expected. As an estimate of the effect of the cosmic string on the energy shift of the hydrogen atom, let us consider $\alpha =1-10^{-6}$, which corresponds to GUT cosmic strings. Using this value in Eq. 
(\[191\]), we conclude that the presence of the cosmic string reduces the energy of the $2P_{1/2}(n=2$, $l=1$, $j=l-\frac{1}{2}=\frac{1}{2}$, $m=1)$ level by about $10^{-4}\%$ in comparison with the flat spacetime value. This decrease is of the order of the measurable Zeeman effect in carbon atoms for $2P$ states when subjected, for example, to an external magnetic field with a strength of about tens of tesla. Therefore, this shift in energy levels produced by a cosmic string is measurable as well. Finally, we can write down the general solution to Eq. (\[103\]) corresponding to a hydrogen atom placed in the background spacetime of a cosmic string. Thus, it reads $$\begin{aligned} \Psi _{l_{\left( \alpha \right) },j_{\left( \alpha \right) }=l_{\left( \alpha \right) }+\frac{1}{2},m_{\left( \alpha \right) }}\left( x\right) &=&e^{-i\frac{Et}{\hbar }}r^{-\frac{1}{2}\left( 1-\frac{1}{\alpha }\right) }\left( \sin \theta \right) ^{-\frac{1}{2}\left( 1-\frac{1}{\alpha }\right) } \nonumber \\ &&\times F_{\left( \alpha \right) }\left( r\right) \left( \begin{array}{c} \sqrt{\frac{l_{\left( \alpha \right) }+m_{\left( \alpha \right) }+\frac{1}{2}% }{2l_{\left( \alpha \right) }+1}}Y_{l_{\left( \alpha \right) }}^{m_{\left( \alpha \right) }-\frac{1}{2}}\left( \theta ,\phi \right) \\ \sqrt{\frac{l_{\left( \alpha \right) }-m_{\left( \alpha \right) }+\frac{1}{2}% }{2l_{\left( \alpha \right) }+1}}Y_{l_{\left( \alpha \right) }}^{m_{\left( \alpha \right) }+\frac{1}{2}}\left( \theta ,\phi \right) \end{array} \right) , \label{195}\end{aligned}$$ and $$\begin{aligned} \Psi _{l_{\left( \alpha \right) },j_{\left( \alpha \right) }=l_{\left( \alpha \right) }-\frac{1}{2},m_{\left( \alpha \right) }}\left( x\right) &=&e^{-i\frac{Et}{\hbar }}r^{-\frac{1}{2}\left( 1-\frac{1}{\alpha }\right) }\left( \sin \theta \right) ^{-\frac{1}{2}\left( 1-\frac{1}{\alpha }\right) } \nonumber \\ &&\times G_{\left( \alpha \right) }\left( r\right) \left( \begin{array}{c} -\sqrt{\frac{l_{\left( \alpha \right) }-m_{\left( \alpha \right) }+\frac{1}{2% }}{2l_{\left( \alpha \right) }+1}}Y_{l_{\left( \alpha \right) }}^{m_{\left( \alpha \right) }-\frac{1}{2}}\left( \theta ,\phi \right) \\ \sqrt{\frac{l_{\left( \alpha \right) }+m_{\left( \alpha \right) }+\frac{1}{2}% }{2l_{\left( \alpha \right) }+1}}Y_{l_{\left( \alpha \right) }}^{m_{\left( \alpha \right) }+\frac{1}{2}}\left( \theta ,\phi \right) \end{array} \right) , \label{196}\end{aligned}$$ where $F_{\left( \alpha \right) }\left( r\right) $ and $G_{\left( \alpha \right) }\left( r\right) $ are given by Eqs. (\[181\]) and (\[180\]), respectively, and the index $ \alpha$ was introduced to emphasize the dependence of these functions on this parameter. Note that the solutions depend on the topological features of the spacetime of a cosmic string, whose influence appears encoded in the parameter $\alpha $ associated with the presence of the cosmic string; this is the point at issue here. [**III.**]{} [**Relativistic hydrogen atom in the presence of a global monopole**]{} In continuation of the preceding consideration, in this section we shall be concerned with the study of the influence of a global monopole on the states of a hydrogen atom. The solution corresponding to a global monopole in a $O\left( 3\right) $ broken symmetry model has been investigated by Barriola and Vilenkin[@8]. Far away from the global monopole core we can neglect the mass term and, as a consequence, the main effects are produced by the solid deficit angle. 
The respective metric in the Einstein theory of gravity can be written as[@8] $$ds^{2}=-c^{2}dt^{2}+dr^{2}+b^{2}r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\phi ^{2}\right) , \label{200}$$ where $b^{2}=1-8\pi G\eta ^{2}$, the parameter $\eta $ being the energy scale of symmetry breaking. Now, let us choose the tetrad as $$e_{\left( a\right) }^{\mu }=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & \sin \theta \cos \phi & \sin \theta \sin \phi & \cos \theta \\ 0 & \frac{\cos \theta \cos \phi }{br} & \frac{\cos \theta \sin \phi }{br} & -% \frac{\sin \theta }{br} \\ 0 & -\frac{\sin \phi }{br\sin \theta } & \frac{\cos \phi }{br\sin \theta } & 0 \end{array} \right) . \label{201}$$ Therefore, the generalized and flat spacetime Dirac matrices are related by $$\begin{aligned} \gamma ^{0}\left( x\right) &=&\gamma ^{\left( 0\right) }, \nonumber \\ \gamma ^{1}\left( x\right) &=&\gamma ^{\left( r\right) }, \nonumber \\ \gamma ^{2}\left( x\right) &=&\frac{\gamma ^{\left( \theta \right) }}{br}, \nonumber \\ \gamma ^{3}\left( x\right) &=&\frac{\gamma ^{\left( \phi \right) }}{br\sin \theta }. \label{214}\end{aligned}$$ where, $\gamma ^{\left( r\right) }$, $\gamma ^{\left( \theta \right) }$and $% \gamma ^{\left( \phi \right) }$ were defined in the previous section. Proceeding in analogy with section II we find that the generalized Dirac equation can be written, in this background spacetime, as $$\begin{aligned} &&\left[ i\hbar %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits^{r}\partial _{r}+i\hbar \frac{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}^{\theta }}{br}\partial _{\theta }+i\hbar \frac{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}^{\phi }}{br\sin \theta }\partial _{\phi }\right. \nonumber \\ &&\left. +i\hbar \frac{1}{r}\left( 1-\frac{1}{b}\right) %TCIMACRO{\tsum}% %BeginExpansion (\mathop{\textstyle\sum}% %EndExpansion \nolimits^{r} + \cot \theta\mathop{\textstyle\sum}% %EndExpansion \nolimits^{\theta} )+\frac{e^{2}}{rc}-\gamma ^{\left( 0\right) }\mu c + \frac{E}{c} \right] \chi \left( \vec{r}\right) \left. =0\right. , \label{219}\end{aligned}$$ where Eq. (\[116\]) has been used in obtaining the above result. Now, let us assume that the solution of Eq. (\[219\]) can be written as $$\chi \left( \vec{r}\right) =r^{-\left( 1-\frac{1}{b}\right) }R \left( r\right) \Theta \left( \theta \right) \Phi \left( \phi \right) , \label{220}$$ Using Eq. (\[220\]), Eq. 
(\[219\]) turns into the simple form $$\left( c% %TCIMACRO{\tsum}% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion \nolimits_{r}^{\prime }p_{r}+i\hbar c\frac{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion {}_{r}^{\prime }}{r} \gamma ^{\left( 0\right) }k_{\left( b\right) }-% \frac{e^{2}}{r}+\mu c^{2}\gamma ^{\left( 0\right) }\right) R\left( r\right) =ER\left( r\right) , \label{221}$$ where $$\begin{aligned} k_{\left( b\right) } &=&\pm \left( \frac{j^{2}}{b^{2}}+\frac{j}{b^{2}}+\frac{% 1}{4}\right) ^{\frac{1}{2}} \nonumber \\ &=&\pm \left[ \left( \frac{j}{b}+\frac{1}{2}\right) ^{2}+\frac{j}{b}\left( \frac{1}{b}-1\right) \right] ^{\frac{1}{2}} \label{225}\end{aligned}$$ are the eigenvalues of the generalized spin-orbit operator $K_{\left( b\right) }$ in the spacetime of a global monopole which is given by $$\hbar K_{\left( b\right) }=\gamma ^{\left( 0\right) }\left[ \vec{% %TCIMACRO{\tsum }% %BeginExpansion \mathop{\textstyle\sum}% %EndExpansion }^{\prime \prime }\cdot \vec{L}_{\left( b\right) }+\hbar \right] ,$$ where $\sum^{\prime \prime }=\left( \begin{array}{cc} \vec{\sigma} & 0 \\ 0 & \vec{\sigma} \end{array} \right) .$ In the present case the generalized angular momentum will be denoted by $% L_{\left( b\right) }$. It is such that $\vec{L}_{\left( b\right) }^{2}Y_{l}^{m}\left( \theta ,\phi \right) =\frac{\hbar ^{2}}{b^{2}}l\left( l+1\right) ,$ $l=0,1,2,...$ $n-1$, and $\vec{L}_{\left( b\right) }=\frac{% \vec{L}}{b}$ is the angular momentum in the spacetime of a global monopole [@gv1]. Using the same procedure as in the previous section, we find $$\begin{aligned} F_{\left( b\right) }(r) &=&-i\sqrt{\frac{Q}{T}}\frac{e^{-rD}}{2}\left( rD\right) ^{\gamma _{\left( b\right) }-1}\left[ M\left( \gamma _{\left( b\right) }-1+\tilde{P},2\gamma _{\left( b\right) }-1;2rD\right) \right. \nonumber \\ &&\left. +\frac{\left( \gamma _{\left( b\right) }-1+\tilde{P}\right) }{% \left( k_{\left( b\right) }+\tilde{Q}\right) }M\left( \gamma _{\left( b\right) }+\tilde{P},2\gamma _{\left( b\right) }-1;2rD\right) \right] , \label{223}\end{aligned}$$ and $$\begin{aligned} G_{\left( b\right) }(r) &=&\frac{e^{-rD}}{2}\left( rD\right) ^{\gamma _{\left( b\right) }-1}\left[ M\left( \gamma _{\left( b\right) }-1+\tilde{P}% ,2\gamma _{\left( b\right) }-1;2rD\right) \right. \nonumber \\ &&\left. -\frac{\left( \gamma _{\left( b\right) }-1+\tilde{P}\right) }{% \left( k_{\left( b\right) }+\tilde{Q}\right) }M\left( \gamma _{\left( b\right) }+\tilde{P},2\gamma _{\left( b\right) }-1;2rD\right) \right] , \label{222}\end{aligned}$$ where $\gamma _{\left( b\right) }=1+\sqrt{k_{\left( b\right) }^{2}-\tilde{% \alpha}^{2}\text{ }}\,$; $T,$ $Q,$ $M,$ $\tilde{P}$ and $\tilde{Q}$ are the same defined previously. The index $b$ in the functions $F$ and $G$ indicates their dependence on this parameter. These functions are, formally, the same used in the previous section. By the use of condition (\[b\]) with the interchange of $ \gamma_{(\alpha)}$ by $ \gamma_{(b)}$, we obtain the following spectrum of energy eigenvalues $$E_{n_{(b)},j_{\left( b\right) }}=E_{0}\left\{ 1+\tilde{\alpha}^{2} \left[ n_{(b)}-\left| k_{\left( b\right) }\right| + \left| k_{\left( b\right) }\right| \sqrt{1-% \tilde{\alpha}^{2}k_{\left( b\right) }^{-2}}\right] ^{-2}\right\} ^{-\frac{1% }{2}}. 
\label{230}$$ in which we have defined $n_{(b)}$ as a number which reduces to the principal quantum number when $b=1$ and is given by $$n_{(b)}=n+\left| k_{\left( b\right) }\right| =n+\left[ \left( \frac{j}{b}+% \frac{1}{2}\right) ^{2}+\frac{j}{b}\left( \frac{1}{b}-1\right) \right] ^{% \frac{1}{2}}. \label{229}$$ Then, expanding Eq. (\[230\]) in a series of powers of $\tilde{\alpha% }$, we have the following leading terms $$\begin{aligned} E_{n_{(b)},j_{\left( b\right) }} &=&E_{0}- E_{0}\frac{\tilde{\alpha}^{2}}{2n_{(b)}^{2}} +E_{0}\frac{\tilde{\alpha}^{4}}{2n_{(b)}^{4}}\left( \frac{3}{4}- \frac{n_{(b)}}{\left[ \left( \frac{j}{b}+\frac{1}{2}\right) ^{2}+\frac{j}{b}\left( \frac{1}{b}% -1\right) \right] ^{\frac{1}{2}}}\right) \text{,} \label{234}\end{aligned}$$ which tells us how each term depends on the parameter $b$. In this case, the shift in the energy between the energy levels with $j=n-\frac{1}{2}$ and $j= \frac{1}{2},$ for a given $n$, is $$\begin{aligned} \Delta E_{n_{(b)},j_{\left( b\right) }} &=&\frac{\mu e^{8}} {2\hbar ^{4}c^{2}n^{3}} \nonumber \\ &&\left\{ \frac{\left[ \left( \frac{n_{(b)}}{b}-\frac{1}{2} \left( \frac{1}{b}% -1\right) \right) ^{2}+\left( \frac{n_{(b)}}{b}-\frac{1}{2b} \right) \left( \frac{1% }{b}-1\right) \right] ^{\frac{1}{2}}-\left[ \left( \frac{1}{2b}+\frac{1}{2}% \right) ^{2}+\frac{1}{2b}\left( \frac{1}{b}-1\right) \right] ^{\frac{1}{2}}}{% \left[ \left( \frac{1}{2b}+\frac{1}{2}\right) ^{2}+\frac{1}{2b}\left( \frac{1% }{b}-1\right) \right] ^{\frac{1}{2}}\left[ \left( \frac{n_{(b)}} {b}-\frac{1}{2}% \left( \frac{1}{b}-1\right) \right) ^{2}+\left( \frac{n_{(b)}} {b}-\frac{1}{2b}% \right) \left( \frac{1}{b}-1\right) \right] ^{\frac{1}{2}}}\right\} . \label{235}\end{aligned}$$ This equation reduces to the flat spacetime result in the absence of the global monopole ($b=1$). It is worth noticing from Eq. (\[234\]) that the presence of the monopole does not break the degeneracy of the energy levels, unlike what happens in the case of a cosmic string. As an estimate of the shift in the energy levels, let us consider a grand unified (GUT) monopole in which $b^{2}=1-10^{-6}$. Using this value in Eq. (\[234\]), we conclude that the presence of the monopole reduces the relativistic correction to the energy of the $2P_{1/2}(n=2$, $l=1$, $j=l-\frac{1}{2}=\frac{1}{2}$, $m=1)$ level by approximately $10^{-4}\%$ as compared with the result in flat Minkowski spacetime. Finally, let us write down the general solution for this case. It reads $$\begin{aligned} \Psi _{l,j=l+\frac{1}{2},m}\left( x\right) &=&e^{-i\frac{Et}{\hbar }% }r^{-\left( 1-\frac{1}{b}\right) } \nonumber \\ &&\times F_{\left( b\right) }\left( r\right) \left( \begin{array}{c} \sqrt{\frac{l+m+\frac{1}{2}}{2l+1}}Y_{l}^{m-\frac{1}{2}}\left( \theta ,\phi \right) \\ \sqrt{\frac{l-m+\frac{1}{2}}{2l+1}}Y_{l}^{m+\frac{1}{2}}\left( \theta ,\phi \right) \end{array} \right) , \label{240}\end{aligned}$$ and $$\begin{aligned} \Psi _{l,j=l-\frac{1}{2},m}\left( x\right) &=&e^{-i\frac{Et}{\hbar }}r^{-\left( 1-\frac{1}{b}\right) } \nonumber \\ &&\times G_{\left( b\right) }\left( r\right) \left( \begin{array}{c} -\sqrt{\frac{l-m+\frac{1}{2}}{2l+1}}Y_{l}^{m-\frac{1}{2}}\left( \theta ,\phi \right) \\ \sqrt{\frac{l+m+\frac{1}{2}}{2l+1}}Y_{l}^{m+\frac{1}{2}}\left( \theta ,\phi \right) \end{array} \right) , \label{300}\end{aligned}$$ where $F_{\left( b\right) }\left( r\right) $ and $G_{\left( b\right) }\left( r\right) $ are given by Eqs. (\[223\]) and (\[222\]), respectively. 
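To make the size of these topological shifts concrete, the following purely numerical sketch (ours, not part of the original text; the value $\tilde{\alpha}\approx 1/137.036$ and the normalization $E_{0}=1$ are assumptions) evaluates the closed-form spectra (\[186\]) and (\[230\]) for the $2P_{1/2}$ quantum numbers ($j=1/2$, $m=1$, radial quantum number $1$) and prints the fractional change of the binding energy $E_{0}-E$ with respect to flat spacetime. For the GUT-scale string this reproduces the reduction of about $10^{-6}$, i.e. the $10^{-4}\%$ quoted above; the corresponding number for the GUT-scale monopole is printed for comparison.

```python
# Numerical sketch (assumed values; illustrates the estimates above, not a derivation).
import math

FS = 1.0 / 137.036   # fine structure constant alpha-tilde (assumed value)

def E_string(n_r, j, m, alpha, E0=1.0, ft=FS):
    """Eq. (186): cosmic-string spacetime with deficit parameter alpha."""
    k = j + 0.5 + m * (1.0 / alpha - 1.0)                              # |k_(alpha)|, Eq. (135)
    return E0 / math.sqrt(1.0 + (ft / (n_r + k * math.sqrt(1.0 - (ft / k) ** 2))) ** 2)

def E_monopole(n_r, j, b, E0=1.0, ft=FS):
    """Eq. (230): global-monopole spacetime with solid-deficit parameter b."""
    k = math.sqrt((j / b + 0.5) ** 2 + (j / b) * (1.0 / b - 1.0))      # |k_(b)|, Eq. (225)
    return E0 / math.sqrt(1.0 + (ft / (n_r + k * math.sqrt(1.0 - (ft / k) ** 2))) ** 2)

def binding_shift(E_defect, E_flat, E0=1.0):
    """Fractional change of the binding energy E0 - E relative to flat spacetime."""
    return ((E0 - E_defect) - (E0 - E_flat)) / (E0 - E_flat)

# 2P_{1/2}: j = 1/2, m = 1, radial quantum number n_r = n - j - 1/2 = 1
print("string  :", binding_shift(E_string(1, 0.5, 1, alpha=1.0 - 1e-6), E_string(1, 0.5, 1, alpha=1.0)))
print("monopole:", binding_shift(E_monopole(1, 0.5, b=math.sqrt(1.0 - 1e-6)), E_monopole(1, 0.5, b=1.0)))
```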
It is important to call attention to the fact that all these results depend on the geometrical and topological features of the global monopole spacetime. IV. CONCLUSIONS {#iv.-conclusions .unnumbered} =============== With the purpose of discussing the role of the topology on an atomic system, we carried out the calculations of the shifts in the energy levels of a hydrogen atom placed in the spacetimes of a cosmic string and of a global monopole, adding, in this way, some new results to the interesting problem considered in seminal papers by Parker and collaborators\[4-8\] about the effects of gravitational fields at the atomic level, but now from the geometrical and topological points of view, instead of looking only at the local effects of the curvature as in those earlier papers\[4-8\]. The presence of a cosmic string changes the solution and shifts the energy levels of a hydrogen atom as compared with the flat Minkowski spacetime result. It is interesting to observe that these shifts depend on the parameter that defines the angle deficit and vanish when the angle deficit vanishes. These shifts arise from the topological features of the spacetime generated by this defect. In the case of the hydrogen atom in the spacetime of a global monopole, the modifications in the solution and the shifts in the energy levels are due to the combined effects of the curvature and the nontrivial topology determined by the deficit solid angle associated with this spacetime. These shifts also vanish when the deficit solid angle vanishes. Both effects can be thought of as a consequence of the topological influence of the spacetime under consideration upon the hydrogen atom. The decrease in the energy for the situations considered is only two orders of magnitude less than the ratio between the fine structure splitting and the energy of the ground state of the non-relativistic hydrogen atom and is of the order of the Zeeman effect. Therefore, the modifications in the spectra of the hydrogen atom due to the presence of the gravitational fields of a string or a monopole are all measurable, in principle. The results obtained show how the geometry and a nontrivial topology influence the energy spectrum as compared with the flat spacetime case and show how these quantities depend on the surroundings and their characteristics. These results also show how the solutions are modified. Therefore, the problem of finding how the energy spectrum of an atom placed in a gravitational field is perturbed by this background has to take into account not only the geometrical, but also the topological features of the spacetimes under consideration. In other words, the behaviour of an atomic system is determined not only by the curvature at the position of the atom, but also by the topology of the background spacetime. Acknowledgments {#acknowledgments .unnumbered} =============== We acknowledge Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES-Program PROCAD) for partial financial support. H. Tetrode, Z. Phys. [**50**]{}, 336 (1928); V. Fock, Z. Phys. [**53**]{}, 592 (1928); G. C. McVittie, Mon. Not. R. Astron. Soc., [**92**]{}, 868 (1932); E. Schrödinger, Physica [**6**]{}, 899 (1932); W. Pauli, Ann. Phys. (Leipzig) [**[18]{}**]{}, 337 (1933). L. Parker, Phys. Rev. [**[D3]{}**]{}, 346 (1971); C. J. Isham and J. E. Nelson, Phys. Rev. [**[D10]{}**]{}, 3226 (1974); L. H. Ford, Phys. Rev. [**[D14]{}**]{}, 3304 (1976); J. Audretsch and G. Schäfer, J. Phys. 
[**[A11]{}**]{}, 1583 (1978); A. A. Grib, S. G. Mamaev and V. M. Mostepanenko, Fortsch. Phys. [**[28]{}**]{}, 173 (1980); M. Castagnino et al, J. Math. Phys. [**[25]{}**]{}, 360 (1984); A. H. Najmi and A. C. Ottewill, Phys. Rev. [**[D30]{}**]{}, 1733 and 2573 (1984); L. P. Chimento and M. S. Mollerach, Phys. Rev. [**[D34]{}**]{}, 3689 (1986). J. Audretsch and G. Schäfer, Gen. Rel. Grav. [**[9 ]{}**]{}, 243 (1978); id. [**[9]{}**]{}, 489 (1978) and references therein.; A. O. Barut and I. H. Duru, Phys. Rev. [**[D36]{}**]{}, 3705 (1987); Phys. Lett. [**[A121]{}**]{}, 7 (1987); M. A. Castagnino et al, Phys. Lett. [**[A128]{}**]{}, 25 (1988); V. M. Villalba and U. Percoco, J. Math. Phys. [**[31]{}**]{}, 715 (1990); V. B. Bezerra and I. G. Araújo, Class. Quantum Grav. [**[11]{}**]{}, 1599 (1994); Renato Portugal, J. Math. Phys. [**[36]{}**]{}, 4296 (1995); A. O. Barut and Lambodar P. Singh, Int. J. Mod. Phys. [**[D4]{}**]{}, 479 (1995); V. B. Bezerra, J. Math. Phys. [**[38]{}**]{}, 2553 (1997); A. A. Rodrigues Sobreira and E. R. Bezerra de Mello, Grav. and Cosm. [**[5]{}**]{}, 177 (1999); V. B. Bezerra and S. G. Fernandes, Grav. and Cosm. [**[6]{}**]{}, 1 (2000). L. Parker, Phys. Rev. Lett. [**[44]{}**]{} , 1559 (1980). L. Parker, Phys. Rev. [**[D22]{}**]{} , 1922 (1980); id. [**[D24]{}**]{}, 535 (1981). L. Parker, Gen. Rel. Grav. [**[13]{}**]{} , 307 (1981). L. Parker and L. O. Pimentel, Phys. Rev. [**[D25]{}**]{}, 3180 (1982). T. K. Leen, L. Parker and L. O. Pimentel, Gen. Rel. Grav. [**[15]{}**]{} , 761 (1983). L. H. Ford and A. Vilenkin, J. Phys. [**A14**]{}, 2353 (1981); V. B. Bezerra, Phys. Rev. [**D35**]{}, 2031 (1987); id. Ann. Phys. (NY) [**203**]{}, 392 (1990). Y. Aharonov and D. Bohm, Phys. Rev. [**119**]{}, 485(1959). P. de Sousa Gerbert and R. Jackiw, Commun. Math. Phys. [**124**]{}, 229 (1989); J. Spinelly, E. R. Bezerra de Mello and V. B. Bezerra, Class. Quantum Grav. [**18**]{}, 1555 (2002). C. Furtado and Fernando Moraes, J. Phys. [**A33**]{}, 5513 (2000). Geusa de A. Marques and V. B. Bezerra, Class. Quantum Grav. [**[19]{}**]{}, 985 (2002). V. B. Bezerra, Class. Quantum Grav. [**8**]{}, 1939 (1991); E. S. Moreira, Phys. Rev. [**A58**]{}, 1678 (1998); M. Alvarez, J. Phys. [**A32**]{}, 4079 (1999). A. Vilenkin, Phys. Rep. [**121**]{}, 263 (1985); B. Linet, Gen. Rel. Grav. [**17**]{}, 1109 (1985). Manuel Barriola and A. Vilenkin, Phys. Rev. Lett. [**63**]{}, 341 (1989). A. Vilenkin and E. P. S. Shellard, [*Cosmic Strings and other Topological Defects,* ]{}Cambridge University Press, Cambridge (1994). T. W. B. Kibble, J. Phys. [**A9**]{}, 1387 (1976). J. R. Gott, Astrophys. J. [**288**]{}, 422 (1985). A. N. Aliev and D. V. Gal’tsov, Ann. Phys. (NY) [**193**]{}, 165 (1989). B. Linet, Phys. Rev. [**D33**]{}, 1833 (1986). Geusa de A. Marques and V. B. Bezerra, Mod. Phys. Lett. [**A,**]{} 1253 (2001). P. O. Mazur and J. Papavassiliou, Phys. Rev. [**D44**]{}, 1317 (1991); E. R. Bezerra de Mello and C. Furtado, Phys. Rev. [**[D56]{}**]{}, 1345 (1997); V. B. Bezerra and N. R. Khusnutdinov, Class. Quantum Grav. [**[19]{}**]{}, 3127 (2002). L. Parker, Dan Vollick and Ian Redmount, Phys. Rev. [**[D56]{}**]{}, 2113 (1997). [^1]: E-mail:[email protected] [^2]: E-mail:[email protected]
--- abstract: 'The quantum double Schubert polynomials studied by Kirillov and Maeno, and by Ciocan-Fontanine and Fulton, are shown to represent Schubert classes in Kim’s presentation of the equivariant quantum cohomology of the flag variety. We define parabolic analogues of quantum double Schubert polynomials, and show that they represent Schubert classes in the equivariant quantum cohomology of partial flag varieties. For complete flags Anderson and Chen [@AC] have announced a proof with different methods.' address: - 'Department of Mathematics, University of Michigan, 530 Church St., Ann Arbor, MI 48109 USA' - 'Department of Mathematics, Virginia Tech, Blacksburg, VA 24061-0123 USA' author: - Thomas Lam - Mark Shimozono title: Quantum double Schubert polynomials represent Schubert classes --- Introduction ============ Let $H^*({\mathrm{Fl}})$, $H^T({\mathrm{Fl}})$, $QH^*({\mathrm{Fl}})$, and $QH^T({\mathrm{Fl}})$ be the ordinary, $T$-equivariant, quantum, and $T$-equivariant quantum cohomology rings of the variety ${\mathrm{Fl}}={\mathrm{Fl}}_n$ of complete flags in ${\mathbb{C}}^n$, where $T$ is the maximal torus of $GL_n$. All cohomologies are with ${\mathbb{Z}}$ coefficients. The flag variety ${\mathrm{Fl}}_n$ has a stratification by Schubert varieties $X_w$, labeled by permutations $w \in S_n$, which gives rise to [*Schubert bases*]{} for each of these rings. This paper is concerned with the problem of finding polynomial representatives for the Schubert bases in a ring presentation of these (quantum) cohomology rings. These ring presentations are due to Borel [@Bo] in the classical case, and Ciocan-Fontanine [@Cio], Givental and Kim [@GK] and Kim [@Kim] in the quantum case. This is a basic problem in classical and quantum Schubert calculus. This problem has been solved in the first three cases: the Schubert polynomials are known to represent Schubert classes in $H^*({\mathrm{Fl}}_n)$ by work of Bernstein, Gelfand, and Gelfand [@BGG] and Lascoux and Schützenberger [@LS]; the double Schubert polynomials, also due to Lascoux and Schützenberger, represent Schubert classes in $H^T({\mathrm{Fl}}_n)$ (see for example [@Bi]); and the quantum Schubert polynomials of Fomin, Gelfand, and Postnikov [@FGP] represent Schubert classes in $QH^*({\mathrm{Fl}})$. These polynomials are the subject of much research by combinatorialists and geometers and we refer the reader to these references for a complete discussion of these ideas. Our first main result (Theorem \[T:main\]) is that the quantum double Schubert polynomials of [@KM; @CF] represent equivariant quantum Schubert classes in $QH^T({\mathrm{Fl}})$. Anderson and Chen [@AC] have announced a proof of this result using the geometry of Quot schemes. Now let $SL_n/P$ be a partial flag variety, where $P$ denotes a parabolic subgroup of $SL_n$. In non-quantum Schubert calculus, the functoriality of (equivariant) cohomology implies that the (double) Schubert polynomials labeled by minimal length coset representatives again represent Schubert classes in $H^*(SL_n/P)$ or $H^T(SL_n/P)$. This is not the case in quantum cohomology. Ciocan-Fontanine [@Cio2] solved the corresponding problem in $QH^*(SL_n/P)$, extending Fomin, Gelfand, and Postnikov’s work to the parabolic case. Here we introduce the parabolic quantum double Schubert polynomials. We show that these polynomials represent Schubert classes in the torus-equivariant quantum cohomology $QH^T(SL_n/P)$ of a partial flag variety. 
Earlier, Mihalcea [@Mi2] had found polynomial representatives for the Schubert basis in the special case of the equivariant quantum cohomology of the Grassmannian. We thank Linda Chen for communicating to us her joint work with Anderson [@AC]. The (quantum) cohomology rings of flag manifolds ================================================ Presentations ------------- Let $x=(x_1,\dotsc,x_n)$, $a=(a_1,\dotsc,a_n)$, and $q=(q_1,\dotsc,q_{n-1})$ be indeterminates. We work in the graded polynomial ring ${\mathbb{Z}}[x;q;a]$ with $\deg(x_i)=\deg(a_i)=1$ and $\deg(q_i)=2$. Let $S={\mathbb{Z}}[a]$ be identified with $H^T({\mathrm{pt}})$ and let $e_j(a_1,\dotsc,a_n)$ be the elementary symmetric polynomial. Let $C_n$ be the tridiagonal $n\times n$ matrix with entries $x_i$ on the diagonal, $-1$ on the superdiagonal, and $q_i$ on the subdiagonal. Define the polynomials $E_j^n\in {\mathbb{Z}}[x;q]$ by $$\begin{aligned} \det(C_n-t \,{\mathrm{Id}}) = \sum_{j=0}^n (-t)^{n-j} E_j^n.\end{aligned}$$ Let $J$ (resp. $J^a$, $J^q$, $J^{qa}$) be the ideal in ${\mathbb{Z}}[x]$ (resp. $S[x]$, ${\mathbb{Z}}[x;q]$, $S[x;q]$) generated by the elements $e_j(x)$ (resp. $e_j(x)-e_j(a)$; $E_j^n$; $E_j^n-e_j(a)$) for $1\le j\le n$; in all cases the $j$-th generator is homogeneous of degree $j$. We have $$\begin{aligned} \label{E:H} H^*({\mathrm{Fl}})&\cong {\mathbb{Z}}[x]/J \\ \label{E:HT} H^T({\mathrm{Fl}})&\cong S[x]/J^a \\ \label{E:QH} QH^*({\mathrm{Fl}})&\cong {\mathbb{Z}}[x;q]/J^q \\ \label{E:QHT} QH^T({\mathrm{Fl}})&\cong S[x;q]/J^{qa}\end{aligned}$$ as algebras over ${\mathbb{Z}}$, $S$, ${\mathbb{Z}}[q]$, and $S[q]$ respectively. The presentation of $H^*({\mathrm{Fl}})$ is a classical result due to Borel. The presentations of $QH^*({\mathrm{Fl}})$ and $QH^T({\mathrm{Fl}})$ are due to Ciocan-Fontanine [@Cio], Givental and Kim [@GK] and Kim [@Kim]. Schubert bases -------------- Let $X_w=\overline{B_-wB/B}\subset{\mathrm{Fl}}$ be the opposite Schubert variety, where $w\in W = S_n$ is a permutation, $B\subset SL_n$ is the upper triangular Borel and $B_-$ the opposite Borel. The ring $H^*({\mathrm{Fl}})$ (resp. $H^T({\mathrm{Fl}})$) has a basis over ${\mathbb{Z}}$ (resp. $S$) denoted $[X_w]$ (resp. $[X_w]_T$) associated with the Schubert varieties. Given three elements $u,v,w\in W$ and an element of the coroot lattice $\beta\in Q^\vee$ one may define a genus zero Gromov-Witten invariant $c^w_{uv}(\beta) \in{\mathbb{Z}}_{\ge0}$ (see [@GK; @Kim]) and an associative ring $QH^*({\mathrm{Fl}})$ with ${\mathbb{Z}}[q]$-basis $\{\sigma^w\mid w\in W\}$ (called the quantum Schubert basis) such that $$\begin{aligned} \sigma^u \sigma^v = \sum_{w,\beta} q_\beta c^w_{uv}(\beta) \sigma^w\end{aligned}$$ where $q_\beta = \prod_{i=1}^{n-1} q_i^{k_i}$ with $\beta=\sum_{i=1}^{n-1} k_i \alpha_i^\vee$ and $k_i\in{\mathbb{Z}}$. Similarly there is a basis of a ring $QH^T({\mathrm{Fl}})$ with $S[q]$-basis given by the equivariant quantum Schubert classes $\sigma^w_T$, defined using equivariant Gromov-Witten invariants, which are elements of $S$. We shall use the following characterization of $QH^T({\mathrm{Fl}})$ and its Schubert basis $\{\sigma^w_T\mid w\in W\}$ due to Mihalcea [@Mi]. Let $\Phi^+$ be the set of positive roots and $\rho$ the half sum of positive roots. For $w\in W$ define $$\begin{aligned} A_w &= \{\alpha\in \Phi^+\mid ws_\alpha\gtrdot w \} \\ B_w &= \{\alpha\in \Phi^+\mid \ell(ws_\alpha)=\ell(w)+1-{\langle \alpha^\vee\,,\,2\rho\rangle} \}.\end{aligned}$$ Let $\omega_i(a)=a_1+\dotsm+a_i\in S$ be the fundamental weight. 
We write $A_w^n$ and $B_w^n$ to emphasize that the computation pertains to ${\mathrm{Fl}}=SL_n/B$. \[T:QHTchar\] [@Mi Corollary 8.2] For $w\in S_n$ and $1\le i\le n-1$ a Dynkin node, the equivariant quantum Schubert classes $\sigma^w_T$ satisfy the equivariant quantum Chevalley-Monk formula $$\begin{aligned} \label{E:EQChev} \sigma^{s_i}_T \sigma^w_T &= (-\omega_i(a) + w\cdot\omega_i(a))\, \sigma^w_T + \sum_{\alpha\in A_w^n} {\langle \alpha^\vee\,,\,\omega_i\rangle} \sigma_T^{ws_\alpha} + \sum_{\alpha\in B_w^n} q_{\alpha^\vee} {\langle \alpha^\vee\,,\,\omega_i\rangle} \sigma_T^{ws_\alpha}.\end{aligned}$$ Moreover these structure constants determine the Schubert basis $\{\sigma^w_T\mid w\in S_n\}$ and the ring $QH^T(SL_n/B)$ up to isomorphism as ${\mathbb{Z}}[q_1,\dotsc,q_{n-1};a_1,\dotsc,a_n]$-algebras. Quantum double Schubert polynomials =================================== Now we work with infinite sets of variables $x=(x_1,x_2,\dotsc)$, $q=(q_1,q_2,\dotsc)$, and $a=(a_1,a_2,\dotsc)$. Various Schubert polynomials ---------------------------- Let $\partial_i^a = \alpha_i^{-1}(1-s_i^a)$ be the divided difference operator, where $\alpha_i=a_i-a_{i+1}$ and $s_i^a$ is the operator that exchanges $a_i$ and $a_{i+1}$. Since the operators $\partial_i=\partial_i^a$ satisfy the braid relations one may define $\partial_w=\partial_{i_1}\dotsm \partial_{i_\ell}$ where $w=s_{i_1}\dotsm s_{i_\ell}$ is a reduced decomposition. For $w\in S_n$ define the double Schubert polynomial ${\mathfrak{S}}_w(x;a)$ [@LS] and the quantum double Schubert polynomial ${\tilde{{\mathfrak{S}}}}_w(x;a)\in S[x;q]$ [@KM; @CF] by $$\begin{aligned} \label{E:dSchubdef} {\mathfrak{S}}_w(x;a) &= (-1)^{\ell(ww_0^{(n)})}\partial_{ww_0^{(n)}}^a \prod_{i=1}^{n-1} \prod_{j=1}^i (x_j-a_{n-i}) \\ \label{E:qdSchubdef} {\tilde{{\mathfrak{S}}}}_w(x;a) &= (-1)^{\ell(ww_0^{(n)})}\partial_{ww_0^{(n)}}^a \prod_{i=1}^{n-1} \det(C_i-a_{n-i}{\mathrm{Id}})\end{aligned}$$ where $w_0^{(n)}\in S_n$ is the longest element.[^1] Note that it is equivalent to define ${\mathfrak{S}}_w(x;a)$ by setting the $q_i$ variables to zero in ${\tilde{{\mathfrak{S}}}}_w(x;a)$. Let $S_\infty = \bigcup_{n\ge1} S_n$ be the infinite symmetric group under the embeddings $i_n:S_n\to S_{n+1}$ that add a fixed point at the end of a permutation. Due to the stability property [@KM] ${\tilde{{\mathfrak{S}}}}_{i_n(w)}(x;a) = {\tilde{{\mathfrak{S}}}}_w(x;a)$ for $w\in S_n$, the quantum double Schubert polynomials ${\tilde{{\mathfrak{S}}}}_w(x;a)$ are well-defined for $w\in S_\infty$. Similarly, ${\mathfrak{S}}_w(x;a)$ is well-defined for $w\in S_\infty$. For $w\in S_\infty$, define the (resp. quantum) Schubert polynomial ${\mathfrak{S}}_w(x) = {\mathfrak{S}}_w(x;0)$ (resp. ${\tilde{{\mathfrak{S}}}}_w(x) = {\tilde{{\mathfrak{S}}}}_w(x;0)$) by setting the $a_i$ variables to zero in the (resp. quantum) double Schubert polynomial. Note that ${\mathfrak{S}}_w(x)$, ${\mathfrak{S}}_w(x;a)$, ${\tilde{{\mathfrak{S}}}}_w(x)$, and ${\tilde{{\mathfrak{S}}}}_w(x;a)$ are all homogeneous of degree $\ell(w)$. The original definition of quantum Schubert polynomial in [@FGP] is different. However, their definition and the one used here are easily seen to be equivalent [@KM], due to the commutation of the divided differences in the $a$ variables and the quantization map ${\theta}$ of [@FGP], which we review in §\[SS:qt\]. 
\[L:lead\] [@Mac] For $w\in S_\infty$, among the terms of highest degree in the $x$ variables, the reverse lex leading term in any of ${\mathfrak{S}}_w(x)$, ${\mathfrak{S}}_w(x;a)$, ${\tilde{{\mathfrak{S}}}}_w(x)$, and ${\tilde{{\mathfrak{S}}}}_w(x;a)$ is the monomial $x^{{\mathrm{code}}(w)}$, where $$\begin{aligned} {\mathrm{code}}(w) &= (c_1,c_2,\dotsc) \\ c_i &= |\{j\in{\mathbb{Z}}_{>0} \mid \text{$i<j$ and $w(j)<w(i)$} \}| \qquad\text{for $i\in{\mathbb{Z}}_{>0}$.}\end{aligned}$$ \[L:code\] [@Mac] There is a bijection from $S_\infty$ to the set of tuples $(c_1,c_2,\dotsc)$ of nonnegative integers, almost all zero, given by $w\mapsto{\mathrm{code}}(w)$. Moreover it restricts to a bijection from $S_n$ to the set of tuples $(c_1,\dotsc,c_n)$ such that $0\le c_i \le n-i$ for all $1\le i\le n$. \[L:basisquot\] 1. $\{{\mathfrak{S}}_w(x)\mid w\in S_n\}$ is a ${\mathbb{Z}}$-basis of ${\mathbb{Z}}[x_1,\dotsc,x_n]/J_n$. 2. $\{{\mathfrak{S}}_w(x;a)\mid w\in S_n\}$ is a ${\mathbb{Z}}[a]$-basis of ${\mathbb{Z}}[x_1,\dotsc,x_n;a_1,\dotsc,a_n]/J_n^a$. 3. $\{{\tilde{{\mathfrak{S}}}}_w(x)\mid w\in S_n\}$ is a ${\mathbb{Z}}[q]$-basis of ${\mathbb{Z}}[x_1,\dotsc,x_n;q_1,\dotsc,q_{n-1}]/J_n^q$. 4. $\{{\tilde{{\mathfrak{S}}}}_w(x;a)\mid w\in S_n\}$ is a ${\mathbb{Z}}[q,a]$-basis of ${\mathbb{Z}}[x_1,\dotsc,x_n;q_1,\dotsc,q_{n-1};a_1,\dotsc,a_n]/J_n^{q,a}$. Since in each case the highest degree part of the $j$-th ideal generator in the $x$ variables is $e_j(x_1,\dotsc,x_n)$, any polynomial may be reduced modulo the ideal until its leading term in the $x$ variables is $x^\gamma$ where $\gamma=(\gamma_1,\dotsc,\gamma_n)\in{\mathbb{Z}}_{\ge0}^n$ with $\gamma_i \le n-i$ for $1\le i\le n$. But these are the leading terms of the various kinds of Schubert polynomials. Geometric bases --------------- Under the isomorphism (\[E:H\]) (resp. (\[E:HT\]), (\[E:QH\])) the Schubert basis $[X_w]$ (resp. $[X_w]_T$, $\sigma^w$) corresponds to ${\mathfrak{S}}_w(x)$ (resp. ${\mathfrak{S}}_w(x;a)$, ${\tilde{{\mathfrak{S}}}}_w(x)$), by [@LS; @BGG] for $H^*({\mathrm{Fl}})$, [@Bi] for $H^T({\mathrm{Fl}})$, and [@FGP] for $QH^*({\mathrm{Fl}})$. Our first main result is: \[T:main\] Under the $S[q]$-algebra isomorphism (\[E:QHT\]) the quantum equivariant Schubert basis element $\sigma^w_T$ corresponds to the quantum double Schubert polynomial ${\tilde{{\mathfrak{S}}}}_w(x;a)$. Theorem \[T:main\] is proved in Section \[S:proofmain\]. Stable quantization {#SS:qt} ------------------- This section follows [@FGP]. Let $e_i^r = e_i(x_1,x_2,\dotsc,x_r)\in{\mathbb{Z}}[x]$ be the elementary symmetric polynomial for integers $0\le i\le r$. By [@FGP Prop. 3.3], ${\mathbb{Z}}[x]$ has a ${\mathbb{Z}}$-basis of *standard monomials* $e_I=\prod_{r\ge1} e_{i_r}^r$ where $I=(i_1,i_2,\dotsc)$ is a sequence of nonnegative integers, almost all zero, with $0\le i_r \le r$ for all $r\ge 1$. The stable quantization map is the ${\mathbb{Z}}[q]$-module automorphism ${\theta}$ of ${\mathbb{Z}}[x;q]$ given by $$\begin{aligned} e_I \mapsto E_I := \prod_{j\ge1} E_{i_j}^j.\end{aligned}$$ By [@FGP; @CF; @KM] we have (this is the definition of quantum Schubert polynomial in [@FGP]) $$\begin{aligned} \label{E:qtSchub} {\theta}({\mathfrak{S}}_w(x)) = {\tilde{{\mathfrak{S}}}}_w(x)\qquad\text{for all $w\in S_\infty$.}\end{aligned}$$ The map ${\theta}$ is extended by ${\mathbb{Z}}[a]$-linearity to a ${\mathbb{Z}}[q,a]$-module automorphism of ${\mathbb{Z}}[x,q,a]$. 
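To make the generators $E_j^n$ and the quantization map concrete, the following sympy sketch (ours; a small sanity check rather than part of the development) extracts the quantum elementary polynomials from $\det(C_n-t\,{\mathrm{Id}})$ for $n=3$ and verifies that they reduce to the classical elementary symmetric polynomials $e_j(x_1,\dotsc,x_n)$ when the $q$ variables are set to zero.

```python
# Sketch: read off E_j^n from the tridiagonal matrix C_n (x_i on the diagonal,
# -1 above, q_i below) and check that E_j^n|_{q=0} = e_j(x_1,...,x_n).
from itertools import combinations
import sympy as sp

def quantum_elementary(n):
    t = sp.Symbol('t')
    x = sp.symbols(f'x1:{n + 1}')
    q = sp.symbols(f'q1:{n}') if n > 1 else ()
    C = sp.zeros(n, n)
    for i in range(n):
        C[i, i] = x[i]
        if i + 1 < n:
            C[i, i + 1] = -1       # superdiagonal
            C[i + 1, i] = q[i]     # subdiagonal
    char = sp.expand((C - t * sp.eye(n)).det())
    # det(C_n - t Id) = sum_j (-t)^{n-j} E_j^n, so E_j^n = (-1)^{n-j} [t^{n-j}] det
    E = [sp.expand((-1) ** (n - j) * char.coeff(t, n - j)) for j in range(n + 1)]
    return x, q, E

def e_classical(j, xs):
    # ordinary elementary symmetric polynomial e_j(xs)
    return sp.Add(*[sp.Mul(*c) for c in combinations(xs, j)]) if j else sp.Integer(1)

x, q, E = quantum_elementary(3)
for j, Ej in enumerate(E):
    assert sp.expand(Ej.subs({qi: 0 for qi in q}) - e_classical(j, x)) == 0
    print(f"E_{j}^3 =", Ej)        # e.g. E_2^3 = x1*x2 + x1*x3 + x2*x3 + q1 + q2
```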
Cauchy formulae --------------- The double Schubert polynomials satisfy [@Mac] $$\begin{aligned} \label{E:dsCauchy} {\mathfrak{S}}_w(x;a) = \sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a) {\mathfrak{S}}_v(x)\end{aligned}$$ where $v{\le^L}w$ denotes the left weak order, defined by $\ell(wv^{-1})+\ell(v)=\ell(w)$. For a geometric explanation of this identity see [@A]. We have [@KM; @CF] $$\begin{aligned} \label{E:quantCauchy} {\theta}({\mathfrak{S}}_w(x;a)) &= {\tilde{{\mathfrak{S}}}}_w(x;a) = \sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a) {\tilde{{\mathfrak{S}}}}_v(x).\end{aligned}$$ The first equality follows from the divided difference definitions of ${\tilde{{\mathfrak{S}}}}_w(x;a)$ and ${\mathfrak{S}}_w(x;a)$ and the commutation of the divided differences in the $a$-variables with quantization. The second equality follows from quantizing . We require the explicit formulae for Schubert polynomials indexed by simple reflections. \[L:qdsreflection\] We have $$\begin{aligned} \label{E:Schubreflect} {\mathfrak{S}}_{s_i}(x) &= \omega_i(x) = x_1+x_2+\dotsm+x_i \\ \label{E:qSchubreflect} {\tilde{{\mathfrak{S}}}}_{s_i}(x) &= {\mathfrak{S}}_{s_i}(x) \\ \label{E:qdSchubreflect} {\tilde{{\mathfrak{S}}}}_{s_i}(x;a) &= \omega_i(x) - \omega_i(a).\end{aligned}$$ Since $s_i$ is an $i$-Grassmannian permutation with associated partition consisting of a single box, its Schubert polynomial is the Schur polynomial [@Mac] ${\mathfrak{S}}_{s_i}(x) = S_1[x_1,\dotsc,x_i] = \omega_i(x)$, proving . For we have ${\tilde{{\mathfrak{S}}}}_{s_i}(x) = {\theta}({\mathfrak{S}}_{s_i}(x))={\theta}(e_{1,i})=E_{1,i} = \omega_i(x)$. For , by we have ${\tilde{{\mathfrak{S}}}}_{s_i}(x;a) = {\tilde{{\mathfrak{S}}}}_{s_i}(x) - {\mathfrak{S}}_{s_i}(a) = \omega_i(x) - \omega_i(a)$ as required. Chevalley-Monk rules for Schubert polynomials ============================================= The Chevalley-Monk formula describes the product of a divisor class and an arbitrary Schubert class in the cohomology ring $H^*({\mathrm{Fl}})$. The goal of this section is to establish the Chevalley-Monk rule for quantum double Schubert polynomials. The Chevalley-Monk rules for (double, quantum, quantum double) Schubert polynomials should be viewed as product rules for the cohomologies of an infinite-dimensional flag ind-variety ${\mathrm{Fl}}_\infty$ of type $A_\infty$ with Dynkin node set ${\mathbb{Z}}_{>0}$ and simple bonds between $i$ and $i+1$ for all $i\in {\mathbb{Z}}_{>0}$. Let $\Phi^+= \{\alpha_{ij}=a_i-a_j\mid 1\le i<j\}$ be the set of positive roots. Let $\alpha_{ij}^\vee=a_i-a_j$ and $\alpha_i^\vee=\alpha_{i,i+1}^\vee$ by abuse of notation. Let $s_{ij}=s_{\alpha_{ij}}$ for $1\le i<j$. For $w\in S_\infty$ let $A_w$ and $B_w$ be defined as before Theorem \[T:QHTchar\] but using the infinite set of positive roots $\Phi^+$ and letting $\rho=(0,-1,-2,\dotsc)$. To distinguish between the finite and limiting infinite cases, we denote by $A_w^n$ the set $A_w$ for $w\in S_n$ which uses the positive roots of $SL_n$. Schubert polynomials -------------------- \[P:Chev\] [@Che; @Mo]. 
For $w\in S_n$ and $1\le i\le n-1$, in $H^*({\mathrm{Fl}})$ we have $$\begin{aligned} [X_{s_i}] [X_w] = \sum_{\alpha\in A_w^n} {\langle \alpha^\vee\,,\,\omega_i\rangle} [X_{ws_\alpha}].\end{aligned}$$ \[P:poly\] [@Mac] For $w\in S_\infty$ and $i\in {\mathbb{Z}}_{>0}$ the Schubert polynomials satisfy the identity in ${\mathbb{Z}}[x]$ given by $$\begin{aligned} \label{E:poly} {\mathfrak{S}}_{s_i}(x) {\mathfrak{S}}_w(x) &= \sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\mathfrak{S}}_{ws_\alpha}(x).\end{aligned}$$ It is necessary to take a large-rank limit $(n \gg 0)$ to compare Propositions \[P:Chev\] and \[P:poly\]. Let $n=2$. We have $[X_{s_1}]^2 = 0$ in $H^*({\mathrm{Fl}}_2)$ since $A_{s_1}=\emptyset$ for $SL_2$. Lifting to polynomials we have ${\mathfrak{S}}_{s_1}^2 = x_1^2 = {\mathfrak{S}}_{s_2s_1}$ since $A_{s_1}=\{\alpha_{13}\}$, which is not a positive root for $SL_2$. Note that ${\mathfrak{S}}_{s_2s_1}\in J_2$ and $s_2s_1\in S_3\setminus S_2$. In $H^*({\mathrm{Fl}}_n)$ for $n\ge3$ we have $[X_{s_1}]^2 = [X_{s_2s_1}]$. Quantum Schubert polynomials ---------------------------- \[P:qChev\] [@FGP] For $w\in S_n$ and $1\le i\le n-1$, in $QH^*({\mathrm{Fl}}_n)$ we have $$\begin{aligned} \sigma^{s_i}\sigma^w = \sum_{\alpha\in A_w^n} {\langle \alpha^\vee\,,\,\omega_i\rangle} \sigma^{ws_\alpha} + \sum_{\alpha\in B_w^n} q_{\alpha^\vee} {\langle \alpha^\vee\,,\,\omega_i\rangle} \sigma^{ws_\alpha}.\end{aligned}$$ \[P:Qpoly\] [@FGP] For $i\in {\mathbb{Z}}_{>0}$ and $w\in S_\infty$ the quantum Schubert polynomials satisfy the identity in ${\mathbb{Z}}[x;q]$ given by $$\begin{aligned} \label{E:Qpoly} {\tilde{{\mathfrak{S}}}}_{s_i}(x) {\tilde{{\mathfrak{S}}}}_w(x) &= \sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{ws_\alpha}(x) + \sum_{\alpha\in B_w} q_{\alpha^\vee} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{ws_\alpha}(x).\end{aligned}$$ Double Schubert polynomials --------------------------- \[P:EChev\] [@KK] [@Rob]. For $w\in S_n$ and $1\le i\le n-1$, in $H^T({\mathrm{Fl}}_n)$ we have $$\begin{aligned} [X_{s_i}]_T [X_w]_T = (-\omega_i(a) + w\cdot \omega_i(a)) [X_w]_T +\sum_{\alpha\in A_w^n} {\langle \alpha^\vee\,,\,\omega_i\rangle} [X_{ws_\alpha}]_T.\end{aligned}$$ The following is surely known but we include a proof for lack of a known reference. \[P:ECpoly\] For $w\in S_\infty$ and $i\in{\mathbb{Z}}_{>0}$, the double Schubert polynomials satisfy the identity in ${\mathbb{Z}}[x,a]$ given by $$\begin{aligned} \label{E:ECpoly} {\mathfrak{S}}_{s_i}(x;a) {\mathfrak{S}}_w(x;a) = (-\omega_i(a)+w\cdot \omega_i(a)) {\mathfrak{S}}_w(x;a)+ \sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\mathfrak{S}}_{ws_\alpha}(x;a).\end{aligned}$$ Fix $w \in S_\infty$. We observe that the set $A_w$ is finite. Let $N$ be large enough so that all appearing terms make sense for $S_N$. By [@Bi] under the isomorphism , $[X_w]_T\mapsto {\mathfrak{S}}_w(x;a)+J_N^a$ for $w\in S_N$. By Proposition \[P:EChev\] for $H^T({\mathrm{Fl}}_N)$, equation holds modulo an element $f\in J_N^a$. We may write $f=\sum_{v\in S_\infty} b_v {\mathfrak{S}}_v(x;a)$ where $b_v\in {\mathbb{Z}}[a]$ and only finitely many are nonzero. Choose $n\ge N$ large enough so that $v\in S_n$ and $b_v\in {\mathbb{Z}}[a_1,\dotsc,a_n]$ for all $v$ with $b_v\ne 0$. Applying Proposition \[P:EChev\] again for $H^T({\mathrm{Fl}}_n)$ we deduce that $f\in J_n^a$. By Lemma \[L:basisquot\] it follows that $f=0$ as required. 
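The following sympy sketch (ours; it is not part of the proof) implements the divided differences $\partial_i^a$ and the definition of ${\mathfrak{S}}_w(x;a)$ inside $S_3$, confirms the closed forms ${\mathfrak{S}}_{s_i}(x;a)=\omega_i(x)-\omega_i(a)$ for $i=1,2$, and verifies the instance $w=s_1$, $i=1$ of Proposition \[P:ECpoly\], for which the only root in $A_{s_1}$ pairing nontrivially with $\omega_1$ is $\alpha_{13}$.

```python
# Sketch (not from the paper): double Schubert polynomials in S_3 via divided
# differences in the a variables, following the definition of S_w(x;a) above.
import sympy as sp

n = 3
x = sp.symbols(f'x1:{n + 1}')
a = sp.symbols(f'a1:{n + 1}')

def length(w):
    # number of inversions of the one-line notation w
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def right_mult_s(w, i):
    # w * s_i: swap positions i, i+1 (1-indexed) of the one-line notation
    w = list(w)
    w[i - 1], w[i] = w[i], w[i - 1]
    return tuple(w)

def reduced_word(w):
    # peel descents: if w(i) > w(i+1) then w = (w s_i) s_i with l(w s_i) = l(w) - 1
    word, w = [], tuple(w)
    while length(w) > 0:
        i = next(k for k in range(1, len(w)) if w[k - 1] > w[k])
        word.append(i)
        w = right_mult_s(w, i)
    return list(reversed(word))          # w = s_{word[0]} s_{word[1]} ... as functions

def divdiff(i, f):
    # partial_i^a = (a_i - a_{i+1})^{-1} (1 - s_i^a)
    swapped = f.subs({a[i - 1]: a[i], a[i]: a[i - 1]}, simultaneous=True)
    return sp.cancel((f - swapped) / (a[i - 1] - a[i]))

w0 = tuple(range(n, 0, -1))               # longest element (3, 2, 1)
top = sp.Mul(*[x[i] - a[j] for i in range(n) for j in range(n) if i + j <= n - 2])
# top = S_{w_0}(x;a) = (x_1 - a_1)(x_1 - a_2)(x_2 - a_1) for n = 3

def compose(u, v):
    # (u v)(k) = u(v(k)) on one-line notation
    return tuple(u[v[k] - 1] for k in range(len(v)))

def double_schubert(w):
    word = reduced_word(compose(w, w0))   # a reduced word for w w_0
    f = top
    for i in reversed(word):              # apply the rightmost divided difference first
        f = divdiff(i, f)
    return sp.expand((-1) ** len(word) * f)

s1, s2 = (2, 1, 3), (1, 3, 2)
# closed forms S_{s_i}(x;a) = omega_i(x) - omega_i(a)
assert sp.expand(double_schubert(s1) - (x[0] - a[0])) == 0
assert sp.expand(double_schubert(s2) - (x[0] + x[1] - a[0] - a[1])) == 0

# instance w = s_1, i = 1 of the equivariant Chevalley-Monk rule:
# S_{s_1}^2 = (a_2 - a_1) S_{s_1} + S_{(3,1,2)},  since s_1 s_{alpha_13} = (3,1,2)
lhs = double_schubert(s1) ** 2
rhs = (a[1] - a[0]) * double_schubert(s1) + double_schubert((3, 1, 2))
assert sp.expand(lhs - rhs) == 0
print("Chevalley-Monk instance verified")
```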
Quantum double Schubert polynomials ----------------------------------- Theorem \[T:QHTchar\] gives the equivariant quantum Chevalley-Monk rule for $QH^T({\mathrm{Fl}}_n)$. We cannot use the multiplication rule in Theorem \[T:QHTchar\] directly because we are trying to prove that the quantum double Schubert polynomials represent Schubert classes. We deduce the following product formula by cancelling down to the equivariant case which was proven above. \[P:QEChevalley\] The quantum double Schubert polynomials satisfy the equivariant quantum Chevalley-Monk rule in ${\mathbb{Z}}[x,q,a]$: for all $w\in S_\infty$ and $i\ge1$ we have $$\begin{aligned} \label{E:EQCpoly} {\tilde{{\mathfrak{S}}}}_{s_i}(x;a) {\tilde{{\mathfrak{S}}}}_w(x;a) &= (-\omega_i(a)+w\cdot\omega_i(a)) {\tilde{{\mathfrak{S}}}}_w(x;a) + \sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{ws_\alpha}(x;a) \\ \notag &+ \sum_{\alpha\in B_w} q_{\alpha^\vee} {\langle \alpha^\vee\,,\,\omega_i\rangle}{\tilde{{\mathfrak{S}}}}_{ws_\alpha}(x;a).\end{aligned}$$ Starting with and using Lemma \[L:qdsreflection\] and we have $$\begin{aligned} 0 &= -{\mathfrak{S}}_{s_i}(x;a){\mathfrak{S}}_w(x;a) + (-\omega_i(a)+w\cdot\omega_i(a)){\mathfrak{S}}_w(x;a)+\sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\mathfrak{S}}_{ws_\alpha}(x;a) \\ &=(\omega_i(a)-{\mathfrak{S}}_{s_i}(x)){\mathfrak{S}}_w(x;a) + (-\omega_i(a)+w\cdot\omega_i(a)){\mathfrak{S}}_w(x;a)+\sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\mathfrak{S}}_{ws_\alpha}(x;a) \\ &=-{\mathfrak{S}}_{s_i}(x)\sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a){\mathfrak{S}}_v(x) + (w\cdot\omega_i(a)){\mathfrak{S}}_w(x;a)+\sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\mathfrak{S}}_{ws_\alpha}(x;a) \\ &= -\sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a)\sum_{\alpha\in A_v} {\langle \alpha^\vee\,,\,\omega_i\rangle}{\mathfrak{S}}_{vs_\alpha}(x) + (w\cdot\omega_i(a)){\mathfrak{S}}_w(x;a)+\sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\mathfrak{S}}_{ws_\alpha}(x;a).\end{aligned}$$ Quantizing and rearranging, we have $$\begin{aligned} &\,(w\cdot\omega_i(a)){\tilde{{\mathfrak{S}}}}_w(x;a)+\sum_{\alpha\in A_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{ws_\alpha}(x;a) \\ &= \sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a)\sum_{\alpha\in A_v} {\langle \alpha^\vee\,,\,\omega_i\rangle}{\tilde{{\mathfrak{S}}}}_{vs_\alpha}(x) \\ &= \sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a)\left({\tilde{{\mathfrak{S}}}}_{s_i}(x) {\tilde{{\mathfrak{S}}}}_v(x)-\sum_{\alpha\in B_v} {\langle \alpha^\vee\,,\,\omega_i\rangle}{\tilde{{\mathfrak{S}}}}_{vs_\alpha}(x) \right) \\ &= {\tilde{{\mathfrak{S}}}}_{s_i}(x) {\tilde{{\mathfrak{S}}}}_w(x;a)-\sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a)\sum_{\alpha\in B_v} {\langle \alpha^\vee\,,\,\omega_i\rangle}{\tilde{{\mathfrak{S}}}}_{vs_\alpha}(x).\end{aligned}$$ Therefore to prove it suffices to show that $$\begin{aligned} \label{E:corrections} &\,\,\,\sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a) \sum_{\alpha\in B_v} q_{\alpha^\vee} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{vs_\alpha}(x) \\ \notag &= \sum_{\alpha\in B_w} {\langle \alpha^\vee\,,\,\omega_i\rangle} q_{\alpha^\vee} \sum_{v{\le^L}ws_\alpha} {\mathfrak{S}}_{vs_\alpha w^{-1}}(-a) {\tilde{{\mathfrak{S}}}}_v(x).\end{aligned}$$ Let $$\begin{aligned} A &= \{(v,\alpha)\in W\times \Phi^+ \mid \text{$v{\le^L}w$ and $\alpha\in B_v$} \} \\ B &= \{(u,\alpha)\in W\times \Phi^+ \mid 
\text{$u{\le^L}ws_\alpha$ and $\alpha\in B_w$} \}.\end{aligned}$$ To prove it suffices to show that there is a bijection $A\to B$ given by $(v,\alpha)\mapsto (vs_\alpha,\alpha)$. Let $(v,\alpha)\in A$. Then $w = (wv^{-1})(v)$ is length-additive since $v{\le^L}w$ and $v = (vs_\alpha)(s_\alpha)$ is length-additive because $\ell(s_\alpha)={\langle \alpha^\vee\,,\,2\rho\rangle}-1$ and $\alpha\in B_v$. Therefore $w = (wv^{-1})(vs_\alpha)(s_\alpha)$ is length-additive. It follows that $ws_\alpha = (wv^{-1})(vs_\alpha)$ is length-additive and that $vs_\alpha {\le^L}ws_\alpha$. Moreover we have $\ell(ws_\alpha)=\ell(wv^{-1})+\ell(vs_\alpha)=\ell(wv^{-1})+\ell(v)+1-{\langle \alpha^\vee\,,\,2\rho\rangle} =\ell(w)+1-{\langle \alpha^\vee\,,\,2\rho\rangle}$. Therefore $(vs_\alpha,\alpha)\in B$. Conversely suppose $(u,\alpha)\in B$. Let $v=u s_\alpha$. Arguing as before, $w = (ws_\alpha u^{-1})(u)(s_\alpha) = (w v^{-1})(v)$ are length-additive. We deduce that $v{\le^L}w$ and that $\ell(v)=\ell(w)-\ell(wv^{-1})=\ell(ws_\alpha)+1-{\langle \alpha^\vee\,,\,2\rho\rangle}-\ell(wv^{-1})=\ell(vs_\alpha)+1-{\langle \alpha^\vee\,,\,2\rho\rangle}$ so that $\alpha\in B_v$ as required. Proof of Theorem \[T:main\] {#S:proofmain} =========================== Let $I^a$ be the ideal in ${\mathbb{Z}}[x,a]$ generated by $e_i^p(x)-e_i^p(a)$ for $p\ge n$ and $i\ge1$, and $a_i$ for $i > n$. Let $J^a\subset{\mathbb{Z}}[x,a]$ be the ${\mathbb{Z}}[a]$-submodule spanned by ${\mathfrak{S}}_w(x;a)$ for $w\in S_\infty\setminus S_n$ and $a_i {\mathfrak{S}}_u(x;a)$ for $i >n$ and any $u\in S_\infty$. We shall show that $I^a=J^a$. Let $c_{i,p}=s_{p+1-i}\dotsm s_{p-2} s_{p-1} s_p\in S_\infty\setminus S_p$ be the cycle of length $i$. We note that the family $\{e_i^p(x)-e_i^p(a)\mid 1\le i\le p\}$ is unitriangular over ${\mathbb{Z}}[a]$ with the family $\{{\mathfrak{S}}_{c_{i,p}}(x;a)\mid 1\le i\le p\}$. Since ${\mathbb{Z}}[x,a] = \bigoplus_{u\in S_\infty} {\mathbb{Z}}[a] {\mathfrak{S}}_u(x;a)$, to show that $I^a\subset J^a$ it suffices to show that ${\mathfrak{S}}_u(x;a) {\mathfrak{S}}_{c_{i,p}}(x,a)\in J^a$ for all $p\ge n$, $i\ge1$, and $u\in S_\infty$. But this follows from the fact that the product of ${\mathfrak{S}}_u(x,a){\mathfrak{S}}_v(x;a)$ is a ${\mathbb{Z}}[a]$-linear combination of ${\mathfrak{S}}_w(x;a)$ where $w\ge u$ and $w\ge v$. Let $K$ be the ideal in ${\mathbb{Z}}[x,a]$ generated by $a_i$ for $i > n$. Then $I^a/K$ has ${\mathbb{Z}}[a_1,\dotsc,a_n]$-basis given by standard monomials $e_I$ with $i_r>0$ for some $r\ge n$, while $J^a/K$ has ${\mathbb{Z}}[a_1,\dotsc,a_n]$-basis given by ${\mathfrak{S}}_w(x;a)$ for $w\in S_\infty\setminus S_n$. The quotient ring ${\mathbb{Z}}[x,a]/K$ has ${\mathbb{Z}}[a_1,\dotsc,a_n]$-basis given by all standard monomials $e_I$ for $I=(i_1,i_2,\dotsc)$ with $0\le i_p \le p$ for all $p \ge1$ and almost all $i_p$ zero, and also by all double Schubert polynomials ${\mathfrak{S}}_w(x;a)$ for $w\in S_\infty$. But the standard monomials $e_I$ with $i_n=i_{n+1}=\dotsm=0$ are in graded bijection with the ${\mathfrak{S}}_w(x;a)$ for $w\in S_n$. It follows that $I^a=J^a$ by graded dimension counting. Let $J_\infty^{qa}$ be the ideal of ${\mathbb{Z}}[x,q,a]$ generated by $E_i^p-e_i^p(a)$ for all $i\ge1$ and $p\ge n$, together with $q_i$ for $i\ge n$ and $a_i$ for $i > n$. 
We wish to show that $$\begin{aligned} \label{E:bigqSchubideal} {\tilde{{\mathfrak{S}}}}_w(x;a)\in J_\infty^{qa}\qquad\text{for all $w\in S_\infty\setminus S_n$}.\end{aligned}$$ For this it suffices to show that $$\begin{aligned} \label{E:thetaimage} \theta(J^a_\infty) \subset J^{qa}_\infty.\end{aligned}$$ To prove it suffices to show that $$\begin{aligned} \label{E:thetastdgen} \theta(e_I(e_i^p(x)-e_i^p(a))) \in J^{qa}_\infty\qquad\text{for standard monomials $e_I$, $i\ge1$ and $p\ge n$.}\end{aligned}$$ To apply $\theta$ to this element we must express $e_I e_i^p$ in standard monomials. The only nonstandardness that can occur is if $i_p > 0$. In that case one may use [@FGP (3.2)]: $$\begin{aligned} e_i^p e_j^p &= e_{i-1}^p e_{j+1}^p + e_j^p e_i^{p+1}- e_{i-1}^p e_{j+1}^{p+1}.\end{aligned}$$ Note that ultimately the straightening of $e_I e_i^p$ into standard monomials, only changes factors of the form $e_k^q$ for $k\ge1$ and $q\ge p$. Let $E_I = \prod_{r\ge1} E_{i_r}^r$ for $I=(i_1,i_2,\dotsc)$. If we consider $E_I (E_i^p - e_i^p(a))$ and use [@FGP (3.6)] $$\begin{aligned} E_i^p E_j^p &= E_{i-1}^pE_{j+1}^p + E_j^p E_i^{p+1}-E_{i-1}^p E_{j+1}^{p+1} + q_p( E_{j-1}^{p-1} E_{i-1}^p- E_{i-2}^{p-1} E_j^p).\end{aligned}$$ to rewrite it into quantized standard monomials, we see that the two straightening processes differ only by multiples of $q_p$, $q_{p+1}$, etc. Therefore $$\begin{aligned} \theta(e_I (e_i^p(x)-e_i^p(a))) - E_I(E_i^p-e_i^p(a)) \in J_\infty^{qa}\end{aligned}$$ But $E_I(E_i^p-e_i^p(a))\in J_\infty^{qa}$ so holds and follows. The ring ${\mathbb{Z}}[x,q,a]/J_\infty^{qa}$ has a ${\mathbb{Z}}[q_1,\dotsc,q_{n-1};a_1,\dotsc,a_n]$-basis given by ${\tilde{{\mathfrak{S}}}}_w(x;a)$ for $w\in S_n$. This follows from Lemmata \[L:lead\] and \[L:code\]. Moreover this basis satisfies the equivariant quantum Chevalley-Monk rule for $SL_n$ by Proposition \[P:QEChevalley\]. By Theorem \[T:QHTchar\] there is an isomorphism of ${\mathbb{Z}}[q_1,\dotsc,q_{n-1};a_1,\dotsc,a_n]$-algebras $QH^T(SL_n/B)\to {\mathbb{Z}}[x,q,a]/J_\infty^{qa}$. Moreover, $\sigma^w_T$ and ${\tilde{{\mathfrak{S}}}}_w(x;a)$ (or rather, its preimage in $QH^T({\mathrm{Fl}})$) are related by an automorphism of $QH^T({\mathrm{Fl}})$. But the Schubert divisor class $\sigma^{s_i}_T$ is (by definition) represented by a usual double Schubert polynomial ${\mathfrak{S}}_{s_i}(x;a) = {\tilde{{\mathfrak{S}}}}_{s_i}(x;a)$ (Lemma \[L:qdsreflection\]) in Kim’s presentation, and these divisor classes generate $QH^T({\mathrm{Fl}})$ over ${\mathbb{Z}}[q_1,\dotsc,q_{n-1};a_1,\dotsc,a_n]$. Thus the automorphism must be the identity, completing the proof. Parabolic case {#S:para} ============== Notation -------- Fix a composition $(n_1,n_2,\dotsc,n_k)\in{\mathbb{Z}}_{>0}^k$ with $n_1+n_2+\dotsm+n_k=n$. Let $P\subset SL_n({\mathbb{C}})$ be the parabolic subgroup consisting of block upper triangular matrices with block sizes $n_1,n_2,\dotsc,n_k$. Then $SL_n/P$ is isomorphic to the variety of partial flags in ${\mathbb{C}}^n$ with subspaces of dimensions $N_j:=n_1+n_2+\dotsm+n_j$ for $0\le j\le k$. Denote by $W_P$ the Weyl group for the Levi factor of $P$ and $W^P$ the set of minimum length coset representatives in $W/W_P$. For every $w\in W$ there exists unique elements $w^P\in W^P$ and $w_P\in W_P$ such that $w=w^P w_P$; moreover this factorization is length-additive. Let $w_0\in W$ be the longest element and let $w_0=w_0^Pw_{0,P}$ so that $w_{0,P}\in W_P$ is the longest element. \[X:parabolic\] Let $(n_1,n_2,n_3)=(2,1,3)$. 
Then $(N_1,N_2,N_3)=(2,3,6)$, $w_0^P=564123$ (that is, $w\in S_6 $ is the permutation with $w(1)=5$, $w(2)=6$, etc.), and $w_{0,P}=213654$. Parabolic quantum double Schubert polynomials --------------------------------------------- Let ${\mathbb{Z}}[x_1,\dotsc,x_n;q_1,\dotsc,q_{k-1}]$ be the graded polynomial ring with $\deg(x_i)=1$ and $\deg(q_j)=n_j+n_{j+1}$ for $1\le j\le k-1$. Following [@AS] [@Cio2] let $D=D^P$ be the $n\times n$ matrix with entries $x_i$ on the diagonal, $-1$ on the superdiagonal, and entry $(N_{j+1},N_{j-1}+1)$ given by $-(-1)^{n_{j+1}} q_j$ for $1\le j\le k-1$. For $1\le j\le k$ let $D_j$ be the upper left $N_j \times N_j$ submatrix of $D$ and for $1\le i\le N_j$ define the elements $G_i^j \in {\mathbb{Z}}[x;q]$ by $$\begin{aligned} \det(D_j-t\,{\mathrm{Id}}) = \sum_{i=0}^{N_j} (-t)^{N_j-i} G_i^j.\end{aligned}$$ The polynomial $G_i^j$ is homogeneous of degree $i$. For $w\in W^P$, we define the parabolic quantum double Schubert polynomial ${\tilde{{\mathfrak{S}}}}^P_w(x;a)$ by $$\begin{aligned} {\tilde{{\mathfrak{S}}}}_w^P(x;a) &= (-1)^{\ell(w (w_0^P)^{-1})}\partial_{w (w_0^P)^{-1}}^a \prod_{j=1}^{k-1} \prod_{i=n-N_{j+1}+1}^{n-N_j} \det(D_j-a_i {\mathrm{Id}}).\end{aligned}$$ \[X:parabolicqSchub\] Continuing Example \[X:parabolic\] we have $$\begin{aligned} {\tilde{{\mathfrak{S}}}}^P_{w_0^P}(x;a) &= \det(D_1-a_4{\mathrm{Id}}) \det(D_2-a_1{\mathrm{Id}})\det(D_2-a_2{\mathrm{Id}})\det(D_2-a_3{\mathrm{Id}})\\ &=(x_1-a_4)(x_2-a_4)\prod_{i=1}^3((x_1-a_i)(x_2-a_i)(x_3-a_i)+q_1).\end{aligned}$$ The parabolic quantum double Schubert polynomials ${\tilde{{\mathfrak{S}}}}^P_w(x;a)$ have specializations similar to the quantum double Schubert polynomials. Let $w\in W^P$. 1. We define the parabolic quantum Schubert polynomials by the specialization ${\tilde{{\mathfrak{S}}}}^P_w(x)={\tilde{{\mathfrak{S}}}}^P_w(x;0)$ which sets $a_i=0$ for all $i$. In Lemma \[L:qparaspec\] it is shown that these polynomials coincide with those of Ciocan-Fontanine [@Cio2], whose definition uses a parabolic analogue of the quantization map of [@FGP]. 2. Setting $q_i=0$ for all $i$ one obtains the double Schubert polynomial ${\mathfrak{S}}_w(x;a)$. 3. Setting both $a_i$ and $q_i$ to zero one obtains the Schubert polynomial ${\mathfrak{S}}_w(x)$. Let $J_P$ be the ideal in ${\mathbb{Z}}[x]^{W_P}$ (resp. $J_P^a\subset S[x]^{W_P}$, $J_P^q\subset {\mathbb{Z}}[x]^{W_P}[q]$, $J_P^{qa}\subset S[x]^{W_P}[q]$) generated by the elements $e_i^n(x)$, (resp. $e_i^n(x)-e_i^n(a)$, $G_i^k$, $G_i^k-e_i^n(a)$) for $1\le i\le n$. The aim of this section is to establish (4) of the following theorem. \[T:mainpara\] 1. There is an isomorphism of ${\mathbb{Z}}$-algebras [@BGG; @LS] $$\begin{aligned} H^*(SL_n/P) &\cong {\mathbb{Z}}[x]^{W_P}/J_P \\ [X_w] &\mapsto {\mathfrak{S}}_w(x)+J_P.\end{aligned}$$ 2. There is an isomorphism of $S$-algebras [@Bi] $$\begin{aligned} H^T(SL_n/P) &\cong S[x]^{W_P}/J_P^a \\ [X_w]_T &\mapsto {\mathfrak{S}}_w(x;a)+J_P^a.\end{aligned}$$ 3. There is an isomorphism of ${\mathbb{Z}}[q]$-algebras [@AS] [@Kim2] [@Kim3] $$\begin{aligned} QH^*(SL_n/P) &\cong {\mathbb{Z}}[x]^{W_P}[q]/J_P^q \\ \sigma^{P,w} &\mapsto {\tilde{{\mathfrak{S}}}}_w^P(x)+J_P^q.\end{aligned}$$ 4. 
There is an isomorphism of $S[q]$-algebras $$\begin{aligned} \label{E:parabquanteqiso} QH^T(SL_n/P) &\cong S[x]^{W_P}[q]/J_P^{qa} \\ \label{E:parabquanteqisoclass} \sigma^{P,w}_T&\mapsto {\tilde{{\mathfrak{S}}}}_w^P(x;a) + J_P^{qa}.\end{aligned}$$ Here $[X_w]$, $[X_w]_T$, $\sigma^{P,w}$, and $\sigma^{P,w}_T$, denote the Schubert bases for their respective cohomology rings for $w\in W^P$. The isomorphism is due to [@Kim3]. We shall establish , namely, that under this isomorphism, the parabolic quantum double Schubert polynomials are the images of parabolic equivariant quantum Schubert classes. Stability of parabolic quantum double Schubert polynomials {#SS:parabstable} ---------------------------------------------------------- The following Lemma can be verified by direct computation and induction. Let $\beta=(\beta_1,\dotsc,\beta_n)\in{\mathbb{Z}}_{\ge0}^n$ be such that $\beta_i \le n-i$ for $2\le i\le n$. We have $$\begin{aligned} \partial_{n-1}^a\dotsm \partial_2^a \partial_1^a \cdot a^\beta &= \begin{cases} 0 &\text{if $\beta_1 <n-1$} \\ a_1^{\beta_2}a_2^{\beta_3}\dotsm a_{n-1}^{\beta_n} &\text{if $\beta_1=n-1$.} \end{cases}\end{aligned}$$ Suppose $w\in W^P$ is such that $w(r)=r$ for $N_{k-1}<r\le n$. Let $w_0^{(p,q)}$ be the minimum length coset representative of the longest element in $S_{p+q}/(S_p\times S_q)$ and let $w_0^{P_-}$ be the minimum length coset representative of the longest element in $S_{N_{k-1}}/(S_{n_1}\times\dotsm\times S_{n_{k-1}})$. We have the length-additive factorization $w_0^P = w_0^{(N_{k-1},n_k)} w_0^{P_-}$. Also $\ell(w (w_0^P)^{-1}) = \ell(w (w_0^{P_-})^{-1})+\ell(w_0^{(n_k,N_{k-1})})$. Using the above Lemma repeatedly we have $$\begin{aligned} (-1)^{\ell(ww_0^P)} {\tilde{{\mathfrak{S}}}}_w^P(x;a) &= \partial_{w(w_0^P)^{-1}}^a \prod_{j=1}^{k-1} \prod_{i=n-N_{j+1}+1}^{n-N_j} \det(D_j-a_i{\mathrm{Id}}) \\ &= \partial_{w w_0^{P_-}}^a \partial_{w_0^{(n_k,N_{k-1})}}^a \left(\prod_{j=1}^{k-2} \prod_{i=n-N_{j+1}+1}^{n-N_j} \det(D_j-a_i{\mathrm{Id}})\right) \prod_{i=1}^{n_k} \det(D_{k-1}-a_i{\mathrm{Id}}) \\ &= (-1)^{\ell(w_0^{(n_k,N_{k-1})})}\partial_{w w_0^{P_-}}^a w_0^{(n_k,N_{k-1})} \cdot \left(\prod_{j=1}^{k-2} \prod_{i=n-N_{j+1}+1}^{n-N_j} \det(D_j-a_i{\mathrm{Id}})\right) \\ &= (-1)^{\ell(w_0^{(n_k,N_{k-1})})}\partial_{w w_0^{P_-}}^a \prod_{j=1}^{k-2} \prod_{i=N_{k-1}-N_{j+1}+1}^{N_{k-1}-N_j} \det(D_j-a_i{\mathrm{Id}}) \\ &= (-1)^{\ell(w_0^{(n_k,N_{k-1})})} (-1)^{\ell(w(w_0^{P_-})^{-1})} {\tilde{{\mathfrak{S}}}}_w^{P_-}(x;a)\end{aligned}$$ The final outcome is $$\begin{aligned} {\tilde{{\mathfrak{S}}}}_w^P(x;a) &= {\tilde{{\mathfrak{S}}}}_w^{P_-}(x;a).\end{aligned}$$ This means that if we append a block of size $m$ to ${{n_\bullet}}$ and append $m$ fixed points to $w\in W^P$, the parabolic quantum double Schubert polynomial remains the same. Stable parabolic quantization ----------------------------- This section follows [@Cio2]. Consider an infinite sequence of positive integers ${{n_\bullet}}=(n_1,n_2,\dotsc)$. Let $N_j=n_1+\dotsm+n_j$ for $j\ge 1$. Let $W=S_\infty =\bigcup_{n\ge1} S_n$ be the infinite symmetric group (under the inclusion maps $S_n\to S_{n+1}$ that add a fixed point at the end), $W_P$ the subgroup of $W$ generated by $s_i$ for $i\notin \{N_1,N_2,\dotsc\}$, $W^P$ the set of minimum length coset representatives in $W/W_P$, etc. 
Let ${\mathbb{Y}}^P$ be the set of tuples of partitions ${{\lambda^\bullet}}=({\lambda}^{(1)},{\lambda}^{(2)},\dotsc)$ such that ${\lambda}^{(j)}$ is contained in the rectangle with $n_{j+1}$ rows and $N_j$ columns and almost all ${\lambda}^{(j)}$ are empty. Define the standard monomial by $$\begin{aligned} g_{{\lambda^\bullet}}= \prod_{j\ge1} \prod_{i=1}^{n_{j+1}} g_{{\lambda}^{(j)}_i}^j\end{aligned}$$ where $g_r^j=e_r^{N_j}(x)$. The following is a consequence of [@Cio2] by taking a limit. [@Cio2] $\{g_{{\lambda^\bullet}}\mid {{\lambda^\bullet}}\in{\mathbb{Y}}^P \}$ is a ${\mathbb{Z}}$-basis of ${\mathbb{Z}}[x]^{W_P}$. We also observe that $\{{\mathfrak{S}}_w(x)\mid w\in W^P \}$ is a ${\mathbb{Z}}$-basis of ${\mathbb{Z}}[x]^{W_P}$. Define the ${\mathbb{Z}}[q]$-module automorphism ${\theta}^P$ of ${\mathbb{Z}}[x,q]$ by ${\theta}^P(g_{{\lambda^\bullet}})=G_{{\lambda^\bullet}}$ where $$\begin{aligned} G_{{\lambda^\bullet}}= \prod_{j\ge1} \prod_{i=1}^{n_{j+1}} G_{{\lambda}^{(j)}_i}^j.\end{aligned}$$ By ${\mathbb{Z}}[a]$-linearity it defines a ${\mathbb{Z}}[q,a]$-module automorphism of ${\mathbb{Z}}[x,q,a]$, also denoted ${\theta}^P$. \[L:qparaspec\] For all $w\in W^P$, $$\begin{aligned} \label{E:paraquant} {\tilde{{\mathfrak{S}}}}_w^P(x) &= {\theta}^P({\mathfrak{S}}_w(x)) \\ \label{E:paraquantequiv} {\tilde{{\mathfrak{S}}}}_w^P(x;a) &= {\theta}^P({\mathfrak{S}}_w(x;a)).\end{aligned}$$ Let $w\in W^P$. We may compute ${\tilde{{\mathfrak{S}}}}_w^P(x;a)$ by working with respect to $(n_1,n_2,\dotsc,n_k)$ for some finite $k$; the result is independent of $k$ by the previous subsection. Then \[E:paraquantequiv\] is an immediate consequence of the commutation of ${\theta}^P$ and the divided difference operators in the $a$ variables. Equation \[E:paraquant\] follows from \[E:paraquantequiv\] by setting the $a$ variables to zero. Cauchy formula -------------- Keeping the notation of the previous subsection, we have the following Cauchy formula. For all $w\in W^P$ we have $$\begin{aligned} {\tilde{{\mathfrak{S}}}}^P_w(x;a) = \sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a) {\tilde{{\mathfrak{S}}}}^P_v(x).\end{aligned}$$ Observe that if $v{\le^L}w$ then $v\in W^P$ so that ${\tilde{{\mathfrak{S}}}}^P_v(x)$ makes sense. We have $$\begin{aligned} {\tilde{{\mathfrak{S}}}}^P_w(x;a) &= {\theta}^P({\mathfrak{S}}_w(x;a)) \\ &= {\theta}^P(\sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a){\mathfrak{S}}_v(x)) \\ &= \sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a) {\tilde{{\mathfrak{S}}}}_v^P(x).\end{aligned}$$ Parabolic Chevalley-Monk rules ------------------------------ Fix ${{n_\bullet}}=(n_1,n_2,\dotsc,n_k)$ with $\sum_{j=1}^k n_j=n$ and let $P$ be the parabolic defined by ${{n_\bullet}}$ and so on. Let $Q^\vee_P$, $\Phi^+_P$, and $\rho_P$ be respectively the coroot lattice, the positive roots, and the half sum of positive roots, for the Levi factor of $P$. Let $\eta_P:Q^\vee\to Q^\vee/Q^\vee_P$ be the natural projection. Let $\pi_P:W\to W^P$ be the map $w\mapsto w^P$ where $w=w^P w_P$ and $w^P\in W^P$ and $w_P\in W_P$. Define the sets of roots $$\begin{aligned} \label{E:paraA} A_{P,w}^n &= \{\alpha\in\Phi^+\setminus \Phi^+_P\mid \text{$w s_\alpha \gtrdot w$ and $ws_\alpha\in W^P$} \} \\ \label{E:paraB} B_{P,w}^n &= \{\alpha\in\Phi^+\setminus \Phi^+_P\mid \text{$\ell(\pi_P(w s_\alpha))=\ell(w)+1-{\langle \alpha^\vee\,,\,2(\rho-\rho_P)\rangle}$} \}.\end{aligned}$$ For ${\lambda}= \sum_{i=1}^{k-1} b_i \alpha_{N_i}^\vee \in Q^\vee/Q_P^\vee$ with $b_i \in {\mathbb{Z}}_{\geq 0}$ we let $q_\lambda = \prod_i q_i^{b_i}$.
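To illustrate this notation in the setting of Example \[X:parabolic\] (a small worked computation, included only for orientation): there $(N_1,N_2)=(2,3)$ and $Q^\vee_P$ is spanned by $\alpha_1^\vee,\alpha_4^\vee,\alpha_5^\vee$, so the images of $\alpha_2^\vee=\alpha_{N_1}^\vee$ and $\alpha_3^\vee=\alpha_{N_2}^\vee$ span $Q^\vee/Q^\vee_P$. For the highest root $\alpha_{16}$ one finds $$\eta_P(\alpha_{16}^\vee)=\eta_P(\alpha_1^\vee+\alpha_2^\vee+\alpha_3^\vee+\alpha_4^\vee+\alpha_5^\vee)\equiv\alpha_{N_1}^\vee+\alpha_{N_2}^\vee \pmod{Q^\vee_P}, \qquad\text{so that}\qquad q_{\eta_P(\alpha_{16}^\vee)}=q_1q_2.$$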
The following is Mihalcea’s characterization of $QH^T(SL_n/P)$ which extends Theorem \[T:QHTchar\]. \[T:Mipara\][@Mi] For $w\in W^P$ and $i\in\{N_1,N_2,\dotsc,N_{k-1}\}$ (that is, $s_i\in W^P$), we have $$\begin{aligned} \sigma^{P,s_i}_T \sigma^{P,w}_T &= (-\omega_i(a)+w\cdot \omega_i(a)) \sigma^{P,w}_T \\ &+ \sum_{\alpha\in A_{P,w}^n} {\langle \alpha^\vee\,,\,\omega_i\rangle} \sigma^{P,ws_\alpha}_T + \sum_{\alpha\in B_{P,w}^n} {\langle \alpha^\vee\,,\,\omega_i\rangle} q_{\eta_P(\alpha^\vee)} \sigma^{P,\pi_P(ws_\alpha)}_T.\end{aligned}$$ Moreover these structure constants determine the Schubert basis $\{\sigma^{P,w}_T\mid w\in W^P\}$ and the ring $QH^T(SL_n/P)$ up to isomorphism as ${\mathbb{Z}}[q_1,\dotsc,q_{k-1};a_1,\dotsc,a_n]$-algebras. We now move to the context in which $(n_1,n_2,\dotsc)$ is an infinite sequence of positive integers. Let $A_{P,w}$ and $B_{P,w}$ be the analogues of the sets of roots defined in and where $\rho=(0,-1,-2,\dotsc)$ and $\rho_P$ is the juxtaposition of $(0,-1,\dotsc,1-n_j)$ for $j\ge1$. \[P:pqeChev\] For $w\in W^P$ and $i$ such that $i\in \{N_1,N_2,\dotsc,\}$ (that is, $s_i\in W^P$), the parabolic quantum double Schubert polynomials satisfy the identity $$\begin{aligned} {\tilde{{\mathfrak{S}}}}^P_{s_i}(x;a) {\tilde{{\mathfrak{S}}}}_w^P(x;a) &= (-\omega_i(a)+w\cdot \omega_i(a)) {\tilde{{\mathfrak{S}}}}_w^P(x;a) + \sum_{\alpha\in A_{P,w}} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{ws_\alpha}^P(x;a) \\ &+ \,\sum_{\alpha\in B_{P,w}} {\langle \alpha^\vee\,,\,\omega_i\rangle} q_{\eta_P(\alpha^\vee)} {\tilde{{\mathfrak{S}}}}_{\pi_P(ws_\alpha)}^P(x;a).\end{aligned}$$ The proof proceeds entirely analogously to that of Proposition \[P:QEChevalley\], starting with the Chevalley-Monk formula for double Schubert polynomials (applied in the special case that $w\in W^P$ and $s_i\in W^P$) and reducing to the following identity: $$\begin{aligned} & \,\,\sum_{v{\le^L}w} {\mathfrak{S}}_{vw^{-1}}(-a) \sum_{\alpha\in B_{P,v}} q_{\eta_P(\alpha^\vee)} {\langle \alpha^\vee\,,\,\omega_i\rangle} {\tilde{{\mathfrak{S}}}}_{\pi_P(vs_\alpha)}^P(x) \\ &= \sum_{\alpha\in B_{P,w}} {\langle \alpha^\vee\,,\,\omega_i\rangle} q_{\eta_P(\alpha^\vee)} \sum_{v{\le^L}\pi_P(ws_\alpha)} {\mathfrak{S}}_{v(\pi_P(ws_\alpha))^{-1}}(-a) {\tilde{{\mathfrak{S}}}}^P_v(x).\end{aligned}$$ For this it suffices to show that there is a bijection $(v,\alpha)\mapsto(\pi_P(vs_\alpha),\alpha)$ and inverse bijection $(u,\alpha)\mapsto (\pi_P(us_\alpha),\alpha)$ between the sets $$\begin{aligned} A &= \{(v,\alpha)\in W^P \times (\Phi^+\setminus \Phi^+_P)\mid \text{$v{\le^L}w$ and $\alpha\in B_{P,v}$} \} \\ B &= \{(u,\alpha)\in W^P \times (\Phi^+\setminus \Phi^+_P)\mid \text{$u{\le^L}\pi_P(ws_\alpha)$ and $\alpha\in B_{P,w}$} \}\end{aligned}$$ such that for $(v,\alpha)\in A$ we have $$\begin{aligned} \label{E:Schubindex} vw^{-1} = \pi_P(vs_\alpha) (\pi_P(ws_\alpha))^{-1}.\end{aligned}$$ To establish this bijection we use [@LamSh:qaf Lemma 10.14], which asserts that elements $\alpha\in B_{P,w}$ automatically satisfy the additional condition $\ell(ws_\alpha) = \ell(w) + 1 - {\langle \alpha^\vee\,,\,2\rho\rangle}$. Let $(v,\alpha)\in A$. Then we have length-additive factorizations $w = (wv^{-1})\cdot v$, $v=(vs_\alpha)\cdot s_\alpha$, and $vs_\alpha=\pi_P(vs_\alpha) \cdot x$ for some $x\in W_P$. Therefore $w=(wv^{-1})\cdot \pi_P(vs_\alpha) \cdot x \cdot s_\alpha $ is length-additive. This implies that $ws_\alpha = (wv^{-1})\cdot \pi_P(vs_\alpha) \cdot x$ is length-additive. 
One may deduce from this that $\pi_P(ws_\alpha)=(wv^{-1})\pi_P(vs_\alpha)$ and therefore that holds. The conditions that $(\pi_P(vs_\alpha),\alpha)\in B$ are readily verified. This shows that the map $A\to B$ is well-defined. The rest of the proof is similar. Proof of Theorem \[T:mainpara\](4) ---------------------------------- The proof is analogous to the proof of Theorem \[T:main\]. Given the finite sequence $(n_1,\dotsc,n_k)$ and parabolic subgroup $P$, consider the extension $(n_1,\dotsc,n_k,1,1,1,\dotsc)$ by an infinite sequence of $1$s. We use the notation $P_\infty$ to label the corresponding objects. That is, $W_P\cong W_{P_\infty}$ is the subgroup of $S_\infty$, $W^{P_\infty}$ is the set of minimum-length coset representatives in $S_\infty / W_{P_\infty}$, and so on. Let $R_P={\mathbb{Z}}[x,a]^{W_{P_\infty}}$ and $R^q_P = {\mathbb{Z}}[x,q,a]^{W_{P_\infty}}$, where $x = (x_1,x_2,\dotsc)$, $q = (q_1,q_2,\ldots)$, and $q_1,q_2,\dotsc,q_{k-1}$ are identified with the quantum parameters in Theorem \[T:mainpara\](4). Define $J^a_{P,\infty}$ to be the ideal of $R_P$ generated by $g_i^p(x)-e_i^{N_p}(a)$ for $i\ge1$ and $p\ge k$ and by $a_i$ for $i>n$. Note that for $p=k+j$ with $j\ge0$ we have $g_i^p(x)-e_i^{N_p}(a)=e_i^{n+j}(x)-e_i^{n+j}(a)$. Let $I^a_{P,\infty}$ be the ${\mathbb{Z}}[a]$-submodule of $R_P$ spanned by ${\mathfrak{S}}_w(x;a)$ for $w \in W^{P_\infty} \setminus S_n$, and by $a_i{\mathfrak{S}}_u(x;a)$ for $i > n$ and any $u\in W^{P_\infty}$. We claim that $I^a_{P,\infty}$ is an ideal. This follows easily from the fact that the only double Schubert polynomials occurring in the expansion of a product ${\mathfrak{S}}_w(x;a) {\mathfrak{S}}_v(x;a)$ lie above $w$ and $v$ in Bruhat order. But then it follows from Theorem \[T:mainpara\](2) and dimension counting that $I^a_{P,\infty} = J^a_{P,\infty}$. Define $J^{qa}_{P,\infty}$ to be the ideal of $R^q_P$ generated by $G_i^p-e_i^{N_p}(a)$ for $p\ge k$ and $i\ge 1$, and $a_i$ for $i > n$, and $q_i$ for $i \ge k$. We claim that ${\tilde{{\mathfrak{S}}}}^P_w(x;a) \in J^{qa}_{P,\infty}$ for $w \in W^{P_\infty} \setminus S_n$. This would follow from the definitions if we could establish that $\theta^{P_\infty}(J^a_{P,\infty}) \subset J^{qa}_{P,\infty}$. Since $\theta^{P_\infty}$ is trivial on the equivariant variables $a_1,a_2,\ldots$, it suffices to show that ${\theta}^{P_\infty}(g_{{{\lambda^\bullet}}}^P(g_i^p(x)-e_i^{N_p}(a))) \in J^{qa}_{P,\infty}$, for each $i\ge1$, $p\ge k$, and each standard monomial $g_{{{\lambda^\bullet}}}^P$. To apply ${\theta}^{P_\infty}$ to $g_{{{\lambda^\bullet}}}^P g_i^p$ we must first standardize this monomial. This can be achieved by using a parabolic version of the straightening algorithm of [@FGP Proposition 3.3]. The only nonstandard part of $g_{{{\lambda^\bullet}}}^P g_i^p$ is the possible presence of a factor $g_j^p g_i^p$. This can be standardised using only the (non-parabolic) relation [@FGP (3.2)] – any factors $g_m^{k'}$ for $k'<k$ in $g_{{{\lambda^\bullet}}}^P g_i^p$ are not modified. Similarly, the product $G_{{{\lambda^\bullet}}}^P G_i^p$ can be standardized using the following variant of [@FGP Lemma 3.5] which can be deduced from [@Cio2 (3.5)] : $$G^p_i G^{p+1}_{j+1}+ G^p_{i+1}G^p_{j} \pm q_p G^{p-1}_{i-n_{p+1}-n_p} G^p_j = G^p_j G^{p+1}_{i+1} +G^p_{k+1} G^p_i \pm q_p G^{p-1}_{j-n_{p+1}-n_p}G^p_i.$$ We note that when $p\ge k$, this relation only involves quantum variables $q_k, q_{k+1}, \dotsc$. 
Thus modulo $q_k,q_{k+1},\dotsc$, the straightening relation for $g_{{{\lambda^\bullet}}}^P g_i^p$ and for $G_{{{\lambda^\bullet}}}^PG_i^p$ coincide. It follows that ${\theta}^{P_\infty}(g_{{{\lambda^\bullet}}}^P(g_i^p(x)-e_i^{N_p}(a)) = G_{{{\lambda^\bullet}}}^P(G_i^p - e_i^{N_p}(a)) \mod J^{qa}_{P,\infty}$, and thus ${\tilde{{\mathfrak{S}}}}^P_w(x;a) \in J^{qa}_{P,\infty}$ for $w \in W^{P_\infty} \setminus S_n$. But $R^{qa}_{P,\infty}/J^{qa}_{P,\infty}$ has rank $|W^P|$ over $Z[q_1,\ldots,q_{k-1},a_1,\ldots,a_n]$ and so it follows that $J^{qa}_{P,\infty}$ is spanned by ${\tilde{{\mathfrak{S}}}}^P_w(x;a) \in J^{qa}_{P,\infty}$ for $w \in W^{P_\infty} \setminus S_n$ together with $q_i{\tilde{{\mathfrak{S}}}}^P_u(x;a)$ for $i\ge k$, $a_i{\tilde{{\mathfrak{S}}}}^P_u(x;a)$ for $i > n$ and any $u\in W^{P_\infty}$. Theorem \[T:mainpara\](4) now follows from Proposition \[P:pqeChev\] and the determination of $QH^T(SL_n/P)$ in Theorem \[T:Mipara\]. [LamSh]{} D. Anderson, Double Schubert polynomials and double Schubert varieties, preprint, 2006. D. Anderson and L. Chen, personal communication, 2010. A. Astashkevich and V. Sadov, Quantum cohomology of partial flag manifolds. Comm. Math. Phys. 170 (1995), 503-528. I.N. Bernstein, I.M. Gelfand, and S.I. Gelfand, Schubert cells and the cohomology of a flag space. (Russian) Funkcional. Anal. i Priložen. 7 (1973), no. 1, 64–65. S. Billey, Kostant polynomials and the cohomology ring for $G/B$. Duke Math. J. 96 (1999), no. 1, 205–224. A. Borel, Sur la cohomologie des espaces fibés principaux et des espaces homogènes des groupes de Lie compacts. Ann. of Math. (2) 57 (1953), 115–207. C. Chevalley, Sur les décompositions cellulaires des espaces $G/B$ in Algebraic Groups and their Generalizations: Classical Methods, American Mathematical Society, 1994, pp. 1-23. Proceedings and Symposia in Pure Mathematics, vol. 56, Part 1. I. Ciocan-Fontanine, Quantum cohomology of flag varieties. Intern. Math. Research Notices (1995), no. 6, 263–277. I. Ciocan-Fontanine, On quantum cohomology rings of partial flag varieties. Duke Math. J. 98 (1999), no. 3, 485–524. I. Ciocan-Fontanine and W. Fulton, Quantum double Schubert polynomials. Appendix J in Schubert Varieties and Degeneracy Loci, Lecture Notes in Math. 1689(1998), 134–138. S. Fomin, S.I. Gelfand, and A. Postnikov, Quantum Schubert polynomials. J. Amer. Math. Soc. 10 (1997), no. 3, 565–596. W. Fulton and C. Woodward, On the quantum product of Schubert classes. J. Algebraic Geom. 13 (2004), no. 4, 641–661. A. Givental and B. Kim, Quantum cohomology of flag manifolds and Toda lattices. Comm. Math. Phys. 168 (1995), no. 3, 609–641. J.E. Humphreys, Reflection groups and Coxeter groups. Cambridge Studies in Advanced Mathematics, 29. Cambridge University Press, Cambridge, 1990. xii+204 pp. A.N. Kirillov and T. Maeno, Quantum double Schubert polynomials, quantum Schubert polynomials and Vafa-Intriligator formula. Formal power series and algebraic combinatorics (Vienna, 1997). Discrete Math. 217 (2000), no. 1-3, 191–223. B. Kim, Quantum cohomology of flag manifolds $G/B$ and quantum Toda lattices. Ann. of Math. (2) 149 (1999), no. 1, 129–148. B. Kim, Quantum cohomology of partial flag manifolds and a residue formula for their intersection pairings. Internat. Math. Res. Notices 1995, no. 1, 1–15 (electronic). B. Kim, On equivariant quantum cohomology. Internat. Math. Res. Notices 1996, no. 17, 841–851. B. Kostant and S. Kumar, The nil Hecke ring and cohomology of $G/P$ for a Kac-Moody group $G$. Adv. in Math. 62 (1986), no. 
3, 187–237. T. Lam and M. Shimozono, Quantum cohomology of $G/P$ and homology of affine Grassmannian. Acta. Math. 204 (2010), 49–90. A. Lascoux and M.-P. Schützenberger, Symmetry and flag manifolds, Lecture Notes in Mathematics 996 (1983) 118–144. I. G. Macdonald, Schubert polynomials. Surveys in combinatorics, 1991 (Guildford, 1991), 73–99, London Math. Soc. Lecture Note Ser., 166, Cambridge Univ. Press, Cambridge, 1991. L.C. Mihalcea, On equivariant quantum cohomology of homogeneous spaces: Chevalley formulae and algorithms. Duke Math. J. 140 (2007), no. 2, 321–350. L.C. Mihalcea, Giambelli formulae for the equivariant quantum cohomology of the Grassmannian. Transactions of Amer. Math. Soc. 360 (2008), 2285–2301. D. Monk, The geometry of flag manifolds. Proc. London Math. Soc. (3) 9 (1959), 253–286. D. Peterson, Lecture notes at MIT, 1997. S.A. Robinson, Equivariant Schubert calculus. Thesis (Ph.D.) The University of North Carolina at Chapel Hill. 2001. 49 pp. [^1]: This is not the standard definition of double Schubert polynomial. However it is easily seen to be equivalent using, say, the identity ${\mathfrak{S}}_{w^{-1}}(x;a)={\mathfrak{S}}_w(-a;-x)$ [@Mac].
--- abstract: 'We introduce a new notion of *Morita equivalence* for *diffeological groupoids*, generalising the original notion for Lie groupoids. For this we develop a theory of *diffeological groupoid actions*, *-bundles* and *-bibundles*. We define a notion of *principality* for these bundles, which uses the notion of a *subduction*, generalising the notion of a Lie group(oid) principal bundle. We say two diffeological groupoids are Morita equivalent if and only if there exists a *biprincipal* bibundle between them. Using a Hilsum-Skandalis tensor product, we further define a composition of diffeological bibundles, and obtain a bicategory ${{\mathbf{DiffeolBiBund}}}$. Our main result is the following: a bibundle is biprincipal if and only if it is *weakly invertible* in this bicategory. This generalises a well known theorem from the Lie groupoid theory. As an application of the framework, we prove that the *orbit spaces* of two Morita equivalent diffeological groupoids are diffeomorphic. We also show that the property of a diffeological groupoid to be *fibrating*, and its *category of actions*, are Morita invariants. ***Keywords.*** *Diffeology, Lie groupoids, diffeological groupoids, bibundles, Hilsum-Skandalis products, Morita equivalence, orbit spaces.*' author: - Nesta van der Schaaf bibliography: - 'ARTICLE.bib' title: '**Diffeological Morita Equivalence**' --- =1 Introduction {#section:introduction} ============ *Diffeology* originates from the work of J.-M. Souriau [@souriau1980groupes; @souriau1984groupes] and his students [@donato1983exemple; @donato1984revetements; @iglesias1985fibrations] in the 1980s. The main objects of this theory are *diffeological spaces*, a type of generalised smooth space that extends the traditional notion of a smooth manifold. They make for a convenient framework that deals well with (singular) quotients, function spaces (or otherwise infinite-dimensional objects), fibred products (or otherwise singular subspaces), and other constructions that lie beyond the realm of classical differential topology. As many of these constructions naturally occur in differential topology and -geometry, and since they cannot be studied with their standard tools, diffeology has become a useful addition to the geometer’s toolbox. *Diffeological groupoids* have recently garnered attention in the mathematical physics of general relativity [@blohmann2013groupoid; @glowacki2019], foliation theory [@androulidakis2019diffeological; @garmendia2019hausdorff; @macdonald2020holonomygroupoids], the theory of algebroids [@androulidakis2020integration], the theory of (differentiable) stacks [@roberts2018smooth; @watts2019diffeological], and even in relation to noncommutative geometry [@iglesias2018noncommutative; @iglesias2020quasifolds]. In all but one of these fields (general relativity), the notion of *Morita equivalence* is an important one. Yet, as the authors of [@garmendia2019hausdorff p.3] point out: “The theory of Morita equivalence for diffeological groupoids has not been developed yet.” In the current paper we present one possible development of such a notion, based on the results of the author’s Master thesis [@schaaf2020diffeology-groupoids-and-ME]. This development is a generalisation of the theory of Hilsum-Skandalis bibundles and the Morita equivalence of Lie groupoids, where many definitions and proofs, and certainly the general idea, extend quite straightforwardly to the diffeological case. 
The main exception is that we need to replace surjective submersions with so-called *subductions*. This special type of smooth map is, even on smooth manifolds, slightly weaker than the notion of a surjective submersion, but it turns out that they still share enough of their properties so that the entire theory can be developed[^1]. This development proceeds roughly as follows: based on the notions of *actions* and *bundles* defined in \[section:diffeological groupoid actions and bundles\], we define a diffeological version of a *bibundle* between groupoids (\[definition:diffeological groupoid bibundles\]). These stand in analogy to *bimodules* for rings, and can be treated as a generalised type of morphism between groupoids. This gives a *bicategory* ${{\mathbf{DiffeolBiBund}}}$ of diffeological groupoids, bibundles, and *biequivariant maps* (\[theorem:bicategory DiffBiBund\]). Using the aforementioned notion of a *subduction* (\[definition:subduction\]), we define *biprincipality* of bibundles, and with this, we obtain a notion of *Morita equivalence* for diffeological groupoids (\[definition:Morita equivalence and biprincipality\]). In the bicategory we also get a notion of equivalence, by way of the *weak isomorphisms*. A morphism in a bicategory is called *weakly invertible* if it is invertible *up to 2-isomorphism*. Two objects in a bicategory are called *weakly isomorphic* if there exists a weakly invertible morphism between them. The main point of this paper is to prove a *Morita theorem* for diffeological groupoids, characterising the weakly invertible bibundles, and hence realising Morita equivalence as a particular instance of weak isomorphism: A diffeological bibundle is weakly invertible if and only if it is biprincipal. In other words, two diffeological groupoids are Morita equivalent if and only if they are weakly isomorphic in the bicategory ${{\mathbf{DiffeolBiBund}}}$. A Morita theorem for Lie groupoids has been known in the literature for some time, see e.g. [@landsman2001quantized Proposition 4.21]. Throughout the paper, we shall point out some differences between the diffeological- and Lie theories. The main difference is that, due to technical constraints, a Morita theorem for Lie groupoids only holds in the restricted setting of *left principal* bibundles. The main improvement of \[theorem:weakly invertible bibundles are the biprincipal ones\] over the classical Lie Morita theorem, besides the generalisation to diffeology, is therefore that it considers also a more general class of bibundles. Besides this improvement, with this paper we hope to contribute a complete account of the basic theory of bibundles and Morita equivalence of groupoids, providing detailed proofs and constructions of most necessary technical results, and culminating in a proof of the main \[theorem:weakly invertible bibundles are the biprincipal ones\]. A brief outline of the contents of the paper is as follows. We briefly recall the definition of a diffeology in \[section:diffeology\]. In particular, we describe the diffeologies of fibred products (pullbacks) and quotients, since they will be important to describe the smooth structure of the orbit space and space of composable arrows of a groupoid. We also define and study the behaviour of *subductions*, especially in relation to fibred products. In \[section:diffeological groupoids\] we define *diffeological groupoids*, and highlight some examples from the literature. 
\[section:diffeological groupoid actions and bundles,section:diffeological bibundles\] contain the main contents of this paper. In them, we define the notions of smooth groupoid *actions* and *-bundles*. For the latter we give a new notion of *principality*, generalising the notion of a principal Lie group(oid) bundle. This leads naturally to the definition of a *biprincipal bibundle*, and hence to our definition of *Morita equivalence*. The remainder of \[section:diffeological bibundles\] is dedicated to a proof of \[theorem:weakly invertible bibundles are the biprincipal ones\]. In \[section:some applications\], we describe some *Morita invariants*, by generalising some well known theorems from the Lie theory. We prove: the property of a diffeological groupoid to be *fibrating* is preserved under our notion of Morita equivalence; the *orbit spaces* of two Morita equivalent diffeological groupoids are diffeomorphic; and the categories of representations of two Morita equivalent diffeological groupoids are categorically equivalent. Lastly, in \[section:closing section\], we discuss the question of diffeological Morita equivalence between Lie groupoids. We end the paper with the open \[question:does inclusion pseudofunctor reflect weak equivalence\], and some suggestions for future research. **Acknowledgements.** The author thanks Klaas Landsman and Ioan Mărcu for being the supervisor and second reader of his Master thesis, respectively, and for encouraging him to write the current paper. He also thanks Klaas for feedback on the paper, and Patrick Iglesias-Zemmour for email correspondence. Diffeology {#section:diffeology} ========== One of the main conveniences of *diffeology* [^2] is that the category ${{\mathbf{Diffeol}}}$ of diffeological spaces and smooth maps (\[definition:diffeology\]) is complete, cocomplete, (locally) Cartesian closed, and in fact a quasitopos [@baez2011convenient Theorem 3.2]. This means that we can perform many categorical constructions that are unavailable in the category ${{\mathbf{Mnfd}}}$ of smooth manifolds. From these, the ones that are important for us are pullbacks and quotients. We discuss both of these explicitly below. The approach of diffeology has been compared to other theories of generalised smooth spaces in [@stacey2011comparative; @batubenge2017diffeologicalfrolicher]. For some historical remarks we refer to [@iglesias2013beginning; @iglesias2019introduction] and [@schaaf2020diffeology-groupoids-and-ME Chapter I]. The main reference for this section is the textbook [@iglesias2013diffeology] by Iglesias-Zemmour, in which nearly all of the theory below is already developed. A *Euclidean domain* is an open subset $U\subseteq \mathbb{R}^m$, for arbitrary $m\in\mathbb{N}_{{\geqslant}0}$. A [[*parametrisation*]{}]{} on an arbitrary set $X$ is a function $U\to X$ defined on a Euclidean domain. We denote by ${\mathrm{Param}}(X)$ the set of all parametrisations on $X$. The basic idea behind diffeology is that it determines which parametrisations are *‘smooth’*, in such a way that it captures the properties of ordinary smooth functions on smooth manifolds. The precise definition is as follows: \[definition:diffeology\] Let $X$ be a set. A [[*diffeology*]{}]{} on $X$ is a collection of parametrisations ${\mathcal{D}}_X\subseteq {\mathrm{Param}}(X)$, containing what we call [[*plots*]{}]{}, satisfying the following three axioms: - *(Covering)* Every constant parametrisation $U\to X$ is a plot. 
- *(Smooth Compatibility)* For every plot $\alpha:U_\alpha\to X$ in ${\mathcal{D}}_X$ and every smooth function $h:V\to U_\alpha$ between Euclidean domains, we have that $\alpha\circ h \in{\mathcal{D}}_X$. - *(Locality)* If $\alpha:U_\alpha\to X$ is a parametrisation, and $(U_i)_{i\in I}$ an open cover of $U_\alpha$ such that each restriction $\alpha|_{U_i}$ is a plot of $X$, then $\alpha\in{\mathcal{D}}_X$. A set $X$, paired with a diffeology: $(X,{\mathcal{D}}_X)$, is called a [[*diffeological space*]{}]{}. Although, usually we shall just write $X$. A function $f:(X,{\mathcal{D}}_X)\to (Y,{\mathcal{D}}_Y)$ between diffeological spaces is called *smooth* if for every plot $\alpha\in{\mathcal{D}}_X$ of $X$, the composition $f\circ\alpha\in{\mathcal{D}}_Y$ is a plot of $Y$. The set of all smooth functions between such diffeological spaces is denoted $C^\infty(X,Y)$, and smoothness is preserved by composition. The category of diffeological spaces and smooth maps is denoted by ${{\mathbf{Diffeol}}}$, and the isomorphisms in this category are called *diffeomorphisms*. \[example:Euclidean and manifold diffeologies\] Any Euclidean domain $U$ gets a canonical diffeology ${\mathcal{D}}_U$, called the *Euclidean diffeology*. Its plots are the parametrisations that are smooth in the ordinary sense of the word. Similarly, we get a canonical diffeology ${\mathcal{D}}_M$ for any smooth manifold $M$, called the *manifold diffeology*. With respect to these diffeologies, the notion of smoothness defined in \[definition:diffeology\] agrees with the ordinary one. Hence the inclusion functor ${{\mathbf{Mnfd}}}\hookrightarrow{{\mathbf{Diffeol}}}$ is fully faithful, and we can adopt the previous definition without causing any confusion. \[example:coarse and discrete diffeologies\] Any set $X$ carries two canonical diffeologies. First, the largest diffeology, ${\mathcal{D}}^\bullet_X:={\mathrm{Param}}(X)$, called the *coarse diffeology*, containing all possible parametrisations. Letting $X^\bullet$ denote the diffeological space with the coarse diffeology, it is easy to see that every function $Z\to X^\bullet$ is smooth. On the other hand, the smallest diffeology on $X$ is ${\mathcal{D}}_X^\circ$, containing all locally constant parametrisations. This is called the *discrete diffeology*. Similar to the above, we find that every function $X^\circ\to Y$ is smooth. For any two diffeological spaces $X$ and $Y$, there is a natural diffeology on the space of smooth functions $C^\infty(X,Y)$ called the *standard functional diffeology* [@iglesias2013diffeology Article 1.57]. It is the smallest diffeology that makes the evaluation map $(f,x)\mapsto f(x)$ smooth. With these diffeologies, ${{\mathbf{Diffeol}}}$ becomes Cartesian closed. Generating families {#section:generating families} ------------------- The Axiom of Locality in \[definition:diffeology\] ensures that the smoothness of a parametrisation, or of a function between diffeological spaces, can be checked locally. This allows us to introduce the following notions, which will help us study interesting constructions, and will often simplify proofs. \[definition:generating families\] Consider a family ${\mathcal{F}}\subseteq {\mathrm{Param}}(X)$ of parametrisations on $X$. There exists a smallest diffeology on $X$ that contains ${\mathcal{F}}$. We denote this diffeology by $\langle {\mathcal{F}}\rangle$, and call it the *diffeology generated by ${\mathcal{F}}$*. 
If ${\mathcal{D}}_X=\langle {\mathcal{F}}\rangle$, we say ${\mathcal{F}}$ is a *generating family* for ${\mathcal{D}}_X$. The elements of ${\mathcal{F}}$ are called *generating plots*. The plots of the diffeology generated by ${\mathcal{F}}$ are characterised as follows: a parametrisation $\alpha:U_\alpha\to X$ is a plot in $\langle{\mathcal{F}}\rangle$ if and only if $\alpha$ is locally either constant, or factors through elements of ${\mathcal{F}}$. Concretely, this means that for all $t\in U_\alpha$ there exists an open neighbourhood $t\in V\subseteq U_\alpha$ such that $\alpha|_V$ is either constant, or of the form $\alpha|_V=F\circ h$, where $F:W\to X$ is an element in ${\mathcal{F}}$, and $h:V\to W$ is a smooth function between Euclidean domains. When the family ${\mathcal{F}}$ is *covering*, in the sense that $\bigcup_{F\in{\mathcal{F}}}\operatorname{im}(F)=X$, then the condition for $\alpha|_V$ to be constant becomes redundant, and the plots in $\langle{\mathcal{F}}\rangle$ are locally just of the form $\alpha|_V=F\circ h$. The main use of this construction is that we may encounter families of parametrisations that are not quite diffeologies, but that contain functions that we nevertheless want to be smooth. On the other hand, calculations may sometimes be simplified by finding a suitable generating family for a given diffeology. This simplification lies in the following result, saying that smoothness has only to be checked on generating plots: \[proposition:smoothness defined by generating plots\] Let $f:X\to Y$ be a function between diffeological spaces, such that ${\mathcal{D}}_X$ is generated by some family ${\mathcal{F}}$. Then $f$ is smooth if and only if for all $F\in{\mathcal{F}}$ we have $f\circ F\in{\mathcal{D}}_Y$. \[example:wire diffeology\] The *wire diffeology* (called the *spaghetti diffeology* by Souriau) is the diffeology ${\mathcal{D}}_\mathrm{wire}$ on $\mathbb{R}^2$ generated by $C^\infty(\mathbb{R},\mathbb{R}^2)$. The resulting diffeological space is not diffeomorphic to the ordinary $\mathbb{R}^2$, since the identity map ${\mathrm{id}}_{\mathbb{R}^2}:(\mathbb{R}^2,{\mathcal{D}}_{\mathbb{R}^2})\to (\mathbb{R}^2,{\mathcal{D}}_\mathrm{wire})$ is not smooth. \[example:manifold diffeology is generated by atlas\] The charts of a smooth atlas on a manifold define a generating family for the manifold diffeology from \[example:Euclidean and manifold diffeologies\]. Since a manifold may have many atlases, this shows that similarly any diffeology may have many generating families. Quotients {#section:quotients} --------- We use the terminology from \[section:generating families\] to define a natural diffeology on a quotient $X/{\sim}$. This question relates to a more general one: given a function $f:X\to Y$, and a diffeology ${\mathcal{D}}_X$ on the domain, what is the smallest diffeology on $Y$ such that $f$ remains smooth? The following provides an answer: \[definition:pushforward diffeology\] Let $f:X\to Y$ be a function between sets, and let ${\mathcal{D}}_X$ be a diffeology on $X$. The *pushforward diffeology* on $Y$ is the diffeology $f_\ast({\mathcal{D}}_X):= \langle f\circ{\mathcal{D}}_X\rangle$, where $f\circ {\mathcal{D}}_X$ is the family of parametrisations of the form $f\circ\alpha$, for $\alpha\in{\mathcal{D}}_X$. The pushforward diffeology is the smallest diffeology on $Y$ that makes $f$ smooth. 
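Unwinding \[definition:generating families\], the plots of the pushforward diffeology admit the following concrete description, which we record here for convenience: a parametrisation $\beta:V\to Y$ belongs to $f_\ast({\mathcal{D}}_X)$ if and only if every point of $V$ has an open neighbourhood $W\subseteq V$ on which $\beta$ is either constant or of the form $$\beta|_W = f\circ\alpha \qquad\text{for some plot } \alpha\in{\mathcal{D}}_X.$$ When $f$ is surjective the family $f\circ{\mathcal{D}}_X$ is covering, so the constant alternative becomes redundant; this is exactly the situation of \[definition:subduction\] below.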
We can now use this to define a natural diffeology on a quotient space: \[definition:quotient diffeology\] Let $X$ be a diffeological space, and let $\sim$ be an equivalence relation on the set $X$. We denote the equivalence classes by $[x]:=\lbrace y\in X:x\sim y\rbrace$. The *quotient* $X/{\sim}$ is the collection of all equivalence classes, and comes with a *canonical projection map* $p:X\to X/{\sim}$, which sends $x\mapsto [x]$. The *quotient diffeology* on $X/{\sim}$ is defined as the pushforward diffeology $p_\ast({\mathcal{D}}_X)$ of ${\mathcal{D}}_X$ along the canonical projection map. Naturally, with respect to this diffeology, the canonical projection map becomes smooth. The quotient diffeology will be used extensively, where the equivalence relation will often be defined by the orbits of a group(oid) action, or as the fibres of some smooth surjection. The existence of the quotient diffeology for arbitrary quotients should be contrasted to the situation for smooth manifolds, where quotients often carry no natural differentiable structure at all, but where instead one could appeal to the *Godement criterion* ([@serre1965lie Theorem 2, p. 92]). The following is an example of a quotient that does not exist as a smooth manifold, but whose diffeological structure is still quite rich: \[example:irrational torus\] The *irrational torus* is the diffeological space defined by the quotient of $\mathbb{R}$ by an additive subgroup: $T_\theta:= \mathbb{R}/(\mathbb{Z}+\theta\mathbb{Z})$, where $\theta\in\mathbb{R}\setminus\mathbb{Q}$ is an arbitrary irrational number. Equivalently, it can be described as the leaf space of the Kronecker foliation on the 2-torus with irrational slope. The topology of this quotient contains only the two trivial open sets, yet its quotient diffeology is non-trivial[^3]. They were first classified in [@donato1983exemple], whose result is (amazingly) directly analogous to the classification of the irrational rotation algebras [@rieffel1981cstar]. This example is treated in detail in [@schaaf2020diffeology-groupoids-and-ME Section 2.3]. Fibred products {#section:fibred products} --------------- The second construction we need is that of *fibred products*, which are the pullbacks in the category ${{\mathbf{Diffeol}}}$. Recall that if $f:X\to Z$ and $g:Y\to Z$ are two functions between sets with a common codomain, then the fibred product of sets is (up to unique bijection) $$X\times_Z^{f,g}Y := \lbrace (x,y)\in X\times Y : f(x)=g(y) \rbrace.$$ When each set is equipped with a diffeology, we shall construct a diffeology on the fibred product in two steps. First we describe a natural diffeology on the product $X\times Y$, and then show how this descends to a diffeology on the fibred product as a subset. \[definition:product diffeology\] Let $X$ and $Y$ be two diffeological spaces. The *product diffeology* on the Cartesian product $X\times Y$ is defined as $${\mathcal{D}}_{X\times Y} := \langle {\mathcal{D}}_X\times {\mathcal{D}}_Y \rangle,$$ where ${\mathcal{D}}_X\times{\mathcal{D}}_Y$ is the family of parametrisations of the form $\alpha_1\times \alpha_2$, for $\alpha_1\in{\mathcal{D}}_X$ and $\alpha_2\in{\mathcal{D}}_Y$. The plots in ${\mathcal{D}}_{X\times Y}$ are exactly the parametrisations $\alpha:U_\alpha\to X\times Y$ such that ${\mathrm{pr}}_1\circ \alpha$ and ${\mathrm{pr}}_2\circ\alpha$ are plots of $X$ and $Y$, respectively. We assume that products are always furnished with their product diffeologies. 
It is clear that both projection maps ${\mathrm{pr}}_1$ and ${\mathrm{pr}}_2$ are smooth with respect to the product diffeology. The smooth functions into $X\times Y$ behave exactly as one would expect, where $f:A\to X\times Y$ is smooth if and only if the components $f_1={\mathrm{pr}}_1\circ f$ and $f_2={\mathrm{pr}}_2\circ f$ are smooth. Next we define how the diffeology on a set $X$ transfers to any of its subsets: \[definition:subset diffeology\] Consider a diffeological space $X$, and an arbitrary subset $A\subseteq X$. Let $i_A:A\hookrightarrow X$ denote the natural inclusion map. The *subset diffeology* on $A$ is defined as $${\mathcal{D}}_{A\subseteq X} := \lbrace \alpha\in{\mathrm{Param}}(A) : i_A\circ \alpha\in{\mathcal{D}}_X \rbrace.$$ That is, $\alpha$ is a plot of $A$ if and only if when seen as a parametrisation of $X$, it is also a plot. We assume that a subset of a diffeological space is always endowed with its subset diffeology. Since the fibred product $X\times_Z^{f,g}Y$ is a subset of the product $X\times Y$, the following definition is a natural combination of \[definition:product diffeology,definition:subset diffeology\]: \[definition:fibred product diffeology\] Let $f:X\to Z$ and $g:Y\to Z$ be two smooth maps between diffeological spaces. The *fibred product diffeology* ${\mathcal{D}}_{X\times_Z^{f,g}Y}$ on the set $X\times_Z^{f,g}Y$ is the subset diffeology it gets from the product diffeology on $X\times Y$. Concretely: $${\mathcal{D}}_{X\times_Z^{f,g}Y} = \lbrace \alpha\in{\mathcal{D}}_{X\times Y} : f\circ\alpha_1 = g\circ\alpha_2 \rbrace.$$ That is, the plots of the fibred product are just plots of $X\times Y$, whose components satisfy an extra condition. We assume that all fibred products are equipped with their fibred product diffeologies. Subductions ----------- Subductions are a special class of smooth functions that generalise the notion of surjective submersion from the theory of smooth manifolds. Since there is no unambiguous notion of tangent space in diffeology (cf. [@christensen2016tangent]), the definition looks somewhat different. For (more) detailed proofs of the results in this section, we refer to [@iglesias2013diffeology Article 1.46] and surrounding text, and [@schaaf2020diffeology-groupoids-and-ME Section 2.6]. \[definition:subduction\] A surjective function $f:X\to Y$ between diffeological spaces is called a *subduction* if $f_\ast({\mathcal{D}}_X)={\mathcal{D}}_Y$. Note that subductions are automatically smooth. In the case that $f$ is a subduction, since it is then particularly a surjection, the family of parametrisations $f\circ{\mathcal{D}}_X$ is covering, and hence the plots of ${\mathcal{D}}_Y$ are all locally of the form $f\circ\alpha$, where $\alpha\in{\mathcal{D}}_X$. In other words, $f$ is a subduction if and only if $f$ is smooth and the plots of $Y$ can locally be *lifted* along $f$ to plots of $X$: \[lemma:characterisation of subductions\] Let $f:X\to Y$ be a function between diffeological spaces. Then $f$ is a subduction if and only if the following two conditions are satisfied: 1. The function $f$ is smooth. 2. For every plot $\alpha:U_\alpha\to Y$, and any point $t\in U_\alpha$, there exists an open neighbourhood $t\in V\subseteq U_\alpha$ and a plot $\beta:V\to X$, such that $\alpha|_V=f\circ \beta$. Since many of the functions we encounter will naturally be smooth already, the notion of subductiveness is effectively captured by condition *(2)* in this lemma. 
This can also be seen in the following simple example: \[example:projection maps are subductions\] Consider the product $X\times Y$ of two diffeological spaces $X$ and $Y$. The projection maps ${\mathrm{pr}}_1$ and ${\mathrm{pr}}_2$ are both subductions. \[example:quotient by surjection or subduction\] For a surjective function $\pi:X\to B$ we get an equivalence relation on $X$, where two points are identified if and only if they inhabit the same $\pi$-fibre. The equivalence classes are exactly the $\pi$-fibres themselves. We denote the quotient set of this equivalence relation by $X/{\pi}$, and equip it with the quotient diffeology whenever $X$ is a diffeological space. If $\pi$ is a subduction, then there is a diffeomorphism $B\cong X/{\pi}$ [@iglesias2013diffeology Article 1.52]. For subsequent use, we state here some useful properties of subductions with respect to composition: \[lemma:properties of subductions\] We have the following properties for subductions: 1. If $f$ and $g$ are two subductions, then the composition $f\circ g$ is a subduction as well. 2. Let $f:Y\to Z$ and $g:X\to Y$ be two smooth maps such that the composition $f\circ g$ is a subduction. Then so is $f$. 3. Let $\pi:X\to B$ be a subduction, and $f:B\to Y$ an arbitrary function. Then $f$ is smooth if and only if $f\circ\pi$ is smooth. In fact, $f$ is a subduction if and only if $f\circ\pi$ is a subduction. *(1)* This is [@iglesias2013diffeology Article 1.47]. *(2)* Assume $f:Y\to Z$ and $g:X\to Y$ are smooth, such that $f\circ g$ is a subduction. Take a plot $\alpha:U_\alpha\to Z$. Since the composition is a subduction, for every $t\in U_\alpha$ we can find an open neighbourhood $t\in V\subseteq U_\alpha$ and a plot $\beta:V\to X$ such that $\alpha|_V=(f\circ g)\circ \beta$. Since $g$ is smooth, we get a plot $g\circ\beta\in{\mathcal{D}}_Y$, which is a local lift of $\alpha$ along $f$. The result follows by \[lemma:characterisation of subductions\]. *(3)* If $f$ is smooth, it follows immediately that $f\circ \pi$ is smooth. Suppose now that $f\circ \pi$ is smooth. We need to show that $f$ is smooth. For that, take a plot $\alpha:U_\alpha\to B$. Since $\pi$ is a subduction, we can find an open cover $(V_t)_{t\in U_\alpha}$ of $U_\alpha$ together with a family of plots $\beta_t:V_t\to X$ such that $\alpha|_{V_t}=\pi\circ\beta_t$. It follows that each restriction $f\circ\alpha|_{V_t}=f\circ\pi\circ\beta_t$ is smooth, and by the Axiom of Locality it follows that $f\circ\alpha\in{\mathcal{D}}_Y$, and hence that $f$ is smooth. The claim about when $f$ is a subduction follows from *(2)*. We also collect the following noteworthy claim: \[proposition:injective subduction is diffeomorphism\] An injective subduction is a diffeomorphism. We recall now some elementary results on the interaction between subductions and fibred products, as obtained in [@schaaf2020diffeology-groupoids-and-ME Section 2.6]. We point out that if $f$ is a subduction, an arbitrary restriction $f|_A$ may no longer be a subduction. We know from \[example:projection maps are subductions\] that the second projection map ${\mathrm{pr}}_2$ of a product $X\times Y$ is a subduction, but it is not always the case that the restriction of this projection to a fibred product $X\times_Z^{f,g}Y$ is a subduction as well. The following result shows that, to ensure this, it suffices to assume that $f$ is a subduction: \[lemma:restriction of projection is subduction\] Let $f:X\to Z$ be a subduction, and let $g:Y\to Z$ be a smooth map.
Then the restricted projection map $$\left.{\mathrm{pr}}_2\right|_{X\times_Z^{f,g}Y} : X\times_Z^{f,g}Y \longrightarrow Y$$ is also a subduction. In other words, in ${{\mathbf{Diffeol}}}$, subductions are preserved under pullback. Consider a plot $\alpha:U_\alpha\to Y$. By composition, this gives another plot $g\circ\alpha\in{\mathcal{D}}_Z$. Now, since $f$ is a subduction, for every $t\in U_\alpha$ we can find a plot $\beta:V\to X$ defined on an open neighbourhood $t\in V\subseteq U_\alpha$ such that $g\circ\alpha|_V=f\circ \beta$. This gives a plot $(\beta,\alpha|_V):V\to X\times_Z Y$ that satisfies ${\mathrm{pr}}_2|_{X\times_ZY}\circ (\beta,\alpha|_V)=\alpha|_V$. The result follows by \[lemma:characterisation of subductions\]. The next result shows how two subductions interact with fibred products: \[lemma:subduction and fibred product\] Consider the following two commuting triangles of diffeological spaces and smooth maps: $$\begin{tikzcd}[column sep = small] {X_1} \arrow[rr, "f"] \arrow[rd, "r"'] & & {Y_1} \arrow[ld, "R"] \\ & A & \end{tikzcd} \qquad\text{and}\qquad \begin{tikzcd}[column sep = small] {X_2} \arrow[rr, "g"] \arrow[rd, "l"'] & & {Y_2} \arrow[ld, "L"] \\ & A, & \end{tikzcd}$$ where both $f$ and $g$ are subductions. Then the map $$(f\times g)|_{{X_1}\times_A{X_2}}:{X_1}\times_A^{r,l}{X_2}\longrightarrow {Y_1}\times_A^{R,L}{Y_2};\qquad (x_1,x_2)\longmapsto (f(x_1),g(x_2))$$ is also a subduction. Clearly $f\times g$ is smooth, so we are left to show that the second condition in \[lemma:characterisation of subductions\] is fulfilled. For that, take a plot $(\alpha_1,\alpha_2):U\to Y_1\times_A^{R,L}Y_2$, i.e., we have two plots $\alpha_1\in{\mathcal{D}}_{Y_1}$ and $\alpha_2\in{\mathcal{D}}_{Y_2}$ such that $R\circ \alpha_1=L\circ\alpha_2$. Now fix a point $t\in U$ in the domain. Then since both $f$ and $g$ are subductive, we can find two plots $\beta_1:U_1\to X_1$ and $\beta_2:U_2\to X_2$, defined on open neighbourhoods of $t\in U$, such that $\alpha_1|_{U_1}=f\circ \beta_1$ and $\alpha_2|_{U_2}=g\circ \beta_2$. Now the plot $$\left( \beta_1|_{U_1\cap U_2},\beta_2|_{U_1\cap U_2} \right) : U_1\cap U_2 \longrightarrow X_1\times X_2$$ takes values in the fibred product because $$r\circ \beta_1|_{U_2} = R\circ f\circ \beta_1|_{U_2} = R\circ \alpha_1|_{U_1\cap U_2} = L\circ \alpha_2|_{U_1\cap U_2} = l\circ \beta_2|_{U_1},$$ and we see that it lifts $(\alpha_1,\alpha_2)|_{U_1\cap U_2}$ along $f\times g$. By setting $A=\lbrace\ast\rbrace$ to be the one-point space, this lemma gives in particular that the product $f\times g$ of two subductions is again a subduction. To end this section, we should also mention the existence of the notion of a *local subduction* (also called *strong subductions*): A smooth surjection $f:X\to Y$ is called a *local subduction* if for every *pointed plot* of the form $\alpha:(U_\alpha,0)\to (Y,f(x))$ there exists a pointed plot $\beta:(V,0)\to (X,x)$, defined on an open neighbourhood $0\in V\subseteq U_\alpha$, such that $\alpha|_V=f\circ \beta$. Compare this to a definition of a subduction, where in general the plot $\beta$ does not have to hit the point $x$ in the domain of $f$. Note also that *local* subduction does not mean *locally a subduction everywhere*. \[proposition:local subductions are surjective submersions\] The local subductions between smooth manifolds are exactly the surjective submersions. 
Due to the above proposition, the notion of a local subduction will be of interest when studying Lie groupoids in the framework of diffeological Morita equivalence we develop below. See \[section:diffeological bibundles between Lie groupoids\]. Diffeological Groupoids {#section:diffeological groupoids} ======================= We assume that the reader is familiar with the definition of a (Lie) groupoid. A textbook reference for that theory is [@mackenzie2005general]. To fix our notation, we give here an informal description of a set-theoretic groupoid. A *groupoid* consists of two sets: $G_0$ and $G$, together with five *structure maps*. A groupoid will be denoted ${{G}\rightrightarrows{G}_0}$, or just $G$. Here $G_0$ is the set of objects of the groupoid, and $G$ is the set of arrows. The five structure maps are 1. The *source map* ${\mathrm{src}}:G\to G_0$, 2. The *target map* ${\mathrm{trg}}:G\to G_0$, 3. The *unit map* ${\mathrm{u}}:G_0\to G$, mapping $x\mapsto {\mathrm{id}}_x$, 4. The *inversion map* ${\mathrm{inv}}:G\to G$, mapping $g\mapsto g^{-1}$, 5. And the *composition*: $${\mathrm{comp}}: G\times_{G_0}^{{\mathrm{src}},{\mathrm{trg}}}G \longrightarrow G; \qquad (g,h)\mapsto g\circ h.$$ The composition is associative, and the identities and inverses behave as expected. We say ${{G}\rightrightarrows{G}_0}$ is a *Lie groupoid* if both $G$ and $G_0$ are smooth manifolds such that the source and target maps are submersions, and each of the other structure maps is smooth. The definition of a diffeological groupoid is a straightforward generalisation of this: \[definition:diffeological groupoid\] A [[*diffeological groupoid*]{}]{} is a groupoid internal to the category of diffeological spaces. Concretely, this means that it is a groupoid ${{G}\rightrightarrows{G}_0}$ such that the object space $G_0$ and arrow space $G$ are endowed with diffeologies that make all of the structure maps smooth. Just as diffeology subsumes smooth manifolds, diffeological groupoids capture Lie groupoids. Note that the main difference with the definition of a Lie groupoid is that we put no extra assumptions on the source and target maps. However: \[proposition:source map is subduction\] The source and target maps of a diffeological groupoid are subductions. The smooth structure map ${\mathrm{u}}:G_0\to G$, sending each object to its identity arrow, is a global smooth section of the source map, and hence by \[lemma:properties of subductions\]*(2)* the source map must be a subduction. Since the inversion map is a diffeomorphism, it follows that the target map is a subduction as well. \[definition:isotropy groups\] Let ${{G}\rightrightarrows{G}_0}$ be a diffeological groupoid. The [[*isotropy group*]{}]{} at $x\in G_0$ is the collection $G_x$ consisting of all arrows in $G$ from and to $x$: $$G_x:= \operatorname{Hom}_G(x,x)={\mathrm{src}}^{-1}(\lbrace x\rbrace)\cap {\mathrm{trg}}^{-1}(\lbrace x\rbrace).$$ \[definition:groupoid orbit space\] Let ${{G}\rightrightarrows{G}_0}$ be a diffeological groupoid. The *orbit* of an object $x\in G_0$ is defined as $${\mathrm{Orb}}_G(x) := \lbrace y\in G_0 :\exists x\xrightarrow{~g~}y\rbrace = {\mathrm{trg}}({\mathrm{src}}^{-1}(\lbrace x\rbrace)).$$ The [[*orbit space*]{}]{} of the groupoid is the space $G_0/G$ consisting of these orbits. We furnish the orbit space with the quotient diffeology from \[definition:quotient diffeology\], so that ${\mathrm{Orb}}_G:G_0\to G_0/G$ is a subduction. The orbit space of a Lie groupoid is not necessarily (canonically) a smooth manifold.
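A classical illustration, which we only sketch here, is the irrational torus. Fix an irrational number $\alpha$ and let the group $\mathbb{Z}^2$ act on $\mathbb{R}$ by $(m,n)\cdot x:=x+m+n\alpha$. The associated action groupoid $\mathbb{Z}^2\times\mathbb{R}{\rightrightarrows}\mathbb{R}$ is a Lie groupoid, but its orbit space $$T_\alpha:=\mathbb{R}/(\mathbb{Z}+\alpha\mathbb{Z})$$ has the indiscrete quotient topology, since every orbit is dense in $\mathbb{R}$, and so carries no useful manifold structure. Equipped with the quotient diffeology, on the other hand, $T_\alpha$ is a non-trivial diffeological space.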
The flexibility of diffeology allows us to study the smooth structure of orbit spaces of all diffeological groupoids. Below we give some examples of diffeological groupoids. \[example:relation groupoid\] Let $X$ be a diffeological space, and let $R$ be an equivalence relation on $X$. We define the [[*relation groupoid*]{}]{} $X\times_R X{\rightrightarrows}X$ as follows. The space of arrows consists of exactly those pairs $(x,y)\in X\times X$ such that $xRy$. With the composition $(z,y)\circ(y,x):=(z,x)$, this becomes a diffeological groupoid. The orbit space $X/(X\times_R X)$ is just the quotient $X/R$. When $X$ is a smooth manifold, the relation groupoid becomes a Lie groupoid (even when the quotient is not a smooth manifold). \[example:isotropy groupoid\] Let ${{G}\rightrightarrows{G}_0}$ be a diffeological groupoid. We can then consider the subgroupoid of $G$ that only consists of elements in isotropy groups: $$I_G := \bigcup_{x\in G_0}G_x \subseteq G.$$ This becomes a diffeological groupoid $I_G{\rightrightarrows}G_0$ called the [[*isotropy groupoid*]{}]{}. This has been studied in [@bos2007groupoids Example 2.1.9] in the context of Lie groupoids. Note that if ${{G}\rightrightarrows{G}_0}$ is a Lie groupoid, then generally $I_G$ is not a submanifold of $G$, so the isotropy groupoid may no longer be a Lie groupoid. The *thin fundamental groupoid* (or *path groupoid*) $\Pi^\mathrm{thin}(M)$ of any smooth manifold $M$ is a diffeological groupoid [@collier2016parallel Proposition A.25]. The *groupoid of $\Sigma$-evolutions* of a Cauchy surface is a diffeological groupoid [@glowacki2019 Section II.2.2]. For any smooth surjection $\pi:X\to B$ between diffeological spaces, the fibres $X_b:=\pi^{-1}(\lbrace b\rbrace)$ get the subset diffeology from $X$. We then have a diffeological groupoid $\mathbf{G}(\pi){\rightrightarrows}B$ called the *structure groupoid*, whose space of arrows is defined as $$\mathbf{G}(\pi):=\bigcup_{a,b\in B}{\mathrm{Diff}}(X_a,X_b).$$ Structure groupoids play an important rôle in the theory of diffeological fibre bundles [@iglesias2013diffeology Chapter 8]. In general, they are too big to be Lie groupoids. They also generalise the notion of a *frame groupoid* for a smooth vector bundle. Related to this, in [@schaaf2020diffeology-groupoids-and-ME Section 3.4] structure groupoids are used to define a notion of *smooth linear representations* for diffeological groupoids. Given a diffeological space $X$, the *germ groupoid* ${{\mathbf{Germ}}}(X){\rightrightarrows}X$ consists of all *germs* of local diffeomorphisms on $X$. Even if $X$ itself is a smooth manifold, this is generally not a Lie groupoid. Germ groupoids are used in [@iglesias2018noncommutative; @iglesias2020quasifolds]. A detailed construction of the diffeological structure of this groupoid appears in [@schaaf2020diffeology-groupoids-and-ME Section 6.1]. Diffeological Groupoid Actions and -Bundles {#section:diffeological groupoid actions and bundles} =========================================== In the following two sections we generalise the theory of Lie groupoid bibundles to the diffeological setting. The development we present here (as in [@schaaf2020diffeology-groupoids-and-ME Chapter IV]) is analogous to the development of the Lie version, save that we need to find a suitable replacement for the notion of a surjective submersion. Some of the proofs from the Lie theory can be performed almost *verbatim* in our setting. 
These proofs already appear in the literature in various places: [@blohmann2008stacky; @delHoyo2012Lie; @landsman2001bicategories; @moerdijk2005Poisson], and also in the different setting of [@meyer2015groupoids]. We adopt many definitions and proofs from those sources, and point out how the diffeological theory subtly differs from the Lie theory. This difference mainly stems from the existence of quotients and fibred products of diffeological spaces, whereas in the Lie theory more care has to be taken. Ultimately, this extra care is what leads to a restricted Morita theorem for Lie groupoids, whereas the diffeological theorem is more general. In this section specifically we introduce *diffeological groupoid actions* and *-bundles*, two notions that form the ingredients for the main theory on bibundles. Diffeological groupoid actions {#section:diffeological groupoid actions} ------------------------------ The most basic notion for the upcoming theory is that of a *groupoid action*. For diffeological groupoids, the definition is the same as for Lie groupoids: \[definition:diffeological groupoid actions\] Take a diffeological groupoid ${{G}\rightrightarrows{G}_0}$, and a diffeological space $X$. A *smooth left groupoid action* of $G$ on $X$ *along* a smooth map $l_X:X\to G_0$ is a smooth function $$G\times_{G_0}^{{\mathrm{src}},l_X}X \longrightarrow X; \qquad (g,x) \longmapsto g\cdot x,$$ satisfying the following three conditions: 1. For $g\in G$ and $x\in X$ such that ${\mathrm{src}}(g)=l_X(x)$ we have $l_X(g\cdot x)={\mathrm{trg}}(g)$. 2. For every $x\in X$ we have ${\mathrm{id}}_{l_X(x)}\cdot x=x$. 3. We have $h\cdot (g\cdot x)= (h\circ g)\cdot x$ whenever defined, i.e. when ${\mathrm{src}}(g)=l_X(x)$ and the arrows are composable. The smooth map $l_X:X\to G_0$ is called the *left moment map*. In-line, we denote an action by $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$. To save space, we may write $(g,x)\mapsto gx$ instead. Right actions are defined similarly: a *smooth right groupoid action* of $G$ on $X$ *along* $r_X:X\to G_0$ is a smooth map $$X\times_{G_0}^{r_X,{\mathrm{trg}}}G \longrightarrow X; \qquad (x,g) \longmapsto xg,$$ satisfying $r_X(xg)={\mathrm{src}}(g)$, $x\cdot{\mathrm{id}}_{r_X(x)}=x$ and $(x\cdot g)\cdot h=x\cdot(g\circ h)$ whenever defined. Note how the rôles of the source and target maps are switched with respect to the definition of a left action. Right actions will be denoted by $X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}G$, and $r_X$ is called the *right moment map*. \[example:groupoid action on itself\] Any diffeological groupoid ${{G}\rightrightarrows{G}_0}$ acts on its own arrow space from the left and right by composition, which gives actions $G{{{\curvearrowright}\hspace{-5pt}^{{\mathrm{trg}}}\hspace{1pt}}}G$ and $G{{\hspace{-1pt}~^{{\mathrm{src}}}\hspace{-5pt}{\curvearrowleft}}}G$ that are both defined by $(g,h)\mapsto g\circ h$. The *orbit* of a point $x\in X$ in the space of an action $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$ is defined as $${\mathrm{Orb}}_G(x) := \lbrace gx : g\in{\mathrm{src}}^{-1}(\lbrace l_X(x)\rbrace) \rbrace.$$ The *quotient space* (or *orbit space*) of the action is defined as the collection of all orbits, and denoted $X/G$. With the quotient diffeology, the *orbit projection map* ${\mathrm{Orb}}_G:X\to X/G$ becomes a subduction.
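For one more elementary example, which we spell out ourselves because it will not appear explicitly below, every diffeological groupoid ${{G}\rightrightarrows{G}_0}$ also acts on its own object space along the identity map ${\mathrm{id}}_{G_0}$, via $$G\times_{G_0}^{{\mathrm{src}},{\mathrm{id}}}G_0\longrightarrow G_0;\qquad (g,x)\longmapsto {\mathrm{trg}}(g).$$ The three conditions of \[definition:diffeological groupoid actions\] reduce to the groupoid identities ${\mathrm{trg}}({\mathrm{id}}_x)=x$ and ${\mathrm{trg}}(h\circ g)={\mathrm{trg}}(h)$, and the orbits of this action are exactly the groupoid orbits of \[definition:groupoid orbit space\], so both readings of the notation $G_0/G$ agree.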
The following gives a notion of morphism between actions: \[definition:equivariant maps\] Consider two smooth groupoid actions $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$ and $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y$. A smooth map $\varphi:X\to Y$ is called *$G$-equivariant* if $l_X=l_Y\circ\varphi$ and it commutes with the actions whenever defined: $\varphi(gx)=g\varphi(x)$. \[definition:action category\] The *(smooth left) action category* ${{\mathbf{Act}}}({{G}\rightrightarrows{G}_0})$ of a diffeological groupoid ${{G}\rightrightarrows{G}_0}$ is the category consisting of smooth left actions $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$ as objects, and $G$-equivariant maps as morphisms. This forms the analogue of the category of (left) modules from ring theory. We show in \[section:invariance of representations\] that the action category is in some sense a Morita invariant. ### The balanced tensor product We now give an important construction that will later allow us to define the *composition* of bibundles. \[construction:balanced tensor product\] Consider a diffeological groupoid ${{H}\rightrightarrows{H}_0}$, with a smooth left action $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y$ and a smooth right action $X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$. On the fibred product $X\times_{H_0}^{r_X,l_Y}Y$ we define the following smooth right $H$-action. The moment map is $R:=r_X\circ {\mathrm{pr}}_1|_{X\times_{H_0}Y}=l_Y\circ{\mathrm{pr}}_2|_{X\times_{H_0}Y}$, and the action is given by: $$\left( X\times_{H_0}^{r_X,l_Y}Y \right) \times_{H_0}^{R,{\mathrm{trg}}}H \longrightarrow X\times_{H_0}^{r_X,l_Y}Y; \qquad \left((x,y),h\right) \longmapsto (x\cdot h,h^{-1}\cdot y).$$ It is clear that this action is also smooth, and we call it the *diagonal $H$-action*. The *balanced tensor product* is the diffeological space defined as the orbit space of this smooth groupoid action: $$X{\otimes}_H Y := \left(X\times_{H_0}^{r_X,l_Y}Y\right)/{H}.$$ The orbit of a pair of points $(x,y)$ in the balanced tensor product will be denoted $x{\otimes}y$. Whenever we encounter a term of the form $x{\otimes}y\in X{\otimes}_H Y$, we assume that it is well defined, i.e. $r_X(x)=l_Y(y)$. The terminology is explained by the following useful identity: $$xh{\otimes}y = x{\otimes}hy.$$ In the literature on Lie groupoids, this space is sometimes called the *Hilsum-Skandalis tensor product*, named after a construction appearing in [@hilsum1987morphismes]. We note that this marks the first difference with the development of the Lie theory of bibundles and Morita equivalence. There, the balanced tensor product can only be defined when both $X\times_{H_0}^{r_X,l_Y}Y$ and the quotient by the diagonal $H$-action are smooth manifolds. This is usually only done after (bi)bundles are defined, and some principality conditions are presupposed. The principality then exactly ensures the existence of canonical differentiable structures on the fibred product and quotient. Here, the flexibility of diffeology allows us to define the balanced tensor product in an earlier stage of the development, and we do so to demonstrate this conceptual difference. 
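As a quick sanity check, which we add here because it is not spelled out in the sources above, let $H$ be a diffeological group, viewed as a groupoid with a single object, so that $H_0=\lbrace\ast\rbrace$ and the moment maps carry no information. Then the fibred product $X\times_{H_0}^{r_X,l_Y}Y$ is all of $X\times Y$, the diagonal action reads $(x,y)\cdot h=(xh,h^{-1}y)$, and $$X{\otimes}_H Y=(X\times Y)/H$$ is the familiar quotient used to build associated bundles; in the analogy with ring theory mentioned above, it plays the part of the tensor product of a right module with a left module.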
Diffeological groupoid bundles {#section:diffeological groupoid bundles} ------------------------------ A groupoid bundle is a smooth map, whose domain carries a groupoid action, such that the fibres of the map are preserved by this action: A *smooth left diffeological groupoid bundle* is a smooth left groupoid action $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$ together with a *$G$-invariant* smooth map $\pi:X\to B$. We denote such bundles by $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$, and also call them *(left) $G$-bundles*. *Right* bundles are defined similarly, and denoted $B\xleftarrow{\pi}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}G$. The next definition gives a notion of morphism between bundles: Consider two left $G$-bundles $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi_X}B$ and $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y\xrightarrow{\pi_Y}B$ over the same base. A *$G$-bundle morphism* is a $G$-equivariant smooth map $\varphi:X\to Y$ such that $\pi_X=\pi_Y\circ\varphi$. We make a similar definition for right bundles. In order to define Morita equivalence, we need to define a notion of when a bundle is *principal*. For Lie groupoid bundles, these generalise the ordinary notion of smooth principal bundles of Lie groups and manifolds. That definition involves the notion of a surjective submersion. As we have mentioned, this notion needs to be generalised to diffeology. \[proposition:local subductions are surjective submersions\] suggests that we could take *local subductions*, since they directly generalise the surjective submersions. However, it turns out that *subductions* behave sufficiently like submersions for the theory to work. The following definition then generalises the fact that the underlying bundle of a principal Lie groupoid bundle has to be a submersion: A diffeological groupoid bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$ is called *subductive* if the underlying map $\pi:X\to B$ is a subduction. The following generalises the fact that the action of a principal Lie groupoid bundle has to be free and transitive on the fibres: A diffeological groupoid bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$ is called *pre-principal* if the *action map* $A_G:G\times_{G_0}^{{\mathrm{src}},l_X}X\to X\times_B^{\pi,\pi}X$ mapping $(g,x)\mapsto (gx,x)$ is a diffeomorphism. Combining these two: A diffeological groupoid bundle is called *principal* if it is both subductive and pre-principal. This definition serves as our generalisation of principal Lie groupoid bundles, cf. [@blohmann2008stacky Definition 2.10] and [@delHoyo2012Lie Section 3.6]. Clearly any principal Lie groupoid bundle in the sense described in those references is also a principal diffeological groupoid bundle. Note that in the Lie theory, most constructions (such as the balanced tensor product) depend on the submersiveness of the underlying bundle map, so it makes little sense to define pre-principality for Lie groupoids. However, as we have already seen, in the diffeological case these constructions can be carried out more generally, and this will allow us to see what parts of the development of the theory depend on either the subductiveness or pre-principality of the bundles, rather than on full principality. 
In our development of the theory, some proofs can therefore be performed separately, whereas in the Lie theory they have to be performed at once. We hope this makes for clearer exposition. Note also that when a bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$ is pre-principal, the action map induces a diffeomorphism $X/{\pi}\cong X/G$, and when the bundle is subductive, \[example:quotient by surjection or subduction\] gives a diffeomorphism $B\cong X/{\pi}$. For a principal bundle we therefore have $B\cong X/G$. The action of any diffeological groupoid ${{G}\rightrightarrows{G}_0}$ on its own arrow space (\[example:groupoid action on itself\]) forms a bundle $G{{{\curvearrowright}\hspace{-5pt}^{{\mathrm{trg}}}\hspace{1pt}}}G\xrightarrow{{\mathrm{src}}}G_0$. From \[proposition:source map is subduction\] it follows that this bundle is principal. ### The division map of a pre-principal bundle {#section:division map} The material in this section is similar to [@blohmann2008stacky Section 3.1] for Lie groupoids. If a bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$ is pre-principal, the fact that the action map is bijective gives that the action $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$ has to be *free*, and *transitive* on the $\pi$-fibres. This means that for every two points $x,y\in X$ such that $\pi(x)=\pi(y)$, there exists a *unique* arrow $g\in G$ such that $gy=x$. We denote this arrow by $\langle x,y\rangle_G$, and the map $\langle\cdot,\cdot\rangle_G$ is called the *division map*[^4]: Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$ be a pre-principal $G$-bundle, and let $A_G$ denote its action map. Then the [[*division map*]{}]{} associated to this bundle is the smooth map $$\langle\cdot ,\cdot\rangle_G : X\times_B^{\pi,\pi}X \xrightarrow{\quad A_G^{-1}\quad } G\times_{G_0}^{{\mathrm{src}},l_X}X \xrightarrow{\quad \left.{\mathrm{pr}}_1\right|_{G\times_{G_0}X}\quad } G.$$ We summarise some algebraic properties of the division map that will be used in our proofs throughout later sections. The proofs are straightforward, and use the uniqueness property described above. \[proposition:properties of division map\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi}B$ be a pre-principal $G$-bundle. Its division map $\langle\cdot,\cdot\rangle_G$ satisfies the following properties: 1. The source and target are ${\mathrm{src}}(\langle x_1,x_2\rangle_G)=l_X(x_2)$ and ${\mathrm{trg}}(\langle x_1,x_2\rangle_G)=l_X(x_1)$. 2. The inverses are given by $\langle x_1,x_2\rangle_G^{-1}=\langle x_2,x_1\rangle_G$. 3. For every $x\in X$ we have $\langle x,x\rangle_G={\mathrm{id}}_{l_X(x)}$. 4. Whenever well-defined, we have $\langle gx_1,x_2\rangle_G=g\circ\langle x_1,x_2\rangle_G$. Let $\varphi:X\to Y$ be a bundle morphism between two pre-principal $G$-bundles $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi_X}B$ and $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y\xrightarrow{\pi_Y}B$. Denoting the division maps of these bundles respectively by $\langle\cdot,\cdot\rangle_G^X$ and $\langle\cdot,\cdot\rangle_G^Y$, we have for all $x_1,x_2\in X$ in the same $\pi_X$-fibre that: $$\langle x_1,x_2\rangle_G^X = \langle \varphi(x_1),\varphi(x_2)\rangle_G^Y.$$ Note that $\langle \varphi(x_1),\varphi(x_2)\rangle_G^Y$ is the unique arrow such that $\langle \varphi(x_1),\varphi(x_2)\rangle_G^Y\varphi(x_2)=\varphi(x_1)$.
However, by $G$-equivariance we get $\varphi(x_1)=\varphi\left(\langle x_1,x_2\rangle_G^Xx_2\right)=\langle x_1,x_2\rangle_G^X\varphi(x_2)$, from which the claim immediately follows. ### Invertibility of $G$-bundle morphisms {#section:invertibility of bundle morphisms} We now prove a result that generalises the fact that morphisms between principal Lie group bundles are always diffeomorphisms. In our case we shall do the proof in two separate lemmas. \[lemma:bundle morphism injective\] Consider a $G$-bundle morphism $\varphi:X\to Y$ between a pre-principal bundle ${G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi_X}B}$ and a bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y\xrightarrow{\pi_Y}B$ whose underlying action $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}} Y$ is free. Then $\varphi$ is injective. Since $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}} X\xrightarrow{\pi_X} B$ is pre-principal, we get a smooth division map $\langle \cdot,\cdot\rangle_G^X$. To start the proof, suppose that we have two points $x_1,x_2\in X$ satisfying $\varphi(x_1)=\varphi(x_2)$. Since $\varphi$ preserves the fibres, we get that $$\pi_X(x_1)=\pi_Y\circ\varphi(x_1)=\pi_Y\circ\varphi(x_2)=\pi_X(x_2).$$ Hence the pair $(x_1,x_2)$ defines an element in $X\times_B X$, so we get an arrow $\langle x_1,x_2\rangle_G^X\in G$, satisfying $\langle x_1,x_2\rangle_G^Xx_2=x_1$. If we apply $\varphi$ to this equation and use its $G$-equivariance, we get $\varphi(x_1)=\langle x_1,x_2\rangle_G^X\varphi(x_2)$. However, by assumption, $\varphi(x_1)=\varphi(x_2)$ and the action $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}} Y$ is free, so we must have that $\langle x_1,x_2\rangle_G^X$ is the identity arrow at $l_Y\circ\varphi(x_2)=l_X(x_2)$. Hence we get the desired result: $$x_1=\langle x_1,x_2\rangle_G^X x_2={\mathrm{id}}_{l_X(x_2)}x_2=x_2. \qedhere$$ \[lemma:bundle morphism subduction\] Consider a $G$-bundle morphism $\varphi:X\to Y$ from a subductive bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{\pi_X}B$ to a pre-principal bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y\xrightarrow{\pi_Y}B$. Then $\varphi$ is a subduction. Denote the smooth division map of $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}} Y\xrightarrow{\pi_Y} B$ by $\langle \cdot,\cdot\rangle_G^Y$. Then $\varphi$ and $\langle \cdot,\cdot\rangle_G^Y$ combine into a smooth map $$\psi: X\times_B^{\pi_X,\pi_Y}Y\longrightarrow X; \qquad (x,y)\longmapsto \langle y,\varphi(x)\rangle_G^Yx.$$ Note that this is well-defined because if $\pi_X(x)=\pi_Y(y)$, then $\pi_Y\circ\varphi(x)=\pi_Y(y)$ as well, and moreover $l_Y\circ\varphi(x)=l_X(x)$, showing that the action on the right hand side is allowed. The $G$-equivariance of $\varphi$ then gives $$\varphi\circ \psi = \left.{\mathrm{pr}}_2\right|_{X\times_BY}.$$ Since $\pi_X$ is a subduction, so is ${\mathrm{pr}}_2|_{X\times_BY}$ by \[lemma:restriction of projection is subduction\], and by \[lemma:properties of subductions\]*(2)* it follows $\varphi$ is a subduction. \[proposition:bundle morphism on principal bundle is diffeomorphism\] Any bundle morphism from a principal groupoid bundle to a pre-principal groupoid bundle is a diffeomorphism. In particular, both must then be principal. 
By \[lemma:bundle morphism subduction\] any such bundle morphism is a subduction, and since in particular the underlying action of a pre-principal bundle is free, it must also be injective by \[lemma:bundle morphism injective\]. The result follows by \[proposition:injective subduction is diffeomorphism\]. That the second bundle is principal too follows from the fact that a bundle map preserves the fibres, so the projection of the second bundle can be written as the composition of a diffeomorphism and a subduction. Diffeological Bibundles and Morita Equivalence {#section:diffeological bibundles} ============================================== This section contains the main definition of this paper: the notion of a *biprincipal bibundle*, which immediately gives our definition of *Morita equivalence*. The definition of groupoid bibundles for diffeology is a straightforward adaptation of the definition in the Lie case: \[definition:diffeological groupoid bibundles\] Let ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ be two diffeological groupoids. A *diffeological $(G,H)$-bibundle* consists of a smooth left action $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X$ and a smooth right action $X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ such that the left moment map $l_X$ is $H$-invariant and the right moment map $r_X$ is $G$-invariant, and moreover such that the actions commute: $(g\cdot x)\cdot h=g\cdot (x\cdot h)$, whenever defined. We draw: $$\begin{tikzcd}[column sep = small] G\arrow[r,phantom,"{\curvearrowright}"]\arrow[d,shift left]\arrow[d,shift right] & X\arrow[dl,"l_X",pos=0.6]\arrow[dr,"r_X", swap,pos=0.7] & \arrow[l,phantom,"{\curvearrowleft}"]H\arrow[d,shift left]\arrow[d,shift right]\\ G_0 & & H_0, \end{tikzcd}$$ and denote them by $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ in-line. Underlying each bibundle are two groupoid bundles: the *left underlying $G$-bundle* $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{r_X}H_0$ and the *right underlying $H$-bundle* $G_0\xleftarrow{l_X}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$. It is the properties of these underlying bundles that will determine the behaviour of the bibundle itself. \[definition:left pre-principal\] Consider a diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$. We say this bibundle is [[*left pre-principal*]{}]{} if the left underlying bundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{r_X}H_0$ is pre-principal. We say it is [[*right pre-principal*]{}]{} if the right underlying bundle $G_0\xleftarrow{l_X}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is pre-principal. We make similar definitions for subductiveness and principality. Notice that, in this convention, if a bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is *left* subductive, then its *right* moment map $r_X$ is a subduction (and vice versa)[^5]. We now have the main definition of this theory: \[definition:Morita equivalence and biprincipality\] A diffeological bibundle is called: 1. [[*pre-biprincipal*]{}]{} if it is both left- and right pre-principal[^6]; 2. [[*bisubductive*]{}]{} if it is both left- and right subductive; 3.
[[*biprincipal*]{}]{} if it is both left- and right principal. Two diffeological groupoids $G$ and $H$ are called [[*Morita equivalent*]{}]{} if there exists a biprincipal bibundle between them, and in that case we write $G\simeq_\mathrm{ME}H$. Compare this to the original definition [@muhly1987equivalence Definition 2.1] of equivalence for locally compact Hausdorff groupoids. We will prove in \[proposition:morita equivalence is equivalence relation\] that Morita equivalence forms a genuine equivalence relation. \[example:Lie ME is also diffeological ME\] Since submersions between manifolds are subductions with respect to the manifold diffeologies, we see that if two *Lie* groupoids ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ are Morita equivalent in the *Lie* sense (e.g. [@crainic2018orbispaces Definition 2.15]), then they are Morita equivalent in the *diffeological* sense. We remark on the converse question in \[section:diffeological bibundles between Lie groupoids\]. In fact, many elementary examples of Morita equivalences between Lie groupoids generalise straightforwardly to analogously defined diffeological groupoids. We refer to [@schaaf2020diffeology-groupoids-and-ME Section 4.3] for some of these examples. For us, the most important one is: \[example:identity bibundle\] Consider a diffeological groupoid ${{G}\rightrightarrows{G}_0}$. There exists a canonical $(G,G)$-bibundle structure on the space of arrows $G$, which is called the [[*identity bibundle*]{}]{}. The actions are just the composition in $G$ itself, as in \[example:groupoid action on itself\]. Note that the identity bibundle is always biprincipal, because the action map has a smooth inverse $(g_1,g_2)\mapsto (g_1\circ g_2^{-1},g_2)$. This proves that any diffeological groupoid is Morita equivalent to itself, through the identity bibundle $G{{{\curvearrowright}\hspace{-5pt}^{{\mathrm{trg}}}\hspace{1pt}}}G{{\hspace{-1pt}~^{{\mathrm{src}}}\hspace{-5pt}{\curvearrowleft}}}G$. \[construction:opposite bibundle\] Consider a diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$. The *opposite bibundle* $H{{{\curvearrowright}\hspace{-5pt}^{l_{\overline{X}}}\hspace{1pt}}}\overline{X}{{\hspace{-1pt}~^{r_{\overline{X}}}\hspace{-5pt}{\curvearrowleft}}}G$ is defined as follows. The underlying diffeological space does not change, $\overline{X}:=X$, but the moment maps switch, meaning that $l_{\overline{X}}:=r_X$ and $r_{\overline{X}}:=l_X$, and the actions are defined as follows: $$\begin{aligned} H{{{\curvearrowright}\hspace{-5pt}^{r_X}\hspace{1pt}}}\overline{X};&\qquad h\cdot x:=xh^{-1}, \\ \overline{X}{{\hspace{-1pt}~^{l_X}\hspace{-5pt}{\curvearrowleft}}}G;&\qquad x\cdot g:=g^{-1}x. \end{aligned}$$ Here the actions on the right-hand sides are the original actions of the bibundle. It is easy to see that performing this operation twice gives the original bibundle back. It is also important to note that for all properties defined in \[definition:left pre-principal\], taking the opposite merely switches the words ‘left’ and ‘right’. The following extends \[proposition:properties of division map\]*(4)*: \[lemma:opposite action division map\] Consider a left pre-principal bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$, and also the opposite $G$-action $\overline{X}{{\hspace{-1pt}~^{l_X}\hspace{-5pt}{\curvearrowleft}}}G$. 
Then, whenever defined, we have: $$\langle x_1,x_2 g\rangle_G = \langle x_1,x_2\rangle_G\circ g.$$ This follows directly from \[proposition:properties of division map\] and the definition of the opposite action: $$\langle x_1,x_2g\rangle_G = \langle x_1,g^{-1}x_2\rangle_G = \left(g^{-1}\circ\langle x_2,x_1\rangle_G\right)^{-1} = \langle x_1,x_2\rangle_G\circ g.\qedhere$$ Induced actions {#section:induced actions} --------------- A bibundle $G{\curvearrowright}X{\curvearrowleft}H$ allows us to transfer a groupoid action $H{\curvearrowright}Y$ to a groupoid action $G{\curvearrowright}X{\otimes}_H Y$. This is called the *induced action*, and, together with the balanced tensor product, will be crucial to define the composition of bibundles. The idea is that $G$ acts on the first component of $X{\otimes}_H Y$. \[construction:induced action\] Consider a diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$, and a smooth action $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y$. We construct a smooth left $G$-action on the balanced tensor product $X{\otimes}_H Y$. The left moment map is defined as $$L_X:X{\otimes}_H Y\longrightarrow G_0; \qquad x{\otimes}y \longmapsto l_X(x).$$ This is well defined because $l_X$ is $H$-invariant, and smooth by \[lemma:properties of subductions\]*(3)*. For an arrow $g\in G$ with ${\mathrm{src}}(g)=L_X(x{\otimes}y)=l_X(x)$, define the action as: $$G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y;\qquad g\cdot(x{\otimes}y):=(gx){\otimes}y.$$ Note that the right hand side is well defined because $r_X$ is $G$-invariant, so $r_X(gx)=l_Y(y)$. Since there can be no confusion, we will drop all parentheses and write $gx{\otimes}y$ instead. That the action is smooth follows because $\left(g,(x,y)\right)\mapsto (gx,y)$ is smooth (on the appropriate domains) and by another application of \[lemma:properties of subductions\]*(3)*. Hence we obtain the [[*induced action*]{}]{} $G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y$. Now suppose that we are given a smooth $H$-equivariant map $\varphi:Y_1\to Y_2$ between two smooth actions $H{{{\curvearrowright}\hspace{-5pt}^{l_1}\hspace{1pt}}}Y_1$ and $H{{{\curvearrowright}\hspace{-5pt}^{l_2}\hspace{1pt}}}Y_2$. We define a map $${\mathrm{id}}_X{\otimes}\varphi:X{\otimes}_HY_1\longrightarrow X{\otimes}_H Y_2; \qquad x{\otimes}y\longmapsto x\otimes\varphi(y).$$ The underlying map $X\times_{H_0}Y_1\to X\times_{H_0}Y_2:(x,y)\mapsto(x,\varphi(y))$ is clearly smooth. Then by composition of the projection onto $X{\otimes}_HY_2$ and \[lemma:properties of subductions\]*(3)*, we find ${\mathrm{id}}_X{\otimes}\varphi$ is smooth. Moreover, it is $G$-equivariant: $${\mathrm{id}}_X{\otimes}\varphi(gx{\otimes}y)=gx{\otimes}\varphi(y)=g\left({\mathrm{id}}_X{\otimes}\varphi(x{\otimes}y)\right).$$ \[definition:induced action functor\] A diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ defines an [[*induced action functor*]{}]{}: $$\begin{aligned} X{\otimes}_H-:{{\mathbf{Act}}}({{H}\rightrightarrows{H}_0})&\longrightarrow {{\mathbf{Act}}}({{G}\rightrightarrows{G}_0}),\\ \left(H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y\right)&\longmapsto \left(G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y\right),\\ \varphi&\longmapsto {\mathrm{id}}_X{\otimes}~\varphi. 
\end{aligned}$$ sending each smooth left $H$-action $\left(H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y\right)\mapsto \left(G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y\right)$ and each $H$-equivariant map ${\varphi\mapsto {\mathrm{id}}_X{\otimes}\varphi}$. We will use this functor in \[section:invariance of representations\]. The bicategory of diffeological groupoids and -bibundles -------------------------------------------------------- Combining the balanced tensor product (\[construction:balanced tensor product\]) and the induced action of a bibundle (\[construction:induced action\]), we can define a notion of composition for diffeological bibundles, and thereby obtain a new sort of category of diffeological groupoids[^7]. Since performing multiple balanced tensor products is not strictly associative, we need to introduce a notion of comparison between diffeological bibundles. \[definition:biequivariant maps\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ and $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}H$ be two bibundles between the same two diffeological groupoids. A smooth map $\varphi:X\to Y$ is called a [[*bibundle morphism*]{}]{} if it is a bundle morphism between both underlying bundles. We also say that $\varphi$ is [[*biequivariant*]{}]{}. Concretely, this means that the following diagram commutes: $$\begin{tikzcd} X \arrow[d, "l_X"'] \arrow[rd, "\varphi"] \arrow[r, "r_X"] & H_0 \\ G_0 & Y, \arrow[l, "l_Y"] \arrow[u, "r_Y"'] \end{tikzcd} \qquad \text{that is:}\qquad \begin{aligned} l_X&=l_Y\circ\varphi,\\ r_X&=r_Y\circ\varphi, \end{aligned}$$ and that $\varphi$ is equivariant with respect to both actions. The isomorphisms of bibundles are exactly the biequivariant diffeomorphisms. These are the 2-isomorphisms in ${{\mathbf{DiffeolBiBund}}}$. The composition of bibundles is defined as follows: \[construction:bibundle composition\] Consider two diffeological bibundles $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ and $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}K$. We shall define on $X{\otimes}_H Y$ a $(G,K)$-bibundle structure using the induced actions from \[construction:induced action\]. On the left we take the induced $G$-action along $L_X:X{\otimes}_H Y\to G_0$, which we recall maps ${x{\otimes}y \mapsto l_X(x)}$, defined by $$G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y;\qquad g(x{\otimes}y):=(gx){\otimes}y.$$ This action is well-defined because the $G$- and $H$-actions commute. Similarly, we get an induced $K$-action on the right along $R_Y:X{\otimes}_H Y\to K_0$, which maps ${x{\otimes}y\mapsto r_Y(y)}$, given by $$X{\otimes}_H Y{{\hspace{-1pt}~^{R_Y}\hspace{-5pt}{\curvearrowleft}}}K;\qquad (x{\otimes}y)k:= x{\otimes}(yk).$$ It is easy to see that these two actions form a bibundle $G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_HY{{\hspace{-1pt}~^{R_Y}\hspace{-5pt}{\curvearrowleft}}}K$, which we also call the [[*balanced tensor product*]{}]{}. Note that the moment maps are smooth by \[lemma:properties of subductions\]*(3)*. The following two propositions characterise the compositional structure of the balanced tensor product *up to biequivariant diffeomorphism*.
The first of these shows that the identity bibundle (\[example:identity bibundle\]) is a *weak identity:* \[proposition:identity bibundle is weak identity\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ be a diffeological bibundle. Then there are biequivariant diffeomorphisms $$\begin{tikzcd} G{{{\curvearrowright}\hspace{-5pt}^{L_G}\hspace{1pt}}}G{\otimes}_G X{{\hspace{-1pt}~^{R_X}\hspace{-5pt}{\curvearrowleft}}}H \arrow[d,Rightarrow,"\varphi"] \\ G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H \end{tikzcd} \qquad\text{and}\qquad \begin{tikzcd} G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H H{{\hspace{-1pt}~^{R_H}\hspace{-5pt}{\curvearrowleft}}}H \arrow[d,Rightarrow] \\ G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H. \end{tikzcd}$$ The idea of the proof is briefly sketched on [@blohmann2008stacky p.8]. The map $\varphi:G{\otimes}_G X\to X$ is defined by the action: $g{\otimes}x\mapsto gx$. This map is clearly well defined, and by an easy application of \[lemma:properties of subductions\]*(3)* also smooth. Further note that $\varphi$ intertwines the left moment maps: $$l_X\circ\varphi(g{\otimes}x)=l_X(gx)={\mathrm{trg}}(g)=L_G(g{\otimes}x),$$ and similarly we find it intertwines the right moment maps. Associativity of the $G$-action and the fact that it commutes with the $H$-action directly ensure that $\varphi$ is biequivariant. Moreover, we claim that the smooth map $\psi:X\to G{\otimes}_G X$ defined by $x\mapsto {\mathrm{id}}_{l_X(x)}{\otimes}x$ is the inverse of $\varphi$. It follows easily that $\varphi\circ\psi={\mathrm{id}}_X$, and the other side follows from the defining property of the balanced tensor product: $$\psi\circ\varphi(g{\otimes}x)=\psi(gx)={\mathrm{id}}_{l_X(gx)}{\otimes}gx = ({\mathrm{id}}_{{\mathrm{trg}}(g)}\circ g){\otimes}x = g{\otimes}x.$$ It follows from an analogous argument that the identity bibundle of $H$ acts like a weak right identity. The second proposition shows that the balanced tensor product is associative *up to canonical biequivariant diffeomorphism:* \[proposition:associativity of balanced tensor product\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$, $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}H'$, and $H'{{{\curvearrowright}\hspace{-5pt}^{l_Z}\hspace{1pt}}}Z{{\hspace{-1pt}~^{r_Z}\hspace{-5pt}{\curvearrowleft}}}K$ be diffeological bibundles. Then there exists a biequivariant diffeomorphism $$\begin{tikzcd} G{{{\curvearrowright}\hspace{-5pt}^{L_{X{\otimes}_HY}}\hspace{1pt}}}\left(X{\otimes}_HY\right){\otimes}_{H'}Z{{\hspace{-1pt}~^{R_Z}\hspace{-5pt}{\curvearrowleft}}}K \arrow[d,Rightarrow,"A"] \\ G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H\left(Y{\otimes}_{H'}Z\right){{\hspace{-1pt}~^{R_{Y{\otimes}_{H'}Z}}\hspace{-5pt}{\curvearrowleft}}}K, \end{tikzcd} \quad A:(x{\otimes}y){\otimes}z\longmapsto x{\otimes}(y{\otimes}z).$$ That the map $A$ is smooth follows by \[lemma:properties of subductions\]*(3)*, because the corresponding underlying map ${\left((x,y),z\right)\mapsto \left(x,(y,z)\right)}$ is a diffeomorphism. The inverse of this diffeomorphism on the underlying fibred product induces exactly the smooth inverse of $A$, showing that $A$ is a diffeomorphism.
Furthermore, it is easy to check that $A$ is biequivariant. Combining \[proposition:identity bibundle is weak identity,proposition:associativity of balanced tensor product\] gives that the balanced tensor product of bibundles behaves like the composition in a *bicategory*. This is a category where the axioms of composition hold merely up to *canonical 2-isomorphism*. For us, the 2-morphisms are the biequivariant smooth maps. For the precise definition of a bicategory we refer to e.g. [@macLane1998categories; @lack2010companion]. The proof of the following is directly analogous to the one for the Lie theory, as explained throughout [@blohmann2008stacky]. \[theorem:bicategory DiffBiBund\] There is a bicategory ${{\mathbf{DiffeolBiBund}}}$ consisting of diffeological groupoids as objects, diffeological bibundles as morphisms with balanced tensor product as composition, and biequivariant smooth maps as 2-morphisms. As we remarked in \[section:diffeological groupoid actions\], the balanced tensor product for Lie groupoids can only be constructed for *left* (or *right*) *principal* bibundles. This means that in the Lie theory, the category of bibundles only consists of the left (or right) principal bibundles, since otherwise the composition cannot be defined. For diffeology we obtain a bicategory of *all* bibundles. Properties of bibundles under composition and isomorphism {#section:properties of bibundles under composition and isomorphism} --------------------------------------------------------- We study how the properties of diffeological bibundles defined in \[definition:left pre-principal\] are preserved under the balanced tensor product and biequivariant diffeomorphism. These results will be crucial in characterising the weakly invertible bibundles. First we show that left subductive and left pre-principal bibundles are closed under composition. \[proposition:subductive balanced tensor product\] The balanced tensor product preserves left subductiveness. Consider the balanced tensor product $G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y{{\hspace{-1pt}~^{R_Y}\hspace{-5pt}{\curvearrowleft}}}K$ of two left subductive bibundles $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ and $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}K$. We need to show that the right moment map $R_Y:X{\otimes}_HY\to K_0$ is a subduction. But, note that it fits into the following commutative diagram: $$\begin{tikzcd} {X\times_{H_0}^{r_X,l_Y}Y} \arrow[r, "\pi"] \arrow[d, "{\mathrm{pr}}_2|_{X\times_{H_0}Y}"'] & X{\otimes}_HY \arrow[d, "R_Y"] \\ Y \arrow[r, "r_Y"'] & K_0 . \end{tikzcd}$$ Here $\pi$ is the canonical quotient projection. The restricted projection ${\mathrm{pr}}_2|_{X\times_{H_0}Y}$ is a subduction by \[lemma:restriction of projection is subduction\], since $r_X$ is a subduction. Moreover, $r_Y$ is a subduction, so the bottom part of the diagram is a subduction. It follows by \[lemma:properties of subductions\]*(3)* that $R_Y$ is a subduction. Note that, even though $R_Y$ only explicitly depends on the moment map $r_Y$, the proof still depends on the subductiveness of $r_X$ as well. 
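For a minimal sanity check of this proposition, which we include here for orientation, take $H$ to be the trivial groupoid $H_0{\rightrightarrows}H_0$ containing only identity arrows. The diagonal $H$-action is then trivial, so the quotient map $X\times_{H_0}^{r_X,l_Y}Y\to X{\otimes}_H Y$ is an injective subduction and hence a diffeomorphism by \[proposition:injective subduction is diffeomorphism\], and $R_Y$ corresponds to $r_Y\circ\left.{\mathrm{pr}}_2\right|_{X\times_{H_0}Y}$. In this case the statement reduces to \[lemma:restriction of projection is subduction\], which applies because $r_X$ is a subduction, followed by the closure of subductions under composition from \[lemma:properties of subductions\]*(1)*.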
To prove that the balanced tensor product of two left pre-principal bibundles is again left pre-principal, we need the following lemma, describing how the division map interacts with the bibundle structure, extending the list in \[proposition:properties of division map\] on the algebraic properties of the division map. \[lemma:division map on bibundle\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ be a left pre-principal bibundle, and denote its division map by $\langle\cdot,\cdot\rangle_G$. Then, whenever defined: $$\langle x_1,x_2h\rangle_G=\langle x_1h^{-1},x_2\rangle_G, \qquad \text{or equivalently:} \qquad \langle x_1h,x_2h\rangle_G=\langle x_1,x_2\rangle_G.$$ The arrow $\langle x_1h,x_2h\rangle_G\in G$ is the unique one so that $\langle x_1h,x_2h\rangle_G(x_2h)=x_1h$. Now, since the actions commute, we can multiply both sides of this equation from the right by $h^{-1}$, which gives $\langle x_1h,x_2h\rangle_X x_2=x_1$, and this immediately gives our result. \[proposition:pre-principal balanced tensor product\] The balanced tensor product preserves left pre-principality. To start the proof, take two left pre-principal bibundles, with our usual notation: $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ and $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}K$. Denote their division maps by $\langle\cdot,\cdot\rangle_G^X$ and $\langle \cdot,\cdot\rangle_H^Y$, respectively. Using these, we will construct a smooth inverse of the action map of the balanced tensor product. Let us denote the action map of the balanced tensor product by $$\Phi:G\times_{G_0}^{{\mathrm{src}},L_X}\left(X{\otimes}_HY\right) \longrightarrow \left(X{\otimes}_HY\right)\times_{K_0}^{R_Y,R_Y}\left(X{\otimes}_HY\right),$$ mapping $(g,x{\otimes}y)\mapsto (gx{\otimes}y,x{\otimes}y)$. After some calculations (which we describe below), we propose the following map as an inverse for $\Phi$: $$\begin{aligned} \Psi: \left(X{\otimes}_HY\right)\times_{K_0}^{R_Y,R_Y}\left(X{\otimes}_HY\right) &\longrightarrow G\times_{G_0}^{{\mathrm{src}},L_X}\left(X{\otimes}_HY\right); \\ \left(x_1{\otimes}y_1,x_2{\otimes}y_2\right) &\longmapsto \left(\left\langle x_1\langle y_1,y_2\rangle_H^Y,x_2 \right\rangle_G^X,x_2{\otimes}y_2 \right). \end{aligned}$$ It is straightforward to check that every action and division occurring in this expression is well defined. We need to check that $\Psi$ is independent on the representations of $x_1{\otimes}y_1$ and $x_2{\otimes}y_2$. Only the first component $\Psi_1$ of $\Psi$ could be dependent on the representations, so we focus there. Suppose we have two arrows $h_1,h_2\in H$ satisfying ${\mathrm{trg}}(h_i)=r_X(x_i)=l_Y(y_i)$, so that $x_ih_i{\otimes}h_i^{-1}y_i=x_i{\otimes}y_i$. 
For the division of $y_2$ and $y_1$ we then use \[proposition:properties of division map\] to get: $$\langle h_1^{-1}y_1,h_2^{-1}y_2\rangle_H^Y = h_1^{-1}\circ\langle y_1,h_2^{-1}y_2\rangle_H^Y = h_1^{-1}\circ \left( h_2^{-1}\circ\langle y_2,y_1\rangle_H^Y\right)^{-1} = h_1^{-1}\circ \langle y_1,y_2\rangle_H^Y\circ h_2.$$ Then, using this and \[lemma:division map on bibundle\], we get: $$\begin{aligned} \Psi_1(x_1h_1{\otimes}h_1^{-1}y_1,x_2h_2{\otimes}h_2^{-1}y_2) &= \left\langle x_1h_1\langle h_1^{-1}y_1,h_2^{-1}y_2\rangle_H^Y,x_2h_2\right\rangle_G^X \\&= \left\langle (x_1h_1)\left(h_1^{-1}\circ\langle y_1,y_2\rangle_H^Y\circ h_2\right),x_2h_2\right\rangle_G^X \\&= \left\langle (x_1\langle y_1,y_2\rangle)h_2,x_2h_2\right\rangle_G^X \\&= \left\langle x_1\langle y_1,y_2\rangle_H^Y,x_2\right\rangle_G^X. \end{aligned}$$ Since the second component of $\Psi$ is by construction independent on the representation, it follows that $\Psi$ is a well-defined function. We now need to show that $\Psi$ is smooth. The second component is clearly smooth, because it is just the projection onto the second component of the fibred product. That the other component is smooth follows from \[lemma:properties of subductions,lemma:subduction and fibred product\]. Writing $$\psi:\left((x_1,y_1),(x_2,y_2)\right)\longmapsto \langle x_1\langle y_1,y_2\rangle_H^Y,x_2\rangle_G^X$$ and $\pi:X\times_{H_0}^{r_X,l_Y}Y\to X{\otimes}_H Y$ for the canonical projection, we get a commutative diagram $$\begin{tikzcd} \left(X\times_{H_0}^{r_X,l_Y}Y\right)\times_{K_0}^{\overline{r_Y},\overline{r_Y}}\left(X\times_{H_0}^{r_X,l_Y}Y\right) \arrow[rr, "(\pi\times\pi)|_{\mathrm{\operatorname{dom}(\psi)}}"] \arrow[rd, "\psi"', bend right=10] & & \left(X{\otimes}_HY\right)\times_{K_0}^{R_Y,R_Y}\left(X{\otimes}_HY\right) \arrow[ld, "\Psi_1",bend left=10] \\ & G. & \end{tikzcd}$$ Here we temporarily use the notation $\overline{r_Y}:=r_Y\circ{\mathrm{pr}}_2|_{X\times_{H_0}Y}$, which satisfies $R_Y\circ\pi = \overline{r_Y}$. Therefore by \[lemma:subduction and fibred product\] the top arrow in this diagram is a subduction. Since the map $\psi$ is evidently smooth, it follows by \[lemma:properties of subductions\]*(3)* that the first component $\Psi_1$, and hence $\Psi$ itself, must be smooth. Thus, we are left to show that $\Psi$ is an inverse for $\Phi$. That $\Psi$ is a right inverse for $\Phi$ now follows by simple calculation using \[proposition:properties of division map,lemma:division map on bibundle\]: $$\Psi\circ\Phi(g,x{\otimes}y) = \Psi(gx{\otimes}y,x{\otimes}y) = \left(\langle gx\langle y,y\rangle_H^Y,x\rangle_G^X,x{\otimes}y\right) = \left(g\circ \langle x,x\rangle_G^X,x{\otimes}y\right) = (g,x{\otimes}y).$$ For the other direction, we calculate: $$\begin{aligned} \Phi\circ\Psi(x_1{\otimes}y_1,x_2{\otimes}y_2) &= \Phi\left(\left\langle x_1\langle y_1,y_2\rangle_H^Y,x_2 \right\rangle_G^X,x_2{\otimes}y_2 \right) \\&= \left(\left\langle x_1\langle y_1,y_2\rangle_H^Y,x_2 \right\rangle_G^Xx_2{\otimes}y_2,x_2{\otimes}y_2\right) \\&= \left(x_1\langle y_1,y_2\rangle_H^Y{\otimes}y_2,x_2{\otimes}y_2\right) \\&= \left(x_1{\otimes}\langle y_1,y_2\rangle_H^Yy_2,x_2{\otimes}y_2\right) \\&= \left(x_1{\otimes}y_1,x_2{\otimes}y_2\right). \end{aligned}$$ Here in the second to last step we use the properties of the balanced tensor product to move the arrow $\langle y_1,y_2\rangle_H^Y$ over the tensor symbol. 
Hence we conclude that $\Phi$ is a diffeomorphism, which proves that $G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y{{\hspace{-1pt}~^{R_Y}\hspace{-5pt}{\curvearrowleft}}}K$ is a left pre-principal bibundle. Next we show that left subductiveness and left pre-principality are also preserved under biequivariant diffeomorphism. \[proposition:pre-principality and isomorphism\] Left pre-principality is preserved by biequivariant diffeomorphism. Suppose that $\varphi:X\to Y$ is a biequivariant diffeomorphism from a left pre-principal bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ to another diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}H$. Denote their left action maps by $A_X$ and $A_Y$, respectively. The following square commutes because of biequivariance: $$\begin{tikzcd} {G\times_{G_0}^{{\mathrm{src}},l_X}X} \arrow[d, "({\mathrm{id}}_G\times\varphi)|_{G\times_{G_0}X}"'] \arrow[r, "A_X"] & {X\times_{H_0}^{r_X,r_X}X} \arrow[d, "(\varphi\times\varphi)|_{X\times_{H_0}X}"] \\ {G\times_{G_0}^{{\mathrm{src}},l_Y}Y} \arrow[r, "A_Y"'] & {Y\times_{H_0}^{r_Y,r_Y}Y.} \end{tikzcd}$$ It is easy to see that both vertical maps are diffeomorphisms. Hence it follows that $A_Y$ must be a diffeomorphism as well. \[proposition:subductiveness and isomorphism\] Left subductiveness is preserved by biequivariant diffeomorphism. Suppose that $\varphi:X\to Y$ is a biequivariant diffeomorphism from a left subductive bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ to $G{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}H$. That the first bundle is left subductive means that $r_X$ is a subduction, but since $\varphi$ intertwines the moment maps, it follows immediately that $r_Y=r_X\circ\varphi^{-1}$ is a subduction as well. Of course, these four propositions all hold for their respective ‘right’ versions as well. This can be proved formally, without repeating the work, by using opposite bibundles. \[proposition:morita equivalence is equivalence relation\] Morita equivalence defines an equivalence relation between diffeological groupoids. Morita equivalence is reflexive by the existence of identity bibundles, which are always biprincipal (\[example:identity bibundle\]). It is also easy to check that the opposite bibundle (\[construction:opposite bibundle\]) of a biprincipal bibundle is again biprincipal, showing that Morita equivalence is symmetric. Transitivity follows directly from \[proposition:pre-principal balanced tensor product,proposition:subductive balanced tensor product\] and their opposite versions. Weak invertibility of diffeological bibundles {#section:weak invertibility of diffeological bibundles} --------------------------------------------- In this section we prove the main Morita \[theorem:weakly invertible bibundles are the biprincipal ones\]. As we explained above, in the bicategory of diffeological groupoids we get a notion of *weak isomorphism*. Let us describe these explicitly: a bibundle $G{\curvearrowright}X{\curvearrowleft}H$ is weakly invertible if and only if there exists a second bibundle $H{\curvearrowright}Y{\curvearrowleft}G$, such that $X{\otimes}_H Y$ is biequivariantly diffeomorphic to $G$ and $Y{\otimes}_G X$ is biequivariantly diffeomorphic to $H$.
The Morita theorem says that such a weak inverse exists if and only if the bibundle is biprincipal. Let us recall the corresponding statement in the Lie theory: a (say) left principal bibundle has a left principal weak inverse if and only if it is biprincipal [@landsman2001quantized Proposition 4.21]. Here both the original bibundle and its weak inverse have to be left principal, since everything takes place in a bicategory of Lie groupoids and left principal bibundles. According to \[theorem:bicategory DiffBiBund\] we get a bicategory of arbitrary bibundles, and the question of weak invertibility becomes a slightly more general one, since we do not start out with a bibundle that is already left principal. Instead we have to infer left principality from bare weak invertibility, where neither the weak inverse may be assumed to be left principal. One direction of the claim in the main theorem is relatively straightforward, and is the same as for Lie groupoids: \[proposition:biprincipal bibundle is weakly invertible\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ be a biprincipal bibundle. Then its opposite bundle $H{{{\curvearrowright}\hspace{-5pt}^{r_X}\hspace{1pt}}}\overline{X}{{\hspace{-1pt}~^{l_X}\hspace{-5pt}{\curvearrowleft}}}G$ is a weak inverse. We construct biequivariant diffeomorphisms $$\begin{tikzcd} G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H \overline{X}{{\hspace{-1pt}~^{R_{\overline{X}}}\hspace{-5pt}{\curvearrowleft}}}G \arrow[d,Rightarrow,"\varphi_G"'] \\ G{{{\curvearrowright}\hspace{-5pt}^{{\mathrm{trg}}}\hspace{1pt}}}G{{\hspace{-1pt}~^{{\mathrm{src}}}\hspace{-5pt}{\curvearrowleft}}}G, \end{tikzcd} \quad\text{and}\quad \begin{tikzcd} H{{{\curvearrowright}\hspace{-5pt}^{L_{\overline{X}}}\hspace{1pt}}}\overline{X}{\otimes}_G X{{\hspace{-1pt}~^{R_X}\hspace{-5pt}{\curvearrowleft}}}H \arrow[d,Rightarrow,"\varphi_H"'] \\ H{{{\curvearrowright}\hspace{-5pt}^{{\mathrm{trg}}}\hspace{1pt}}}H{{\hspace{-1pt}~^{{\mathrm{src}}}\hspace{-5pt}{\curvearrowleft}}}H. \end{tikzcd}$$ Since the original bundle is pre-biprincipal, we have a division map $\langle\cdot,\cdot\rangle_G:X\times_{H_0}^{r_X,r_X}\overline{X}\to G$. We define a new function $$\varphi_G : X{\otimes}_H\overline{X} \longrightarrow G; \qquad x_1{\otimes}x_2 \longmapsto \langle x_1,x_2\rangle_G.$$ This is independent on the representation of the tensor product by \[lemma:division map on bibundle\], and smooth by \[lemma:properties of subductions\]*(3)* since $\varphi_G\circ\pi=\langle\cdot,\cdot\rangle_G$, where $\pi$ is the canonical projection onto the balanced tensor product. We check that $\varphi_G$ is biequivariant. It is easy to check that $\varphi_G$ intertwines the moment maps, for example: $${\mathrm{src}}\circ\varphi_G(x_1{\otimes}x_2) = {\mathrm{src}}\left(\langle x_1,x_2\rangle_G\right) = l_X(x_2) = R_{\overline{X}}(x_1{\otimes}x_2).$$ The left $G$-equivariance of $\varphi_G$ follows directly out of \[proposition:properties of division map\], and the right $G$-equivariance follows from \[lemma:opposite action division map\]. Hence $\varphi_G$ is a genuine bibundle morphism. Since the original bundle is biprincipal, so is its opposite, and therefore by \[proposition:pre-principal balanced tensor product,proposition:subductive balanced tensor product\] it follows that both balanced tensor products are also biprincipal. 
Therefore $\varphi_G$ is in particular a left $G$-equivariant bundle morphism from a principal bundle $G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H\overline{X}\xrightarrow{R_{\overline{X}}}G_0$ to a pre-principal bundle $G{{{\curvearrowright}\hspace{-5pt}^{{\mathrm{trg}}}\hspace{1pt}}}G\xrightarrow{{\mathrm{src}}}G_0$, and hence a diffeomorphism by \[proposition:bundle morphism on principal bundle is diffeomorphism\]. This proves that the opposite bibundle is a weak right inverse. Note that we already need full biprincipality of the original bibundle for this. To prove that it is also a weak left inverse we make an analogous construction for $\varphi_H$, which we leave to the reader. The rest of this section will be dedicated to proving the converse of this claim, i.e., that a weakly invertible bibundle is biprincipal. First let us remark that by imitating a result from the Lie theory, we can obtain a partial result in this direction. Let us denote by ${{\mathbf{DiffeolBiBund}}}_\mathrm{LP}$ the bicategory of diffeological groupoids and left principal bibundles. Note that by \[section:properties of bibundles under composition and isomorphism\] left principality is preserved by the balanced tensor product, so this indeed forms a subcategory. \[theorem:analogue\] A left principal diffeological bibundle has a left principal weak inverse if and only if it is biprincipal. That is, the weakly invertible bibundles in ${{\mathbf{DiffeolBiBund}}}_\mathrm{LP}$ are exactly the biprincipal ones. This follows by combining \[proposition:biprincipal bibundle is weakly invertible\] with an adaptation of an argument from the Lie groupoid theory as in [@moerdijk2005Poisson Proposition 2.9]. A more detailed proof (for diffeological groupoids) is in [@schaaf2020diffeology-groupoids-and-ME Proposition 4.61]. This theorem is the most direct analogue of [@landsman2001quantized Proposition 4.21] in the setting of diffeology. Our main theorem will be a further generalisation of this, which says that the same claim holds in the larger bicategory ${{\mathbf{DiffeolBiBund}}}$ of *all* bibundles. We break the proof down in several steps, starting with the implication of bisubductiveness: \[proposition:weakly invertible implies bisubductive\] A weakly invertible diffeological bibundle is bisubductive. Suppose we have a bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ that admits a weak inverse $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}G$. Let us denote the included biequivariant diffeomorphisms by $\varphi_G:X{\otimes}_H Y\to G$ and $\varphi_H:Y{\otimes}_G X\to H$, as usual. Since the identity bibundles of $G$ and $H$ are both biprincipal, it follows by \[proposition:subductiveness and isomorphism\] that the moment maps $L_X$, $R_X$, $L_Y$ and $R_Y$ are all subductions. Together with the original moment maps, we get four commutative squares, each of the form: $$\begin{tikzcd} {X\times_{H_0}^{r_X,l_Y}Y} \arrow[d, "{\mathrm{pr}}_1|_{X\times_{H_0}Y}"'] \arrow[r, "\pi"] & X{\otimes}_H Y \arrow[d, "L_X"] \\ X \arrow[r, "l_X"'] & G_0. \end{tikzcd}$$ Here $\pi:X\times_{H_0}^{r_X,l_Y}Y\to X{\otimes}_HY$ is the quotient map of the diagonal $H$-action. 
By \[lemma:properties of subductions\]*(3)* it follows that, since $L_X$ is a subduction, so is $l_X\circ{\mathrm{pr}}_1|_{X\times_{H_0}Y}$, and in turn by \[lemma:properties of subductions\]*(2)* it follows $l_X$ is a subduction. In a similar fashion we find that $r_X$, $l_Y$ and $r_Y$ are all subductions as well. This proposition gets us halfway to proving that weakly invertible bibundles are biprincipal. To prove that they are pre-biprincipal, it is enough to construct smooth division maps. We will give this construction below (\[construction:local division map\]), which follows from a careful reverse engineering of the division map of a pre-principal bundle. Recall from \[proposition:pre-principal balanced tensor product\] that the smooth inverse of the action map contains the information of both the $G$-division map and the $H$-division map. Specifically, the first component of the inverse is of the form $\langle x_1\langle y_1,y_2\rangle_H^Y,x_2\rangle_G^X$, in which if we set $y_1=y_2$, we simply reobtain the $G$-division map $\langle x_1,x_2\rangle_G^X$. The question is if this “reobtaining” can be done in a smooth way. This is not so obvious at first. Namely, if we vary $(x_1,x_2)$ smoothly within $X\times_{H_0}^{r_X,r_X}X$, can we guarantee that $y_1$ and $y_2$ vary smoothly with it, while still retaining the equalities $r_X(x_i)=l_Y(y_i)$ and $y_1=y_2$? The elaborate \[construction:local division map\] proves that this can indeed be done. An essential part of our argument will be supplied by the following two lemmas. \[lemma:actions of weakly invertible bibundle are free\] When $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is a weakly invertible bibundle, admitting a weak inverse $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}G$, then all four actions are free. This follows from an argument that is used in the proof of [@blohmann2008stacky Proposition 3.23]. Suppose we have an arrow $h\in H$ and a point $y\in Y$ such that $hy=y$. By \[proposition:weakly invertible implies bisubductive\] it follows that in particular $l_X$ is surjective, so we can find $x\in X$ such that $y{\otimes}x\in Y{\otimes}_G X$. Then $$h(y{\otimes}x)=(hy){\otimes}x=y{\otimes}x.$$ But by \[proposition:pre-principality and isomorphism\] the bundle $H{{{\curvearrowright}\hspace{-5pt}^{L_Y}\hspace{1pt}}}Y{\otimes}_G X\xrightarrow{R_X}G_0$, which is equivariantly diffeomorphic to the identity bundle on $H$, is pre-principal. So, the left action $H{\curvearrowright}Y{\otimes}_G X$ is free, and hence $h={\mathrm{id}}_{L_Y(y{\otimes}x)}={\mathrm{id}}_{l_Y(y)}$, proving that $H{\curvearrowright}Y$ is also free. That the three other actions are free follows analogously. \[lemma:free action and balanced tensor product\] Let $X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ and $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y$ be smooth actions, so that we can form the balanced tensor product $X{\otimes}_H Y$. Suppose that $H{\curvearrowright}Y$ is free. Then $x_1{\otimes}y= x_2{\otimes}y$ if and only if $x_1=x_2$. Similarly, if $X{\curvearrowleft}H$ is free, then $x{\otimes}y_1=x{\otimes}y_2$ if and only if $y_1=y_2$. If $x_1=x_2$ to begin with, the implication is trivial. Suppose therefore that $x_1{\otimes}y=x_2{\otimes}y$, which means that there exists an arrow $h\in H$ such that $(x_1h^{-1},hy)=(x_2,y)$. 
In particular $hy=y$, which, because the action on $Y$ is free, implies $h={\mathrm{id}}_{l_Y(y)}$, and it follows that $x_1=x_1 {\mathrm{id}}_{l_Y(y)}^{-1}=x_2$. We shall now describe how the division map arises from local data: \[construction:local division map\] For this construction to work, we start with a diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$, admitting a weak inverse $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}G$. Then consider a pointed plot $\alpha:(U_\alpha,0)\to (X\times_{H_0}^{r_X,r_X}X,(x_1,x_2))$. We split $\alpha$ into the components $(\alpha_1,\alpha_2)$, which in turn are pointed plots $\alpha_i:(U_\alpha,0)\to (X,x_i)$ satisfying $r_X\circ \alpha_1=r_X\circ\alpha_2:U_\alpha\to H_0$. This equation gives a plot of $H_0$, and since by \[proposition:weakly invertible implies bisubductive\] the moment map $l_Y:Y\to H_0$ is a subduction, for every $t\in U_\alpha$ we can find a plot $\beta:V\to Y$, defined on an open neighbourhood $t\in V\subseteq U_\alpha$, such that $r_X\circ\alpha_i|_V=l_Y\circ\beta$. From this equation it follows that the smooth maps $(\alpha_i|_V,\beta):V\to X\times_{H_0}^{r_X,l_Y}Y$ define two plots of the underlying space of the balanced tensor product. Applying the quotient map $\pi:X\times_{H_0}^{r_X,l_Y}Y\to X{\otimes}_H Y$, we thus get two full-fledged plots $s\mapsto \alpha_i|_V(s){\otimes}\beta(s)$ of the balanced tensor product. We combine these two plots to define yet another smooth map: $$\Omega^\alpha|_V:=\left(\pi\circ\left(\alpha_1|_V,\beta\right),\pi\circ\left(\alpha_2|_V,\beta\right)\right) : V \longrightarrow \left(X{\otimes}_H Y\right)\times_{G_0}^{R_Y,R_Y}\left(X{\otimes}_H Y\right).$$ Note that $\Omega^\alpha|_V$ lands in the right codomain because $R_Y\circ\pi\circ(\alpha_i|_V,\beta)=r_Y\circ\beta$, irrespective of $i\in\lbrace 1,2\rbrace$. We also note that the codomain of $\Omega^\alpha|_V$ is exactly the *do*main of the inverse $\Psi=(\Psi_1,\Psi_2)$ of the action map of the balanced tensor product $G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}}X{\otimes}_H Y\xrightarrow{R_Y}G_0$ (given explicitly in \[proposition:pre-principal balanced tensor product\]). In particular we then get a smooth map $$\Psi_1\circ\Omega^\alpha|_V: V \xrightarrow{\quad\Omega^\alpha|_V\quad} \left(X{\otimes}_H Y\right)\times_{G_0}^{R_Y,R_Y}\left(X{\otimes}_H Y\right) \xrightarrow{\quad\Psi_1\quad} G.$$ We now extend this map to the entire domain $U_\alpha$, and show that it is independent of the choice of plot $\beta$. For that, pick two points $t,\overline{t}\in U_\alpha$, so that by subductiveness of the left moment map $l_Y$ we can find two plots, $\beta:V\to Y$ and $\overline{\beta}:\overline{V}\to Y$, defined on open neighbourhoods of $t$ and $\overline{t}$, respectively, such that $r_X\circ \alpha_i|_V=l_Y\circ\beta$ and $r_X\circ \alpha_i|_{\overline{V}}=l_Y\circ\overline{\beta}$. Following the above construction, we get two smooth maps: $$\begin{aligned} \Omega^\alpha|_V :s &\longmapsto \left(\alpha_1|_V(s){\otimes}\beta(s),\alpha_2|_V(s){\otimes}\beta(s)\right),\\ \overline{\Omega}^\alpha|_{\overline{V}}: s &\longmapsto \left(\alpha_1|_{\overline{V}}(s){\otimes}\overline{\beta}(s), \alpha_2|_{\overline{V}}(s){\otimes}\overline{\beta}(s)\right).
\end{aligned}$$ We now remark an important characterisation of $\Psi$, as a consequence of it being a diffeomorphism and inverse to the action map. Namely, $\Psi_1(x_1{\otimes}y_1,x_2{\otimes}y_2)\in G$ is the *unique* arrow $g\in G$ satisfying $gx_2{\otimes}y_2=x_1{\otimes}y_1$. Therefore, $\Psi_1\circ\Omega^\alpha|_V(s)\in G$ is the unique arrow such that $$\left[\Psi_1\circ\Omega^\alpha|_V(s)\right]\cdot\left(\alpha_2|_V(s){\otimes}\beta(s)\right)= \alpha_1|_V(s){\otimes}\beta(s).$$ By \[lemma:actions of weakly invertible bibundle are free\] all of the four actions of the original bibundles are free. Consequently, applying \[lemma:free action and balanced tensor product\], since the second component in each term is just $\beta(s)$, this means that $\Psi_1\circ\Omega^\alpha|_V(s)$ is the unique arrow in $G$ such that $$\Psi_1\circ\Omega^\alpha|_V(s)\cdot\alpha_2|_V(s)=\alpha_1|_V(s),$$ where the tensor with $\beta(s)$ can be removed. But, for exactly the same reasons, if we take $s\in V\cap \overline{V}$, then $\Psi_1\circ\overline{\Omega}^\alpha|_{\overline{V}}(s)\in G$ is *also* the unique arrow such that $$\Psi_1\circ\overline{\Omega}^\alpha|_{V\cap \overline{V}}(s)\cdot\alpha_2|_{V\cap\overline{V}}(s)=\alpha_1|_{V\cap\overline{V}}(s),$$ proving that $$\Psi_1\circ\Omega^\alpha|_{V\cap\overline{V}}=\Psi_1\circ\overline{\Omega}^\alpha|_{V\cap\overline{V}}.$$ This shows that on the overlaps $V\cap \overline{V}$ the map $\Psi_1\circ\Omega^\alpha|_{V\cap\overline{V}}$ does *not* depend on the plots $\beta$ and $\overline{\beta}$. This allows us to extend $\Psi_1\circ\Omega^\alpha|_V$, in a well-defined way, to the entire domain of $U_\alpha$. We do this as follows. For every $t\in U_\alpha$ there exists a plot $\beta_t:V_t\to Y$, defined on an open neighbourhood $V_t\ni t$, such that $r_X\circ\alpha_i|_{V_t}=l_Y\circ\beta_t$. Clearly, this gives an open cover $(V_t)_{t\in U_\alpha}$ of $U_\alpha$. For $t\in U_\alpha$ we then set $\Psi_1\circ\Omega^\alpha(t):=\Psi_1\circ\Omega^\alpha|_{V_t}(t)$. Hence we get a well-defined function $\Psi_1\circ\Omega^\alpha:U_\alpha\to G$, which is smooth by the Axiom of Locality. The main observation now is that, as the plot $\alpha$ is centred at $(x_1,x_2)$, we get that $\Psi_1\circ\Omega^\alpha(0)$ is the unique arrow in $G$ such that $\Psi_1\circ\Omega^\alpha(0)\cdot x_2=x_1$. This is exactly the property that characterises the division $\langle x_1,x_2\rangle_G$! \[proposition:weakly invertible is pre-biprincipal\] A weakly invertible diffeological bibundle is pre-biprincipal. The bulk of the work has been done in \[construction:local division map\]. Start with a diffeological bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ and a weak inverse $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y{{\hspace{-1pt}~^{r_Y}\hspace{-5pt}{\curvearrowleft}}}G$. We shall define a smooth division map $\langle\cdot,\cdot\rangle_G$ for the left $G$-action. For $(x_1,x_2)\in X\times_{H_0}^{r_X,r_X}X$, we know by the Axiom of Covering that the constant map ${\mathrm{const}}_{(x_1,x_2)}:\mathbb{R}\to X\times_{H_0}^{r_X,r_X}X$ is a plot centred at $(x_1,x_2)$. We use the shorthand $\Psi_1\circ \Omega^{(x_1,x_2)}$ to denote the map $\Psi_1\circ\Omega^\alpha$ defined by the plot $\alpha={\mathrm{const}}_{(x_1,x_2)}$, and then write: $$\langle x_1,x_2\rangle_G:= \Psi_1\circ\Omega^{(x_1,x_2)}(0).$$ That just leaves us to show that this map is smooth. 
For that, take an arbitrary plot $\alpha:U_\alpha\to X\times_{H_0}^{r_X,r_X}X$ of the fibred product. We need to show that $\langle \cdot,\cdot\rangle_G\circ \alpha$ is a plot of $G$. For any $t\in U_\alpha$, we have that $$\langle \alpha_1(t),\alpha_2(t)\rangle_G=\Psi_1\circ \Omega^{\alpha(t)}(0)$$ is the unique arrow in $G$ such that $$\Psi_1\circ\Omega^{\alpha(t)}(0)\cdot{\mathrm{const}}^2_{\alpha(t)}(0)={\mathrm{const}}^1_{\alpha(t)}(0),$$ where ${\mathrm{const}}^i$ denotes the $i$th component of the constant plot. But then ${\mathrm{const}}^i_{\alpha(t)}(0)=\alpha_i(t)$, and we already know that $\Psi_1\circ\Omega^\alpha(t)\in G$ is the unique arrow that sends $\alpha_2(t)$ to $\alpha_1(t)$, so we have: $$\Psi_1\circ\Omega^{\alpha(t)}(0)=\Psi_1\circ\Omega^\alpha(t),\qquad\text{which means}\qquad \langle\cdot,\cdot\rangle_G\circ\alpha=\Psi_1\circ\Omega^\alpha.$$ But the right hand side $\Psi_1\circ\Omega^\alpha:U_\alpha\to G$ is a plot of $G$ as per \[construction:local division map\], proving that the map $\langle\cdot,\cdot\rangle_G$ is smooth. It is quite evident from its construction that it satisfies exactly the properties of a division map, and it is now easy to verify that $$\left(\langle\cdot,\cdot\rangle_G,{\mathrm{pr}}_2|_{X\times_{H_0}X}\right):X\times_{H_0}^{r_X,r_X}X\longrightarrow G\times_{G_0}^{{\mathrm{src}},l_X}X$$ is a smooth inverse of the action map (see \[section:division map\]). The fact that it lands in the right codomain, i.e., ${\mathrm{src}}(\langle x_1,x_2\rangle_G)=l_X(x_2)$, follows from the properties of $\Psi$ as the inverse of the action map of the balanced tensor product. Therefore $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X\xrightarrow{r_X}H_0$ is a pre-principal bundle. An analogous argument will show that $G_0\xleftarrow{l_X}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is also pre-principal, and hence we have proved the claim. We can now prove our main theorem: \[theorem:weakly invertible bibundles are the biprincipal ones\] A bibundle is weakly invertible in ${{\mathbf{DiffeolBiBund}}}$ if and only if it is biprincipal. That means: two diffeological groupoids are Morita equivalent if and only if they are equivalent in ${{\mathbf{DiffeolBiBund}}}$. One of the implications is just \[proposition:biprincipal bibundle is weakly invertible\]. The other now follows from a combination of \[proposition:weakly invertible implies bisubductive,proposition:weakly invertible is pre-biprincipal\]. This significantly generalises [@landsman2001quantized Proposition 4.21], not only in that we have a generalisation to a diffeological setting, but also in that it considers a more general type of bibundle. It justifies the bicategory ${{\mathbf{DiffeolBiBund}}}$ as being the appropriate setting for Morita equivalence of diffeological groupoids. It also shows that the assumptions of left principality of the Lie groupoid bibundles appear to be more like technical necessities for getting a well-defined bicategory of Lie groupoids and bibundles, rather than being meaningful assumptions on the underlying smooth structure of the bibundles. In \[section:diffeological bibundles between Lie groupoids\] we discuss other aspects of diffeological Morita equivalence between Lie groupoids. A possible *category of fractions* approach to Morita equivalence of diffeological groupoids is discussed in [@schaaf2020diffeology-groupoids-and-ME Chapter V].
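To make the statement concrete, here is a standard minimal example, sketched without carrying out the routine verifications (it is not taken from the surrounding text). Let $M$ be a non-empty diffeological space, and consider the pair groupoid $M\times M\rightrightarrows M$ together with the trivial groupoid $\lbrace\ast\rbrace\rightrightarrows\lbrace\ast\rbrace$. The bibundle $$\left(M\times M\right){\curvearrowright}M{\curvearrowleft}\lbrace\ast\rbrace, \qquad l_X:={\mathrm{id}}_M, \qquad r_X:M\to\lbrace\ast\rbrace, \qquad (m',m)\cdot m:=m',$$ with the trivial right action, is biprincipal: both moment maps are subductions (for $r_X$ this uses that $M$ is non-empty), and both action maps are diffeomorphisms onto the respective fibred products. By the theorem above this bibundle is therefore weakly invertible, so the pair groupoid of any non-empty diffeological space is Morita equivalent to a point. This is consistent with the invariance of orbit spaces established in the next section, since the orbit space of $M\times M\rightrightarrows M$ is itself a single point.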
Some Morita Invariants {#section:some applications} ====================== In theories of Morita equivalence, there are often interesting properties that are naturally Morita invariant. In this section we discuss some results that generalise several well-known Morita invariants of Lie groupoids to the diffeological setting. These include: invariance of the orbit spaces (\[definition:groupoid orbit space\]), of being *fibrating* (\[definition:fibrating groupoid\]), and of the action categories (\[definition:action category\]). The proofs are taken from [@schaaf2020diffeology-groupoids-and-ME Chapter IV]. Invariance of orbit spaces {#section:invariance orbit spaces} -------------------------- It is a well-known result that if two Lie groupoids ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ are Morita equivalent (in the Lie groupoid sense), then there is a *homeo*morphism between their orbit spaces $G_0/G$ and $H_0/H$ [@crainic2018orbispaces Lemma 2.19]. The following theorem shows that, for diffeological groupoids, we get a genuine *diffeo*morphism. The construction of the underlying function is the same as for the Lie groupoid case, which is sketched in the proof of [@crainic2018orbispaces Lemma 2.19], and which we describe below in detail. \[theorem:morita equivalent groupoids have diffeomorphic orbit spaces (bibundle proof)\] If ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ are two Morita equivalent diffeological groupoids, then there is a diffeomorphism $G_0/G\cong H_0/H$ between their orbit spaces. Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ be the bibundle instantiating the Morita equivalence. Our first task will be to construct a function $\Phi: G_0/G\to H_0/H$ between the orbit spaces. The idea is to lift a point $a\in G_0$ of the base of the groupoid to its $l_X$-fibre, which by right principality is just an $H$-orbit in $X$, and then to project this orbit down to the other base $H_0$ along the right moment map $r_X$. The fact that the bundle is biprincipal ensures that this can be done in a consistent fashion. We are dealing with *four* actions here, so we need to slightly modify our notation to avoid confusion. If $a\in G_0$ is an object in the groupoid $G$, we shall denote its orbit by ${\mathrm{Orb}}_{G_0}(a)$, which, as usual, is just the set of all points $a'\in G_0$ such that there exists an arrow $g:a\to a'$ in $G$. Similarly, for $b\in H_0$ we write ${\mathrm{Orb}}_{H_0}(b)$. On the other hand, we have two actions on $X$, for whose orbits we use the standard notations ${\mathrm{Orb}}_G(x)$ and ${\mathrm{Orb}}_H(x)$, where $x\in X$. Now, start with a point $a\in G_0$, and consider its fibre $l_X^{-1}(a)$ in $X$. Since the bibundle is right subductive, the map $l_X$ is surjective, so this fibre is non-empty and we can find a point $x_a\in l_X^{-1}(a)$. We claim that the expression ${\mathrm{Orb}}_{H_0}\circ r_X(x_a)$ is independent of the choice of the point $x_a$ in the fibre. For that, take another point $x_a'\in l_X^{-1}(a)$. This gives the equation $l_X(x_a)=l_X(x_a')$, and since the bibundle is right pre-principal, we get a unique arrow $h\in H$ such that $x_a'=x_ah$. From the definition of a right groupoid action, this in turn gives the equations $r_X(x_a')= {\mathrm{src}}(h)$ and $r_X(x_a)={\mathrm{trg}}(h)$, which proves the claim.
To summarise, whenever $x_a,x_a'\in l_X^{-1}(a)$ are two points in the same $l_X$-fibre, then we have: $$\label{equation:independent H_0 orbit on point in fibre} {\mathrm{Orb}}_{H_0}\circ r_X(x_a) = {\mathrm{Orb}}_{H_0}\circ r_X(x_a'). \tag{{\color{RadboudRed}$\clubsuit$}}$$ Next we want to show that this expression does not depend on the point $a\in G_0$ itself, but only on its orbit ${\mathrm{Orb}}_{G_0}(a)$. For this, take another point $b\in {\mathrm{Orb}}_{G_0}(a)$, so there exists some arrow $g:a\to b$ in $G$. Pick then $x\in l_X^{-1}(a)$ and $y\in l_X^{-1}(b)$. This means that ${\mathrm{src}}(g)=l_X(x)$ and ${\mathrm{trg}}(g)=l_X(y)$, which means that if we let $g$ act on the point $x$ we get a point $gx\in l_X^{-1}(b)$, in the same $l_X$-fibre as $y$. Then using equation ($\clubsuit$) applied to $gx$ and $y$, and the $G$-invariance of the right moment map $r_X$, we immediately get: $${\mathrm{Orb}}_{H_0}\circ r_X(x) = {\mathrm{Orb}}_{H_0}\circ r_X(gx) = {\mathrm{Orb}}_{H_0}\circ r_X(y).$$ Using this, we can now conclude that there is a well-defined function $$\Phi : G_0/G \longrightarrow H_0/H; \qquad {\mathrm{Orb}}_{G_0}(a) \longmapsto {\mathrm{Orb}}_{H_0}\circ r_X(x_a),$$ that is neither dependent on the point $a$ in the orbit ${\mathrm{Orb}}_{G_0}(a)$, nor on the choice of the point $x_a\in l_X^{-1}(a)$ in the fibre. Note that this function exists by virtue of right subductivity (and the Axiom of Choice), which ensures that the left moment map $l_X$ is a surjection (and for each $a$ there exists an $x_a$). Either by replacing $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ by its opposite bibundle, or by switching the words ‘left’ and ‘right’, the above argument analogously gives a function going the other way: $$\Psi : H_0/H \longrightarrow G_0/G; \qquad {\mathrm{Orb}}_{H_0}(b) \longmapsto {\mathrm{Orb}}_{G_0}\circ l_X(y_b),$$ where now $y_b\in r_X^{-1}(b)$ is some point in the fibre of the right moment map $r_X$. We claim that $\Phi$ and $\Psi$ are mutual inverses. To see this, pick a point $a\in G_0$, a point $x_a\in l_X^{-1}(a)$, and a point $y_{r_X(x_a)}\in r_X^{-1}(r_X(x_a))$. Then we can write $$\Psi\circ \Phi\left( {\mathrm{Orb}}_{G_0}(a) \right) = \Psi\left( {\mathrm{Orb}}_{H_0}(r_X(x_a)) \right) = {\mathrm{Orb}}_{G_0}\left(l_X(y_{r_X(x_a)})\right).$$ We also have, by choice, the equation $r_X(x_a)=r_X(y_{r_X(x_a)})$, so by left pre-principality there exists an arrow $g\in G$ such that $gx_a=y_{r_X(x_a)}$. By definition of a left groupoid action, this then further gives $${\mathrm{src}}(g)=l_X(x_a)=a \qquad\text{and}\qquad {\mathrm{trg}}(g)=l_X(y_{r_X(x_a)}).$$ This proves that the right-hand side of the previous equation is equal to $${\mathrm{Orb}}_{G_0}\left(l_X(y_{r_X(x_a)})\right) = {\mathrm{Orb}}_{G_0}(a),$$ which gives $\Psi\circ\Phi={\mathrm{id}}_{G_0/G}$. Through a similar argument, using right pre-principality, we obtain that $\Phi\circ\Psi={\mathrm{id}}_{H_0/H}$. To finish the proof, it suffices to prove that both $\Phi$ and $\Psi$ are smooth. Again, due to the symmetry of the situation, and since the bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is biprincipal, we shall only prove that $\Phi$ is smooth. The proof for $\Psi$ will follow analogously. Since ${\mathrm{Orb}}_{G_0}$ is a subduction, to prove that $\Phi$ is smooth it suffices by \[lemma:properties of subductions\]*(3)* to prove that $\Phi\circ{\mathrm{Orb}}_{G_0}$ is smooth.
Since the left moment map $l_X$ is a surjection, using the Axiom of Choice we pick a section $\sigma:G_0\to X$, which encodes our earlier notation $\sigma(a)=:x_a$. From the way $\Phi$ is defined, we see that we get a commutative diagram: $$\begin{tikzcd} G_0 \arrow[d, "{\mathrm{Orb}}_{G_0}"'] \arrow[r, "\sigma"] & X \arrow[r, "r_X"] & H_0 \arrow[d, "{\mathrm{Orb}}_{H_0}"] \\ G_0/G \arrow[rr, "\Phi"'] & & H_0/H . \end{tikzcd}$$ We therefore need to show that ${\mathrm{Orb}}_{H_0}\circ r_X\circ \sigma$ is smooth. For this, pick a plot $\alpha:U_\alpha\to G_0$ of the base space. By right subductivity, the left moment map $l_X$ is a subduction, so locally $\alpha|_V=l_X\circ \beta$, where $\beta$ is some plot of $X$. Now, note that, for all $t\in V$, both the points $\beta(t)$ and $\sigma\circ l_X\circ \beta(t)$ are elements of the fibre $l_X^{-1}(l_X\circ\beta(t))$. Therefore, by equation ($\clubsuit$) we get: $${\mathrm{Orb}}_{H_0}\circ r_X\circ \sigma\circ \alpha|_V = {\mathrm{Orb}}_{H_0}\circ r_X\circ \sigma\circ l_X\circ \beta = {\mathrm{Orb}}_{H_0}\circ r_X\circ \beta.$$ The right-hand side of this equation is clearly smooth (and no longer dependent on the choice of section $\sigma$). By the Axiom of Locality for $G_0$, it follows that ${\mathrm{Orb}}_{H_0}\circ r_X\circ \sigma\circ \alpha$ is globally smooth, and since the plot $\alpha$ was arbitrary, this proves that $\Phi\circ{\mathrm{Orb}}_{G_0}$ is smooth. Hence, $\Phi$ is smooth. After an analogous argument that shows $\Psi$ is smooth, the desired diffeomorphism between the orbit spaces follows. Invariance of fibration {#section:invariance of fibration} ----------------------- The theory of diffeological (principal) fibre bundles is shown in [@iglesias2013diffeology Chapter 8] to be fully captured by the following notion: \[definition:fibrating groupoid\] A diffeological groupoid ${{G}\rightrightarrows{G}_0}$ is called *fibrating* (or a *fibration groupoid*) if the *characteristic map* $({\mathrm{trg}},{\mathrm{src}}):G\to G_0\times G_0$ is a subduction. This leads to a theory of diffeological fibre bundles that is able to treat the standard smooth locally trivial (principal) fibre bundles of smooth manifolds, but also bundles that are not (and could not meaningfully be) locally trivial. It is then natural to ask whether this property of diffeological groupoids is invariant under Morita equivalence. The following theorem proves that this is the case: Let ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ be two Morita equivalent diffeological groupoids. Then ${{G}\rightrightarrows{G}_0}$ is fibrating if and only if ${{H}\rightrightarrows{H}_0}$ is fibrating. Because Morita equivalence is an equivalence relation, it suffices to prove that if ${{G}\rightrightarrows{G}_0}$ is fibrating, then so is ${{H}\rightrightarrows{H}_0}$. Denoting the characteristic maps of these groupoids by $\chi_G=\left({\mathrm{trg}}_G,{\mathrm{src}}_G\right)$ and $\chi_H=\left({\mathrm{trg}}_H,{\mathrm{src}}_H\right)$, assume that $G$ is fibrating, so that $\chi_G$ is a subduction. Our goal is to show that $\chi_H$ is also a subduction. To begin with, take an arbitrary plot $\alpha=(\alpha_1,\alpha_2):U_\alpha \to H_0\times H_0$, and fix an element $t\in U_\alpha$. We thus need to find a plot $\Phi:W\to H$, defined on an open neighbourhood $t\in W\subseteq U_\alpha$, such that $\alpha|_W=\chi_H\circ\Phi$.
Morita equivalence yields a biprincipal bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$. To construct the plot $\Phi$, we use almost all of the structure of this bibundle. The right moment map $r_X:X\to H_0$ is a subduction, so for each of the components $\alpha_i$ of $\alpha$ we get a plot $\beta_i:U_i\to X$, defined on an open neighbourhood $t\in U_i\subseteq U_\alpha$, such that $\alpha_i|_{U_i}=r_X\circ\beta_i$. Define $U:=U_1\cap U_2$, which is another open neighbourhood of $t\in U_\alpha$, and introduce the notation $$\beta:=(\beta_1|_U,\beta_2|_U):U\longrightarrow X\times X.$$ Composing with the left moment map $l_X:X\to G_0$, we get $(l_X\times l_X)\circ\beta:U\to G_0\times G_0$. It is here that we use that ${{G}\rightrightarrows{G}_0}$ is fibrating. Because of that, we can find an open neighbourhood $t\in V\subseteq U\subseteq U_\alpha$ and a plot $\Omega:V\to G$ such that $$\label{equation:chi_G Omega} \chi_G\circ\Omega = \left.(l_X\times l_X)\circ\beta\right|_V. \tag{{{\color{RadboudRed}$\spadesuit$}}}$$ This means that ${\mathrm{trg}}_G\circ\Omega = l_X\circ\beta_1|_V$ and ${\mathrm{src}}_G \circ\Omega = l_X\circ \beta_2|_V$. Let ${\varphi_G:X{\otimes}_H \overline{X}\to G}$ be the biequivariant diffeomorphism from \[proposition:biprincipal bibundle is weakly invertible\]. Using the plot $\Omega$ we just obtained, we get another plot $\varphi_G^{-1}\circ\Omega:V\to X{\otimes}_H\overline{X}$. Now, since the canonical projection $\pi_H:X\times_{H_0}^{r_X,r_X}\overline{X}\to X{\otimes}_H\overline{X}$ of the diagonal $H$-action is a subduction, we can find an open neighbourhood $t\in W\subseteq V$ and a plot $\omega:W\to X\times_{H_0}^{r_X,r_X}\overline{X}$ such that $$\label{equation:pi_H omega} \pi_H\circ \omega = \varphi_G^{-1}\circ\Omega|_W. \tag{{\color{RadboudRed}$\clubsuit$}}$$ Note that the plot $\omega$ decomposes into its components $\omega_1,\omega_2:W\to X$, which satisfy $r_X\circ\omega_1=r_X\circ\omega_2$. Using the biequivariance of $\varphi_G$ and the defining relation $L_X\circ\pi_H = l_X\circ {\mathrm{pr}}_1|_{X\times_{H_0}\overline{X}}$ we find: $$l_X\circ\beta_1|_W = {\mathrm{trg}}_G\circ\Omega|_W = L_X\circ\varphi_G^{-1}\circ\Omega|_W = L_X\circ \pi_H\circ\omega = l_X\circ{\mathrm{pr}}_1|_{X\times_{H_0}\overline{X}}\circ\omega = l_X\circ\omega_1,$$ where the first equality follows from equation ($\spadesuit$), and the third one from equation ($\clubsuit$). Similarly, we find $l_X\circ\beta_2|_W = l_X\circ\omega_2$. These two equalities give two well-defined plots, one for each $i\in\lbrace 1,2\rbrace$, given by $$\beta_i|_W{\otimes}\omega_i:=\pi_G\circ\left(\beta_i|_W,\omega_i\right) :W \xrightarrow{\quad (\beta_i|_W,\omega_i)\quad} \overline{X}\times_{G_0}^{l_X,l_X}X \xrightarrow{\quad\pi_G\quad} \overline{X}{\otimes}_G X,$$ where $\pi_G:\overline{X}\times_{G_0}^{l_X,l_X}X \to \overline{X}{\otimes}_G X$ is the canonical projection of the diagonal $G$-action. We can now apply the biequivariant diffeomorphism $\varphi_H:\overline{X}{\otimes}_G X\to H$ from \[proposition:biprincipal bibundle is weakly invertible\] to get two plots in $H$. It is from these two plots that we will create $\Phi$. Here it is absolutely essential that we have constructed the plot $\omega$ such that $r_X\circ\omega_1=r_X\circ\omega_2$, because that means that the sources of these two plots in $H$ will be equal, and hence they can be composed if we first invert one of them component-wise.
To see this, use the biequivariance of $\varphi_H$ to calculate $${\mathrm{src}}_H\circ\varphi_H\circ\left(\beta_i|_W{\otimes}\omega_i\right) = R_X\circ\left(\beta_i|_W{\otimes}\omega_i\right) = r_X\circ{\mathrm{pr}}_2|_{\overline{X}\times_{G_0}X}\circ\left(\beta_i|_W,\omega_i\right) =r_X\circ\omega_i,$$ and similarly: $${\mathrm{trg}}_H\circ\varphi_H\circ\left(\beta_i|_W{\otimes}\omega_i\right) = L_{\overline{X}}\circ\left(\beta_i|_W{\otimes}\omega_i\right) = r_X\circ{\mathrm{pr}}_1|_{\overline{X}\times_{G_0}X}\circ\left(\beta_i|_W,\omega_i\right) = r_X\circ \beta_i|_W = \alpha_i|_W.$$ Of course, if we switch $\beta_i|_W{\otimes}\omega_i$ to $\omega_i{\otimes}\beta_i|_W$, which is defined in the obvious way, then the right-hand sides of the above two equations will switch. So, for every $s\in W$, the expression $\varphi_H\left(\omega_2(s){\otimes}\beta_2(s)\right)$ is an arrow in $H$ from $r_X\circ\beta_2(s)=\alpha_2(s)$ to $r_X\circ\omega_2(s)$, and $\varphi_H\left(\beta_1(s){\otimes}\omega_1(s)\right)$ is an arrow from $r_X\circ\omega_1(s)=r_X\omega_2(s)$ to $r_X\circ\beta_1(s)=\alpha_1(s)$, which can hence be composed to give an arrow from $\alpha_2(s)$ to $\alpha_1(s)$. This is exactly the kind of arrow we want. Therefore, for every $s\in W$, we get a commutative triangle in the groupoid $H$, which defines for us the plot $\Phi:W\to H$: $$\begin{tikzcd}[column sep = tiny] \alpha_2(s) \arrow[rd, "\varphi_H\left(\omega_2(s){\otimes}\beta_2(s) \right)"'] \arrow[rr, "\Phi(s)", dashed] & & \alpha_1(s) \\ & r_X\circ\omega_1(s). \arrow[ru, "\varphi_H\left(\beta_1(s){\otimes}\omega_1(s)\right)"'] \end{tikzcd}$$ The map $\Phi$ is clearly smooth, because inversion and multiplication in $H$ are smooth. Hence we have defined the plot $\Phi$, and by the above diagram it is clear that it satisfies $$\chi_H\circ\Phi = ({\mathrm{trg}}_H\circ\Phi,{\mathrm{src}}_H\circ\Phi)=\alpha|_W.$$ Thus we may at last conclude that $\chi_H$ is a subduction, and hence that ${{H}\rightrightarrows{H}_0}$ is also fibrating. Invariance of representations {#section:invariance of representations} ----------------------------- In the Morita theory of rings, it holds that two rings are Morita equivalent if and only if their categories of modules are equivalent. For groupoids, even discrete ones, this is no longer an “if and only if” proposition, but merely an “only if”. Nevertheless, it is known that the result transfers to Lie groupoids as well [@landsman2001bicategories Theorem 6.6], and here we shall prove that it transfers also to diffeology. \[theorem:morita equivalent groupoids have equivalent action categories\] Suppose that ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ are Morita equivalent diffeological groupoids. Then the action categories ${{\mathbf{Act}}}({{G}\rightrightarrows{G}_0})$ and ${{\mathbf{Act}}}({{H}\rightrightarrows{H}_0})$ are categorically equivalent. If ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ are Morita equivalent, there exists a biprincipal bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}} H$. Recall from \[definition:action category\] the notion of action categories and from \[definition:induced action functor\] that of induced action functors. 
We claim that $$\begin{aligned} X{\otimes}_H - :{{\mathbf{Act}}}({{H}\rightrightarrows{H}_0})&\longrightarrow{{\mathbf{Act}}}({{G}\rightrightarrows{G}_0}), \\ \overline{X}{\otimes}_G-:{{\mathbf{Act}}}({{G}\rightrightarrows{G}_0})&\longrightarrow {{\mathbf{Act}}}({{H}\rightrightarrows{H}_0}) \end{aligned}$$ are mutually inverse functors up to natural isomorphism. To see this, take a left $H$-action $H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}}Y$. Then $$\left(\overline{X}{\otimes}_G-\right)\circ\left(X{\otimes}_H-\right)[H{{{\curvearrowright}\hspace{-5pt}^{l_Y}\hspace{1pt}}} Y] = \left(\overline{X}{\otimes}_G -\right)\left[G{{{\curvearrowright}\hspace{-5pt}^{L_X}\hspace{1pt}}} X{\otimes}_HY\right] = H{{{\curvearrowright}\hspace{-5pt}^{L_{\overline{X}}}\hspace{1pt}}}\left(\overline{X}{\otimes}_G\left(X{\otimes}_HY\right)\right).$$ Therefore, we need to construct a natural $H$-equivariant diffeomorphism $$\mu_Y:\overline{X}{\otimes}_G\left(X{\otimes}_H Y\right)\longrightarrow Y.$$ For this, we collect the biequivariant diffeomorphisms from \[proposition:identity bibundle is weak identity,proposition:associativity of balanced tensor product,proposition:biprincipal bibundle is weakly invertible\]. Let us denote them by $$\begin{aligned} &A_Y:\overline{X}{\otimes}_G\left(X{\otimes}_H Y\right) \longrightarrow \left(\overline{X}{\otimes}_G X\right){\otimes}_HY, \\ &\varphi_H:\overline{X}{\otimes}_G X\longrightarrow H, \\ &M_{Y}:H{\otimes}_H Y\longrightarrow Y, \end{aligned}$$ describing the associativity up to isomorphism, the division map of the bibundle, and the left action $H{\curvearrowright}Y$, respectively. We then define $$\mu_Y:= M_Y\circ\left(\varphi_H{\otimes}{\mathrm{id}}_Y\right)\circ A_Y.$$ Note that $(\varphi_H{\otimes}{\mathrm{id}}_Y)$ is still a biequivariant diffeomorphism. The naturality square of the natural transformation ${\mu:\left(\overline{X}{\otimes}_G-\right)\circ\left(X{\otimes}_H-\right){\Rightarrow}{\mathrm{id}}_{{{\mathbf{Act}}}(H)}}$ then becomes: $$\begin{tikzcd}[column sep =large] \overline{X}{\otimes}_G\left(X{\otimes}_H Y\right) \arrow[r, "\mu_Y"] \arrow[d, "{\mathrm{id}}_{\overline{X}}{\otimes}({\mathrm{id}}_X{\otimes}\varphi)"'] & Y \arrow[d, "\varphi"] \\ \overline{X}{\otimes}_G\left(X{\otimes}_H Z\right) \arrow[r, "\mu_Z"'] & {Z,} \end{tikzcd}$$ where $\varphi:Y\to Z$ is an $H$-equivariant smooth map. It follows from the structure of these maps that the naturality square commutes. The top right corner of the diagram becomes: $$\begin{aligned} \varphi\circ\mu_Y\left(x_1{\otimes}(x_2{\otimes}y)\right) &= \varphi\circ M_Y\circ(\varphi_H{\otimes}{\mathrm{id}}_Y)\circ A_Y\left(x_1{\otimes}(x_2{\otimes}y)\right) \\&= \varphi\circ M_Y\circ(\varphi_H{\otimes}{\mathrm{id}}_Y)\left((x_1{\otimes}x_2){\otimes}y\right) \\&= \varphi\circ M_Y\left(\varphi_H(x_1{\otimes}x_2){\otimes}y\right) \\&= \varphi\left(\varphi_H(x_1{\otimes}x_2)y\right) \\&= \varphi_H(x_1{\otimes}x_2)\varphi(y), \end{aligned}$$ where the very last step follows from $H$-equivariance of $\varphi$.
Following a similar calculation, the bottom left corner evaluates as $$\begin{aligned} \mu_Z\circ\left({\mathrm{id}}_{\overline{X}}{\otimes}({\mathrm{id}}_X{\otimes}\varphi)\right) &= M_Z\circ(\varphi_H{\otimes}{\mathrm{id}}_Z)\circ A_Z\circ\left({\mathrm{id}}_{\overline{X}}{\otimes}({\mathrm{id}}_X{\otimes}\varphi)\right) \\&= M_Z\circ(\varphi_H{\otimes}{\mathrm{id}}_Z)\circ\left(({\mathrm{id}}_{\overline{X}}{\otimes}{\mathrm{id}}_X){\otimes}\varphi\right) \\&= M_Z\circ(\varphi_H{\otimes}\varphi), \end{aligned}$$ which gives exactly the same result as the above expression for the top right corner. This proves that $\mu$ is natural, and since each of its components is an $H$-equivariant diffeomorphism, it follows that $\mu$ is a natural isomorphism. The fact that the composition $\left(X{\otimes}_H -\right)\circ\left(\overline{X}{\otimes}_G-\right)$ is naturally isomorphic to ${\mathrm{id}}_{{{\mathbf{Act}}}(G)}$ follows from an analogous argument. Hence the categories ${{\mathbf{Act}}}({{G}\rightrightarrows{G}_0})$ and ${{\mathbf{Act}}}({{H}\rightrightarrows{H}_0})$ are equivalent, as was to be shown. Discussion and Suggestions for Future Research {#section:closing section} ============================================== Diffeological bibundles between Lie groupoids {#section:diffeological bibundles between Lie groupoids} --------------------------------------------- As we saw in \[example:Lie ME is also diffeological ME\], if two Lie groupoids are *Lie* Morita equivalent (i.e. Morita equivalent in the Lie groupoid sense [@crainic2018orbispaces Definition 2.15]), then they are also *diffeologically* Morita equivalent. This is simply due to the fact that surjective submersions between smooth manifolds are in particular also subductions, and hence a Lie principal groupoid bundle is also diffeologically principal. But what if ${{G}\rightrightarrows{G}_0}$ and ${{H}\rightrightarrows{H}_0}$ are two *Lie* groupoids such that there exists a *diffeological* biprincipal bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ between them? What does that say about the *Lie* Morita equivalence of $G$ and $H$? This still remains an open question (\[question:does inclusion pseudofunctor reflect weak equivalence\]). In this section we discuss some related results, which also pertain to our choice of subductions over *local* subductions for the development of the general theory. A slightly more detailed discussion is in [@schaaf2020diffeology-groupoids-and-ME Section 4.4.3]. In light of \[proposition:local subductions are surjective submersions\], the source and target maps of a Lie groupoid are local subductions (cf. \[proposition:source map is subduction\]), and we can therefore introduce the following class of diffeological groupoids: \[definition:locally subductive groupoids\] We say a diffeological groupoid ${{G}\rightrightarrows{G}_0}$ is *locally subductive* if its source and target maps are local subductions[^8]. Clearly, every Lie groupoid is a locally subductive diffeological groupoid. Looking at the structure of the proofs in \[section:diffeological groupoid actions and bundles,section:diffeological bibundles\], it appears as if they can be generalised to a setting where we replace all subductions by local subductions.
In doing so, we would get a theory of locally subductive groupoids, locally subductive groupoid bundles, and the corresponding notions for bibundles and Morita equivalence, which, as it appears, would follow the same story as we have so far presented. An upside to that framework would be that it directly returns the original theory of Morita equivalence for Lie groupoids, once we restrict our diffeological spaces to smooth manifolds. In this section we shall prove that, even in the slightly more general setting of \[section:diffeological bibundles\], the diffeological bibundle theory reduces to the Lie groupoid theory in the correct way. We do this by proving that the moment maps of a biprincipal bibundle between locally subductive groupoids have to be local subductions as well (\[lemma:biprincipal bib between loc subd groupoids is locally bisubductive\]). In hindsight, this provides more justification for our choice of starting with subductions instead of local subductions. One consequence of this choice is that it allows for groupoid bundles that are truly *pseudo*-bundles, in the sense of [@pervova2016diffeological]. The notion of pseudo-bundles seems to be the correct notion in the setting of diffeology to generalise all bundle constructions on manifolds, at least if we want to treat (internal) tangent bundles as such (see [@christensen2016tangent]). There exists diffeological spaces whose internal tangent bundle is not a local subduction [@christensen2016tangent Example 3.17]. If we had defined principality of a groupoid bundle to include *local* subductiveness, these examples would not be treatable by our theory of Morita equivalence. \[lemma:projection onto balanced tensor product is local subduction\] Let $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ be a diffeological bibundle, where ${{H}\rightrightarrows{H}_0}$ is a locally subductive groupoid. Then the canonical projection map $\pi_H:X\times_{H_0}^{r_X,r_X}\overline{X}\to X{\otimes}_H\overline{X}$ is a local subduction. Let $\alpha:(U_\alpha,0)\to (X{\otimes}_H\overline{X},x_1{\otimes}x_2)$ be a pointed plot of the balanced tensor product. Since $\pi_H$ is already a subduction, we can find a plot $\beta:V\to X\times_{H_0}\overline{X}$, defined on an open neighbourhood $0\in V\subseteq U_\alpha$ of the origin, such that $\alpha|_V=\pi_H\circ \beta$. This plot decomposes into two plots $\beta_1,\beta_2\in{\mathcal{D}}_X$ on $X$, satisfying $r_X\circ\beta_1=r_X\circ\beta_2$. We use the notation $\alpha|_V=\beta_1{\otimes}\beta_2$. In particular, we get an equality $x_1{\otimes}x_2=\beta_1(0){\otimes}\beta_2(0)$ inside the balanced tensor product, which means that we can find an arrow $h\in H$ such that $\beta_i(0)=x_ih$. The target must be ${\mathrm{trg}}(h)=r_X(x_1)=r_X(x_2)$. This arrow allows us to write a pointed plot $r_X\circ\beta_i:(V,0)\to (H_0,{\mathrm{trg}}(h^{-1}))$, so that now we can use that ${{H}\rightrightarrows{H}_0}$ is locally subductive. Since the target map of $H$ is a local subduction, we can find a pointed plot $\Omega:(W,0)\to (H,h^{-1})$ such that $r_X\circ\beta_i|_W = {\mathrm{trg}}_H\circ\Omega$. This relation means that, for every $t\in W$, we have a well-defined action $\beta_i(t)\cdot\Omega(t)\in X$. 
Hence we get a pointed plot $$\Psi:(W,0)\longrightarrow (X\times_{H_0}^{r_X,r_X}\overline{X},(x_1,x_2)); \qquad t\longmapsto\left(\beta_1(t)\Omega(t),\beta_2(t)\Omega(t)\right).$$ It then follows by the definition of the balanced tensor product that $$\pi_H\circ \Psi(t) = \beta_1|_W(t)\Omega(t){\otimes}\beta_2|_W(t)\Omega(t) = \beta_1|_W(t){\otimes}\beta_2|_W(t) =\alpha|_W(t),$$ proving that $\pi_H$ is a local subduction. \[lemma:biprincipal bib between loc subd groupoids is locally bisubductive\] If $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is a biprincipal bibundle between locally subductive groupoids, then the moment maps $l_X$ and $r_X$ are local subductions as well. If the bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$ is biprincipal, we get two biequivariant diffeomorphisms $\varphi_G:X{\otimes}_H\overline{X}\to G$ and $\varphi_H:\overline{X}{\otimes}_G X\to H$ (\[proposition:biprincipal bibundle is weakly invertible\]). It follows that the local subductivity of the source and target maps of $G$ and $H$ transfers to the four moment maps of the balanced tensor products. For example, the left moment map $L_X:X{\otimes}_H\overline{X}\to G_0$ can be written as $L_X= {\mathrm{trg}}_G\circ\varphi_G$, where the right hand side is clearly a local subduction. We know as well that $L_X$ fits into a commutative square with the original moment map $l_X$: $$\begin{tikzcd} X\times_{H_0}^{r_X,r_X}\overline{X}\arrow[r,"\pi_H"]\arrow[d,"{\mathrm{pr}}_1|_{X\times_{H_0}\overline{X}}"'] & X{\otimes}_H \overline{X}\arrow[d,"L_X"]\\ X\arrow[r,"l_X"'] & G_0. \end{tikzcd}$$ Since local subductions compose, and since by \[lemma:projection onto balanced tensor product is local subduction\] the projection $\pi_H$ is a local subduction, we find that the composite $L_X\circ\pi_H$ along the upper right corner must be a local subduction. Hence the composition $l_X\circ {\mathrm{pr}}_1|_{X\times_{H_0}\overline{X}}$ is a local subduction, which by an argument that is analogous to the proof of \[lemma:properties of subductions\]*(2)* gives the local subductiveness of $l_X$. That the right moment map $r_X$ is a local subduction follows from a similar argument. The lemma suggests that, if we refine our notion of principality to something we might call *pure-principality*, by passing from subductions to local subductions, then biprincipality between locally subductive groupoids means the same thing as this new notion of pure-principality. Let us make this precise. Two diffeological groupoids are called *purely Morita equivalent* if there exists a biprincipal bibundle between them, such that the two underlying moment maps are local subductions. Clearly, pure Morita equivalence implies ordinary Morita equivalence in the sense of \[definition:Morita equivalence and biprincipality\], since local subductions are, in particular, subductions. The question is whether the converse implication holds as well. We have a partial answer, since \[lemma:biprincipal bib between loc subd groupoids is locally bisubductive\] can now be restated as follows: \[proposition:pure-ME is the same as ME for locally subductive groupoids\] Two locally subductive groupoids are Morita equivalent if and only if they are purely Morita equivalent. Especially in light of the existence of subductions that are not local subductions (see e.g.
[@iglesias2013diffeology Exercise 61, p.60]), and the fact that the proof of \[lemma:biprincipal bib between loc subd groupoids is locally bisubductive\] relies so heavily on the assumption that the groupoids are locally subductive, it seems that the ordinary diffeological Morita equivalence of \[definition:Morita equivalence and biprincipality\] is not equivalent to pure-Morita equivalence in general. We do not, however, know of an explicit counter-example. This discussion leaves us an open question: \[question:does inclusion pseudofunctor reflect weak equivalence\] Does diffeological Morita equivalence reduce to Lie Morita equivalence on Lie groupoids? That is to ask, if two Lie groupoids are diffeologically Morita equivalent, are they also Lie Morita equivalent? If two Lie groupoids $G$ and $H$ are diffeologically Morita equivalent, then there exists a diffeological biprincipal bibundle $G{{{\curvearrowright}\hspace{-5pt}^{l_X}\hspace{1pt}}}X{{\hspace{-1pt}~^{r_X}\hspace{-5pt}{\curvearrowleft}}}H$, where $X$ is a diffeological space. A positive answer to \[question:does inclusion pseudofunctor reflect weak equivalence\] could consist of a proof that $X$ is in fact a smooth manifold. Since $G$ and $H$ are both manifolds, it follows that $X{\otimes}_H\overline{X}$ and $\overline{X}{\otimes}_G X$ are also manifolds. We do not know if this is sufficient to imply that $X$ itself has to be a manifold. One suggestion is to use [@iglesias2013diffeology Article 4.6], which gives a characterisation for when a quotient of a diffeological space by an equivalence relation is a smooth manifold. Since the balanced tensor products are quotients of diffeological spaces, one may try to use this result to obtain a special family of plots for their underlying fibred products. This could potentially be used to define an atlas on $X$. Directions for future research ------------------------------ We list here some possible directions for future research. These are also proposed at the end of [@schaaf2020diffeology-groupoids-and-ME Section 1.2.3]. - Finding an answer to the open \[question:does inclusion pseudofunctor reflect weak equivalence\] about *diffeological* Morita equivalence between *Lie* groupoids. - The construction of a theory of bibundles for a more general framework of generalised smooth spaces. One possibility is to look at the *generalised spaces* of [@baez2011convenient Definition 4.11] (subsuming diffeology), or even to look at arbitrary classes of sheaves. What is the relation between our theory of Morita equivalence and the discussion in [@meyer2015groupoids]? A theory of principal bibundles seems to exist in a general setting for groupoids in $\infty$-toposes: [@nlab2018bibundle]. - What is the precise relation between differentiable stacks and diffeological groupoids (cf. [@watts2019diffeological])? Using our notion of Morita equivalence, what types of objects are *“diffeological stacks”* (i.e., Morita equivalence classes of diffeological groupoids)? - Can the *Hausdorff Morita equivalence* for holonomy groupoids of singular foliations introduced in [@garmendia2019hausdorff] be understood as a Morita equivalence between diffeological groupoids? - Can the bridge between diffeology and noncommutative geometry that is being built in [@bertozzini2016spectral; @iglesias2018noncommutative; @androulidakis2019diffeological; @iglesias2020quasifolds] be strengthened by our theory of Morita equivalence? 
Morita equivalence of Lie groupoids is already an important concept in relation to noncommutative geometry, especially for the theory of groupoid [C$^\ast$]{}-algebras. Can this link be extended to the diffeological setting, possibly through a theory of groupoid [C$^\ast$]{}-algebras for (a large class of) diffeological groupoids? If such a theory exists, what is the relation between Morita equivalence of diffeological groupoids and the Morita equivalence of their groupoid [C$^\ast$]{}-algebras? Is Morita equivalence preserved just like in the Lie case? [^1]: This is essentially due to the fact that the subductions are the *strong epimorphisms* in the category of diffeological spaces [@baez2011convenient Proposition 5.10]. [^2]: The etymology of the word is explained in the afterword to [@iglesias2013diffeology]. Souriau first used the term *“différentiel”*, as in ‘differential’ (from the Latin *differentia*, “difference”). Through a suggestion by Van Est, the name was later changed to *“difféologie,”* as in *“topologie”* (‘topology’, from the Ancient Greek *tópos*, “place,” and *-(o)logy*, “study of”). Hence the term: diffeology. [^3]: This shows that there are meaningful notions of smooth space that do not rely on the regnant philosophy of “smooth space = topological space + extra structure.” [^4]: The notational resemblance to an inner-product is not accidental. The division map plays a very similar rôle to the inner product of a Hilbert [C$^\ast$]{}-module. For more on this analogy, see [@blohmann2008stacky Section 3]. [^5]: Note: [@delHoyo2012Lie Section 4.6] defines this differently, where “\[a\] bundle is left (resp. right) principal if only the right (resp. left) underlying bundle is so.” We suspect this may be a typo, since it apparently conflicts with their use of terminology in the proof of [@delHoyo2012Lie Theorem 4.6.3]. We stick to the terminology defined above, where *left* principality pertains to the *left* underlying bundle. [^6]: The prefixes *bi-* and *pre-* commute: “bi-(pre-principal) = pre-(biprincipal)”. [^7]: The most straightforward way to obtain a (2-)category of diffeological groupoids is to consider the *smooth functors* and *smooth natural transformations*. We will not be studying this category in the current paper. [^8]: It would be tempting to call such groupoids *“diffeological Lie groupoids,”* but this would conflict with earlier established terminology of so-called *diffeological Lie groups* in [@iglesias2013diffeology Article 7.1] and [@leslie2003diffeological; @magnot2018group].
--- abstract: 'We study the production of a forward $J/\psi$ meson and a backward jet with a large rapidity separation at the LHC using the BFKL formalism. We compare the predictions given by the Non Relativistic QCD (NRQCD) approach to charmonium production and by the Color Evaporation Model. In NRQCD, we find that the $^3 S_1^{\, 8}$ part of the onium wavefunction completely dominates the process. NRQCD and the color evaporation model give similar results, although a discrepancy seems to appear as the values of the transverse momenta of the charmonium and of the jet decrease.' address: - '$^1$ Laboratoire de Physique Théorique, CNRS, Université Paris Sud, Université Paris Saclay, 91405 Orsay, France' - '$^2$ Department of Physics, University of Jyväskylä, P.O. Box 35, 40014 University of Jyväskylä, Finland' - '$^3$ Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki, Finland' - '$^4$ National Centre for Nuclear Research (NCBJ), Warsaw, Poland' - '$^5$ Centre de Physique Théorique, Ecole Polytechnique, CNRS, Université Paris Saclay, F91128 Palaiseau, France' - '$^6$ UPMC Université Paris 6, Faculté de physique, 4 place Jussieu, 75252 Paris Cedex 05, France' author: - 'R Boussarie$^1$, B Ducloué$^{2,3}$, L Szymanowski$^{1,4,5}$ and S Wallon$^{1,6}$' title: 'Production of a forward $J/\psi$ and a backward jet at the LHC' --- Introduction ============ The high energy behaviour of QCD in the perturbative Regge limit is among the important longstanding theoretical questions in particle physics. QCD dynamics in such a limit are usually described using the BFKL formalism [@Fadin:1975cb; @Kuraev:1976ge; @Kuraev:1977fs; @Balitsky:1978ic], which relies on $k_t$-factorization [@Cheng:1970ef; @FL; @GFL; @Catani:1990xk; @Catani:1990eg; @Collins:1991ty; @Levin:1991ry]. Many processes have been proposed as a way to probe the BFKL resummation effects which result from these dynamics. One of the most promising ones is the production of two forward jets with a large interval of rapidity, as proposed by Mueller and Navelet [@Mueller:1986ey]. Recent $k_t$-factorization studies of Mueller-Navelet jets [@Colferai:2010wu; @Ducloue:2013hia; @Ducloue:2013bva; @Ducloue:2014koa] were successful in describing such events at the LHC [@CMS-PAS-FSQ-12-002].\ We propose to apply a similar formalism to study the production of a forward $J/\psi$ meson and a backward jet with a rapidity gap that is large enough to probe the BFKL dynamics but small enough for the meson to be tagged at LHC experiments such as ATLAS or CMS. Although $J/\psi$ mesons were first observed more than 40 years ago, the theoretical mechanism for their production is still to be fully understood and the validity of some models remains a subject of discussion (for recent reviews see for example [@Brambilla:2010cs; @Bodwin:2013nua]). In addition, most predictions for charmonium production rely on collinear factorization, in which one considers the interaction of two on-shell partons emitted by the incoming hadrons, to produce a charmonium accompanied by a fixed number of partons. On the contrary, in this work the $J/\psi$ meson and the tagged jet are produced by the interaction of two collinear partons, but with the resummation of any number of accompanying unobserved partons, as usual in the $k_t$-factorization approach.\ Here we will compare two different approaches for the description of charmonium production.
First we will use the NRQCD formalism [@Bodwin:1994jh], in which the charmonium wavefunction is expanded as a series in powers of the relative velocity of its constituents. Next we will apply the Color Evaporation Model (CEM), which relies on the local-duality hypothesis [@Fritzsch:1977ay; @Halzen:1977rs]. Finally we will show numerical estimates of the cross section obtained in both approaches. Further details will be provided elsewhere [@us]. The scattering cross section in $k_t$-factorization =================================================== ![The $k_t$-factorized amplitude for the production of a forward $J/\psi$ meson and a backward jet.[]{data-label="Fig:kt"}](JPsiCrossSectionOctet.eps){width="14pc"} Within the $k_t$-factorization approach for inclusive processes, one writes the cross section as the convolution in transverse momenta of $t$-channel gluons of the impact factor $\Phi_1$ for $J/\psi$ meson production, the impact factor $\Phi_2$ for the production of the backward jet and the BFKL Green’s function $\mathcal{G}$, as illustrated in Fig. \[Fig:kt\]. Each impact factor itself contains the convolution, in the longitudinal momentum fraction of a parton from the incoming hadron, of a parton distribution function (PDF) with the vertex for the fusion of this parton and a $t$-channel gluon into a $J/\psi$ or a jet. Depending on the quantum numbers of the $c \bar{c}$ pair from which the charmonium will be produced, the upper impact factor may take into account the production of a real gluon. In that case, since this gluon will not be tagged, its contribution is integrated out. Thus, introducing the azimuthal angles $(\phi_{J/\psi},\phi_{\rm jet})$, the rapidities $(y_{J/\psi},y_{\rm jet})$ and the transverse momenta $(\mathbf{k}_{J/\psi},\mathbf{k}_{\rm jet})$, one can write the differential cross section as follows: $$\begin{aligned} \frac{\mathrm{d}\sigma}{\mathrm{d}|\mathbf{k}_{J/\psi}|\mathrm{d}|\mathbf{k}_{\rm jet}|\mathrm{d}y_{J/\psi}\mathrm{d}y_{\rm jet}} = \int \! \mathrm{d}\phi_{J/\psi} \int \! \mathrm{d}\phi_{\rm jet} \int \! \mathrm{d}^2\mathbf{k}_1 \mathrm{d}^2\mathbf{k}_2 \, \mathcal{G} ( \mathbf{k}_1, \, \mathbf{k}_2, \, \hat{s} ) \\ \nonumber \Phi_1 ( \mathbf{k}_{J/\psi}, \, x_{J/\psi}, \, -\mathbf{k}_1 ) \, \, \Phi_2 ( \mathbf{k}_{\rm jet}, \, x_{\rm jet}, \, \mathbf{k}_2 ). \end{aligned}$$ Charmonium production in the Non-Relativistic QCD formalism =========================================================== The NRQCD formalism is based on the static approximation. Basically, one postulates that charmonium production can be factorized into two parts. First, the production of an on-shell $c\bar{c}$ pair is computed using the usual Feynman diagram perturbative methods. Then the binding of the pair into a charmonium state is encoded in a non-perturbative quarkonium wavefunction. This wavefunction is expanded in terms of the relative velocity $v \sim \frac{1}{\log M}$ of the quarkonium’s constituents. In the case of an $S$-state charmonium $J/\psi$ with zero orbital angular momentum one expands it as follows: $$\left| \Psi \right\rangle = O(1) \left| Q \bar{Q} \left[^3S_1^{(1)} \right] \right\rangle + O(v) \left| Q \bar{Q} \left[^3S_1^{(8)} \right] g \right\rangle + O(v^2).$$ The first term in this expansion corresponds to the production of a quarkonium from a $c \bar{c}$ pair in a color singlet $S^{(1)}$ state. Due to charge parity conservation, the emission of an additional gluon must be taken into account in the hard part.
However, in the second term this additional gluon is included in the wavefunction, so it does not appear in the hard part, which then contains only the production of a $c \bar{c}$ pair in a color octet $S^{(8)}$ state. In the inclusive process studied here, and to first order in $v$, both contributions should be included in the cross section. The color singlet contribution ------------------------------ In this case, the hard part consists of six Feynman diagrams, of which two are illustrated in Fig. \[Fig:CSM\], computed using the color singlet $c \bar{c}$ to $J/\psi$ transition vertex obtained from the NRQCD expansion $$v_{\alpha}^i(q_2)\, \bar{u}_{\beta}^j(q_1) \rightarrow \frac{\delta^{ij}}{4N_c} \left( \frac{\langle \mathcal{O}_1 \rangle_{J/\psi}}{m} \right)^{\frac{1}{2}} \left[ \hat{\varepsilon}_{J/\psi}^* \left( \hat{k}_{J/\psi} +M \right) \right]_{\alpha, \, \beta}\,.$$ In this equation, $i$ and $j$ are color indices, $\alpha$ and $\beta$ are spinor indices, while $\varepsilon_{J/\psi}$ and $k_{J/\psi}$ are respectively the $J/\psi$ polarization vector and momentum. The $\frac{1}{4N_c}$ factor comes from the projection on spinor indices and on the color singlet. We denote as $m$ the charm quark mass and $M$ the mass of the meson. In the lowest orders in NRQCD one can assume that $M=2m$. One also assumes that the quark and the antiquark carry the same momentum $q$, so that $k_{J/\psi}=2q$, with $q^2=m^2$. The operator $\mathcal{O}_1$ arises from the non-relativistic Hamiltonian, and its vacuum expectation value can be fixed by a fit to data. Indeed, it appears for example in the $J/\psi \rightarrow \mu^+ \mu^-$ decay rate. ![Two examples out of the six diagrams contributing to $J/\psi$ production from a $c\bar{c}$ pair in the color singlet state.[]{data-label="Fig:CSM"}](JPsiVertex1.eps){width="14cm"} The color octet contribution ---------------------------- The computation of the hard part in the color octet case [@Cho:1995vh; @Cho:1995ce] is done in a similar way. It consists of three Feynman diagrams, with two examples shown in Fig. \[Fig:COM\]. We use the color octet $c \bar{c}$ to $J/\psi$ transition vertex $$\left[ v^i_{\alpha}(q_2) \, \bar{u}^j_{\beta}(q_1) \right]^a \rightarrow \frac{t^a_{ij}}{4N_c} \left( \frac{\langle \mathcal{O}_8 \rangle_{J/\psi}}{m} \right)^{\frac{1}{2}} \left[ \hat{\varepsilon}_{J/\psi}^* \left( \hat{k}_{J/\psi} +M \right) \right]_{\alpha, \, \beta}\,,$$ where the vacuum expectation value of $\mathcal{O}_8$ needs to be determined using experimental data. ![Two examples out of the three diagrams contributing to $J/\psi$ production from a $c\bar{c}$ pair in the color octet state.[]{data-label="Fig:COM"}](JPsiVertexOctetTripleGluon.eps){width="14cm"} The color evaporation model =========================== While the NRQCD formalism relies on a postulated factorization, the CEM relies on the so-called local-duality hypothesis. One assumes that a heavy quark pair $Q \bar{Q}$, with an invariant mass below twice the mass of the lightest meson containing a single heavy quark, will produce a bound $Q \bar{Q}$ state in $\frac{1}{9}$ of the cases, independently of its color. The $\frac{1}{9} = \frac{1}{1+ \left( N_c^2-1 \right) }$ factor accounts for the probability for the quark pair to eventually form a colorless state after a series of randomized soft interactions between its production and its confinement.
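For concreteness (this is our own restatement of the counting just quoted, not an additional model assumption): the $c\bar{c}$ pair spans $N_c^2$ color states, exactly one of which is the singlet, so if the soft exchanges fully randomize the color and all states are equally likely,

$$\mathbf{3}\otimes\bar{\mathbf{3}} = \mathbf{1}\oplus\mathbf{8}\,, \qquad P(\mathrm{colorless}) = \frac{1}{1+\left(N_c^2-1\right)} = \frac{1}{N_c^2} = \frac{1}{9} \quad (N_c=3)\,.$$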
In the case of a charm quark, the upper limit for the invariant mass corresponds to the threshold $2\,m_D$ for the production of a pair of $D$ mesons. The resulting bound state will correspond to any possible heavy quarkonium. One assumes that the distribution among them is universal.\ In other words, the cross section for the production of a $J/\psi$ meson will be a fraction $F_{J/\psi}$ of the cross section for the production of a $c \bar{c}$ pair with an invariant mass $M$ between $2m_c$ and $2m_D$, summed over spins and colors: $$\sigma_{J/\psi} = F_{J/\psi} \int_{4m_c^{\, 2}}^{4m_D^{\, 2}} dM^2 \frac{d\sigma_{c\bar{c}}}{dM^2}\,,$$ where $F_{J/\psi}$ is assumed to be process-independent and needs to be fitted to data. The diagrams to be computed are similar to the color octet case. Let us, however, emphasize that the quark and the antiquark no longer carry the same momentum, as required to cover the whole range in allowed invariant masses. This is illustrated in Fig. \[Fig:CEM\]. ![Two examples out of the three diagrams contributing to $J/\psi$ production in the color evaporation model.[]{data-label="Fig:CEM"}](JPsiCEM2.eps){width="14cm"} Numerical results ================= ![Differential cross section as a function of $Y$ obtained in NRQCD and in the color evaporation model for three values of $p_T\equiv|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|$.](pT1_10_pT2_10.eps "fig:"){width="7cm"} $|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|=10$ GeV ![Differential cross section as a function of $Y$ obtained in NRQCD and in the color evaporation model for three values of $p_T\equiv|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|$.](pT1_20_pT2_20.eps "fig:"){width="7cm"} $|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|=20$ GeV ![Differential cross section as a function of $Y$ obtained in NRQCD and in the color evaporation model for three values of $p_T\equiv|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|$.](pT1_30_pT2_30.eps "fig:"){width="7cm"} $|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|=30$ GeV \[Fig:Num\] We can now combine the leading order charmonium production vertex obtained above with the BFKL Green’s function and the jet vertex. Our implementation is very similar to that of Ref. [@Ducloue:2013bva]: in particular, we use the next-to-leading order jet vertex and the BFKL Green’s function at next-to-leading logarithmic accuracy, and we use the same scale setting. We note that to perform a complete next-to-leading order study of this process, one would need to compute the NLO corrections to the charmonium production vertex, which could be sizable. In Fig. \[Fig:Num\] we show our results for the cross section as a function of the rapidity separation between the jet and the $J/\psi$, $Y \equiv y_{J/\psi}-y_{\rm jet}$. We use the rapidity cuts $0<y_{J/\psi}<2.5$ and $-4.5<y_{\rm jet}<0$, which are similar to the acceptances for $J/\psi$ and jet tagging at ATLAS and CMS, for example. Here we fix $|\mathbf{k}_{J/\psi}|=|\mathbf{k}_{\rm{jet}}|\equiv p_T$ and we show results for $p_T=10$, 20 and 30 GeV. For the NRQCD calculation we use the same values for $\langle \mathcal{O}_1 \rangle$ and $\langle \mathcal{O}_8 \rangle$ as in Ref. [@Hagler:2000eu], where they were determined by comparing a $k_t$-factorization calculation with experimental data. The value of the CEM parameter $F_{J/\psi}$ extracted from data depends on several details of the calculation, such as the PDF parametrization used. In Ref. [@Bedjidian:2004gd], values between 0.0144 and 0.0248 are quoted.
Here we use a value of 0.02, which lies approximately in the middle of this interval. We observe from Fig. \[Fig:Num\] that in the NRQCD formalism the color singlet contribution is almost negligible compared to the color octet contribution. The cross section in the color evaporation model is of the same order of magnitude as in the NRQCD case, but the two calculations seem to have different behaviours with the kinematics: the decrease of the cross section with increasing $Y$ is slightly more pronounced in the NRQCD approach, while the CEM calculation shows a stronger variation with $p_T$. We thank E. M. Baldin, A. V. Grabovsky and J.-P. Lansberg for discussions. B. Ducloué acknowledges support from the Academy of Finland, Project No. 273464. This work was partially supported by the PEPS-PTI PHENODIFF, the PRC0731 DIFF-QCD, the Polish Grant NCN No. DEC-2011/01/B/ST2/03915, the ANR PARTONS (ANR-12-MONU-0008-01), the COPIN-IN2P3 Agreement and the Theorie-LHC France initiative. This work was done using computing resources from CSC – IT Center for Science in Espoo, Finland. References {#references .unnumbered} ========== [10]{} V. S. Fadin, E. Kuraev, and L. Lipatov, Phys. Lett. [**B60**]{}, 50 (1975). E. A. Kuraev, L. N. Lipatov, and V. S. Fadin, Sov. Phys. JETP [**44**]{}, 443 (1976). E. Kuraev, L. Lipatov, and V. S. Fadin, Sov. Phys. JETP [**45**]{}, 199 (1977). I. Balitsky and L. Lipatov, Sov. J. Nucl. Phys. [**28**]{}, 822 (1978). H. Cheng and T. T. Wu, Phys. Rev. [**D1**]{}, 3414 (1970). G. Frolov and L. Lipatov, Sov. J. Nucl. Phys. [**13**]{}, 333 (1971). V. Gribov, G. Frolov, and L. Lipatov, Yad. Fiz. [**12**]{}, 994 (1970). S. Catani, M. Ciafaloni, and F. Hautmann, Phys. Lett. [**B242**]{}, 97 (1990). S. Catani, M. Ciafaloni, and F. Hautmann, Nucl. Phys. [**B366**]{}, 135 (1991). J. C. Collins and R. K. Ellis, Nucl. Phys. [**B360**]{}, 3 (1991). E. M. Levin, M. G. Ryskin, [Yu]{}. M. Shabelski, and A. G. Shuvaev, Sov. J. Nucl. Phys. [**53**]{}, 657 (1991). A. H. Mueller and H. Navelet, Nucl. Phys. [**B282**]{}, 727 (1987). D. Colferai, F. Schwennsen, L. Szymanowski, and S. Wallon, JHEP [**1012**]{}, 026 (2010), 1002.1365. B. Ducloué, L. Szymanowski, and S. Wallon, JHEP [**1305**]{}, 096 (2013), 1302.7012. B. Ducloué, L. Szymanowski, and S. Wallon, Phys. Rev. Lett. [**112**]{}, 082003 (2014), 1309.3229. B. Ducloué, L. Szymanowski, and S. Wallon, Phys. Lett. [**B738**]{}, 311 (2014), 1407.6593. CMS, S. Chatrchyan [*et al.*]{}, Report No. FSQ-12-002, 2013 (unpublished). N. Brambilla [*et al.*]{}, Eur. Phys. J. [**C71**]{}, 1534 (2011), 1010.5827. G. T. Bodwin [*et al.*]{}, in [*[Community Summer Study 2013: Snowmass on the Mississippi (CSS2013) Minneapolis, MN, USA, July 29-August 6, 2013]{}*]{}, 2013, 1307.7425. G. T. Bodwin, E. Braaten, and G. P. Lepage, Phys. Rev. [**D51**]{}, 1125 (1995), hep-ph/9407339. H. Fritzsch, Phys. Lett. [**B67**]{}, 217 (1977). F. Halzen, Phys. Lett. [**B69**]{}, 105 (1977). R. Boussarie, B. Ducloué, L. Szymanowski, and S. Wallon, in preparation. P. L. Cho and A. K. Leibovich, Phys. Rev. [**D53**]{}, 150 (1996), hep-ph/9505329. P. L. Cho and A. K. Leibovich, Phys. Rev. [**D53**]{}, 6203 (1996), hep-ph/9511315. P. Hagler, R. Kirschner, A. Schafer, L. Szymanowski, and O. V. Teryaev, Phys. Rev. [**D63**]{}, 077501 (2001), hep-ph/0008316. M. Bedjidian [*et al.*]{}, 2004, hep-ph/0311048.
--- abstract: 'We have recently shown that it is possible to deal with collections of indistinguishable elementary particles (in the context of quantum mechanics) in a set-theoretical framework by using hidden variables, in a sense. In the present paper we use such a formalism as a model for quasi-set theory. Quasi-set theory, based on Zermelo-Fraenkel set theory, was developed for dealing with collections of indistinguishable but, in a sense, not identical objects.' author: - | Adonai Sant’Anna\ Department of Mathematics - Federal University at Paraná\ P.O. Box 19081, Curitiba, PR, 81531-990, Brazil date: title: 'Hidden variables, quasi-sets, and elementary particles' --- \[section\] \[section\] \[section\] Introduction ============ We begin by considering that it is necessary to settle some philosophical terms in order to avoid confusion. When we say that $a$ and $b$ are [*identicals*]{}, we mean that they are the very [*same*]{} individual, that is, there are no ‘two’ individuals at all, but only one which can be named indifferently by either $a$ or $b$. By [*indistinguishability*]{} we simply mean agreement with respect to attributes. We recognize that this is not a rigorous definition. Nevertheless, such an intuition is better clarified in the next section. In physics, elementary particles that share the same set of state-independent (intrinsic) properties are usually referred to as [*indistinguishable*]{}. Although ‘classical particles’ can share all their intrinsic properties, there is a sense in saying that they ‘have’ some kind of [*quid*]{} which makes them individuals, since we are able to follow the trajectories of classical particles, at least in principle. That allows us to identify them. In quantum physics that is not possible, i.e., it is not possible, [*a priori*]{}, to keep track of individual particles in order to distinguish among them when they share the same intrinsic properties. In other words, it is not possible to label quantum particles. The problems regarding individuality of quantum particles have been discussed in recent literature by several authors. A few of them are [@daCosta-94] [@DallaChiara-93] [@Krause-95b] [@Krause-98] [@Redhead-91] [@Sant'Anna-97] [@vanFraassen-91]. Many intricate puzzles on the logical and philosophical foundations of quantum theory have been raised by these questions. For instance, there is the possibility that the collections of such entities may not be considered as sets in the usual sense. Yu. Manin [@Manin-76] proposed the search for axioms which should allow one to deal with collections of indistinguishable elementary particles. Other authors [@DallaChiara-93] [@Krause-92] [@Krause-95b] have also considered that standard set theories are not adequate to cope with some questions regarding microphysical phenomena. These authors have emphasized that the ontology of microphysics apparently does not reduce to that of usual sets, due to the fact that [*sets*]{} are collections of distinct objects. Quasi-set theory, based on Zermelo-Fraenkel set theory, was developed for dealing with collections of indistinguishable but, in a sense, not identical objects [@Krause-92]. Hence, quasi-set theory provides a mathematical background for dealing with collections of indistinguishable elementary particles, as has been shown in [@Krause-98]. In that paper, it has been shown how to obtain the quantum statistics within the scope of this non-standard approach.
Nevertheless, it has been recently proposed that standard set theory is strong enough to deal with collections of [*physically*]{} indistinguishable quantum particles [@Sant'Anna-97] [@Sant'Anna-9*], if we use some sort of hidden variable formalism. In the present paper we establish a connection between our hidden variable approach and quasi-set theory. Section 2 presents our hidden variable picture. Section 3 presents a very brief introduction to quasi-set theory and section 4 shows how to use the hidden variable formalism as a standard model for quasi-set theory. Finally, in section 5, we discuss some related lines of work. The Hidden Variable Formalism ============================= Here we intend to show that it is possible to distinguish, at least in principle, among particles that are ‘[*physically*]{} indistinguishable’, where by ‘physically indistinguishable’ particles we mean, roughly speaking, those particles which share the same set of measurement values for their intrinsic properties.[^1] In a previous work [@Sant'Anna-97] we assumed that ‘[*physically*]{} indistinguishable particles’ are those particles which have the same set of measurement values for a correspondent complete set of observables. It seems clear that such a modification simplifies our conceptual framework, and it is still related to the usual understanding of the meaning of (physical) indistinguishability. A kind of distinction is possible if we consider each particle as an ordered pair whose first element is the mentioned set of measurement values of the intrinsic properties and the second element is a hidden property (a hidden variable) which intuitively corresponds to something which has not yet been measured in the laboratory. The mentioned ‘hidden property’ does assume different values for each individual particle, in such a manner that it allows us to distinguish those particles which are in principle ‘physically’ indistinguishable. Obviously, such a hidden property seems to have a metaphysical nature. Our proposed hidden variable hypothesis does have a metaphysical status. This is the kind of metaphysics that we advocate. The ‘reasonable’ metaphysics should be that which could provide hope for a future new physics. This future new physics may correspond to more extended physical systems that are not, until now, measured in laboratories. As remarked above, our concern here is only with the process of labeling physically indistinguishable particles. So, although we are not interested in describing here an axiomatic framework for quantum physics, quantum mechanics or even mechanics, we expect that our approach can be extended in order to encompass them. All that follows is performed in a standard set theory like Zermelo-Fraenkel with [*Urelemente*]{} (ZFU).[^2] Our picture for describing indistinguishability issues in quantum physics is a set-theoretical predicate, following P. Suppes’ ideas about axiomatization of physical theories [@Suppes-67]. Hence, our system has five primitive notions: $\lambda$, $X$, $P$, $m$, and $M$. $\lambda$ is a function $\lambda:N\rightarrow\Re$, where $N$ is the set $\{1,2,3,...,n\}$, $n$ is a positive integer, and $\Re$ is the set of real numbers; $X$ and $P$ are finite sets; $m$ and $M$ are unary predicates defined on elements of $P$. Intuitively, the images $\lambda_{i}$ of the function $\lambda$, where $i\in N$, correspond to our hidden variable. We denote by $\Lambda_{N}$ the set of all $\lambda_{i}$, where $i\in N$.
$X$ is a set whose elements should be intuitively interpreted as measurement values of the state-independent properties like rest mass, electric charge, spin, etc. The elements of $X$ are denoted by $x$, $y$, etc. $P$ is to be interpreted as a set of particles. $m(p)$, where $p\in P$, means that $p$ is a microscopic particle, or a micro-object. $M(p)$ means that $p\in P$ is a macroscopic particle, or a macro-object. Actually, the distinction between microscopic and macroscopic objects, as mentioned here, does not reflect, at least in principle, the great problem of explaining the distinguishability among macroscopic objects, since these are composed of physically indistinguishable things. As is well known, Schrödinger explained that in terms of a [*Gestalt*]{} [@Schroedinger-52]. Nevertheless, this still remains an open problem from the foundational (axiomatic) point of view. The following is a set-theoretical predicate for a system of ontologically distinguishable particles. We use the symbol ‘$=$’ for the standard equality.\ [**Definition 2.1**]{} *${\cal D_{O}} = \langle\lambda,X,P,m,M\rangle$ is a system of [*ontologically distinguishable particles*]{}, abbreviated as ${\cal D_O}$-system, if and only if the following six axioms are satisfied:* D1 : $\lambda:N\rightarrow\Re$ is an injective function, whose set of images is denoted by $\Lambda_N$. D2 : $P\subset X\times\Lambda_{N}$. [We denote the elements of $P$ by $p,q,r,...$ when there is no risk of confusion.\ ]{} [**Definition 2.2**]{} $\langle x,\lambda_{i}\rangle\doteq \langle y,\lambda_{j}\rangle\; \mbox{if, and only if,}\; x=y$.\ [**Definition 2.3**]{} If $p\in P$ and $q\in P$, we say that $p$ is [*ontologically indistinguishable*]{} from $q$ if, and only if, $p=q$, where $=$ is the usual equality between ordered pairs.\ The usual equality among ordered pairs $p = \langle x,\lambda_i\rangle\in P$ is a binary relation which corresponds to our ontological indistinguishability between particles, while $\doteq$ is another binary relation which corresponds to the physical indistinguishability between particles. Two particles are ontologically indistinguishable if and only if they share the same set of measurement values for their intrinsic physical properties and the same value for their hidden variables. Definition 2.2 says that two particles are physically indistinguishable if, and only if, they share the same set of measurement values for their intrinsic (physical) properties. D3 : $(\forall x,y\in X)(\forall\lambda_i\in\Lambda_N)((\langle x,\lambda_{i}\rangle\in P \wedge \langle y,\lambda_{i}\rangle\in P) \rightarrow x = y).$ D4 : $(\forall p,q\in P)(M(p)\wedge M(q)\rightarrow (p\doteq q\rightarrow p=q))$. D5 : $(\forall p,q\in P)(p\doteq q\wedge \neg(p=q)\to m(p)\wedge m(q)).$ D6 : $(\forall p\in P)((m(p)\vee M(p))\wedge\neg(m(p)\wedge M(p)))$. Axiom [**D1**]{} allows us to deduce that the cardinality of $\Lambda_{N}$ coincides with the cardinality of $N$ ($\#\Lambda_{N} = \# N$). Axiom [**D2**]{} just says that particles are represented by ordered pairs[^3], where the first element intuitively corresponds to measurement values of all the intrinsic physical properties, while the second element corresponds to the hidden inner property that allows us to distinguish particles at an ontological level.
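To make this concrete, the following is a minimal computational sketch of a ${\cal D_O}$-system (purely illustrative, with made-up names; it is not part of the axiomatics): a particle is an ordered pair of an intrinsic-property record and an injectively assigned hidden label, physical indistinguishability ($\doteq$) compares only the first components, and ontological indistinguishability is ordinary equality of the pairs.

```python
from itertools import count

# Illustrative sketch of a D_O-system: each particle is an ordered pair
# (intrinsic, hidden), where `intrinsic` stands for the measurement values of
# the state-independent properties and `hidden` is an injectively assigned
# label playing the role of the hidden variable of axiom D1.

_labels = count()                      # injective labelling (axiom D1)

def particle(intrinsic):
    """Build a particle as an ordered pair, as in axiom D2."""
    return (intrinsic, next(_labels))

def physically_indistinguishable(p, q):
    """Definition 2.2: p ≐ q iff the intrinsic records coincide."""
    return p[0] == q[0]

def ontologically_indistinguishable(p, q):
    """Definition 2.3: ordinary equality of the ordered pairs."""
    return p == q

# Two 'electrons' sharing all intrinsic properties:
e1 = particle(("m_e", "-e", "1/2"))
e2 = particle(("m_e", "-e", "1/2"))
print(physically_indistinguishable(e1, e2))     # True
print(ontologically_indistinguishable(e1, e2))  # False: hidden labels differ
```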
Yet, axioms [**D2**]{} and [**D3**]{} guarantee that two particles that share the same value for their hidden variable are the very same particle, since our structure is set-theoretical and the equality $=$ is the classical one. Axiom [**D4**]{} says that macroscopic objects that are physically indistinguishable are necessarily identicals. Axiom [**D5**]{} says that two physically indistinguishable particles that are not ontologically indistinguishable (they are ontologically distinguishable) are both microscopic particles. Axiom [**D6**]{} means that a particle is either microscopic or macroscopic, but not both. Axiom [**D4**]{} deserves further explanation. Let us observe that the existence of micro-objects (in particular) was not postulated; but the axiomatics is compatible with such a hypothesis. Axiom [**D4**]{} entails that (ontologically) distinct macro-objects are always distinguished by a measurement value; if two particles are macro-objects, then there exists a value for a measurement which distinguishes them. Then, macro-objects, in particular, obey Leibniz’s Principle of the Identity of Indiscernibles, and we may say that (according to our axiomatics) classical logic holds with respect to them, while micro-objects may be physically indistinguishable without the necessity of being ‘the same’ object. In [@Sant'Anna-97] the axiomatic framework for a system of ontologically distinguishable particles is a little different from the present formulation. The main difference is on axiom [**D5**]{}, which does not exist in [@Sant'Anna-97]. Such an axiom is necessary to prove the theorem that we present in the next subsection. We discuss in [@Sant'Anna-97] how our approach is outside the scope of the well-known proofs on the impossibility of hidden variables in the quantum theory, like von Neumann’s theorem, Gleason’s work, Kochen and Specker’s results, Bell’s inequalities or other works where it is sustained that no distribution of hidden variables can account for the statistical predictions of the quantum theory [@Bohm-95]. Quasi-set theory ================ The quasi-set theory ${\cal Q}$ is based on Zermelo-Fraenkel-like axioms and allows the presence of two sorts of atoms ([*Urelemente*]{}), termed $m$-atoms and $M$-atoms.[^4] Concerning the $m$-atoms, a weaker ‘relation of indistinguishability’ (denoted by the symbol $\equiv$) is used instead of identity, and it is postulated that $\equiv$ has the properties of an equivalence relation. The predicate of equality cannot be applied to the $m$-atoms, since no expression of the form $x = y$ is a formula if $x$ or $y$ denote $m$-atoms. Hence, there is a precise sense in saying that $m$-atoms can be indistinguishable without being identical. This justifies what we said above about the ‘lack of identity’ of some objects. The universe of ${\cal Q}$ is composed of $m$-atoms, $M$-atoms and [*quasi-sets*]{} (qsets, for short). The axiomatics is adapted from that of ZFU (Zermelo-Fraenkel with [*Urelemente*]{}), and when we restrict the theory to the case which does not consider $m$-atoms, quasi-set theory is essentially equivalent to ZFU, and the corresponding quasi-sets can then be termed ‘ZFU-sets’ (similarly, if also the $M$-atoms are ruled out, the theory collapses into ZFC, i.e., Zermelo-Fraenkel $+$ axiom of choice). The $M$-atoms play the role of the [*Urelemente*]{} in the sense of ZFU.
The specific symbols of ${\cal Q}$ are three unary predicates $m$, $M$ and $Z$, two binary predicates $\equiv$ and $\in$ and a unary functional symbol $qc$. Terms and (well-formed) formulas are defined in the standard way, as are the concepts of free and bound variables, etc. We use $x$, $y$, $z$, $u$, $v$, $w$ and $t$ to denote individual variables, which range over quasi-sets (henceforth, qsets) and [*Urelemente*]{}. Intuitively, $m(x)$ says that ‘$x$ is a microobject’ ($m$-atom), $M(x)$ says that ‘$x$ is a macroobject’ ($M$-atom), while $Z(x)$ says that ‘$x$ is a set’. The term $qc(x)$ stands for ‘the quasi-cardinal of (the qset) $x$’. The [*sets*]{} will be characterized as exact copies of the sets in ZFU. We also define that $x$ is a quasi-set, i.e., $Q(x)$, if, and only if, $x$ is neither an $m$-atom nor an $M$-atom. In order to preserve the concept of identity for the ‘well-behaved’ objects, an [*Extensional Equality*]{} is introduced for those entities which are not $m$-atoms, on the following grounds: for all $x$ and $y$, if they are not $m$-atoms, then $$x =_{E} y\;\mbox{if, and only if,}\;$$ $$(Q(x)\wedge Q(y)\wedge\forall z ( z \in x \leftrightarrow z \in y )) \vee (M(x) \wedge M(y) \wedge x \equiv y)$$ It is possible to prove that $=_{E}$ has all the properties of classical identity and so these properties hold regarding $M$-atoms and ‘sets’ (see below). In this paper, all references to ‘$=$’ stand for ‘$=_E$’, and similarly ‘$\leq$’ and ‘$\geq$’ stand, respectively, for ‘$\leq_E$’ and ‘$\geq_E$’. Among the specific axioms of ${\cal Q}$, a few deserve explanation. The other axioms are adapted from ZFU. For instance, to form certain elementary quasi-sets, such as those containing ‘two’ objects, we cannot use something like the usual ‘pair axiom’, since its standard formulation presupposes identity; we use the weak relation of indistinguishability instead: \[[*The ‘Weak-Pair’ Axiom*]{}\] For all $x$ and $y$, there exists a quasi-set whose elements are the objects indistinguishable from either $x$ or $y$. In symbols,[^5] $$\forall x \forall y \exists_{Q} z \forall t (t \in z \leftrightarrow t \equiv x \vee t \equiv y)$$ Such a quasi-set is denoted by $[x, y]$ and, when $x \equiv y$, we have $[x]$ by definition. We remark that this quasi-set [*cannot*]{} be regarded as the ‘singleton’ of $x$, since its elements are [*all*]{} the objects indistinguishable from $x$, so its ‘cardinality’ (see below) may be greater than $1$. A concept of [*strong singleton*]{}, which plays an important role in the applications of quasi-set theory, may be defined, as we shall mention below. In ${\cal Q}$ we also assume a Separation Schema, which intuitively says that from a quasi-set $x$ and a formula $\alpha(t)$, we obtain a sub-quasi-set of $x$ denoted by $$[t\in x : \alpha(t)].$$ We use the standard notation with ‘$\{$’ and ‘$\}$’ instead of ‘$[$’ and ‘$]$’ only in the case where the quasi-set is a [*set*]{}. It is intuitive that the concept of [*function*]{} also cannot be defined in the standard way, so we introduce a weaker concept of [*quasi-function*]{}, which maps collections of indistinguishable objects into collections of indistinguishable objects; when there are no $m$-atoms involved, the concept is reduced to that of function as usually understood. Relations, however, can be defined in the usual way, although no order relation can be defined on a quasi-set of indistinguishable $m$-atoms, since partial and total orders require antisymmetry, which cannot be stated without identity.
Asymmetry also cannot be supposed, for if $x \equiv y$, then for every relation $R$ such that $\langle x, y \rangle \in R$, it follows that $\langle x, y \rangle = [[x]] = \langle y, x \rangle \in R$, by force of the axioms of ${\cal Q}$.[^6] It is possible to define a translation from the language of ZFU into the language of ${\cal Q}$ in such a way that we can obtain a ‘copy’ of ZFU in ${\cal Q}$. In this copy, all the usual mathematical concepts (like those of cardinal, ordinal, etc.) can be defined; the ‘sets’ (in reality, the qsets which are ‘copies’ of the ZFU-sets) turn out to be those quasi-sets whose transitive closure (this concept is like the usual one) does not contain $m$-atoms. Although some authors like Weyl [@Weyl-49] sustain that (in what regards cardinals and ordinals) “the concept of ordinal is the primary one”, quantum mechanics seems to present strong arguments for questioning this thesis, and the idea of presenting collections which have a cardinal but not an ordinal is one of the most basic presuppositions of quasi-set theory. The concept of [*quasi-cardinal*]{} is taken as primitive in ${\cal Q}$, subject to certain axioms that permit us to operate with quasi-cardinals in a similar way to that of cardinals in standard set theories. Among the axioms for quasi-cardinality, we mention those below, but first we recall that in ${\cal Q}$, $qc(x)$ stands for the ‘quasi-cardinal’ of the quasi-set $x$, while $Z(x)$ says that $x$ is a [*set*]{} (in ${\cal Q}$). Furthermore, $Cd(x)$ and $card(x)$ mean ‘$x$ is a cardinal’ and ‘the cardinal of $x$’ respectively, defined as usual in the ‘copy’ of ZFU we can define in ${\cal Q}$. \[[*Quasi-cardinality*]{}\] Every qset has a unique quasi-cardinal which is a cardinal (as defined in the ‘ZFU-part’ of the theory) and, if the quasi-set is in particular a set, then this quasi-cardinal is its cardinal [*stricto sensu*]{}:[^7] $$\forall_{Q} x \exists_{Q} ! y (Cd(y) \wedge y = qc(x) \wedge (Z(x) \to y = card(x)))$$ ${\cal Q}$ still encompasses an axiom which says that if the quasi-cardinal of a quasi-set $x$ is $\alpha$, then for every quasi-cardinal $\beta \leq \alpha$, there is a subquasi-set of $x$ whose quasi-cardinal is $\beta$, where the concept of [*subquasi-set*]{} is like the usual one. In symbols, \[[*The quasi-cardinals of subquasi-sets*]{}\] $$\forall_{Q} x (qc(x) = \alpha \to \forall \beta (\beta \leq \alpha \to \exists_{Q} y (y \subseteq x \wedge qc(y) = \beta)))$$ Another axiom states that \[[*The quasi-cardinal of the power quasi-set*]{}\] $$\forall_{Q} x (qc({\cal P}(x)) = 2^{qc(x)})$$ where $2^{qc(x)}$ has its usual meaning. As remarked above, in ${\cal Q}$ there may exist qsets whose elements are $m$-atoms only, called ‘pure’ qsets. Furthermore, it may be the case that the $m$-atoms of a pure qset $x$ are indistinguishable from one another, in the sense of sharing the indistinguishability relation $\equiv$. In this case, the axiomatics provides the grounds for saying that nothing in the theory can distinguish among the elements of $x$. But, in this case, one could ask what it is that sustains the idea that there is more than one entity in $x$. The answer is obtained through the above-mentioned axioms (among others, of course). Since the power qset of $x$ has quasi-cardinal $2^{qc(x)}$, then if $qc(x) = \alpha$, for every quasi-cardinal $\beta \leq \alpha$ there exists a subquasi-set $y \subseteq x$ such that $qc(y) = \beta$, according to the axiom about the quasi-cardinality of the subquasi-sets.
Thus, if $qc(x) = \alpha \not= 0$, the axiomatics does not forbid the existence of $\alpha$ subquasi-sets of $x$ which can be regarded as ‘singletons’. Of course, the theory cannot prove that these ‘unitary’ subquasi-sets (supposing now that $qc(x) \geq 2$) are distinct, since we have no way of ‘identifying’ their elements, but qset theory is compatible with this idea.[^8] In other words, it is consistent with ${\cal Q}$ to maintain that $x$ has $\alpha$ elements, which may be regarded as absolutely indistinguishable objects. Since the elements of $x$ may share the relation $\equiv$, they may be further understood as belonging to the same ‘equivalence class’ (for instance, being indistinguishable electrons) but in such a way that we cannot assert either that they are identical or that they are distinct from one another (i.e., they act as ‘identical electrons’ in the physicist’s jargon).[^9] We define $x$ and $y$ as [*similar*]{} qsets (in symbols, $Sim(x,y)$) if the elements of one of them are indistinguishable from the elements of the other, that is, $Sim(x,y)$ if and only if $\forall z \forall t (z \in x \wedge t \in y \to z \equiv t)$. Furthermore, $x$ and $y$ are [*Q-Similar*]{} ($QSim(x,y)$) if and only if they are similar and have the same quasi-cardinality. Then, since the quotient qset $x/_{\equiv}$ may be regarded as a collection of equivalence classes of indistinguishable objects, the ‘weak’ axiom of extensionality is: \[[*Weak Extensionality*]{}\] $$\begin{aligned} \forall_{Q} x \forall_{Q} y (\forall z (z \in x/_{\equiv} \to \exists t (t \in y/_{\equiv} \wedge \, QSim(z,t)) \wedge \forall t(t \in y/_{\equiv} \to\nonumber\\ \exists z (z \in x/_{\equiv} \wedge \, QSim(t,z)))) \to x \equiv y)\nonumber\end{aligned}$$ In other words, the axiom says that those qsets that have ‘the same quantity of elements of the same sort’[^10] are indistinguishable. Finally, let us remark that quasi-set theory is equiconsistent with standard set theories (like ZFC) (see [@Krause-95a]). A Set-Theoretical Model For Quasi-Sets in Terms of Hidden Variables =================================================================== In this section we use the ${\cal D}_{\cal O}$-system as a model for quasi-set theory. The interpretation is done according to the table given below: $$\begin{array}{|c|c|}\hline \mbox{Quasi-set theory} & {\cal D}_{\cal O}\mbox{-system} \\ \hline\hline \mbox{Urelemente} & \mbox{elements of $P$}\\ \hline \mbox{$p$ is an $m$-atom} & \mbox{$m(p)$}\\ \hline \mbox{$p$ is an $M$-atom} & \mbox{$M(p)$}\\ \hline \mbox{$Z(x)$} & \mbox{$x$ is a set in ZFU}\\ \hline \mbox{$qc(x)$} & \mbox{$card(x)$}\\ \hline \mbox{$p=_Eq$} & p=q\\ \hline \mbox{$p\equiv q$} & p\doteq q\\ \hline \end{array}$$ In the ${\cal D}_{\cal O}$-system we denote by $p_{\doteq}$ any set such that $(\forall p\forall q)$ ($p\in p_{\doteq}\wedge q\in p_{\doteq} \to p\doteq q$). Hence, the weak pair in the ${\cal D}_{\cal O}$-system corresponds to the [*set*]{} $\{p_{\doteq},q_{\doteq}\}$. With the table given above there is no difficulty in writing the weak-pair axiom in the ${\cal D}_{\cal O}$-system and proving its translation (given by the table above) as a theorem of ZFU. The same occurs for the separation schema and the quasi-cardinality axioms.
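As a small illustration of how this translation operates (a computational sketch under our interpretation, with purely illustrative names; it is not a formal proof), the weak pair $[x,y]$ is read in the ${\cal D}_{\cal O}$-system as the ordinary ZFU set of all particles physically indistinguishable from either $x$ or $y$; in particular, its cardinality may exceed 2, which mirrors the earlier remark that $[x]$ is not a singleton in general.

```python
# Sketch of the interpreted weak-pair axiom: the quasi-set [x, y] is read as
# the ordinary set of all particles in P that are physically indistinguishable
# (same intrinsic record) from either x or y.  Toy data, illustrative only.

def weak_pair(P, x, y):
    return {p for p in P if p[0] == x[0] or p[0] == y[0]}

P = {(("m_e", "-e"), 0), (("m_e", "-e"), 1), (("m_p", "+e"), 2)}
x = (("m_e", "-e"), 0)
y = (("m_e", "-e"), 1)
print(weak_pair(P, x, y))          # both 'electrons'
print(len(weak_pair(P, x, x)))     # the 'weak singleton' [x] has 2 elements here
```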
To prove the weak extensionality axiom we should consider another relation in the ${\cal D}_{\cal O}$-system, which may be defined as follows: $P\doteq Q\;\mbox{if, and only if,}\;(\forall p\forall q)((p\in P\wedge q\in Q)\to p\doteq q)\wedge (card(P) = card(Q))$, where $card(X)$ stands for the standard cardinality of the set $X$. The proofs of the translations of the other axioms of quasi-set theory in the ${\cal D}_{\cal O}$-system are not difficult, since the ${\cal D}_{\cal O}$-system is defined as a set-theoretical predicate, and these proofs demand just simple properties of set theory (ZFU). We do not give the details because there is no space in this book for a longer paper, but they are not a difficult task for the reader. As a final remark on this section, we note that it is not really necessary to interpret extensional equality, since this is given as a definition in quasi-set theory. Other Models and Related Questions ================================== It is also possible to interpret quasi-sets in the set of rational numbers. We interpret the Urelemente in quasi-set theory as either rational numbers or nonconvergent Cauchy sequences of rational numbers.[^11] To say that $x$ is an M-atom corresponds to saying, in our proposed interpretation, that $x$ is a rational number. On the other hand, to say that $x$ is an m-atom means that $x$ is a nonconvergent Cauchy sequence of rational numbers. We say that two Cauchy sequences $(x_n)$ and $(y_n)$ are equivalent $((x_n)\sim (y_n))$ if, and only if, $z_n = (x_n - y_n)$ is a convergent sequence such that $z_n\to 0$. Such a binary relation $\sim$ is an equivalence relation. We say that two nonconvergent Cauchy sequences are [*indistinguishable*]{} if, and only if, they belong to the same equivalence class with respect to $\sim$. Extensional equality corresponds to the usual equality between rational numbers and sets of rational numbers. Quasi-cardinality may be interpreted as the usual cardinality in set theory. And $Z(x)$ may be interpreted as ‘$x$ is a set’. The proof that such an interpretation is a model for quasi-set theory is another task that we leave as an exercise for the reader. We may also find non-standard interpretations for quasi-sets, like the infinitesimals in non-standard analysis. Nevertheless, the main point we want to make in this paper is that quasi-sets are not so ‘weird’, if we compare them with Zermelo-Fraenkel-like sets. If collections of elementary particles, in quantum physics, may be described by means of quasi-set theory, that does not mean that the quantum world is completely different from the classical (macroscopic) world, at least from the mathematical point of view. Hence, we suggest the possibility of a complete classical picture for microscopic phenomena. Some authors have followed a similar path. Bohmian mechanics is a well-known example of a semi-classical picture for quantum mechanics [@Bohm-95]. Suppes and collaborators have also developed a particular description for some microscopic phenomena usually described by quantum physics [@Suppes-94] [@Suppes-96a] [@Suppes-96b]. One of the main points that makes quantum physics quite different from classical physics is the presence of nonlocal phenomena in the quantum world. Nevertheless, it is usually considered that the interference produced by two light beams, which is a nonlocal phenomenon, is determined by both their mutual coherence and the indistinguishability of the quantum particle paths.
Mandel [@Mandel-91], e.g., has proposed a quantitative link between the wave and the particle descriptions by using an adequate decomposition of the density operator. So, perhaps an adequate treatment for the problem of indistinguishability between elementary particle trajectories, in terms of hidden variables, may allow a classical picture for interference. Obviously, there is another nonlocal phenomenon, namely the Einstein-Podolsky-Rosen (EPR) experiment, which entails some fascinating results like teleportation [@Watson-97]. If we take very seriously the non-individuality of quantum particles, space-time coordinates cannot be used to label elementary particles. Nevertheless, at the moment, we have no answer to the question of whether there is a relation between indistinguishability and nonlocality in the sense of EPR correlations. Acknowledgments =============== We acknowledge with thanks the suggestions and criticisms made by Décio Krause. [99]{} Bohm, D. and B.J. Hiley, 1995, [*The undivided universe*]{}, Routledge, London. da Costa, N.C.A., and D. Krause, 1994, ‘Schrödinger logics’, [*Studia Logica*]{} [**53**]{}, 533-550. Dalla Chiara, M.L., and G. Toraldo di Francia, 1993, ‘Individuals, kinds and names in physics’, in G. Corsi et al. (eds.), [*Bridging the gap: philosophy, mathematics, physics*]{}, Kluwer, Ac. Press, Dordrecht, 261-283. Reprint from 1985, [*Versus*]{} [**40**]{}, 29-50. Krause, D., 1992, ‘On a quasi-set theory’, [*Notre Dame Journal of Formal Logic*]{} [**33**]{} 402-411. Krause, D., ‘The theories of quasi-sets and ZFC are equiconsistent’, in W.A. Carnielli and L.C.P.D. Pereira (eds.) [*Logic, sets and information*]{}, (CLE-UNICAMP, 1995) 145-155. Krause, D. and S. French, 1995, ‘A formal framework for quantum non-individuality’, [*Synthese*]{} [**102**]{} 195-214. Krause, D., ‘Axioms for collections of indistinguishable objects’, forthcoming in [*Logique et Analyse*]{}. Krause, D., A.S. Sant’Anna and A.G. Volkov, ‘Quasi-set theory for bosons and fermions’, forthcoming. Mandel, L., 1991, ‘Coherence and indistinguishability’, [*Optics Letters*]{} [**16**]{} 1882-1883. Manin, Yu. I., 1976, ‘Problems of present day mathematics: I (Foundations)’, in Browder, F.E. (ed.) [*Proceedings of Symposia in Pure Mathematics*]{} [**28**]{} American Mathematical Society, Providence, 36. Redhead, M., and P. Teller, 1991, ‘Particles, particle labels, and quanta: the toll of unacknowledged metaphysics’, [*Foundations of Physics*]{} [**21**]{}, 43-62. Sant’Anna, A.S., and D. Krause, 1997, ‘Indistinguishable particles and hidden variables’, [*Found. Phys. Lett.*]{} [**10**]{} 409-426. Sant’Anna, A. S., ‘Some remarks about individuality and quantum particles’, forthcoming. Schrödinger, E., 1952, [*Science and humanism*]{}, Cambridge University Press, Cambridge. Suppes, P., 1967, [*Set-theoretical structures in science*]{}, mimeo. Stanford University, Stanford. Suppes, P. and J.A. de Barros, 1994, ‘Diffraction with well-defined photon trajectories: a foundational analysis’, [*Found. Phys. Lett.*]{} [**7**]{}, 501. Suppes, P., A.S. Sant’Anna and J.A. de Barros, 1996, ‘A particle theory of the Casimir effect’, [*Found. Phys. Lett.*]{} [**9**]{} 213-223. Suppes, P., J.A. de Barros and A.S. Sant’Anna, 1996, ‘Violation of Bell’s inequalities with a local theory of photons’, [*Found. Phys. Lett.*]{} [**9**]{} 551-560. van Fraassen, B., 1991, [*Quantum mechanics: an empiricist view*]{}, Clarendon Press, Oxford.
Watson, A., 1997, ‘Teleportation beams up a photon’s state’, [*Science*]{} [**278**]{} 1881-1882. Weyl, H., 1963, [*Philosophy of mathematics and natural science*]{}, Atheneum, New York. [^1]: By measurement values of intrinsic properties of a given particle we mean real numbers times an adequate unit of rest mass, charge, spin, etc., associated with the respective rest mass, charge, spin, etc. of this particle. [^2]: We interpret [*Urelemente*]{} as particles (in the sense of mechanics). [^3]: In [@daCosta-94] the authors discuss the possible representation of quantum particles by means of ordered pairs $\left< E,L\right>$, where $E$ corresponds to a predicate which in some way characterizes the particle in terms, e.g., of its rest mass, its charge, and so on, while $L$ denotes an appropriate label, which could be, for example, the location of the particle in space-time. Then, even in the case that the particles (in a system) have the same $E$, they might be distinguished by their labels. But if the particles have the same label, the tools of classical mathematics cannot be applied, since the pairs should be identified. In order to provide a mathematical distinction between particles with the same $E$ and $L$, these authors use quasi-set theory [@Krause-92] [@Krause-95b]. In the present picture, according to axioms [**D1**]{}-[**D3**]{}, the case where two particles have the same (ontological) label is prohibited. [^4]: All the details of this section may be found in [@Krause-97]. [^5]: In all that follows, $\exists_Q$ and $\forall_Q$ are the quantifiers relativized to quasi-sets. [^6]: We remark that $[[x]]$ is the same ($=_{E}$) as $\langle x, x \rangle$ by Kuratowski’s definition. [^7]: Then, every quasi-cardinal is a cardinal and the above expression ‘there is a unique’ makes sense. Furthermore, from the fact that the empty set $\emptyset$ is a set, it follows that its quasi-cardinal is 0. [^8]: The differences among such ‘unitary’ qsets may perhaps be obtained from a distinction between ‘intensions’ and ‘extensions’ of concepts like ‘electron’. In this way we bring our approach into what Dalla-Chiara and Toraldo di Francia [@DallaChiara-93] termed the “world of intensions”. [^9]: The application of this formalism to the concept of non-individual quantum particles has been proposed in [@Krause-95b]. [^10]: In the sense that they belong to the same equivalence class of indistinguishable objects. [^11]: We are obviously considering that the set of rational numbers is endowed with a metric. In our case, the metric is $d(x,y) = |x-y|$, where $x$ and $y$ are rational numbers.
--- abstract: 'The SW Sextantis stars are a group of cataclysmic variables with distinctive observational characteristics, including absorption features in the emission line cores at phases 0.2–0.6. Hellier and Robinson have proposed that these features are caused by the accretion stream flowing over the accretion disk. However, in a simple model the absorption occurred at all orbital phases, which is contradicted by the data. I show that invoking a flared accretion disk resolves this problem.' author: - | Coel Hellier\ Department of Physics, Keele University, Keele, Staffordshire, ST5 5BG, U.K. date: 'Accepted for PASP April 1998 issue (despite the MN macros)' title: 'The phase 0.5 absorption in SW Sextantis-type cataclysmic variables' --- \#1[$^{\mbox{{\scriptsize #1}}}$]{} \#1[$\times10^{#1}$]{} Introduction ============ The SW Sextantis stars are a subclass of the novalike variables, which are themselves a subclass of the cataclysmic variables (CVs). The qualification for novalike status is a stable accretion disk, presumably due to a mass transfer rate sufficiently high to prevent the instabilities associated with dwarf novae (e.g. Warner 1995). The SW Sex stars are novalikes showing all or most of the following properties (e.g. Thorstensen et al. 1991): (i) single-peaked emission lines, particularly , incompatible with an origin in a Keplerian disk; (ii) gross asymmetries in the emission lines from the disk, so that they do not reflect the orbital motion of the white dwarf; (iii) peculiar ‘phase 0.5’ absorption features in the core of the line during the orbital phases 0.2–0.6; (iv) a tendency to have orbital periods in the range 3–4 hr, just above the CV period gap; and (v) a high probability of being eclipsing (this one is most likely a selection effect). Many ideas have been proposed to explain SW Sex stars, the favourites being strong accretion disk winds (Honeycutt et al. 1986; Dhillon et al. 1991; Hellier 1996); magnetically controlled accretion (Williams 1989; Casares et al. 1996); and mass-transfer streams which penetrate or flow over the disk (Shafter et al. 1988; Hellier and Robinson 1994). This paper addresses the combination of those ideas proposed in Hellier and Robinson (1994; hereafter Paper 1) and Hellier (1996; Paper 2), although for alternative views and discussion of other models see Dhillon et al. (1991), Casares et al. (1997) and Hoard and Szkody (1997). We suggested that SW Sex stars are novalikes with abnormally high mass-transfer rates. This causes, first, strong winds from the inner disk or boundary layer, explaining the single-peaked line profiles. Second, it allows the accretion stream to overflow the initial impact with the edge of the disk rim, and continue on a free-fall trajectory to a second impact much nearer the white dwarf. Line emission from this re-impact causes the highly asymmetric line profiles. Further, we proposed that the stream is seen in absorption between the initial impact with the disk and its re-impact. This means that the normal emission ‘S-wave’ common in other CVs is absent, or even in absorption. Also, we noted that the absorption from the stream could explain the phase 0.5 absorption features of SW Sex stars. The remaining characteristic, the concentration on orbital periods just above the period gap, presumably results from evolutionary effects driving systems with those periods to higher mass transfer rates (e.g. Shafter 1992).
Indeed, in Paper 2 we noted that SW Sex stars often show VY Scl low states, implying that the abnormally high $\dot{M}$ in the high state is balanced by periods of much lower $\dot{M}$. This could result from irradiation-driven feedback cycles in $\dot{M}$ (e.g. Wu et al. 1995). In Papers 1 and 2 we computed the velocities expected from an accretion stream flowing over the disk, as a function of orbital phase, and turned these into model line profiles for comparison with data from PX And and V1315 Aql. Overall the simulations supported the model, showing that the overflowing stream had the right velocity variations to explain the distorted line profiles and the phase 0.5 absorption. However, these simulations had a fundamental limitation in that they calculated velocities only, making no allowance for obscuration of one component by another, and so could not reproduce variations in the strength of features round the orbital cycle. Thus the simulations contained ‘phase 0.5 absorption’ at all orbital phases, whereas the data do not. Since several authors (e.g. Casares et al. 1996; Hoard and Szkody 1997) have cited this flaw as a primary reason for doubting the model, I have re-written the simulation code to include all spatial and obscuration effects (it is still a simple geometric model, though, making no attempt at radiative transfer). I pick up a suggestion from Paper 1 that disk flaring can confine the absorption to a limited phase range. This gives a much closer resemblance to the data, solving the biggest discrepancy between the data and models from Papers 1 and 2. The modelling code ================== The modelling code is a development of that presented in Papers 1 and 2. It adds together the spectrum expected from a flared Keplerian disk, a single-peaked profile expected from a wind, and a component with the velocities of a stream flowing over the disk. In contrast to the previous versions of the code it calculates spatial obscuration including: eclipses of the disk and stream by the secondary; obscuration of parts of the disk by the stream; the degree to which the stream can be seen over the rim of the flared disk; and obscuration of the stream by other regions of stream. The code employs a large number of parameters, although many of them are either confined to a narrow range by other data, or have little effect on the simulation. In what follows I give further details of the code and justification for the parameters adopted. I used an orbital period of 3.5 hr, primary and secondary masses of 0.8 and 0.3 $M_{\odot}$, and a disk radius of 4m, typical of SW Sex systems. Changes in these parameters tend to rescale the simulated spectra, rather than changing details of the absorption, which is the primary interest of this paper. Obscuration effects in these deeply eclipsing binaries are very sensitive to the inclination and the disk flaring angle. I used an inclination, $i$, of 82$^{\circ}$, from Dhillon et al.’s (1991) estimate of 82.1$\pm$3.6$^{\circ}$ for V1315 Aql. For the flare (opening semi-angle, $\alpha$) I used 4$^{\circ}$. This is based on theoretical estimates that the photosphere has an $\alpha$ of 3–4$^{\circ}$ for a high-$\dot{M}$ system (Smak 1992; Wade 1996), and on two observational estimates: in Z Cha during outburst Robinson et al. (1995) measured $\alpha$ = 8$^{\circ}$ (although consideration of limb-darkening reduces the estimate to 4$^{\circ}$; Wood, private communication). Also, an $\alpha$ $>$3.5$\pm$1.6$^{\circ}$ in DQ Her results from an inclination of 86.5$\pm$1.6$^{\circ}$ (Horne et al. 1993), and the fact that no X-rays are seen, so that the white dwarf is presumably always obscured.
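As a simple check of the geometry just described (my own sketch, not the simulation code itself), one can ask whether a point above the disk mid-plane on the far side of the binary clears the near-side rim of a photosphere flared at semi-angle $\alpha$; for a point at the centre this reduces to the familiar condition $i+\alpha<90^{\circ}$.

```python
import numpy as np

# Illustrative geometric sketch: a point at cylindrical radius r and height z
# above the mid-plane, lying on the far side of the disk and in the plane
# containing the line of sight and the disk axis, is seen over the near-side
# rim (radius R_d, half-opening angle alpha) only if the sight line toward the
# observer passes above the rim top.

def visible_over_rim(r, z, R_d, alpha_deg, incl_deg):
    alpha = np.radians(alpha_deg)
    incl = np.radians(incl_deg)
    rim_height = R_d * np.tan(alpha)     # height of the flared photosphere rim
    rise = (R_d + r) / np.tan(incl)      # rise of the sight line from the point to the near rim
    return z + rise > rim_height

# SW Sex-like geometry (i + alpha = 86 deg): the disk centre is visible.
print(visible_over_rim(0.0, 0.0, 1.0, 4.0, 82.0))   # True
# DQ Her-like geometry (i + alpha > 90 deg): the white dwarf is always hidden.
print(visible_over_rim(0.0, 0.0, 1.0, 4.0, 86.5))   # False
```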
We expect that the inner disk produces line absorption and the outer disk line emission. The code includes this effect, giving a rough match to the empirical data presented by Rutten et al. (1993) for the novalike UX UMa (the detailed implementation of this made very little difference to the simulation). The code doesn’t include continuum or line flux from the outer wall of the flared disk, assuming this region to be a minor contributor compared to the irradiated surface of the disk. I also included a single-peaked profile, moving with the velocity of the white dwarf, based on the synthetic wind profile of Hoare (1994). In the Balmer lines this component simply fills in the double-peaks of the disk profile, but it dominates in lines such as  (Dhillon et al. 1991; Paper 2). This component is often attributed to an accretion disk wind; however, even if this interpretation is incorrect it is an empirical fact that SW Sex stars have this component, and that in the Balmer lines it is largely un-eclipsed. Thus, in my code the wind component is not eclipsed. Since the simulation produces a plot normalised to the continuum (to match the presentation of the data in Papers 1 and 2), this means that the wind is enhanced during eclipse. For the stream flowing over the disk I used the calculations of Lubow (1989). This assumes that the stream is wider than the disk at initial impact so that some portion of it continues on a ballistic trajectory to a second impact near the white dwarf (see Papers 1 and 2 for a fuller account). The flare angle of 4$^{\circ}$ results in a disk height similar to that of the stream as calculated by Lubow (1989), which is 0.13m for typical SW Sex parameters. This presents a potential problem for the disk-overflow hypothesis, in that too thick a disk will block the stream. Note, though, that the height relevant to the optical properties is that of the photosphere, and that material at that height is not necessarily substantial enough to impede the stream. Using hydrodynamical simulations rather than the analytical treatment of Lubow (1989), Armitage and Livio (1996; 1997) obtain, unsurprisingly, a far messier result. They confirm that some of the stream material overflows the disk, but in a wider fan beam and with a trajectory deflected from the original direction. Despite this, I use the Lubow (1989) calculations for the height and direction of the stream, simply because they are much easier to compute. Note that my results are fairly insensitive to the overflow trajectory, but if future simulations produce outcomes very different from the ballistic approximation then they could refute my model. As in Papers 1 and 2 I suppose that for the first two-thirds of the stream trajectory over the disk the stream is seen in absorption. Then, for the final third when it re-impacts the disk, it produces line emission. The absorption profile of the stream was taken (somewhat arbitrarily) to be the same depth as that of the inner disk (20% of the continuum). In fact, to a very large extent the depth of the absorption from each stream element can be traded against the width of the stream. I therefore used a fixed absorption depth and let the stream width be a free parameter. To recapitulate, the effect of the stream in the code is to block the contribution from any disk element which is behind the stream, and to add in absorption for any stream element in its first two-thirds and emission for elements in the last third.
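The bookkeeping just described can be sketched as follows (a simplified illustration in the spirit of the code, not the code itself): each visible element contributes a Gaussian at its line-of-sight velocity, with positive weight for emitting elements and negative weight for absorbing ones, on top of a unit continuum.

```python
import numpy as np

# Simplified sketch of accumulating a synthetic, continuum-normalised profile
# from discrete elements; the velocities, weights and widths below are
# placeholders, not the values used in the paper.

def add_element(profile, v_grid, v_los, weight, sigma):
    """Add one element's Gaussian contribution (in continuum units)."""
    return profile + weight * np.exp(-0.5 * ((v_grid - v_los) / sigma) ** 2)

v_grid = np.linspace(-2000.0, 2000.0, 401)      # km/s
profile = np.ones_like(v_grid)                  # continuum = 1

# an emitting re-impact element and an absorbing overflow-stream element
profile = add_element(profile, v_grid, +600.0, +0.05, 150.0)   # emission
profile = add_element(profile, v_grid, -300.0, -0.20, 100.0)   # 20% deep absorption
```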
In constructing the synthetic line profiles each pixel of each component contributed a Gaussian profile centered on the correct line-of-sight velocity with a width ($\sigma$) of 25% of the speed of the element.

Results
=======

To show the effect I am trying to model, Fig. 1 presents typical  line profiles cut into by the phase 0.5 absorption in the line core. I also show, in Fig. 2, the velocity trend of the absorption and the range of phases over which it appears. These measurements are reproduced from Thorstensen et al. (1991). They are from a metal complex at $\lambda$5175 in PX And, chosen because it shows no apparent emission and so allows the cleanest measurement of the absorption. Thorstensen et al. noted that they obtained similar results from the  lines, but these were less reliable due to the strong and variable emission. Further presentations of data can be found in Papers 1 and 2 and references therein. To compare with the data, Fig. 3 shows my simulation of the Balmer line of an SW Sex star as a function of orbital phase. The central core of the line is made up of the disk and wind components. The wind is uneclipsed, leading to the significant increase in equivalent width during eclipse (phase 1). A high-amplitude S-wave, arising from the stream re-impact with the disk, has maximum blueshift at phase 1 and maximum redshift at phase 0.5. The absorption, caused by the stream flowing over the disk, appears on the red side of the line at phase 0.2 and moves to the blue, before disappearing at phase 0.7. The comparison of simulation to data shows an excellent match to the phases at which absorption occurs (0.2–0.7) and to the velocity trend during those phases. The match is not so good to the $\gamma$ velocity of the absorption, although Thorstensen et al. (1991) note that the $\lambda$5175 feature is probably a blend of several lines of uncertain relative contribution, so the $\gamma$ velocity of the data is essentially arbitrary. More problematical is that the velocity change has a bigger amplitude in the model than in the data. This could result from my use of free-fall velocities. The simulations by Armitage & Livio (1997) show that the velocity of the overflowing material is reduced by up to a half by interactions with the disk, although the reduction depends to a large extent on whether an isothermal or an adiabatic equation of state is used. This might solve another outstanding issue for the disk-overflow model, since when modelling V1315 Aql (Paper 2) I found that the best match to the data occurred with velocities 25% below the free-fall velocity, which is now explained by Armitage and Livio’s (1997) work. This is one reason why tomograms of SW Sex stars often show most line emission in the lower-left quadrant, at velocities lower than that of the ballistic stream. The other reason is that such bright regions on a tomogram can be caused by the overlap of emission from the overflow and disk components, and do not necessarily indicate components in their own right (see Papers 1 & 2). To summarise, a flared disk solves the problem of confining the absorption to orbital phases around 0.5. The only parameter having a significant effect on this aspect of the simulation is the flare angle $\alpha$, or more precisely the departure of $i\,+\,\alpha$ from 90$^{\circ}$. If $i\,+\,\alpha$ departs from 90$^{\circ}$ by 10$^{\circ}$ the absorption becomes weaker and is visible at all orbital phases, as in the simulations in Papers 1 and 2. The stream width used in the simulation was 0.4m, which is 1/10 of the disk radius.
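To make the accumulation step described at the start of this section concrete, here is a minimal sketch of my own (not the published code); the velocities, weights and continuum normalisation are illustrative placeholders only.

```python
import numpy as np

def add_element(profile, v_grid, v_los, speed, weight):
    """Add one element's Gaussian to the running line profile.

    v_los is the element's line-of-sight velocity, its local broadening is
    sigma = 0.25 * speed as described above, and weight is positive for
    emitting elements and negative for absorbing ones (continuum units).
    """
    sigma = max(0.25 * abs(speed), 1.0)      # small floor avoids sigma = 0
    profile += weight * np.exp(-0.5 * ((v_grid - v_los) / sigma) ** 2)
    return profile

# Example: a crude double-peaked disk profile plus one stream absorption dip.
v_grid = np.linspace(-2000.0, 2000.0, 401)              # km/s
prof = np.zeros_like(v_grid)
for v in (-700.0, 700.0):                                # two disk "pixels"
    prof = add_element(prof, v_grid, v, abs(v), +1.0)
prof = add_element(prof, v_grid, -300.0, 600.0, -0.5)    # overflowing stream
```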
The resulting absorption depth is comparable to that in SW Sex stars (e.g. Fig. 1). Again, the absorption depth from each stream element (currently 20%) could be altered to give the same result for a different stream width. Note, also, that the simulated absorption could be filled in at the extremes of its appearance (phases 0.2 and 0.6) by increasing the relative intensity of the emission from the re-impact. This would produce absorption at a narrower range of phases, centered at 0.4–0.5, as is more common in the Balmer lines of these stars. Conclusions =========== The ‘phase 0.5 absorption’ features seen in SW Sex stars can be explained by a proportion of the accretion stream flowing over the disk. The fact that absorption is seen only at phases 0.2–0.6, and not at every phase as predicted by simulations in Papers 1 and 2, is explained if the accretion disk is flared at an angle of $\approx$ 4. The simulation code has been re-written to include all spatial and obscuration effects, and the results in Papers 1 and 2 are still valid. If disk-overflow is a generally correct model for SW Sex stars the degree of overflow could still vary with time or between systems. A reduction in disk-overflow would leave an SW Sex star looking much like a normal novalike (as seems to have been the case for SW Sex when observed by Dhillon, Marsh & Jones 1997). In a high inclination novalike the splash caused by the stream–disk impact can itself produce absorption when it is at inferior conjunction, which would produce absorption dips centered on phase 0.8, rather than the phase 0.2–0.6 characteristic of the SW Sex stars. Dips at phase 0.8 are seen in novalikes such as TV Col (Hellier 1993) and BP Lyn (Hoard & Szkody 1997). Armitage, P.J., and Livio, M. 1996, ApJ, 470, 1024 Armitage, P.J., and Livio, M. 1997, ApJ, in press Casares, J., Martinez-Pais, I.G., Marsh, T.R., Charles, P.A., and Lazaro, C. 1996, MNRAS, 278, 219 Dhillon V.S., Marsh T.R., and Jones D.H.P. 1991, MNRAS, 252, 342 Dhillon V.S., Marsh T.R., and Jones D.H.P. 1997, MNRAS, 291, 694 Hellier, C. 1993, MNRAS, 264, 132 Hellier, C. 1996, ApJ, 471, 949 (Paper 2) Hellier, C., and Robinson, E.L. 1994, ApJ, 431, L107 (Paper 1) Hoard, D., and Szkody P. 1996, ApJ, 470, 1052 Hoard, D., and Szkody P. 1997, ApJ, 481, 433 Hoare, M.G. 1994, MNRAS, 267, 153 Honeycutt, R.K., Schlegel, E.M., and Kaitchuck, R.H. 1986, ApJ, 302, 388 Horne, K., Welsh, W.F., and Wade, R.A., 1993, ApJ, 410, 357 Lubow, S.H. 1989, ApJ, 340, 1064 Robinson, E.L.  1995, ApJ, 443, 295 Rutten, R.G.M., Dhillon, V.S., Horne, K., Kuulkers, E., and van Paradijs, J. 1993, Nature, 362, 518 Shafter, A.W. 1992, ApJ, 394, 268 Shafter, A.W., Hessman, F.V., and Zhang, E.H. 1988, ApJ, 327, 248 Smak, J. 1992, Acta Astr., 42, 323 Thorstensen, J.R., Ringwald, F.A., Wade, R.A., Schmidt, G.D., and Norsworthy, J.E. 1991, AJ, 102, 272 Wade, R.A. 1996, in Evans, A., Wood, J.H., eds, Proc. IAU Colloq. 158 (Dordrecht, Kluwer), p. 119 Warner, B. 1995, Cataclysmic variables (Cambridge, Cambridge University Press) Williams, R.E. 1989, AJ, 97, 1752 Wu, K., Wickramasinghe, D.T., and Warner, B., 1995, Publ. Astron. Soc. Aust., 12, 60
--- author: - | \ Division of Astronomy and Space Physics, Department of Physics and Astronomy, Uppsala University, Sweden\ E-mail: title: 'David vs. Goliath: pitfalls and prospects in abundance analyses of dwarf vs. giant stars' --- Introduction ============ Impressive abundance results, e.g. for $r$-process-enhanced giant stars (Sneden, these proceedings), were presented at this conference. Quantitative stellar spectroscopy seems to be able to yield hard boundary conditions for nuclear physics. Abundance trends practically without cosmic scatter have been uncovered down to metallicities of \[Fe/H\]=$-4$ (see the case of chromium in giants, Cayrel et al. 2004). Let us, however, not forget that chemical surface abundances inevitably reflect physical processes taking place in stars. For studies of Galactic chemical evolution, the most relevant processes are those that alter the surface abundances making stars imperfect data carriers. The stellar spectroscopist infers the chemical composition of stars using models usually containing a great number of simplifying assumptions. This sentence contains the often made simplifying assumption that one can equate the surface composition with that of the star as a whole. For ordinary Population I/II main-sequence stars more or less like the Sun ($T_{\rm eff} = 5800 \pm 500$K), this assumption is violated at the 20-60% level (e.g. Korn et al. 2007, Meléndez et al. 2009). In the following, I shall discuss a few of these assumptions highlighting those leading to sizable effects. What is sizable in this context? Setting the scale ================= On what level do we need to be worried about the validity of derived chemical abundances? The answer to this question will depend very heavily on the application envisioned. It is, however, clear that it is easier to achieve good relative abundances (precision) than absolute ones (accuracy). Furthermore, a higher level of precision can be safeguarded by analysing homogeneous samples of stars. In this respect, a sample of stars consisting of dwarfs and giants may be problematic. From a user perspective, it is less interesting from which stars the abundances were inferred. The chemical-evolution modeller tries to reproduce structure(s) in abundance-abundance diagrams. At a given metallicity, such structures are usually present on or below the 0.3dex level in logarithmic abundance. Fuhrmann (2004) has shown that thick-disk stars typically have 0.3dex less iron at a given magnesium abundance (cf. his Fig. 34). Nissen and Schuster (2010) give evidence for two halo populations differing in the abundances of the $\alpha$-elements by less than 0.2dex at a given metallicity. If we want to learn something about chemical structures of this sort, then an abundance precision on the same scale may suffice for statistical samples of stars. For a star-by-star classification, a precision 5-10 times higher should be aimed for (Fuhrmann’s and Schuster & Nissen’s differential abundances are good to 0.03dex). Dwarf vs. giants ================ There are numerous advantages that dwarfs have over giants, and vice versa. The evolution on the main sequence is slow, this evolutionary phase is thus well-represented in the Hertzsprung-Russell diagram, even of very old populations (but see Bromm, these proceedings, regarding Population III). Number statistics is, however, only one side of the coin. There are typically three magnitudes in visual brightness between a main-sequence turn-off star and a red giant below the bump. 
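As a quick check of the numbers quoted below (my own arithmetic, with assumed absolute magnitudes of $M_V \simeq +1.5$ for a metal-poor giant below the bump and $M_V \simeq +4.5$ for a turn-off star):
$$m_V = M_V + 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right), \qquad 5\log_{10}\!\left(\frac{2\,\mathrm{kpc}}{10\,\mathrm{pc}}\right) \simeq 11.5,$$
so at a distance of 2 kpc the giant appears at $V \simeq 13$ while the turn-off star appears at $V \simeq 16$.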
At 2 kpc, such an RGB star shines at 13th magnitude, within spectroscopic reach of 4m-class telescopes. A turn-off star at the same distance is, in practical terms, an 8m application. And in Galactic terms, 2 kpc does not really get you into the Galactic halo. Cool giants are not only more luminous; the lower densities in their atmospheres also favour the formation of spectral lines in the majority species (like Fe[ii]{}). It is in these stars that we can behold the full glory of the $r$-process pattern. Rare, even radioactive species like uranium can be found under favourable circumstances (Cayrel et al. 2001), allowing nucleo-chronometric age dating. The lower photospheric temperatures favour the formation of molecules which allow us to probe isotopic ratios (e.g. $^{12}$C/$^{13}$C, $^{25}$Mg/$^{24}$Mg, see e.g. Yong et al. 2003). But the evolution to higher luminosities comes at a price: dredge-up episodes take place, mixing newly fused elements (He, C and N) to the surface. Lithium is burned in this process. Some of the most abundant and interesting elements can thus not be studied in giants, other than to learn about stellar structure and evolution. The uncertainties related to when exactly the dredge-up episodes take place (isotopic ratios are sensitive probes of this) and what the structural consequences are (e.g. the effect of the helium abundance on the surface gravity) add to the complexity of interpreting abundance data of giant stars. Given that dwarfs eventually become giants, should one not expect chemical abundances of, say, iron-peak elements from both groups to agree? The answer is yes and no. For one thing, one has to make sure one samples the same population (not always a given in magnitude-limited surveys). For another, the limitations inherent to our models may affect dwarfs and giants differently. Additional biases may arise from physics that is not included in our modelling. Below, a few such effects beyond classical modelling are briefly discussed.

Non-LTE
=======

I wrote an invited review on this topic two years ago (Korn 2008). The situation is essentially unchanged: we know that non-LTE plays a significant role in the line formation of minority species like Fe[i]{}, and to a lesser extent also for majority species. But this very general statement does no justice to the complexity of the specific atom. For many elements, non-LTE effects remain unexplored. For all but the lightest species, there are sizable uncertainties in the computations stemming from the unknown strength of collisions with neutral hydrogen. The choices made in connection with hydrogen collisions often determine the overall strength of departures from LTE. We have, for lack of a better theory, taken an observational approach, calibrating the strength of hydrogen collisions by means of well-studied stars with significant HIPPARCOS parallaxes. This tends to work very well (see Korn et al. (2003) for iron and Mashonkina et al. (2007) for calcium). While this approach has been criticized for subsuming all modelling biases into a single, poorly modelled collisional strength, it may, for the time being, be better than ignoring such collisions altogether. Proper quantum-mechanical calculations are underway (e.g. Barklem et al. (2010) for Na+H), but it will likely take 5–10 years until a complex atom like iron can be tackled. Until disproven, it cannot be ruled out that mismatches between dwarfs and giants (see e.g. Bonifacio et al. 2009 for the case of chromium) are due to our simplistic modelling in LTE.
Until this is cleared up, modelling such LTE-based abundance trends in terms of Galactic chemical evolution may be futile. Hydrodynamics ============= The realization that full account of hydrodynamics significantly changes the $T-\tau$-relations of solar-type stars has led to rather drastic changes in stellar abundances, not the least for metal-poor stars. At \[Fe/H\]=$-3$, adiabatic cooling lowers the temperatures in the upper photospheres by a couple of thousand K, with consequences for all lines formed in these layers (neutral species, molecules). Effects can be large in both dwarfs and giants and reach up to 1dex for molecules. Asplund (2005) gives a competent overview of the subject. Coupling of hydrodynamics and non-LTE is nowadays possible, at least for atoms of moderate complexity. Curiously, 1D–LTE and 3D–non-LTE of lithium practically coincide for stars on the Spite plateau (Barklem et al. 2003). This does, however, likely not hold when analysing cooler stars. So trends of lithium with effective temperature may well be affected by 3D–non-LTE. This should be explored further, as it may tell us how well we understand the structural changes in connection with the first dredge-up. Atomic diffusion ================ It has been speculated for decades that the surface abundances of metal-poor halo dwarfs are systematically affected by gravitational settling and radiative levitation (e.g. Michaud, Fontaine & Baudet 1984), collectively referred to as atomic diffusion. In this physical picture, helium and lithium would settle appreciably (with important effects for stellar ages), while elements like chlorine or potassium could be accumulated in the atmosphere to abundance levels above the composition of the gas from which the star once formed. These effects are predicted to be prominent whenever convection is weak. It is thus not surprising that metal-poor turn-off stars of spectral type F would be significantly affected (also from the point of view of their proximity to all the classes of chemically peculiar stars between early F and late B). More than 20 years of research into atomic diffusion in solar-type stars made it clear that uninhibited diffusion is unlikely to occur: its abundance effects would have been seen, and the Spite plateau of lithium would be distorted at the hot end. Extra mixing below the convective envelope was introduced as a moderating process, albeit without specifying the physical mechanism (Proffitt & Michaud 1991). A careful differential analysis of unevolved and evolved stars in the metal-poor globular cluster NGC 6397 (\[Fe/H\]=$-2.1$) revealed systematic trends of abundance with evolutionary stage (Korn et al. 2006, 2007). The abundance trends are not overwhelmingly large (up to 0.2dex), but there is excellent agreement of the element-specific amplitude of the trends with predictions of stellar-structure models with atomic diffusion. The efficiency of the postulated extra mixing is found to be close to the lowest efficiency required to keep the Spite plateau thin and flat (cf. Richard et al. 2005). The inferred initial lithium abundance is $\log \varepsilon$(Li)=2.54$\pm$0.1, in good agreement with WMAP-calibrated BBN predictions (cf. Steigman, these proceedings). Meléndez et al. (2010) took a fresh look at the Spite plateau among field stars comparing its morphology to atomic-diffusion predictions on a stellar-mass scale (rather than on the customary metallicity scale). 
They confirm that the Spite plateau is shaped by atomic diffusion and that stellar physics alone can account for the cosmological lithium-7 problem. We (Nordlander, Korn & Richard) are currently investigating whether or not there is a (hypothetical) effective-temperature scale which fully explains the cosmological lithium-7 problem, is compatible with the white-dwarf cooling age for NGC 6397 (11.5$\pm$0.5Gyr, Hansen et al. 2007) and describes the abundance trends at a high mixing efficiency à la Meléndez et al. (2010). Seemingly, such a scale does not exist. More work is needed here. Clusters at both lower and higher metallicities are being scrutinized for atomic-diffusion signatures. This is observationally challenging, as low-reddening metal-poor globular clusters tend to be further away than NGC 6397. In M 92, turn-off stars have $V\,\approx$18.3. One really needs a 10m telescope like Keck to take decent spectra of such stars. Figure 1 shows a preliminary analysis of stars in NGC 6752 at \[Fe/H\]=$-1.6$ taken with VLT/FLAMES-UVES. Systematic trends seem to exist between the groups of stars, but the overall amplitude is low, reaching 0.1dex for iron. The trends are thus compatible with predictions from stellar-structure models including atomic diffusion, radiative acceleration and extra mixing below the convective envelope, with rather efficient extra mixing. It is interesting to speculate what happens at lower metallicities. If the two data points constraining the turbulent-mixing efficiency (NGC 6752 @ $-1.6$ and NGC 6397 @ $-2.1$) constitute a trend, then one would expect relatively inefficient extra mixing in the most metal-poor globular clusters. This would lead to sizable abundance differences between dwarfs and giants, in excess of 0.3dex for certain elements like magnesium. This is currently being tested at the VLT (M 30, Lind et al.) and at Keck (M 92, Cohen et al.). At even lower metallicities, inefficient extra mixing could potentially explain the breakdown of the Spite plateau (Sbordone et al. 2010), as only models within a certain range of extra-mixing efficiencies can produce a thin and flat Spite plateau (see Richard et al. 2005). [***Tertium non datur?***]{} ============================ Indeed, there is more to stellar astrophysics than dwarfs and giants. In particular, subgiants deserve to be mentioned as a third and intermediate group of stars, as they seem to combine some of the best properties of both the less evolved and the more evolved objects: more luminous than dwarfs, but still rather high-gravity objects, not yet affected by the first dredge-up. The main showstopper has so far been the fact that they are relatively rare. But this will no longer be an obstacle in the era of all-sky surveys like Gaia. Subgiants really come into their own right when we talk stellar ages. A beautiful example is given by Bernkopf, Fiedler & Fuhrmann (2001): the few known thick-disk subgiants with significant HIPPARCOS parallaxes seem to be systematically older than comparable thin-disk stars, indicating a possible hiatus in star formation between thick- and thin-disk formation of several Gyr. In the end, the ambition has to be to reliably derive the chemical compositions of dwarfs and giants alike, across the full range of metallicities, with full account of hydrodynamics, non-equilibrium line formation, atomic diffusion and dredge-up. The ability to combine kinematical, chemical and age information for carefully selected subsets of stars ([*tertium datur*]{}: subgiants!) 
will undoubtedly propel us into the era of Precision Galactic Archaeology. I would like to thank Frank Grundahl and Olivier Richard who both provided crucial input data used in the preliminary analysis of dwarf-to-giant stars in NGC 6752. [99]{} R. Cayrel et al. 2004, *First stars V. Abundance patterns from C to Zn and supernova yields in the early Galaxy*, *A&A* [**416**]{} 1117 A.J. Korn et al. 2007, *Atomic Diffusion and Mixing in Old Stars. I. Very Large Telescope FLAMES-UVES Observations of Stars in NGC 6397*, *ApJ* [**671**]{} 402 J. Meléndez et al. 2009, *The Peculiar Solar Composition and Its Possible Relation to Planet Formation*, *ApJ* [**704**]{} L66 K. Fuhrmann 2004, *Nearby stars of the Galactic disk and halo. III.*, *AN* [**325**]{} 3 P.E. Nissen & W.J. Schuster 2010, *Two distinct halo populations in the solar neighborhood. Evidence from stellar abundance ratios and kinematics*, *A&A* [**511**]{} L10 R. Cayrel et al. 2001, *Measurement of stellar age from uranium decay*, *Nature* [**409**]{} 691 D. Yong et al. 2003, *Mg isotopic ratios in giant stars of the globular cluster NGC 6752*, *A&A* [**402**]{} 985 A.J. Korn 2008, *NLTE line formation*, in proceedings of *A Stellar Jouney*, *Physica Scripta* [**133**]{} 014009 A.J. Korn, J. Shi & T. Gehren 2003, *Kinetic equilibrium of iron in the atmospheres of cool stars. III. The ionization equilibrium of selected reference stars*, *A&A* [**407**]{} 691 L. Mashonkina, A.J. Korn & N. Przybilla 2007, *A non-LTE study of neutral and singly-ionized calcium in late-type stars*, *A&A* [**461**]{} 261 P.S. Barklem et al. 2010, *Inelastic Na+H collision data for non-LTE applications in stellar atmospheres*, *A&A* [**519**]{} 20 P. Bonifacio et al. 2009, *First stars XII. Abundances in extremely metal-poor turnoff stars, and comparison with the giants*, *A&A* [**501**]{} 519 M. Asplund 2005, *New Light on Stellar Abundance Analyses: Departures from LTE and Homogeneity*, *ARA&A* [**43**]{} 481 P.S. Barklem, A.K. Belyaev & M. Asplund 2003, *Inelastic H+Li and H$^-$+Li$^+$ collisions and non-LTE Li I line formation in stellar atmospheres*, *A&A* [**409**]{} L1 G. Michaud, G. Fontaine & G. Baudet 1984, *The lithium abundance – Constraints on stellar evolution*, *ApJ* [**282**]{} 206 C.R. Proffitt & G. Michaud 1991, *Gravitational settling in solar models*, *ApJ* [**380**]{} 238 A.J. Korn et al. 2006, *A probable stellar solution to the cosmological lithium discrepancy*, *Nature* [**442**]{} 657 O. Richard, G. Michaud & J. Richer 2005, *Implications of WMAP Observations on Li Abundance and Stellar Evolution Models*, *ApJ* [**619**]{} 538 J. Meléndez et al. 2010, *Observational evidence for a broken Li Spite plateau and mass-dependent Li depletion*, *A&A* [**515**]{} L3 B.M.S. Hansen et al. 2007, *The White Dwarf Cooling Sequence of NGC 6397*, *ApJ* [**671**]{} 380 L. Sbordone et al. 2010, *A&A*, to appear, arXiv:1003.4510 J. Bernkopf, A. Fiedler & K. Fuhrmann 2001, *The Dark Side of the Milky Way*, in proceedings of *Astrophysical Ages and Times Scales*, [*A*SP Conference Series]{} [**245**]{} (San Francisco) 207
--- author: - 'Justyna Ogorzały[^1]' title: '**<span style="font-variant:small-caps;">Quasistatic contact problem with unilateral constraint for elastic-viscoplastic materials</span>**' --- [**Abstract.**]{} This paper consists of two parts. In the first part we prove the unique solvability of an abstract variational-hemivariational inequality with a history-dependent operator. The proof is based on an existing result for the static variational-hemivariational inequality and a fixed-point argument. In the second part, we consider a mathematical model which describes quasistatic frictional contact between a deformable body and a rigid foundation. In the model the material behaviour is described by an elastic-viscoplastic constitutive law. The contact is described by a normal damped response, a unilateral constraint and a memory term. In the analysis of this model we use the abstract result from the first part of the paper.\ [**Key words:**]{} variational-hemivariational inequality, history-dependent operator, frictional contact, elastic-viscoplastic material, normal damped response\ [**Mathematics Subject Classification**]{} (2010): 34G20, 47J20, 47J22, 74M10, 74M15, 74H20, 74H25

Introduction
============

Many mechanical problems involve nonmonotone, multivalued relations between stresses and strains, between reactions and displacements, or between generalized forces and fluxes. These relations, expressed in terms of nonconvex superpotentials (cf. [@Pan1; @Pan2]), lead to hemivariational inequalities. Let us add that nonconvex superpotentials (cf. [@clarke]) generalize the notion of the convex superpotential introduced by Moreau [@MOR]. Convex superpotentials describe monotone, possibly multivalued mechanical laws and lead to variational inequalities. Variational-hemivariational inequalities were introduced by Panagiotopoulos and represent a special class of inequalities in which both convex and nonconvex functions occur. Inequalities of this type are a useful tool in the study of nonsmooth variational problems with constraints and boundary value problems with discontinuous nonlinearities. Results on variational-hemivariational inequalities and their applications can be found in the monographs, e.g. [@CLM; @GMDR; @GM; @NP; @Pan]. The aim of this paper is to study the existence and uniqueness of the solution of a variational-hemivariational inequality with a history-dependent operator and to apply the obtained result in the analysis of a quasistatic contact problem for elastic-viscoplastic materials. It should be noted that the existence and uniqueness result for the static variational-hemivariational inequality without a history-dependent operator was obtained by Migórski et al. in [@MOSVHV]. This paper generalizes the result from [@MOSVHV]. The first novelty in our work is that we consider the variational-hemivariational inequality defined on a bounded interval of time. The second novelty is related to the special structure of the variational-hemivariational inequality which we consider. Namely, our inequality contains convex and nonconvex functionals and, moreover, it contains a so-called history-dependent operator which, at any moment $t \in (0,T)$, depends on the history of the solution up to the moment $t$. Furthermore, we present an example of a contact problem which leads to a variational-hemivariational inequality with a history-dependent operator. The rest of the paper is structured as follows. Section 2 contains notation and definitions.
In Section 3 we consider the abstract problem and we prove it unique solvability. Finally, in Section 4 we apply the result obtained in Sections 3 in the analysis of the contact problem. Preliminary =========== We introduce the notation and we recall some preliminary material which will be used in the next parts of this paper. Let $V$ and $X$ are separable and reflexive Banach spaces with the duals $V^*$ and $X^*$, respectively, and $K \subset V$. We consider also the space $\v = L^2(0,T;V),$ where $0 < T < + \infty.$ Moreover, by $\mathcal{L}(V,X)$ we denote a space of linear and bounded operators with a Banach space $V$ with values in a Banach space $X$ with the norm $\Vert \cdot \Vert_{\mathcal{L}(V,X)}.$ The duality pairing between $X^*$ and $X$ is denoted by $\langle \cdot, \cdot \rangle_{X^* \times X},$ whereas the duality pairing between $\v^*$ and $\v$ is given by $ \langle u,v \rangle_{\v^* \times \v} = \int\limits_{0}^{T}\,\langle u(t), v(t) \rangle_{V^* \times V}\,dt$ for $u \in \v^*, v \in \v.$ If $X$ is a Hilbert space thus the inner product is denoted by $(\cdot,\cdot)_X.$ We use the following concepts of the generalized directional derivative, the Clarke subdifferential and the subgradient of a convex function. \[GENERAL\] The generalized directional derivative (in the sense of Clarke) of a locally Lipschitz function $\varphi \colon X \longrightarrow \mathbb{R}$ at the point $x \in X$ in the direction $v \in X,$ denoted by $\varphi^{0} (x;v)$ is defined by $$\varphi^{0} (x;v) = \limsup_{y \to x, \, \lambda \downarrow 0} \frac{\varphi(y + \lambda v) - \varphi (y)}{\lambda}.$$ \[DDEF2\] Let $\varphi \colon X \longrightarrow \mathbb{R}$ be a locally Lipschitz function. The Clarke generalized gradient (subdifferential) of $\varphi$ at $x \in X,$ denoted by $\partial \varphi (x),$ is the subset of a dual space $X^{*}$ defined by $ \partial \varphi (x) = \lbrace \zeta \in X^{*} \, \vert \, \varphi^{0} (x;v) \geqslant \langle \zeta, v \rangle_{X^{*} \times X} \ \mbox{for all} \ v \in X \rbrace . $ \[DDEF1\] Let $\varphi \colon X \to \R \cup \{ +\infty \}$ be a proper, convex and lower semicontinuous function. The subdifferential $\partial \varphi$ is generally a multivalued mapping $\partial \varphi \colon X \to 2^{X^*}$ defined by $ \partial \varphi (x) = \{ \, x^*\in X^* \mid \langle x^*, v -x \rangle_{X^* \times X} \leqslant \varphi(v)-\varphi(x) \ \mbox{for all} \ v \in X \, \} $ for $x \in X$. The elements of the set $\partial \varphi (x)$ are called subgradients of $\varphi$ in $x$. In this paper by $c$ we will denote a positive constant which can change from line to line. The following lemma is a consequence of the Banach contraction principle. \[lematkulig\] Let $X$ be a Banach space with a norm $\Vert \cdot \Vert_X$ and $T>0$. Let $\Lambda : L^2 (0,T;X) \longrightarrow L^2 (0,T;X)$ be an operator satisfying $ \Vert (\Lambda \eta_1)(t) - (\Lambda \eta_2)(t) \Vert^2_X \leqslant c \int\limits_{0}^{t} \Vert \eta_1(s) - \eta_2 (s) \Vert^2_X\,ds $ for every $\eta_1, \eta_2 \in L^2 (0,T;X),$ a.e. $t \in (0,T).$ Then $\Lambda$ has a unique fixed point in $L^2(0,T;X),$ i.e., there exists a unique $\eta^* \in L^2(0,T;X)$ such that $\Lambda \eta^* = \eta^*.$ Now, we recall the concept of the history-dependent operator. 
An operator $\s \colon \v \longrightarrow \v^*$ that satisfies the inequality $$\label{wS} \Vert (\s u_1)(t) - (\s u_2) (t) \Vert_{V^*} \leqslant L_{\s} \int\limits_{0}^{t} \Vert u_1 (s) - u_2 (s) \Vert_{V}\, ds$$ $ {\rm for}\ u_1, u_2 \in \v,\ {\rm for\ a.e.}\ t \in (0,T)\ {\rm with}\ L_{\s} >0, $ is called [*the history-dependent operator*]{}. The following property of the history-dependent operators will be used later. \[PwS\] Let $\s_1,\s_2 \colon \v \longrightarrow \v^*$ be the operators which satisfy , then the operator $\s \colon \v \longrightarrow \v^*$ given by $(\overline{\s}u)(t) = (\s_1 u)(t) + (\s_2 u)(t)$ for $u \in \v,$ satisfies . The proof is straightforward so we omit it. Finally, we present the result which concerns the existence and uniqueness of the solution of the static variational-hemivariational inequality. Consider the following abstract problem. \[probabst\] Find an element $u \in V$ such that $u \in K$ and $$\langle Au, v - u \rangle_{V^* \times V} + \varphi(u,v) - \varphi(u,u) + J^0(Mu;Mv - Mu) \geqslant \langle f, v -u \rangle_{V^* \times V} \ {\rm for\ all}\ v \in K.$$ We introduce the following hypotheses. $$\label{A0} \left. \begin{array}{l} A: V \longrightarrow V^*\ \mbox{is such that}\hspace{0.5cm}\\ \ \ {\rm (a)}\ A\ \mbox{is pseudomonotone}.\hspace{0.5cm}\\ \ \ {\rm (b)}\ A\ \mbox{is coercive, i.e., there exist}\ \alpha_A > 0, \beta ,\beta_1 \in \R\ \mbox{and}\ u_0 \in K\ \mbox{such that}\hspace{0.5cm}\\ \hspace{1cm}\langle Av, v-u_0 \rangle_{V^* \times V} \geqslant \alpha_A \Vert v \Vert^2_V - \beta \Vert v \Vert_V - \beta_1\ {\rm for\ all} \ v \in V.\hspace{0.5cm}\\ \ \ {\rm (c)}\ A\ \mbox{is strongly monotone, i.e., there exists}\ m_A > 0\ \mbox{such that}\hspace{0.5cm}\\ \hspace{1cm} \langle A v_1 - A v_2, v_1 - v_2 \rangle_{V^* \times V} \geqslant m_A \Vert v_1 - v_2 \Vert^2_V \ \mbox{for all}\ v_1, v_2 \in V.\hspace{0.5cm} \end{array} \right\}$$ $$\begin{aligned} \label{fi0} \left. \begin{array}{l} \varphi: K \times K \longrightarrow \R\ \mbox{is such that}\\ \ \ {\rm (a)}\ \varphi (u, \cdot): K \longrightarrow \R \ \mbox{is convex and lower semicontinuous on}\ K,\ \mbox{for all}\ u \in K.\\ \ \ {\rm (b)}\ \mbox{there exists}\ \alpha_{\varphi} > 0\ \mbox{such that}\\ \hspace{1cm} \varphi (u_1, v_2) - \varphi (u_1, v_1) + \varphi (u_2, v_1) - \varphi (u_2, v_2) \leqslant \alpha_{\varphi} \Vert u_1 - u_2 \Vert_V \Vert v_1-v_2 \Vert_V\\ \hspace{1cm} \mbox{for all}\ u_1, u_2, v_1, v_2 \in K. \end{array} \right\}\end{aligned}$$ $\\$ $$\begin{aligned} \label{j0} \left. \begin{array}{l} J\colon X \longrightarrow \R\ \mbox{is such that}\\ \ \ {\rm (a)}\ J\ \mbox{is locally Lipschitz}.\hspace{1.4cm}\\ \ \ {\rm (b)}\ \Vert \partial J(v) \Vert_{X^*} \leqslant c_0 + c_1\,\Vert v \Vert_X \ {\rm for\ all}\ v \in X\ \mbox{with}\ c_0, c_1 \geqslant 0.\hspace{1.4cm}\\ \ \ {\rm(c)}\ \mbox{there exists}\ \alpha_J > 0\ \mbox{such that}\\ \hspace{1cm} J^0(v_1;v_2-v_1) + J^0(v_2;v_1-v_2) \leqslant \alpha_J \Vert v_1-v_2 \Vert_X^2\ \mbox{for all}\ v_1, v_2 \in X.\hspace{1.4cm} \end{array} \right\}\end{aligned}$$ $$\label{opm} M \colon V \longrightarrow X \ \mbox{is a linear, continuous and compact operator.}\qquad\qquad\qquad\qquad\qquad$$ $$\label{k0} K \ \mbox{is a nonempty, closed and convex subset of}\ V. \qquad \qquad \qquad \qquad \qquad \qquad \qquad$$ $$\label{f0} f \in V^*.$$ \[RELAXED\] Hypothesis (\[j0\])(c) is used in the proof of the uniqueness of solution to hemivariational inequalities. 
This hypothesis is equivalent to the following condition $$\begin{aligned} \label{REQUIV} \langle z_1 - z_2, v_1-v_2 \rangle_{X^* \times X} \geqslant - \alpha_J \,\Vert v_1 - v_2 \Vert_X^2\end{aligned}$$ for all $z_i \in \partial J (v_i), z_i,v_i \in X, i=1,2$ with $\alpha_J > 0$. This condition is called [*the relaxed monotonicity condition*]{} for a locally Lipschitz function $J$. It can be proved that for a convex function, condition (\[j0\])(c), or equivalently , holds with $\alpha_J = 0.$ \[twabst\] Under hypotheses – and $$\label{w0} m_A > \alpha_{\varphi} + \alpha_J\,\Vert M \Vert^2,\ \ \alpha_A > 2\,\alpha_J \,\Vert M \Vert^2$$ Problem \[probabst\] has a unique solution $u \in V.$ The proof of Theorem \[twabst\] is similar to the proof of Theorem 16 in [@MOSVHV]. History-dependent variational-hemivariational inequality {#s31} ======================================================== In this section, we study an abstract variational-hemivariational inequality which contains a history-dependent operator. We start with the time-dependent version of Problem \[probabst\]. To this end, we consider the operators $A\colon (0,T)\times V \longrightarrow V^*,\ M\colon V \longrightarrow X,$ the functional $J\colon (0,T)\times X \longrightarrow \R,$ the functions $\varphi\colon K \times K \longrightarrow \R$ and $f\colon (0,T) \longrightarrow V^*.$ With these data we deal with the following variational-hemivariational inequality in which the time variable plays the role of parameter. \[probabst1\] Find $u \in \v$ such that $u(t) \in K$ and $$\begin{aligned} \label{vhv} \begin{split} \langle A(t, u(t)), v -u(t)\rangle_{V^* \times V} &+ \varphi (u(t),v) - \varphi (u(t), u(t))\\ &+ J^0 (t, Mu(t);M(v-u(t))) \geqslant \langle f(t), v-u(t) \rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t \in (0,T).$ In the study of Problem \[probabst1\], we assume that the assumptions , and hold. Moreover, we need the following assumptions on the data. $$\begin{aligned} \label{wA} \left. \begin{array}{l} A\colon (0,T) \times V \longrightarrow V^*\ \mbox{is such that}\\ \ \ {\rm (a)}\ A(\cdot,v)\ \mbox{is measurable on}\ (0,T)\ \mbox{for all}\ v \in V.\\ \ \ {\rm (b)}\ A(t,\cdot)\ \mbox{is strongly monotone, i.e., there exists}\ m_A > 0\ \mbox{such that}\\ \hspace{1cm} \langle A (t, v_1) - A (t, v_2), v_1 - v_2 \rangle_{V^* \times V} \geqslant m_A \Vert v_1 - v_2 \Vert^2_V \\ \hspace{1cm} \mbox{for all}\ v_1, v_2 \in V\ \mbox{and a.e.}\ t \in (0, T). \\ \ \ {\rm (c)}\ A(t,\cdot)\ \mbox{is continuous on}\ V\ \mbox{for a.e.}\ t \in (0,T).\\ \ \ {\rm (d)}\ \Vert A(t, v) \Vert_{V^*} \leqslant a_0 (t) + a_1 \Vert v \Vert_V\ \mbox{for all}\ v \in V, \ \mbox{a.e.}\ t \in (0,T)\\ \hspace{1cm} \mbox{with}\ a_0 \in L^2 (0, T), a_0 \geqslant 0\ \mbox{and}\ a_1 >0.\\ \ \ {\rm (e)}\ A(t,\cdot)\ \mbox{is coercive, i.e., there exists}\ \alpha_A > 0,\ \beta \in \R,\ \beta_1(t) \in L^2(0,T)\\ \hspace{1cm} \mbox{and}\ u_0 \in K\ \mbox{such that}\ \langle A(t,v),v - u_0 \rangle_{V^* \times V} \geqslant \alpha_A\,\Vert v \Vert^2_V - \beta \Vert v \Vert_V - \beta_1(t)\\ \hspace{1cm} {\rm for\ all}\ v \in V \ {\rm a.e.} \ t \in (0,T). \end{array} \right\}\end{aligned}$$ $$\begin{aligned} \label{wJ} \left. 
\begin{array}{l} J\colon (0,T) \times X \rightarrow \mathbb{R}\ \mbox{is such that}\hspace{1.5cm}\\ \ \ {\rm (a)}\ J(\cdot,v)\ \mbox{is measurable on}\ (0,T)\ \mbox{for all}\ v \in X.\hspace{1.5cm}\\ \ \ {\rm (b)}\ J (t, \cdot)\ \mbox{is locally Lipschitz on}\ X\ \mbox{for a.e.}\ t \in (0,T).\hspace{1.5cm}\\ \ \ {\rm (c)}\ \Vert \partial J (t,v) \Vert_{X^*} \leqslant c_0(t) + c_1 \Vert v \Vert_X\ \mbox{for all}\ v \in X,\ \mbox{a.e.}\ t \in (0,T)\ \mbox{with}\\ \hspace{1cm} c_0 \in L^2(0,T),\; c_0,c_1 \geqslant 0.\hspace{1.5cm}\\ \ \ {\rm (d)}\ J(t, \cdot)\ \mbox{or}\ - J (t, \cdot)\ \mbox{is regular (in the sense of Clarke) on}\ X\ \mbox{ for}\hspace{1.5cm}\\ \hspace{1cm} \mbox{a.e.}\ t \in (0,T).\hspace{1.5cm}\\ \ \ {\rm (e)}\ \mbox{there exists}\ m_J > 0\ \mbox{such that} \hspace{1.5cm}\\ \hspace{1cm} J^0 (t, v_1; v_2-v_1) + J^0(t,v_2;v_1-v_2) \leqslant m_J \Vert v_1-v_2 \Vert^2_{X}\hspace{1.5cm}\\ \hspace{1cm} \mbox{for all}\ v_1, v_2 \in X\ \mbox{and a.e.}\ t \in (0,T).\hspace{1.5cm} \end{array} \right\}\end{aligned}$$ Moreover, we assume that $$\begin{aligned} \label{WF} \left. \begin{array}{l} \ \ {\rm (a)}\ f \in \v^*. \hspace{0.4cm}\\ \ \ {\rm (b)}\ m_A >\alpha_{\varphi}+ m_J \Vert M \Vert^2,\ \ \alpha_A > 2\,m_J\,\Vert M \Vert^2,\ \ \mbox{where}\ \ \Vert M \Vert = \Vert M \Vert_{\mathcal{L} (V,X)}.\hspace{0.4cm} \end{array} \right\}\end{aligned}$$ We have the following existence and uniqueness result. \[twierdzenie1\] Under the assumptions , , and –, Problem \[probabst1\] has a unique solution $u \in \v$. We use Theorem \[twabst\] for $t \in (0,T)$ fixed. Note that, from the hypothesis , it follows that the operator $A(t, \cdot)$ satisfies for a.e.$t \in (0,T).$ From (b),(c),(d) we observe, that $A$ is monotone and hemicontinuous and bounded. Hence and from Theorem 3.69 in [@MOSBOOK], we know that the operator $A(t, \cdot)$ is pseudomonotone, so the condition (a) holds for a.e. $t \in (0,T)$. Moreover, for a.e. $t \in (0,T),$ the condition (e) implies (b). We also see, that from the hypothesis and , it follows that the function $ J(t, \cdot)$ satisfies for a.e. $t \in (0,T).$ Note that, the assumption (b) implies the assumption with $\alpha_J = m_J$. Hence, exploiting Theorem \[twabst\], we deduce that, for a.e. $t \in (0,T),$ Problem \[probabst1\] has a unique solution $u(t) \in K.$ Now, we prove that the function $t \longmapsto u(t)$ is measurable on $(0,T).$ Let $g \in V^*$ be given and $u(t)\in V$ be the unique solution of the inequality . We claim that the solution $u$ depends continuously on the right-hand side $g$, for a.e. $t \in (0,T).$ Namely, let $g_1, g_2 \in V^*$ and $u_1(t), u_2(t) \in K$ be the corresponding solutions to . Then $$\begin{aligned} \label{ab} \begin{split} \langle A(t, &u_1(t)), v - u_1(t) \rangle_{V^* \times V} + \varphi(u_1(t),v) - \varphi(u_1(t),u_1(t))\\ &+ J^0 (t, M u_1(t);M(v-u_1(t))) \geqslant \langle g_1, v - u_1(t) \rangle_{V^* \times V} \end{split}\end{aligned}$$ and $$\begin{aligned} \label{cd} \begin{split} \langle A(t, &u_2(t)), v - u_2(t) \rangle_{V^* \times V} + \varphi(u_2(t),v) - \varphi(u_2(t),u_2(t))\\ &+ J^0 (t, M u_2(t);M(v-u_2(t))) \geqslant \langle g_2, v - u_2(t) \rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t\in (0,T).$ We put $v = u_2(t)$ into and $v = u_1(t)$ into . 
Adding the obtained inequalities, we get $$\begin{aligned} \begin{split} \langle A(t, &u_1(t)) - A(t, u_2(t)), u_1(t)-u_2(t) \rangle_{V^* \times V} - \big( \varphi(u_1(t),u_2(t)) - \varphi(u_1(t),u_1(t))\\ &+ \varphi(u_2(t),u_1(t)) - \varphi(u_2(t),u_2(t))\big) -\big( J^0(t,Mu_1(t);M(u_2(t)-u_1(t)))\\ &+ J^0(t,Mu_2(t);M(u_1(t)-u_2(t)))\big) \leqslant \langle g_1-g_2,u_1(t)-u_2(t) \rangle_{V^* \times V}. \end{split}\end{aligned}$$ From this, conditions (b), (b) and (e), we have $$\begin{aligned} \begin{split} m_A \Vert u_1(t) -u_2(t) \Vert_V^2 - \alpha_{\varphi}\Vert u_1(t) -u_2(t) \Vert_V^2 &-m_J\Vert M \Vert^2\Vert u_1(t) -u_2(t) \Vert_V^2\\ \leqslant &\Vert g_1 - g_2\Vert_{V^*}\Vert u_1(t) - u_2(t) \Vert_V. \end{split}\end{aligned}$$ Exploiting (b), we deduce that $$\label{oszacu} \Vert u_1(t) - u_2(t) \Vert_V \leqslant c\,\Vert g_1 - g_2 \Vert_{V^*} \quad {\rm for\ a.e. }\ t \in (0,T).$$ Hence, we conclude that the mapping $\psi \colon V^* \ni g \longmapsto u(t) \in V$ is continuous for a.e. $t \in (0,T),$ which proves the claim. By the condition (a) we know that the function $f\colon [0,T] \longrightarrow V^*$ is measurable. From Lemma 2.27(iii) in [@MOSBOOK], we have that $\psi \circ f \colon [0,T] \longrightarrow V$ is measurable. So, the solution $u(t)$ of Problem \[probabst1\] is measurable on $(0,T).$ Next, we prove that the solution of Problem \[probabst1\] satisfies $u \in \v.$ Let $v_0 \in K.$ Thus, from the inequality , we get $$\begin{aligned} \label{WZOR} \begin{split} \langle A(t, &u(t)) - A(t,v_0), v_0 -u(t)\rangle_{V^* \times V} \leqslant \langle A(t,v_0), v_0 - u(t) \rangle_{V^* \times V} + \varphi (u(t),v_0)\\ &- \varphi (u(t), u(t)) + J^0 (t, Mu(t);M(v_0 -u(t))) + \langle f(t), v_0-u(t) \rangle_{V^* \times V}. \end{split}\end{aligned}$$ Now, we show the estimations which are needed in the next part of proof. Choosing $u_1 = u(t), u_2 = v_0, v_1 = u(t), v_2= v_0$ in (b), we obtain $$\begin{aligned} \varphi (u(t), v_0) - \varphi(u(t),u(t)) + \varphi(v_0, u(t)) - \varphi(v_0, v_0) \leqslant \alpha_{\varphi}\,\Vert u(t) - v_0 \Vert^2_V \end{aligned}$$ and $$\begin{aligned} \label{oszfi*} \varphi (u(t), v_0) - \varphi(u(t),u(t)) \leqslant - \varphi(v_0, u(t)) + \varphi(v_0, v_0) + \alpha_{\varphi}\,\Vert u(t) - v_0 \Vert^2_V.\end{aligned}$$ Since $\varphi(u, \cdot)$ is convex and lower semicontinuous for $u \in K$, it admits an affine minorant (cf. Proposition 5.2.25 in [@DMP1]), i.e., there are $l_{v_0} \in V^*$ and $b_{v_0} \in \R$ such that $\varphi(v_0, v) \geqslant \langle l_{v_0},v \rangle_{V^* \times V} + b_{v_0}$ for all $v \in V$. Using this inequality, we deduce that $- \varphi (v_0, u) \leqslant \Vert l_{v_0} \Vert_{V^*}\;\Vert u \Vert_V - b_{v_0}$ for all $u \in V,$ so $$\begin{aligned} \label{oszfi} \begin{split} \varphi(v_0, v_0) &- \varphi (v_0, u(t)) \leqslant \Vert l_{v_0} \Vert_{V^*}\;\Vert u(t) \Vert_V - b_{v_0} + \varphi(v_0, v_0) \leqslant\\ &\Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 - u(t) \Vert_V + \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 \Vert_V + \vert b_{v_0} \vert + \vert \varphi(v_0, v_0) \vert. \end{split}\end{aligned}$$ Combining and , we conclude that $$\begin{aligned} \label{oszfiK} \begin{split} \varphi (u(t), v_0) &- \varphi(u(t),u(t)) \leqslant \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 - u(t) \Vert_V + \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 \Vert_V + \vert b_{v_0} \vert\\ &+ \vert \varphi(v_0, v_0)\vert + \alpha_{\varphi}\,\Vert u(t) - v_0 \Vert^2_V. 
\end{split}\end{aligned}$$ On the other hand, from Proposition 3.23(iii) in [@MOSBOOK], the Cauchy-Schwartz inequality and the condition (c), we obtain $$\begin{aligned} \label{JOTY} \begin{split} J^0 (t, Mu(t);M(v_0 -u(t))) &= \mbox{max} \lbrace \langle \zeta(t), M(v_0 - u(t))\rangle_{X^* \times X} \ \vert \ \zeta(t) \in \partial J(t, Mu(t)) \rbrace\\ &\leqslant \Vert \partial J(t, Mu(t)) \Vert_{X^*}\;\Vert M(v_0 - u(t)) \Vert_X\\ &\leqslant (c_0(t) + c_1 \Vert M \Vert\,\Vert u(t) \Vert_V)\Vert M \Vert\,\Vert v_0 - u(t) \Vert_V. \end{split}\end{aligned}$$ Using conditions (b),(d) and estimates , into the inequality , we see that $$\begin{aligned} \begin{split} (m_A - \alpha_{\varphi})\,\Vert &v_0 - u(t) \Vert^2_V \leqslant (a_0(t) + a_1\,\Vert v_0 \Vert_V)\,\Vert v_0 - u(t) \Vert_V + \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 - u(t) \Vert_V\\ &+ \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 \Vert_V + \vert b_{v_0} \vert + \vert \varphi(v_0, v_0)\vert + (c_0(t) + c_1 \Vert M \Vert\,\Vert u(t) \Vert_V)\Vert M \Vert\,\Vert v_0 - u(t) \Vert_V\\ &+ \Vert f(t)\Vert_{V^*}\,\Vert v_0 - u(t)\Vert_V. \end{split}\end{aligned}$$ Hence, from the condition (b) and the elementary property, namely, $x^2 \leqslant ax + b$ imply $x^2 \leqslant a^2 + b$ for $x,a,b \geqslant 0,$ we have $$\begin{aligned} \begin{split} \Vert v_0 - u(t) \Vert^2_V &\leqslant c^2\,(a_0(t) + a_1\,\Vert v_0 \Vert_V + \Vert l_{v_0} \Vert_{V^*} + \Vert M \Vert\,c_0(t) + c_1 \Vert M \Vert^2\,\Vert u(t) \Vert_V + \Vert f(t)\Vert_{V^*})^2\\ &+ \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 \Vert_V + \vert b_{v_0} \vert + \vert \varphi(v_0, v_0)\vert. \end{split}\end{aligned}$$ From this and inequality $\Vert u(t) \Vert^2 \leqslant 2\,\Vert u(t) - v_0\Vert_V^2 + 2\,\Vert v_0 \Vert_V^2,$ we conclude that $$\begin{aligned} \begin{split} \Vert u(t) \Vert^2_V &\leqslant 2\,[c^2\,(a_0(t) + a_1\,\Vert v_0 \Vert_V + \Vert l_{v_0} \Vert_{V^*} + \Vert M \Vert\,c_0(t) + c_1 \Vert M \Vert^2\,\Vert u(t) \Vert_V + \Vert f(t)\Vert_{V^*})^2\\ &+ \Vert l_{v_0} \Vert_{V^*}\;\Vert v_0 \Vert_V + \vert b_{v_0} \vert + \vert \varphi(v_0, v_0)\vert] + 2\,\Vert v_0 \Vert_V^2. \end{split}\end{aligned}$$ Thus, the inequality $\big(\sum\limits_{i=1}^{m}\,a_i \big)^2 \leqslant m\, \sum\limits_{i=1}^{m}\,a_i^2$ for $a_i \geqslant 0$ implies that $$\Vert u(t) \Vert_V^2 \leqslant c_1^2\,(a_0^2(t) + c_0^2(t) + \Vert f(t)\Vert_{V^*}^2 + c_2^2) + c_3,$$ where $c_1, c_2, c_3 \geqslant 0$ are constants. Integrating the last inequality over the interval $(0,T)$, we deduce that $\Vert u \Vert_{\v} \leqslant c$. Hence and the fact that $f \in \v^*$, we deduce that $u \in \v.$ The proof is finished. In the next problem, in contrast to Problem \[probabst1\], the convex function $\tilde \varphi$ depends on the three arguments which follows directly from the application (cf. Section \[s41\]). \[problem1a\] Find $u\in \v$ such that $u(t) \in K$ and $$\begin{aligned} \label{var-hemi} \begin{split} \langle A(t, u(t)), v -u(t)\rangle_{V^* \times V} &+ \tilde \varphi ((\s u)(t), u(t),v) - \tilde \varphi ((\s u)(t), u(t), u(t))\\ &+ J^0 (t, Mu(t);M(v-u(t))) \geqslant \langle f(t), v-u(t) \rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t \in (0,T).$ The inequality represents a [*variational-hemivariational inequality with history-dependent operator*]{}. As before, we assume that the operators $\s, A$ and the functions $ J, f$ satisfy conditions , , and (a), respectively. Additionally, we assume that the function $$\begin{aligned} \label{fi01} \left. 
\begin{array}{l} \tilde \varphi \colon V^* \times K \times K\longrightarrow \R\ \mbox{is such that}\\ \ \ {\rm (a)}\ \tilde \varphi (w,u, \cdot)\colon K \longrightarrow \R\ \mbox{is convex and lower semicontinuous on}\ K,\ \mbox{for all}\\ \hspace{1cm} w \in V^*,\ u \in K.\\ \ \ {\rm (b)}\ \mbox{there exists}\ \alpha_{\tilde \varphi} > 0\ \mbox{such that}\\ \hspace{1cm} \tilde \varphi (w_1, u_1, v_2) - \tilde \varphi (w_1,u_1, v_1) + \tilde \varphi (w_2,u_2, v_1) - \tilde \varphi (w_2,u_2, v_2)\\ \hspace{1cm} \leqslant \alpha_{\tilde \varphi} (\Vert u_1 - u_2 \Vert_V + \Vert w_1 - w_2 \Vert_V)\Vert v_1-v_2 \Vert_V\ \mbox{for all}\ w_1, w_2 \in V^*,\\ \hspace{1cm} u_1, u_2, v_1, v_2 \in K. \end{array} \right\}\end{aligned}$$ \[twierdzenie11\] Under the assumptions , – and , Problem \[problem1a\] has a unique solution $u \in \v$. Let $\eta \in \v^*$ be fixed and we consider the following auxiliary problem. \[problem2a\] Find $u_{\eta}\in \v$ such that $u_{\eta}(t) \in K$ and $$\begin{aligned} \begin{split} \langle A(t, u_\eta(t)), v -u_\eta(t)\rangle_{V^* \times V} &+ \tilde \varphi (\eta(t),u_\eta(t),v) - \tilde \varphi (\eta(t),u_\eta(t), u_\eta(t)) \\ &+ J^0 (t, Mu_\eta(t);M(v-u_\eta(t))) \geqslant \langle f(t), v-u_{\eta}(t) \rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t \in (0,T).$ Let $\phi_\eta \colon K \times K \longrightarrow \R$ be defined by $\phi_\eta(w,v)= \tilde \varphi (\eta(t), w, v)$ for all $w,v \in K$ and for a.e. $t \in (0,T)$. We show that function $\phi_\eta$ satisfies . It is easy to see that function $\phi_\eta (u,\cdot)$ satisfies (a) for a.e.$t \in (0,T)$ and for all $u \in K$. Moreover, using (b), we infer that $$\begin{aligned} \begin{split} &\phi_\eta(u_1,v_2) - \phi_\eta(u_1,v_1) + \phi_\eta(u_2,v_1) - \phi_\eta(u_2,v_2) = \tilde \varphi (\eta(t),u_1,v_2)\\ &- \tilde \varphi (\eta(t),u_1,v_1) + \tilde \varphi (\eta(t),u_2,v_1) - \tilde \varphi (\eta(t),u_2,v_2) \leqslant \alpha_{\tilde \varphi}\,\Vert u_1 - u_2 \Vert_V \Vert v_1 - v_2 \Vert_V \end{split}\end{aligned}$$ ${\rm for\ all}\ u_1, u_2, v_1, v_2 \in K,$ and a.e. $t \in (0,T).$ So, the condition (b) holds with $\alpha_{\varphi} = \alpha_{\tilde \varphi}.$ Hence and from Theorem \[twierdzenie1\], we deduce that Problem \[problem2a\] has the unique solution $u_{\eta} \in \v$. Next, we define the operator $\Lambda: \v^* \longrightarrow \v^*$ by $ \Lambda \eta = \s u_{\eta} \ {\rm for\ all} \ \eta \in \v^*, $ where $u_{\eta} \in \v$ is the solution to Problem \[problem2a\]. \[lematlambda\] The operator $\Lambda$ has a unique fixed point $\eta^* \in \v^*$. Let $\eta_1, \eta_2 \in \v^*,\ t \in (0,T)$ and let $u_i= u_{\eta_i} \in \v$ for $i=1,2,$ be the corresponding solutions to Problem \[problem2a\]. We put into the inequality in Problem \[problem2a\], $v=u_2(t) - u_1(t)$ and $v= u_1(t) - u_2(t),$ respectively. Thus, $$\begin{aligned} \begin{split} \langle A(t, u_1(t)), u_2(t) -u_1(t)\rangle_{V^* \times V} &+ \tilde \varphi (\eta_1(t),u_1(t),u_2(t)) - \tilde \varphi (\eta_1(t),u_1(t), u_1(t)) \\ &+ J^0 (t, Mu_1(t);M(u_2(t)-u_1(t))) \geqslant \langle f(t), u_2(t)-u_1(t) \rangle_{V^* \times V} \end{split}\end{aligned}$$ and $$\begin{aligned} \begin{split} \langle A(t, u_2(t)), u_1(t) -u_2(t)\rangle_{V^* \times V} &+ \tilde \varphi (\eta_2(t),u_2(t),u_1(t)) - \tilde \varphi (\eta_2(t),u_2(t), u_2(t)) \\ &+ J^0 (t, Mu_2(t);M(u_1(t)-u_2(t))) \geqslant \langle f(t), u_1(t)-u_2(t) \rangle_{V^* \times V}. 
\end{split}\end{aligned}$$ Adding obtained inequalities, we have $$\begin{aligned} \begin{split} &\langle A(t, u_1(t)) - A(t, u_2(t)), u_1(t) - u_2(t) \rangle_{V^* \times V} - \Big( J^0 (t, Mu_1(t);M(u_2(t)-u_1(t)))\\ &+ J^0 (t, Mu_2(t);M(u_1(t)-u_2(t)))\Big) \leqslant \tilde \varphi(\eta_1(t), u_1(t),u_2(t)) - \tilde \varphi(\eta_1(t),u_1(t),u_1(t))\\ &+ \tilde \varphi(\eta_2(t),u_2(t),u_1(t)) - \tilde \varphi(\eta_2(t),u_2(t),u_2(t)). \end{split}\end{aligned}$$ Using (b), (e) and (b), we get $$\begin{aligned} \begin{split} m_A\,\Vert u_1(t) - u_2(t) \Vert^2_V &- \Big( \alpha_{\tilde \varphi}\, \Vert u_1(t) - u_2(t) \Vert^2_V + m_J\Vert M \Vert^2\,\Vert u_1(t) - u_2(t) \Vert^2_V \Big)\\ &\leqslant \alpha_{\tilde \varphi}\Vert \eta_1(t) - \eta_2(t) \Vert_{V^*} \Vert u_1(t) - u_2(t) \Vert_{V}. \end{split}\end{aligned}$$ Hence, by the condition (b) with $\alpha_{\varphi} = \alpha_{\tilde \varphi}$, we obtain $$\Vert u_1(t) - u_2(t) \Vert_V \leqslant c\, \Vert \eta_1(t) - \eta_2(t) \Vert_{V^*}$$ which together with the inequality (cf. ) $$\Vert (\Lambda \eta_1)(t) - (\Lambda \eta_2)(t) \Vert_{V^*} = \Vert (\s u_1)(t) - (\s u_2)(t) \Vert_{V^*} \leqslant L_{\s}\, \int\limits_{0}^{t}\, \Vert u_1(s) - u_2(s) \Vert_{V}\, ds$$ imply that $ \Vert (\Lambda \eta_1)(t) - (\Lambda \eta_2)(t) \Vert_{V^*}\leqslant cL_{\s} \, \int\limits_{0}^{t}\, \Vert \eta_1(s) - \eta_2(s) \Vert_{V^*}\, ds. $ From the last inequality and the Hölder inequality, we conclude that $$\Vert (\Lambda \eta_1)(t) - (\Lambda \eta_2)(t) \Vert_{V^*}^2\leqslant c\, \int\limits_{0}^{t}\, \Vert \eta_1(s) - \eta_2(s) \Vert_{V^*}^2\, ds \ \ {\rm for\ a.e.}\;t \in (0,T).$$ Applying Lemma \[lematkulig\], we deduce that there exists a unique $\eta^* \in \v^*$ such that $\Lambda \eta^* = \eta^*,$ which concludes the proof of the claim. Now, we continue the proof of Theorem \[twierdzenie11\].\ [**Existence.**]{} Let $\eta^* \in \v^*$ be the fixed point of the operator $\Lambda$ (cf. Claim \[lematlambda\]). We put $\eta = \eta^*$ in Problem \[problem2a\] and since $\eta^* = \Lambda \eta^* = \s u_{\eta^*}$, we see that $u_{\eta^*} \in \v$ is a solution to Problem \[problem1a\].\ [**Uniqueness.**]{} Here, we use the Gronwall-type argument. Let $u_1, u_2 \in \v$ be solutions to Problem \[problem1a\] and $t \in (0,T).$ Then, proceeding similarly as in the proof of Theorem \[twierdzenie1\], we see that $$\begin{aligned} \begin{split} \langle A(t, &u_1(t)) - A(t, u_2(t)), u_1(t) -u_2(t)\rangle_{V^* \times V} -\Big( J^0 (t, Mu_1(t);M(u_2(t)-u_1(t)))\\ &+ J^0 (t, Mu_2(t);M(u_1(t)-u_2(t))) \Big) \leqslant \tilde \varphi ((\s u_1)(t),u_1(t),u_2(t)) - \tilde \varphi ((\s u_1)(t),u_1(t), u_1(t))\\ &+ \tilde \varphi ((\s u_2)(t),u_2(t),u_1(t)) - \tilde \varphi ((\s u_2)(t),u_2(t), u_2(t)) . \end{split}\end{aligned}$$ Using conditions (b), (e) and (b), we get $$\begin{aligned} \begin{split} m_A\,\Vert u_1(t) - u_2(t) \Vert^2_{V} &- (\alpha_{\tilde \varphi} + m_J\Vert M \Vert^2) \Vert u_1(t) - u_2(t) \Vert^2_V \\ &\leqslant \alpha_{\tilde \varphi}\,\Vert (\s u_1)(t) - (\s u_2)(t) \Vert_{V^*} \Vert u_1(t) - u_2(t) \Vert_V. \end{split}\end{aligned}$$ Next, from and (b), we have $ \Vert u_1(t) - u_2(t) \Vert_V \leqslant c\, \int\limits_{0}^{t}\, \Vert u_1(s) - u_2(s) \Vert_V\,ds$ for a.e. $t \in (0,T). $ Using the Gronwall inequality, we obtain $ \Vert u_1(t) - u_2(t) \Vert_V = 0$ for a.e. $ t \in (0,T)$, which implies that $u_1(t) = u_2(t)$ for a.e. $t \in (0,T).$ The proof of the theorem is complete. 
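For illustration (this example is mine and is not part of the original argument), a typical operator satisfying the history-dependence condition of Section 2 is the Volterra operator
$$(\s u)(t) = \int\limits_{0}^{t} B(t-s)\,u(s)\,ds \quad \mbox{for}\ u \in \v,\ \mbox{a.e.}\ t \in (0,T),$$
where $B \in L^{\infty}(0,T;\mathcal{L}(V,V^*))$. Indeed,
$$\Vert (\s u_1)(t) - (\s u_2)(t) \Vert_{V^*} \leqslant \Vert B \Vert_{L^{\infty}(0,T;\mathcal{L}(V,V^*))} \int\limits_{0}^{t} \Vert u_1(s) - u_2(s) \Vert_{V}\,ds,$$
so the condition holds with $L_{\s} = \Vert B \Vert_{L^{\infty}(0,T;\mathcal{L}(V,V^*))}$. The memory term appearing in the contact condition of the next section is of a similar Volterra form.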
Quasistatic elastic-viscoplastic contact problem with normal damped response, unilateral constraint and memory term {#s41} =================================================================================================================== In this section we use the results obtained in Section \[s31\] into the study the elastic-viscoplastic contact problem with normal damped response, unilateral constraint and memory term. The physical setting is as follows. An elastic-viscoplastic body occupies a bounded domain $\Omega \subset \R^d$, where $d=2,3$ in applications. The boundary $\Gamma$ of the domain $\Omega$ is Lipschitz continuous and it is partitioned into three disjoint measurable parts $\Gamma_1, \Gamma_2$ and $\Gamma_3$ with $\mbox{meas}(\Gamma_1) > 0.$ The body is subject to the action of body forces of density $f_0$ and surface tractions of density $f_2$ which act on $\Gamma_2.$ We assume that the body is clamped on $\Gamma_1$ and it is in contact on $\Gamma_3$ with a rigid foundation. Furthermore the mechanical process is quasistatic and we study it in the time interval $[0,T]$ with $T>0.$ We use the notation $\R^d$ and $\es^d$ for the $d-$dimensional real linear space and the space of second order symmetric tensors on $\R^d$, respectively, which are equipped with the following canonical inner products and norms $$u \cdot v = u_{i} v_{i}, \quad \Vert v \Vert_{\mathbb{R}^{d}} = (v \cdot v)^{\frac{1}{2}} \quad \mathrm{for \, all} \quad u= (u_{i}), \, v = (v_{i}) \in \mathbb{R}^{d},$$ $$\sigma : \tau = \sigma_{ij} \tau_{ij}, \quad \Vert \tau \Vert_{\mathbb{S}^{d}} = (\tau : \tau)^{\frac{1}{2}} \quad \mathrm{for \, all} \quad \sigma= (\sigma_{ij}), \, \tau = (\tau_{ij}) \in \mathbb{S}^{d},$$ where the indices $i$ and $j$ run between $1$ and $d$. Let us add, that the summation convention over repeated indices is used. Let $u' = \frac{\partial u}{\partial t}$ represent the velocity field and let ${\rm Div} \sigma = (\sigma_{ij,j})$ be the divergence operator. We use the standard notation for the Lebesgue and Sobolev spaces and we introduce the following Hilbert spaces $$H = L^2(\Omega;\R^d) = \lbrace v = (v_i)\ \vert\ v_i \in L^2(\Omega),\ 1 \leqslant i \leqslant d \rbrace,$$ $$\h =L^2(\Omega; \es^d) = \lbrace \tau = (\tau_{ij})\ \vert\ \tau_{ij} = \tau_{ji} \in L^2(\Omega),\ 1 \leqslant i,\ j \leqslant d \rbrace,\quad \h_1 = \lbrace \tau \in \h \ \vert \ {\rm Div}\tau \in H \rbrace.$$ It is worth mentioning, that the Hilbert space, presented above, are equipped with the canonical inner products $$( u, v )_{H} = \int\limits_{\Omega}^{} u \cdot v \ dx, \quad ( \sigma, \tau )_{\mathcal{H}} = \int\limits_{\Omega}^{} \sigma : \tau \ dx, \quad ( \sigma,\tau )_{\h_1} = ( \sigma, \tau )_{\h} + \big( {\rm Div} \sigma, {\rm Div} \tau \big)_{H}$$ and the associated norms $$\Vert v \Vert_{H} = \Big(\int\limits_{\Omega}^{}\, (\Vert v(x) \Vert_{\R^d})^2\,dx \Big)^{\frac{1}{2}},\ \ \Vert \tau \Vert_{\h} = \Big(\int\limits_{\Omega}^{}\,\Vert \tau(x) \Vert_{\es^d}^2\, dx \Big)^{\frac{1}{2}},\ \ \Vert \tau \Vert_{\h_1} = \Vert \tau \Vert_{\h} + \Vert {\rm Div}\,\tau \Vert_H,$$ respectively. We consider also the real Hilbert space for the displacement $$V = \lbrace v \in H^1(\Omega;\R^d)\ \vert \ v=0 \ {\rm a.e.\ on}\ \Gamma_1\ \mbox{and}\ v_\nu = 0\ \mbox{a.e. 
on}\ \Gamma_3 \rbrace.$$ This space is endowed with the inner product and the associated norm given by $$\begin{aligned} ( u,v )_V = ( \varepsilon (u), \varepsilon(v) )_{\h} \quad {\rm and} \quad \Vert v \Vert_V = \Vert \varepsilon(v) \Vert_{\h},\end{aligned}$$ where $ \varepsilon (u) = (\varepsilon_{ij} (u))$ such that $\varepsilon_{ij} (u) = \frac{1}{2} \Big( \frac{\partial u_{i}}{\partial x_{j}} + \frac{\partial u_{j}}{\partial x_{i}}\Big) $ is the deformation operator. Additionally, the inequality $\Vert v \Vert_{L^2(\Gamma_3;\R^d)} \leqslant c_0\, \Vert v \Vert_V$ holds for all $v \in V$, where $c_0$ is a constant which depends on $\Omega,\ \Gamma_1$ and $\Gamma_3$. Let $\nu$ denote the outward unit normal vector on $\Gamma$, let $v \in H^{1}(\Omega;\R^d)$ and let $\sigma$ be a regular function. Then the normal and tangential components of the displacement field (stress field) on the boundary $\Gamma$ are defined by $ v_{\nu} = v \cdot \nu,\ v_{\tau} = v - v_{\nu} \cdot \nu\ \big(\sigma_{\nu} = (\sigma \nu) \cdot \nu,\ \sigma_{\tau} = \sigma \nu - \sigma_{\nu} \cdot \nu\big)$. In order to derive variational formulations of the contact problems we will use the Green formula and the decomposition formula which are presented below. $$\label{GF} \big( \sigma, \varepsilon (v) \big)_{\h} + \big( {\rm Div} \sigma, v \big)_{H} = \int\limits_{\Gamma}^{} \sigma \nu \cdot v\, d \Gamma\quad {\rm for\ all}\ v \in H^1(\Omega;\R^d).$$ $$\label{decom} \sigma \nu \cdot v = \sigma_{\nu} v_{\nu} + \sigma_{\tau} \cdot v_{\tau}.$$ For simplicity, we will write $v$ instead of $\gamma v,$ where $\gamma$ denotes the trace of $v$ on the boundary $\Gamma$. Moreover, we use the notation $Q= \Omega \times (0,T)$ and $\Sigma_i = \Gamma_i \times (0,T)$ for $i=1,2,3.$ We study the elastic-viscoplastic contact problem whose classical formulation is the following. \[model1\] Find a displacement field $u \colon Q \longrightarrow \R^d$ and a stress field $\sigma\colon Q \longrightarrow \es^d$ such that $$\begin{aligned} \sigma (t) = \a(t, \varepsilon(u'(t))) + \b (t,\varepsilon(u(t))) + \int\limits_{0}^{t}\, \g(s, \sigma(s) - \a(s, \varepsilon(u'(s))), \varepsilon(u(s)))\,ds \ \ \mbox{in}\ \ Q,\qquad \label{constitutivelaw}\\ {\rm Div}\, \sigma(t) + f_0(t) = 0 \ \ \mbox{in}\ \ Q,\qquad \label{equationofmotion}\\ u(t) = 0 \ \ \mbox{on}\ \ \Sigma_1,\qquad \label{boundary1}\end{aligned}$$ $$\begin{aligned} \sigma(t) \nu = f_2(t)\ \ \mbox{on}\ \ \Sigma_2,\hspace{-10.5cm} \label{boundary2}\\ \hspace{-1.7cm} - \sigma_{\tau}(t) \in \partial j_{\tau}(t, u'_{\tau}(t)) \ \ \mbox{on}\ \ \Sigma_3,\hspace{-10.5cm} \label{boundary3}\end{aligned}$$ $$\label{signorini} \hspace{3.15cm} \left. \begin{array}{l} \hspace{-0.2cm} u'_{\nu}(t) \leqslant g,\quad \sigma_{\nu}(t) + p(u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \leqslant 0 \\ [2mm] \hspace{-0.4cm} (u'_{\nu}(t) - g)\Big(\sigma_{\nu}(t) + p(u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big)= 0 \end{array} \right\}\ \ \mbox{on}\ \ \Sigma_3,$$ $$\begin{aligned} \hspace{11.9cm} u(0) = u_0\ \ \mbox{in}\ \ \Omega.\qquad \label{po}\end{aligned}$$ Let us note that equation is the elastic-viscoplastic constitutive law in which $\a$ is the viscosity operator, $\b$ is the elasticity operator and $\g$ is the viscoplasticity operator. The equilibrium equation is presented by . The displacement and the traction boundary conditions are expressed by and , respectively.
The conditions and are the friction law and the contact condition with normal compliance, unilateral constraint and memory term. The law without memory term, is considered in [@BDS]. Here, $p$ and $b$ represent given contact functions. Finally, is the initial condition and $u_0$ denotes the initial displacement. In the study of Problem \[model1\], we need the following assumptions. $$\begin{aligned} \label{wkA} \left. \begin{array}{l} \a: Q \times \es^d \longrightarrow \es^d\ \mbox{is an operator such that}\\ \ \ {\rm (a)}\ \a (\cdot,\cdot, \varepsilon)\ \mbox{is measurable on}\ Q\ \mbox{for all}\ \varepsilon \in \es^d.\\ \ \ {\rm (b)}\ \a(x,t,\cdot)\ \mbox{is strongly monotone, i.e., there exists}\ m_{\a} > 0\ \mbox{such that}\\ \hspace{1cm} \big( \a (x,t, \varepsilon_1) - \a (x,t, \varepsilon_2)\big):( \varepsilon_1 - \varepsilon_2) \geqslant m_{\a} \Vert \varepsilon_1 - \varepsilon_2 \Vert^2_{\es^d} \\ \hspace{1cm} \mbox{for all}\ \varepsilon_1, \varepsilon_2 \in \es^d\ \mbox{and a.e.}\ (x,t) \in Q.\\ \ \ {\rm (c)}\ \a(x,t,\cdot)\ \mbox{is continuous on}\ \es^d,\ \mbox{for a.e.}\ (x,t) \in Q.\\ \ \ {\rm (d)}\ \Vert \a(x,t, \varepsilon) \Vert_{\es^d} \leqslant \overline{a}_0 (x,t) + \overline{a}_1 \Vert \varepsilon \Vert_{\es^d}\ \mbox{for all}\ \varepsilon \in \es^d\ \mbox{and a.e.}\ (x,t) \in Q\\ \hspace{1cm} \mbox{with}\ \overline{a}_0 \in L^2 (Q), \overline{a}_0 \geqslant 0\ \mbox{and}\ \overline{a}_1 >0.\\ \ \ {\rm (e)}\ \mbox{there exists}\ \alpha_{\a} > 0\ \mbox{such that}\ \a(x,t,\varepsilon): \varepsilon \geqslant \alpha_{\a}\Vert \varepsilon \Vert^2_{\es^d} \\ \hspace{1cm} {\rm for\ all} \ \varepsilon \in \es^d \ {\rm and\ a.e.} \ (x,t) \in Q. \end{array} \right\}\end{aligned}$$ $$\begin{aligned} \label{wB} \left. \begin{array}{l} \b: Q \times \es^d \longrightarrow \es^d\ \mbox{is an operator such that}\\ \ \ {\rm (a)}\ \b (\cdot, \cdot,\varepsilon)\ {\rm is\ measurable\ on}\ Q\ {\rm for\ all}\ \varepsilon \in \es^{d}$ and $\b (\cdot,\cdot,0) \in L^2 (Q ; \es^d).\\ \ \ {\rm (b)}\ \Vert \b (x, t, \varepsilon_1) - \b (x,t, \varepsilon_2)\Vert_{\es^{d}} \leqslant L_{\b}\ \Vert \varepsilon_1 - \varepsilon_2 \Vert_{\es^d}\ {\rm for\ all}\ \varepsilon_1, \varepsilon_2 \in \es^{d},\\ \hspace{1cm} {\rm a.e.}\ (x, t) \in Q\ {\rm with}\ L_{\b} > 0. \end{array} \right\}\end{aligned}$$ $$\begin{aligned} \label{wC} \left. \begin{array}{l} \g: Q \times \es^d \times \es^d \longrightarrow \es^d\ \mbox{is an operator such that}\\ \ \ {\rm (a)}\ \g (\cdot, \cdot,\sigma, \varepsilon)\ {\rm is\ measurable\ on}\ Q\ {\rm for\ all}\ \sigma, \varepsilon \in \es^{d}\ \mbox{and}\\ \hspace{1cm} \g (\cdot,\cdot,0,0)\ \in L^2 (Q ; \es^d).\\ \ \ {\rm (b)}\ \Vert \g (x, t, \sigma_1, \varepsilon_1) - \g (x,t, \sigma_2, \varepsilon_2)\Vert_{\es^{d}} \leqslant L_{\g}\ (\Vert \sigma_1 - \sigma_2 \Vert_{\es^d} + \Vert \varepsilon_1 - \varepsilon_2 \Vert_{\es^d})\ \mbox{for}\\ \hspace{1cm} {\rm all}\ \sigma_1, \sigma_2, \varepsilon_1, \varepsilon_2 \in \es^{d},\ {\rm a.e.}\ (x, t) \in Q\ {\rm with}\ L_{\g} > 0. \end{array} \right\}\end{aligned}$$ $$\begin{aligned} \label{wj} \left. 
\begin{array}{l} j_{\tau}: \Sigma_{3} \times \R^d \longrightarrow \R\ \mbox{is such that}\hspace{1cm}\\ \ \ {\rm (a)}\ j_{\tau} (\cdot, \cdot,\xi)\ {\rm is\ measurable\ on}\ \Sigma_3\ {\rm for\ all}\ \xi \in \R^d,\ \mbox{and there exists}\\ \hspace{1cm} {\rm e} \in L^2 (\Gamma_3; \R^d)\ \mbox{such that}\ j_{\tau}(\cdot, \cdot, {\rm e}(\cdot)) \in L^1(\Sigma_3).\hspace{1cm}\\ \ \ {\rm (b)}\ j_{\tau} (x,t,\cdot)\ \mbox{is locally Lipschitz on}\ \R^d\ \mbox{for a.e.}\ (x,t) \in \Sigma_3.\hspace{1cm}\\ \ \ {\rm (c)}\ \Vert \partial j_{\tau} (x,t,\xi) \Vert_{\R^d} \leqslant \overline{c}_0(t) + \overline{c}_1 \Vert \xi \Vert_{\R^d}\ \mbox{for all}\ \xi \in \R^d,\ {\rm a.e.}\ (x,t) \in \Sigma_3,\hspace{1cm}\\ \hspace{1cm} c_0 \in L^2(0,T)\ \mbox{with}\ \overline{c}_0, \overline{c}_1 \geqslant 0.\hspace{1cm}\\ \ \ {\rm (d)}\ \mbox{there exists}\ \alpha_j > 0\ \mbox{such that} \hspace{1cm}\\ \hspace{1cm} j_\tau^0(x,t,\xi_2;\xi_1-\xi_2) + j_\tau^0 (x,t, \xi_1; \xi_2-\xi_1) \leqslant \alpha_j \Vert \xi_1-\xi_2 \Vert^2_{\R^d} \mbox{for all}\hspace{1cm}\\ \hspace{1cm} \xi_1, \xi_2 \in \R^d,\ \mbox{and a.e.}\ (x,t) \in \Sigma_3.\hspace{1cm}\\ \ \ {\rm (e)}\ j_{\tau}(x,t,\cdot)\ \mbox{or}\ - j_{\tau}(x,t,\cdot)\ \mbox{is regular on}\ \R^d,\ \mbox{for a.e.}\ (x,t) \in \Sigma_3.\hspace{1cm} \end{array} \right\}\end{aligned}$$ $$\begin{aligned} \label{wpa} \left. \begin{array}{l} p\colon \Gamma_3 \times \R \longrightarrow \R_+\ \mbox{is such that}\hspace{1.2cm}\\ \ \ {\rm (a)}\ p(\cdot,r)\ \mbox{is measurable on}\ \Gamma_3\ \mbox{for all}\ r \in \R\ \mbox{and}\ p(\cdot,0) \in L^2(\Gamma_3).\hspace{1.2cm}\\ \ \ {\rm (b)}\ \mbox{there exists}\ L_p > 0\ \mbox{such that}\ \vert p(x,r_1) - p(x,r_2) \vert \leqslant L_p \vert r_1 - r_2 \vert \hspace{1.2cm}\\ \hspace{1cm} \mbox{for all}\ r_1,r_2 \in \R,\ \mbox{a.e.}\ x \in \Gamma_3.\hspace{1.2cm} \end{array} \right\}\end{aligned}$$ $$\label{gap} g \in L^{\infty}(\Gamma_3), \quad g > 0.$$ $$\label{wopb} b \in L^1 (0,T;L^{\infty}(\Gamma_3)).$$ $$\label{wf1f2} f_0 \in L^2(0,T;L^2(\Omega;\R^d)), \quad f_2 \in L^2(0,T;L^2(\Gamma_2;\R^d)).$$ $$\label{uzero} u_0 \in V$$ The concrete example of the function $j_\tau$ which satisfies condition is as follows $j_\tau (\xi)= \Vert \xi \Vert_{\R^d}$ for all $\xi \in \R^d$. Here, for simplicity, we omit the dependence on variables $(x,t)$. The subdifferential of function $j_\tau$ has the form $$\partial j_\tau(\xi) = \left\{ \begin{array}{ll} \overline{B}(0, 1) & \textrm{if $\xi = 0$}\\ \frac{\xi}{\Vert \xi \Vert_{\R^d}} & \textrm{if $\xi \neq 0$} \end{array} \right.$$ for all $\xi \in \R^d,$ where $\overline{B}(0, 1)$ denotes the closed unit ball in $\R^d.$ Note that the function $j_\tau$ is convex and regular. We see that holds with $\overline{c}_0 = 1, \overline{c}_1 = 0$ and $\alpha_j = 0$ (cf. Section 7.4 in [@MOSBOOK]). Now, we provide the variational formulation of Problem \[model1\]. To this end, we introduce the set of admissible displacement fields defined by $$\label{zbioru} K = \{ v \in V\ \vert \ v_{\nu} \leqslant g\ {\rm a.e.\ on}\ \Gamma_3 \}.$$ Assume that $(u,\sigma)$ are sufficiently smooth functions which solve –. 
Let $t \in (0,T)$ be fixed and $v \in K.$ We use the Green formula and the equation to obtain $$\int\limits_{\Omega}^{}\,\sigma(t) : (\varepsilon(v) - \varepsilon(u'(t)))\,dx = \int\limits_{\Omega}^{}\,f_0(t)\cdot (v-u'(t))\,d x + \int\limits_{\Gamma}^{}\,\sigma(t)\nu \cdot (v - u'(t))\,d\Gamma.$$ Using , and the decomposition formula , we get $$\begin{aligned} \label{wzorg} \begin{split} &\int\limits_{\Omega}^{}\,\sigma(t) : (\varepsilon(v) - \varepsilon(u'(t)))\,d x = \int\limits_{\Omega}^{}\,f_0(t)\cdot (v-u'(t))\,dx \\ &+ \int\limits_{\Gamma_2}^{}\,f_2(t)\cdot (v-u'(t))\,d \Gamma + \int\limits_{\Gamma_3}^{}\,\Big(\sigma_{\nu}(t)(v_{\nu} - u'_{\nu}(t)) + \sigma_{\tau}(t)\cdot(v_{\tau} - u'_{\tau}(t))\Big)\, d \Gamma. \end{split}\end{aligned}$$ From , we see that $$\begin{aligned} \label{sigmani} \begin{split} \sigma_{\nu}(t)(v_{\nu} - u'_{\nu}(t)) &= \Big( \sigma_{\nu}(t) + p(u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big)(v_{\nu} - g)\\ &+ \Big( \sigma_{\nu}(t) + p( u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big) (g - u'_{\nu}(t))\\ &- \Big( p(u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big)(v_{\nu} - u'_{\nu}(t)) \quad {\rm on}\ \Gamma_3. \end{split}\end{aligned}$$ From the contact condition and the definition of set $K$ (cf. ), we have $$\begin{aligned} \begin{split} \sigma_{\nu}(t)(v_{\nu} - u'_{\nu}(t)) \geqslant - \Big( p(u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big) (v_{\nu} - u'_{\nu}(t)) \quad {\rm on}\ \Gamma_3, \end{split}\end{aligned}$$ and $$\begin{aligned} \begin{split} \int\limits_{\Gamma_3}^{}\,\sigma_{\nu}(t)(v_{\nu} &- u'_{\nu}(t))\, d \Gamma \geqslant\\ &- \int\limits_{\Gamma_3}^{}\,\Big( p(u'_{\nu}(t)) + \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big)(v_{\nu} - u'_{\nu}(t))\,d\Gamma. \end{split}\end{aligned}$$ The definition of the Clarke subdifferential and the boundary condition imply that $$\begin{aligned} \label{sigmatau} \int\limits_{\Gamma_3}^{}\,\sigma_{\tau}(t)\cdot(v_{\tau} - u'_{\tau}(t))\, d \Gamma \geqslant \int\limits_{\Gamma_3}^{}\,j^0_{\tau}(t,u'_{\tau}(t);v_{\tau} - u'_{\tau}(t))\,d \Gamma.\end{aligned}$$ Using the definition of the space $V$, we note that $$v \longmapsto \int\limits_{\Omega}^{}\,f_0(t)\cdot v\, d x + \int\limits_{\Gamma_2}^{}\,f_2(t) \cdot v \, d \Gamma \quad {\rm for\ a.e.}\ t \in (0,T)$$ is a linear, continuous functional on $V.$ Therefore, we may apply the Riesz representation theorem to define the function $f \colon (0,T) \longrightarrow V^*$ by $$\begin{aligned} \label{deff} \langle f(t), v \rangle_{V^* \times V} = ( f_0(t),v )_H + ( f_2(t), v )_{L^2(\Gamma_2; \R^d)}\end{aligned}$$ for all $v \in V$ and a.e. $t \in (0,T).$ Combining and –, we obtain the following variational formulation of Problem \[model1\]. 
\[model2\] Find $u \in \w$ such that $u(t) \in K,\, \sigma \in L^2(0,T;\h)$ and $$\begin{aligned} \begin{split} &\sigma(t) = \a(t,\varepsilon(u'(t)) + \b(t,\varepsilon(u(t))+\int\limits_{0}^{t}\, \g(s, \sigma(s) - \a(s, \varepsilon(u'(s))), \varepsilon(u(s)))\,ds,\ \mbox{a.e.}\ t \in (0,T)\\ &(\sigma(t), \varepsilon(v) - \varepsilon(u'(t)))_{\h} + \int\limits_{\Gamma_3}{}\,p(u'_{\nu}(t))(v_{\nu} - u'_{\nu}(t))\,d \Gamma + \int\limits_{\Gamma_3}\,\Big( \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big)(v_{\nu} - u'_{\nu}(t))\,d \Gamma\\ &+ \int\limits_{\Gamma_3}^{}\,j^0_{\tau}(t,u'_{\tau}(t);v_{\tau} - u'_{\tau}(t))\,d \Gamma \geqslant \langle f(t), v - u'(t)\rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t \in (0,T)$ with $u(0) = u_0.$ The existence and uniqueness result for Problem \[model2\] is the following. \[twierdzeniemodel1\] Under the assumptions – and $$\label{nier4} m_{\a} > \mbox{\rm max}\,\lbrace 1, L_P \rbrace + \alpha_j \Vert \gamma \Vert^2,\ \ \alpha_{\a} > 2\,\alpha_j\,\Vert \gamma \Vert^2$$ Problem \[model2\] has a unique solution. The proof of this theorem will be carried out in two steps.\ [**[Step 1.]{}**]{} We need the following auxiliary result. \[OPHISIGMA\] Assume that and hold. Then, for all $u \in \v,$ there exists a unique function $\sigma^{I}(u) \in L^2(0,T;\h)$ such that $$\label{SI} \sigma^I(u(t)) = \int\limits_{0}^{t}\,\g\big(s,\b(t,\varepsilon(u(s))) + \sigma^I(u(s)), \varepsilon(u(s))\big)\,ds$$ for a.e. $t\in (0,T).$ Moreover, if $u_1, u_2 \in \v,$ then $$\Vert \sigma^I(u_1)(t) - \sigma^I (u_2)(t) \Vert_{\h} \leqslant L_{\sigma^I}\,\int\limits_{0}^{t}\,\Vert u_1(s) - u_2(s) \Vert_V\,ds$$ for a.e. $t \in (0,T)$ with $L_{\sigma^I}>0.$ The proof of the lemma is presented in Lemma 6.1 in [@CMOS]. In order to formulate an equivalent form of Problem \[model2\], we use Lemma \[OPHISIGMA\]. We consider the following intermediate problem. \[model2I\] Find $u \in \w$ such that $u(t) \in K,\ \sigma \in L^2(0,T;\h)$ and $$\begin{aligned} \begin{split} &\sigma(t) = \a(t,\varepsilon(u'(t)) + \b(t,\varepsilon(u(t))+ \sigma^I(u(t))\ \mbox{a.e.}\ t \in (0,T)\\ &(\sigma(t), \varepsilon(v) - \varepsilon(u'(t)))_{\h} + \int\limits_{\Gamma_3}{}\,p(u'_{\nu}(t))(v_{\nu} - u'_{\nu}(t))\,d \Gamma + \int\limits_{\Gamma_3}\,\Big( \int\limits_{0}^{t}\,b(t-s) (u^{'}_{\nu})^{+}(s)\,ds \Big)(v_{\nu} - u'_{\nu}(t))\,d \Gamma\\ &+ \int\limits_{\Gamma_3}^{}\,j^0_{\tau}(t,u'_{\tau}(t);v_{\tau} - u'_{\tau}(t))\,d \Gamma \geqslant \langle f(t), v - u'(t)\rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t \in (0,T)$ with $u(0) = u_0,$ where $\sigma^I(u) \in L^2(0,T;\h)$ is the unique function defined in Lemma \[OPHISIGMA\]. [**[Step 2.]{}**]{} Let $u' = w$. We define the operator $\r: \v \longrightarrow \v$ such that $$\label{OPHIR} (\r w)(t) = \int\limits_{0}^{t}\,w(s)\,ds + u_0\ {\rm for}\ w \in \v,\ \mbox{a.e.}\ t \in (0,T).$$ Hence, Problem \[model2I\] can be formulated as follows. 
\[ap1\] Find $w \in \v$ such that $w(t) \in K,\ \sigma \in L^2(0,T;\h)$ and $$\begin{aligned} \label{AP1} \begin{split} \sigma(t) = \a(t,\varepsilon(w(t))) + \b(t,\varepsilon((\r w)(t)) + \sigma^I((\r w)(t))\ \mbox{a.e.}\ t \in (0,T)\qquad \qquad \quad \end{split}\end{aligned}$$ $$\begin{aligned} \label{AP2} \begin{split} &\big(\sigma(t) , \varepsilon(v) - \varepsilon(w(t)) \big)_{\h} + \int\limits_{\Gamma_3}{}\,p(w_{\nu}(t))(v_{\nu} - w_{\nu}(t))\,d \Gamma\\ &+ \int\limits_{\Gamma_3}\,\Big( \int\limits_{0}^{t}\,b(t-s) w^{+}_{\nu}(s)\,ds \Big)(v_{\nu} - w_{\nu}(t))\,d \Gamma + \int\limits_{\Gamma_3}^{}\,j^0_{\tau}(t,w_{\tau}(t);v_{\tau} - w_{\tau}(t))\,d \Gamma\\ &\geqslant \langle f(t), v - w(t)\rangle_{V^* \times V}\ \ \mbox{for all}\ v \in K\ \mbox{and a.e.}\ t \in (0,T). \end{split}\end{aligned}$$ Combining and , we obtain the following problem. \[ap\] Find $w \in \v$ such that $w(t) \in K$ and $$\begin{aligned} \begin{split} &\big( \a(t,\varepsilon(w(t))), \varepsilon(v) - \varepsilon(w(t)) \big)_{\h} +\big( \b(t,\varepsilon((\r w)(t))) + \sigma^I((\r w)(t)), \varepsilon(v) - \varepsilon(w(t)) \big)_{\h}\\ &+ \int\limits_{\Gamma_3}{}\,p(w_{\nu}(t))(v_{\nu} - w_{\nu}(t))\,d \Gamma + \int\limits_{\Gamma_3}\,\Big( \int\limits_{0}^{t}\,b(t-s) w^{+}_{\nu}(s)\,ds \Big)(v_{\nu} - w_{\nu}(t))\,d \Gamma\\ &+ \int\limits_{\Gamma_3}^{}\,j^0_{\tau}(t,w_{\tau}(t);v_{\tau} - w_{\tau}(t))\,d \Gamma \geqslant \langle f(t), v - w(t)\rangle_{V^* \times V} \end{split}\end{aligned}$$ for all $v \in K$ and a.e. $t \in (0,T).$ Next, we introduce the operator $A \colon (0,T) \times V \longrightarrow V^*$ defined by $$\label{opB} \langle A(t,u),v\rangle_{V^* \times V} = \big( \a (t, \varepsilon(u)),\varepsilon(v) \big)_{\h}$$ for all $u,v \in V$ and a.e. $t\in (0,T).$ The operator $A$ satisfies (b)–(d) with $m_A = m_{\a} >$ $0$, $a_0(t)= \sqrt{2}\, \Vert \overline{a}_0(t) \Vert_{L^2(\Omega)},\ a_1 = \sqrt{2}\, \overline{a}_1 > 0$ (see [@MOSBOOK], p. 205 and [@MOS13], p. 3394). Now, we prove the property (f). Let $u_0 \in K$ be given. Using the Cauchy-Schwartz inequality and the conditions (d),(e), we get $$\begin{aligned} \begin{split} \langle A(t,u), u &- u_0 \rangle_{V^* \times V} = \big(\a(t,\varepsilon(u)),\varepsilon(u) - \varepsilon(u_0)\big)_{\h} = \big(\a(t,\varepsilon(u)),\varepsilon(u)\big)_{\h} + \big(\a(t,\varepsilon(u)),- \varepsilon(u_0)\big)_{\h}\\ &\geqslant \alpha_{\a}\,\Vert \varepsilon(u) \Vert_{\h}^2 - \Vert \a(t,\varepsilon(u))\Vert_{\h}\,\Vert \varepsilon(u_0) \Vert_{\h} = \alpha_{\a}\,\Vert u \Vert_V^2 - \overline{a}_1\,\Vert u \Vert_V - \overline{a}_0(t)\,\Vert u_0 \Vert_V. \end{split}\end{aligned}$$ So, we conclude that (f) holds with $\alpha_A = \alpha_{\a},\ \beta = \overline{a}_1$ and $\beta_1(t) = \overline{a}_0(t)\,\Vert u_0 \Vert_V$. We also define the operators $\s_1, \s_2,\s_3 \colon \v \longrightarrow \v^*$ by $$\begin{aligned} \label{ops1} \begin{split} &\langle (\s_1 w)(t),v \rangle_{V^* \times V} = \big( \b(t,\varepsilon((\r w)(t))), \varepsilon(v) \big)_{\h} \\ &\langle (\s_2 w)(t),v \rangle_{V^* \times V} = \big( \sigma^I((\r w)(t)), \varepsilon(v) \big)_{\h}, \\ &\langle (\s_3 w)(t),v \rangle_{V^* \times V} = \int\limits_{\Gamma_3}^{}\,\big( \int\limits_{0}^{t}\,b(t-s)w^+_\nu(s)\,ds \big)v_\nu\,d\Gamma \end{split} \begin{minipage}{0.1\linewidth} $\left. \begin{tabular}{c} \\ \\ \\ \\ \end{tabular} \right\}$ \end{minipage}\end{aligned}$$ for all $w \in \v,\ v \in V$ and a.e. 
$t\in (0,T).$ The hypotheses , , and the definition imply that the following inequalities hold (cf. [@CMOS]). $$\begin{aligned} \big( \b (t, \varepsilon ((\mathcal{R} w_1)(t) )) - \b (t, \varepsilon ((\mathcal{R} w_2)(t) )), \varepsilon(v) \big)_{\h} \leqslant L_{\b}\,\Big(\int\limits_{0}^{t}\,\Vert w_1(s) - w_2(s) \Vert_V\,ds \Big)\Vert v \Vert_V,\end{aligned}$$ $$\begin{aligned} \begin{split} &\big( \sigma^I((\r w_1)(t)) - \sigma^I((\r w_2)(t)), \varepsilon(v) \big)_{\h} \leqslant c\,T\,\Big( \int_{0}^{t}\,\Vert w_1(s) - w_2(s) \Vert_V\,ds \Big)\,\Vert v \Vert_V, \end{split}\end{aligned}$$ $$\begin{aligned} \begin{split} \int\limits_{\Gamma_3}^{} \Big(\int\limits_{0}^{t}& b(t-s) (w_{1 \nu}^+ (s) - w_{2 \nu}^+(s))\, ds \Big)v_\nu\, d \Gamma\\ &\leqslant \Vert b \Vert_{L^1(0,T;L^\infty(\Gamma_3))}\,\Vert \gamma \Vert^2 \Big(\int\limits_{0}^{t}\,\Vert w_1(s) - w_2(s) \Vert_V\,ds \Big)\Vert v \Vert_V \end{split}\end{aligned}$$ for $w_1$, $w_2 \in \v$, $v \in V$, a.e. $t \in (0,T)$. Hence, the operators $\s_1, \s_2$ and $\s_3,$ defined by satisfy with $L_{\s_1} = L_{\b}, L_{\s_2} = c\,T$ and $L_{\s_3} = \Vert \gamma \Vert^2 \Vert b \Vert_{L^1(0,T; L^{\infty}(\Gamma_3))},$ respectively. Moreover, from Lemma \[PwS\], we conclude that the operator $\s \colon \v \longrightarrow \v^*$ defined by $ \langle (\s w)(t),v \rangle_{V^* \times V} = \sum_{i=1}^3\,\langle (\s_i w)(t), v\rangle_{V^* \times V} $ for all $w \in \v,\ v \in V$ and a.e. $t\in (0,T)$ satisfies with $L_{\s} = L_{\b} + c\,T + \Vert \gamma \Vert^2 \Vert b \Vert_{L^1(0,T; L^{\infty}(\Gamma_3))}.$ Next, we define the operator $P \colon V \longrightarrow V^*$ by $$\label{opp} \langle P(u), v \rangle_{V^* \times V} = \int\limits_{\Gamma_3}{}\,p(u_{\nu})v_{\nu}\,d \Gamma$$ for all $u,v \in V.$ From (b) and the Hölder inequality, we see that $$\begin{aligned} \begin{split} &\langle P(u) - P(v), u - v \rangle_{V^* \times V} \leqslant \int\limits_{\Gamma_3}^{}\,(p(u_\nu) - p(v_\nu))(v_\nu - u_\nu)\,d\Gamma\\ &\leqslant \Vert p(u_\nu) - p(v_\nu)\Vert_{L^2(\Gamma_3)}\,\Vert u_\nu - v_\nu\Vert_{L^2(\Gamma_3)} \leqslant L_p\,\Vert u_\nu - v_\nu\Vert_{L^2(\Gamma_3)}\,\Vert u_\nu - v_\nu\Vert_{L^2(\Gamma_3)}\\ &\leqslant L_p\,\Vert \gamma \Vert^2\,\Vert u - v \Vert_{V}\,\Vert u - v \Vert_{V}. \end{split}\end{aligned}$$ Hence, we conclude that the operator $P$ is Lipschitz continuous with $L_P = L_p \, \Vert \gamma \Vert^2$. Finally, we define the functional $J \colon (0,T) \times L^2(\Gamma_3; \R^d) \longrightarrow \R$ by $$\label{opJ} J(t,u)= \int\limits_{\Gamma_3}^{}\,j_\tau(x,t, u_\tau(x))\,d \Gamma$$ for all $u \in L^2(\Gamma_3; \R^d)$ and a.e.$t \in (0,T).$ Under the assumption the functional $J\colon (0,T) \times L^2(\Gamma_3; \R^d) \longrightarrow \R$ defined above satisfies with $ c_0 = \sqrt{2\, {\rm meas}(\Gamma_3)}\,\overline{c}_0,\ c_1 = \sqrt{2}\,\overline{c}_1, \ d_0= \overline{d}_0 \geqslant 0\ \mbox{and}\ m_J = \alpha_j \Vert \gamma \Vert^2. $ (see [@MOS1], p. 280). Under the above notation Problem \[model2\] can be written in the following equivalent form. $$\label{ll} \left. \begin{array}{l} \hspace{-0.2cm} {\rm Find}\ w \in \v\ {\rm such\ that}\ w(t) \in K\ {\rm and}\\ [2mm] \langle A(t,w(t)), v - w(t) \rangle_{V^* \times V} + \langle P(w(t)), v - w(t) \rangle_{V^* \times V} \\ [2mm] +\langle (\s w)(t), v - w(t) \rangle_{V^* \times V} + J^0(t,\gamma w(t);\gamma v - \gamma w(t)) \geqslant \langle f(t), v - w(t)\rangle_{V^* \times V}\\[2mm] {\rm for\ all}\ v \in K\ {\rm and\ a.e.}\ t \in (0,T). 
\end{array} \right\}$$ We introduce the function $\tilde \varphi \colon V^* \times K \times K \longrightarrow \R$ defined by $$\label{operatorp} \tilde \varphi (z, u, v) = \langle z, v \rangle_{V^* \times V} + \langle u, v \rangle_{V^* \times V}$$ for all $z \in V^*, u,v \in K.$ Hence and from the Cauchy-Schwartz inequality, we have $$\begin{aligned} \begin{split} \tilde \varphi(z_1,&u_1,v_2)-\tilde \varphi(z_1,u_1,v_1)+\tilde\varphi(z_2, u_2,v_1)-\tilde \varphi(z_2,u_2,v_2)\\ &= \langle z_1 - z_2, v_2 - v_1 \rangle_{V^* \times V} + \langle u_1 - u_2, v_2 - v_1 \rangle_{V^* \times V}\\ &\leqslant (\Vert z_1 - z_2 \Vert_{V^*} + \Vert u_1 - u_2 \Vert_V )\Vert v_2 - v_1 \Vert_V \end{split} \end{aligned}$$ for all $z_1, z_2 \in V^*, u_1,u_2,v_1,v_2 \in K$. Thus, the condition holds with $\alpha_{\tilde \varphi} = 1$. Using the definition of the function and the fact that $M = \gamma$, Problem \[ll\] has the following form. $$\label{nier} \left. \begin{array}{l} \hspace{-0.2cm} {\rm Find}\ w \in \v\ {\rm such\ that}\ w(t) \in K\ {\rm and} \\ [2mm] \langle A(t,w(t)), v - w(t) \rangle_{V^* \times V} + \tilde \varphi((\s w)(t),w(t),v)\\[2mm] -\tilde \varphi((\s w)(t),w(t),w(t))+ J^0(t,M w(t);M v - M w(t)) \geqslant \langle f(t), v - w(t)\rangle_{V^* \times V}\\[2mm] {\rm for\ all}\ v \in K\ {\rm and\ a.e.}\ t \in (0,T). \end{array} \right\}$$ We observe that the condition implies (b) with $m_A = m_{\a}$, $\alpha_A = \alpha_{\a},$ $\alpha_{\varphi} = \mbox{max}\,\lbrace 1, L_P \rbrace$, $m_J = \alpha_j$ and $M = \gamma$. Now, applying Theorem \[twierdzenie11\] (cf. Section \[s31\]), we deduce that there exists a unique function $w \in \v$ that solves . From this and the definitions , , , and , we deduce that the pair $(w,\sigma) \in \v \times L^2(0,T;\h)$ is a solution to Problem \[ap1\]. Let $u(t) = (\mathcal{R} w)(t)$ for a.e.$t \in (0,T)$ and $w=u'$. Thus, we conclude that the pair $(u, \sigma) \in \w \times L^2(0,T;\h)$ solves Problem \[model2I\]. Hence and Lemma \[OPHISIGMA\], we deduce that the pair $(u, \sigma) \in \w \times L^2(0,T;\h)$ is a solution to Problem \[model2\]. The proof of the theorem is complete. A couple of functions $(u,\sigma)$ which satisfies – is called a [*weak solution*]{} to Problem \[model2\]. We conclude that, under the assumptions of Theorem \[twierdzeniemodel1\], Problem \[model2\] has a unique weak solution with regularity $u \in W^{1,2}(0,T;V)$ and $\sigma \in L^2(0,T;\h)$. We observe, that the regularity of the stress field is, in fact, $\sigma \in L^2(0,T;\h_1)$. Indeed, using and , we deduce that Div$\sigma \in L^2(0,T;L^2(\Omega;\R^d))$ and hence $\sigma \in L^2(0,T;\h_1).$ [l]{} M. Barboteu, D. Danan, M. Sofonea, *Analysis of a contact problem with normal damped response and unilateral constraint*, ZAMM Journal of Applied Mathematics and Mechanics, doi: 10.1002/zamm.201400304, 2015. S. Carl, V. K. Le, D. Montreanu, *Nonsmooth Variational Problems and Their Inequalities. Comparison Principles and Applications*, Springer, New York, 2007. X. Cheng, S. Migórski, A. Ochal, S. Sofonea, *Analysis of two quasistatic history-dependent contact models*, Discrete and Continuous Dynamical Systems, Series B 8 (19) (2014), 2425–2445. S. Migórski, A.Ochal, S. Sofonea, *History-dependent subdifferential inclusions and hemivariational inequality in contact mechanics,* Nonlinear Analysis: Real Word Applications 12 (2011), 3385–3396. F. H. Clarke, *Optimization and Nonsmooth Analysis*, Canad. Math. Soc. Ser. Monogr. Adv. Texts, John Wiley & Sons, New York, 1983. Z. 
Denkowski, S. Migórski, N.S. Papageorgiou, *An Introduction to Nonlinear Analysis: Theory*, Kluwer Academic/Plenum Publishers, Boston, Dordrecht, London, New York, 2003. D. Goeleven, D. Motreanu, Y. Dumont, M. Rochdi, *Variational and Hemivariational Inequalities: Theory, Methods and Applications, vol. I, Unilateral Analysis and Unilateral Mechanics*, Nonconvex Optimization and its Applications vol. 69, Boston, MA, Kluwer, 2003. D. Goeleven, D. Motreanu, *Variational and Hemivariational Inequalities: Theory, Methods and Applications, vol. II, Unilateral Problems*, Nonconvex Optimization and its Applications, vol. 70, Boston, MA, Kluwer, 2003. S. Migórski, A. Ochal, M. Sofonea, *A class of variational-hemivariational inequalities in reflexive Banach space*, Jagiellonian University, Institute of Computer Science and Laboratoire de Mathématiques et Physique, Université de Perpignan, paper submitted to Journal of Elasticity, 2015. S. Migórski, A. Ochal, M. Sofonea, *Nonlinear Inclusions and Hemivariational Inequalities. Models and Analysis of Contact Problems*, Advances in Mechanics and Mathematics 26, Springer, New York, 2013. S. Migórski, A. Ochal, M. Sofonea, *Integrodifferential hemivariational inequalities with applications to viscoelastic frictional contact*, Mathematical Models and Methods in Applied Sciences 18 (2) (2008), 271–290. J. J. Moreau, *La notion de sur-potentiel et les liaisons unilatérales en élastostatique*, Comptes Rendus de l’Académie des Sciences Paris 267A (1968), 954–957. Z. Naniewicz, P. D. Panagiotopoulos, *Mathematical Theory of Hemivariational Inequalities and Applications*, Marcel Dekker, Inc., New York, Basel, Hong Kong, 1995. P. D. Panagiotopoulos, *Hemivariational Inequalities. Applications in Mechanics and Engineering*, Springer-Verlag Berlin Heidelberg 1993. P. D. Panagiotopoulos, *Non-convex superpotentials in the sense of F.R. Clarke and applications*, Mechanics Research Communications 8 (1981), 335–340. P. D. Panagiotopoulos, *Inequality Problems in Mechanics and Applications. Convex and Non-convex Energy Functions*, Birkhauser Verlag, Basel, Boston, Stuttgart 1985 (Russian Transl. MIR Publ. Moscow 1988). Research supported by the Marie Curie International Research Staff Exchange Scheme Fellowship within the 7th European Community Framework Programme under Grant Agreement No. 295118 and the National Science Center of Poland under the Maestro Project no. DEC-2012/06/A/ST1/00262. [^1]: Institute of Mathematics, Jagiellonian University in Kraków, ul. prof. S. Lojasiewicza 6, 30-348, Kraków, Poland, [**[email protected]**]{}
--- abstract: 'A novel concept of Joint Source and Channel Sensing (JSCS) is introduced in the context of Cognitive Radio Sensor Networks (CRSN). Every sensor node has two basic tasks: application-oriented source sensing and ambient-oriented channel sensing. The former is to collect the application-specific source information and deliver it to the access point within some limit of distortion, while the latter is to find the vacant channels and provide spectrum access opportunities for the sensed source information. With in-depth exploration, we find that these two tasks are actually interrelated when taking into account the energy constraints. The main focus of this paper is to minimize the total power consumed by these two tasks while bounding the distortion of the application-specific source information. Firstly, we present a specific slotted sensing and transmission scheme, and establish the multi-task power consumption model. Secondly, we jointly analyze the interplay between these two sensing tasks, and then propose a proper sensing and power allocation scheme to minimize the total power consumption. Finally, simulation results are given to validate the proposed scheme.' author: - title: Energy Efficient Joint Source and Channel Sensing in Cognitive Radio Sensor Networks --- Introduction ============ Wireless Sensor Networks (WSN) are capable of monitoring physical or environmental information (e.g. temperature, sound, pressure), and collecting it at certain access points according to various applications. The extensive deployment of WSN has changed our lives dramatically. However, current WSN nodes usually operate on license-exempt Industrial, Scientific and Medical (ISM) frequency bands [@CogSeNet], and these bands are shared with many other successful systems such as Wi-Fi and Bluetooth, causing severe spectrum scarcity problems [@CWSN]. To deal with such problems, a new sensor networking paradigm of Cognitive Radio Sensor Network (CRSN) which incorporates cognitive radio capability on the basis of traditional wireless sensor networks was introduced [@CRSN]. CRSN nodes operate on licensed bands and can periodically sense the spectrum, determine the vacant channels, and use them to report the collected source information. The main design principles and features of CRSNs are discussed in the open literature [@CogSeNet]-[@CWSNsurvey]. According to these works, CRSN enjoys many advantages, such as efficient spectrum usage, flexible deployment and good radio propagation properties. However, WSN nodes are low cost and usually equipped with a limited energy source, such as a battery, and CRSN nodes also inherit this fundamental limitation. What’s more, the CRSN node bears one more task of spectrum sensing, and this task also consumes energy. This fact makes the energy scarcity problem in CRSN even more severe. Hence, how to minimize the total energy consumption for a CRSN node and thus make the system the most energy efficient has become an urgent problem. In our view, there are two basic types of sensing tasks for the CRSN node, one is Application-Oriented Source Sensing (AppOS) and the other is Ambient-Oriented Channel Sensing (AmOS). By source sensing we mean the process of collecting source information (e.g. temperature, sound) and delivering it to the Access Point (AP), and by channel sensing we mean the process of periodically sensing the ambient radio environment and determining the vacant channels for opportunistic spectrum access.
The energy saving problems of both AppOS and AmOS have been investigated separately in existing literature. The energy consumption models for AppOS have been established in the context of conventional WSN. The energy-distortion tradeoffs in energy-constrained sensor networks are investigated in [@E-D1], and energy efficient lossy transmission for wireless sensor networks is studied in [@E-D3], for Gaussian sources and unlimited bandwidth. Another issue of energy efficient AmOS has also been studied separately, in the cognitive radio scenario. Maleki has designed a sleep/censor scheme to reduce spectrum sensing energy [@E-CR2]. Su and Zhang proposed an energy saving spectrum sensing scheme by adaptively adjusting the spectrum sensing periods utilizing PU’s activity patterns [@E-CR3]. [@E-CR5] studied the influence of sensing time on the probability of detection and probability of false alarm. However, the unique mechanism of CRSN is that every CRSN node performs AppOS and AmOS at the same time. This requires us to consider the resource saving problems of AppOS and AmOS jointly. In order to prolong the lifespan, there is a need to properly distribute the limited power between these two concurrent tasks. On the one hand, if we put excessive power into AppOS, the resources left for AmOS will be diminished. We can obtain more precise and undistorted application-specific source information, but, due to the lack of channel resource information, the acquired source information cannot be delivered to the AP in a timely and effective manner. Furthermore, the probability of missed detection of primary signals can be prominently high, which will cause interference to the underlying Primary System. On the other hand, if we put too much power into AmOS, we can obtain enough reliable channel access opportunities and reduce the interference to the Primary System, but the power left for AppOS is then not enough to deliver the source information at a coding rate capable of meeting the distortion requirement, despite the implementation of distributed source coding in the sensor network. Therefore, our paper mainly aims at tackling this joint energy saving problem, which has not been considered before. The main contributions of our work are as follows: we jointly model the power consumption of AmOS and AppOS and use the transmission probability to link these two interrelated tasks; we find that within bounded distortion, there is always a minimal total power consumption and corresponding power allocation scheme for the CRSN system, which is the most power efficient solution. The rest of this paper is organized as follows. In Section II, we make basic assumptions about the CRSN, and provide a brief introduction of the considered system. In Section III, we give detailed models and jointly analyze the power consumption of AppOS and AmOS. Then, several simulation results are presented in Section IV to further validate our analysis. Finally, the whole paper is concluded in Section V. System Model ============ ![An overview of the considered system model.[]{data-label="Fig1"}](Fig1){width="50.00000%"} In this paper, we consider the Multi-task Sensing architecture in CRSN nodes, as shown in Fig.\[Fig1\]. We observe two dominating features of the considered CRSN: **Feature 1**: In a CRSN node, two major tasks need to be modeled as follows: 1. Application Oriented Source Sensing (AppOS) We define source sensing as the process of collecting various source information (e.g. temperature, pressure, position, etc.)
according to the application-specific demand and delivering it to the Access Point (AP). The main objective of AppOS is realizing accurate acquisition of the source information. 2. Ambient Oriented Channel Sensing (AmOS) Channel sensing is the process of periodically sensing the ambient radio environment by means of spectrum sensing and energy detection. It, thus, determines the vacant channels for opportunistic spectrum access or perceives energy distribution of surrounding nodes for cooperation. The main objective of AmOS is to realize effective and efficient exploration of spectrum resources. **Feature 2**: As a characteristic inherited from the traditional WSN, every CRSN node is power-constrained due to limited energy supply. Both AmOS and AppOS consume energy. We have to save as much energy as possible while delivering the source information to AP within bounded distortion. ![The interplay between the two tasks.[]{data-label="Fig2"}](Fig2){width="30.00000%"} Fig.\[Fig2\] depicts the subtle interplay between AppOS and AmOS sensing under the interference, distortion and power resource constraint. It’s like you have two ears listening to two distinct but related objects in a noisy environment. On the one hand, your left ear listens to the monitored source, trying to hear the most undistorted sound. On the other hand, your right ear listens to the slight ambient sound on the radio spectrum, because you are not allowed to speak when others talk. Our goal is to optimally balance the two ears and make them the most efficient. Slotted Sensing and Transmission Scheme --------------------------------------- We present a specific sensing and transmission scheme below in Fig.\[Fig3\]: ![Slotted sensing and transmission scheme.[]{data-label="Fig3"}](Fig3){width="45.00000%"} Every node partitions the time domain into periods, namely slots, of equal length $T$. At the beginning of each slot, the cognitive sensor node makes a decision on whether or not to transmit based on the $N$ samples energy detection spectrum sensing result. The spectrum sensing is always performed ahead of the data transmission. In our scheme, we should point out three basic assumptions: **AS1:** Both spectrum sensing and data transmission consume energy. And the total energy is limited in one CRSN node. **AS2:** The slot length $T$ is short enough, so that the status of primary user activity remains the same during one slot. **AS3:** The time period of spectrum sensing is rather short compared with transmission period and thus can be omitted. In the following sections, we will establish detailed models for both sensing tasks and analyze the power consumption tradeoff between them. Energy Efficient Joint Source and Channel Sensing ================================================= In this section, the relationships between power consumption and performances are discussed. We present specific models for both AmOS and AppOS, and then jointly analyze the relationship and tradeoff between them. We prove that optimal power allocation scheme can indeed be obtained. AmOS: Energy Detection based Spectrum Sensing --------------------------------------------- In recent years, many methods have been developed for spectrum sensing, including matched filter detection, energy detection and cyclostationary feature detection. Among them, energy detection is the most popular spectrum sensing scheme. It is the most suitable for CRSN node due to its simplicity of hardware implementation and low signal processing cost. 
Therefore, we choose energy detection as our spectrum sensing technique for the CRSN node. We assume that the CRSN node operates at certain carrier frequency $f_c$ with bandwidth $W$, and samples the signal within this range $N$ times per slot. The discrete signal that the CRSN node receives can be represented as: $$y\left( n \right) = \left\{ {\begin{array}{*{20}c} {s\left( n \right) + u\left( n \right),\quad {\cal H}_1 :{\rm{primary\; user\; is\; active}\quad\quad}} \\ {u\left( n \right),\quad\quad\quad{\cal H}_0 :{\rm{primary\; user\; is\; inactive}}} \\ \end{array}} \right.$$ The primary signal $s(n)$ is an independent, identically distributed (i.i.d.) random process with zero mean and variance $E\left[ {\left| {s(n)} \right|^2 } \right] = \sigma _s^2$, and the noise $u(n)$ is an i.i.d. random process with zero mean and variance $E\left[ {\left| {u(n)} \right|^2 } \right] = \sigma _u^2$. We assume that the primary signal $s(n)$ is a complex MPSK signal, and the noise $u(n)$ is complex Gaussian. As the performance criteria for the proposed spectrum sensing method, the two important parameters worth mentioning are: probability of detection and probability of false alarm. The probability of detection, denoted as $P_D$, is the probability that the CRSN node successfully detects the primary user when it’s active, under hypothesis $ \mathcal {H}_1$. The probability of false alarm, denoted as $P_{FA}$, is the probability that the CRSN node falsely determines the presence of primary signal when the primary user is actually inactive, under hypothesis $\mathcal {H}_0$. The energy detector is as follows: $$T(y) = \frac{1}{N}\sum\limits_{n = 1}^N {\left| {y(n)} \right|^2 }\label{eqn3}$$ According to the Central Limit Theorem, the statistic $T(y)$ is approximately Gaussian distributed when $N$ is large enough under both hypothesis $\mathcal {H}_1$ and $\mathcal {H}_0$. The probability density function (PDF) of the statistic $T(y)$ can be expressed as: $$T(y) \sim \left\{ {\begin{array}{*{20}c} {\mathcal {N}\left( {\mu _0 ,\sigma _0^2 } \right),{\rm{\ \ \ under \ \ \mathcal {H}_0}}} \\ {\mathcal {N}\left( {\mu _1 ,\sigma _1^2 } \right),{\rm{\ \ \ under \ \ \mathcal {H}_1}}} \\ \end{array}} \right. \label{eqn4}$$ When the primary signal $s(n)$ is a complex MPSK signal and the noise $u(n)$ is complex Gaussian [@E-CR5], we can derive the probability of false alarm: $$P_{FA} \left( {\varepsilon ,N} \right) = Q\left( {(\frac{\varepsilon }{{\sigma _u^2 }} - 1)\sqrt N } \right) \label{eqn7}$$ where $Q\left( x \right) = \frac{1}{{\sqrt {2\pi } }}\int_x^\infty {\exp ( - \frac{{t^2 }}{2})dt}$ is the tail probability of the standard normal distribution (also known as the Q function). For a certain threshold $\varepsilon$, the probability of detection can be expressed as: $$P_D \left( {\varepsilon ,N} \right) = Q\left( {(\frac{{\varepsilon - \sigma _s^2 }}{{\sigma _u^2 }} - 1)\sqrt {\frac{N}{{2\sigma _s^2 /\sigma _u^2 + 1}}} } \right)\label{eqn8}$$ Since the slot period $T$ is short enough, we can assume that the primary user activity remains unchanged during a single slot. When the CRSN node fails to detect the PU signal, its signal will collide with the primary user signal and bring interference into the PU system. We denote $P_E=1- P_D$ as the probability of missed detection, and have the following assumption: **AS4:** There is a maximal missed detection probability that the PU system can tolerate, and a typical value for this parameter is 0.1 [@E-CR6]. $P_E$ should be smaller than this value.
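To make the detector concrete, the following short numerical sketch (not part of the original derivation; the helper and variable names such as `sigma_s2` and `sigma_u2` are ours) evaluates the test statistic $T(y)$ and the closed-form expressions for $P_{FA}$ and $P_D$ above with NumPy/SciPy, and also returns the threshold that meets a target missed-detection probability $P_E$ under **AS4**.

```python
# Minimal sketch (illustrative, not the paper's code): energy-detection
# statistic and the CLT-based closed-form P_FA / P_D expressions.
import numpy as np
from scipy.stats import norm

Q = norm.sf        # Q(x): tail probability of the standard normal distribution
Qinv = norm.isf    # inverse Q function

def test_statistic(y):
    """T(y) = (1/N) * sum_n |y(n)|^2 over one sensing slot."""
    return np.mean(np.abs(y) ** 2)

def p_false_alarm(eps, N, sigma_u2):
    """P_FA for threshold eps, N samples and noise variance sigma_u2."""
    return Q((eps / sigma_u2 - 1.0) * np.sqrt(N))

def p_detection(eps, N, sigma_s2, sigma_u2):
    """P_D for threshold eps under the Gaussian (CLT) approximation."""
    gamma = sigma_s2 / sigma_u2                       # received primary SNR
    return Q(((eps - sigma_s2) / sigma_u2 - 1.0) * np.sqrt(N / (2.0 * gamma + 1.0)))

def threshold_for_pe(P_E, N, sigma_s2, sigma_u2):
    """Threshold giving P_D = 1 - P_E, i.e. the missed-detection target of AS4."""
    gamma = sigma_s2 / sigma_u2
    return sigma_u2 * (1.0 + gamma + np.sqrt((2.0 * gamma + 1.0) / N) * Qinv(1.0 - P_E))
```

Substituting the threshold returned by `threshold_for_pe` into `p_false_alarm` reproduces numerically the false-alarm expression derived in the next paragraph.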
Because $Q\left( \cdot \right)$ is monotonically decreasing, we find that the probability of false alarm drops as the sample number $N$ increases: $$P_{FA} = Q\left( {\sqrt {2\sigma _s^2 /\sigma _u^2 + 1} Q^{ - 1} \left( {1 - P_E } \right) + \sqrt N \sigma _s^2 /\sigma _u^2 } \right) \label{eqn10}$$ The resulted probability that the CRSN node is allowed to transmit is: $$\begin{array}{*{20}c} {p'_t = \left( {1 - P_{FA} } \right)p({\rm{H}}_0 ) + \left( {1 - P_D } \right)p({\rm{H}}_1 )} \\ {\begin{array}{*{20}c} { = \left( {1 - Q\left( {\sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right) + \sqrt N \frac{{\sigma _s^2 }}{{\sigma _u^2 }}} \right)} \right)} \\ { \times p({\rm{H}}_0 ) + P_E p({\rm{H}}_1 )} \\ \end{array}} \\ \end{array} \label{eqn11}$$ where $p({\rm{H}}_0 )$ and $p({\rm{H}}_1 )$ are the inactive and active probabilities of the primary user, respectively. Leaving out the collision probability $P_C = P_E p({\rm H}_1 )$, we can obtain the effective transmission probability available for CRSN node: $$\begin{array}{l} p_t = p'_t - P_E p({\rm{H}}_1 ) \\ = \left( {1 - Q\left( {\sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right) + \sqrt N \frac{{\sigma _s^2 }}{{\sigma _u^2 }}} \right)} \right)p({\rm{H}}_0 ) \\ \end{array} \label{eqn12}$$ Denoting the energy consumed in one sample as $E_{sample}$, the average AmOS power consumption can be expressed as: $$P_{_{AmOS}} = \frac{{E_{sample} \times N}}{T} \label{eqn13}$$ Rewriting the AmOS power expression with respect to the effective transmission probability $p_t$ gives: $$\begin{array}{l} P_{_{AmOS}} \left( {p_t } \right) \\ = \left( {\frac{{Q^{ - 1} \left( {1 - \frac{{p_t }}{{p({\rm{H}}_0 )}}} \right) - \sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right)}}{{\sigma _s^2 /\sigma _u^2 }}} \right)^2 \times \frac{{E_{sample} }}{T} \\ \end{array} \label{eqn14}$$ Note that (10) is valid only when $p_t$ falls in the range of: $$\left( {1 - Q\left( {\sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right)} \right)} \right)p({\rm{H}}_0 ) < p_t < p({\rm{H}}_0 ) \label{eqn15}$$ When $0 < p_t < \left( {1 - Q\left( {\sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right)} \right)} \right)p({\rm{H}}_0 ) $, the transmission probability is so small that the requirement of **AS4** can always be met. In this case, we don’t have to do any spectrum sensing, and $ P_{_{AmOS}} \left( {p_t } \right) = 0$. AppOS: Distortion-Constrained Source Sensing -------------------------------------------- In this subsection, we step forward to explore the connection between $p_t$ and average AppOS power $P_{_{AppOS}}$. We model the power consumption of the AppOS task, which comprises the target sensing application, source-channel coding and transmission. ![Gaussian cognitive radio sensor network.[]{data-label="Fig4"}](Fig4){width="40.00000%"} As shown in Fig.\[Fig4\], we consider the Gaussian source $S$ with zero mean and variance $\sigma _S^2$. The source generates symbols at a constant rate $L$ symbols per second. Every CRSN node’s observation includes a Gaussian noise $W_i$ with zero mean and equal variance $\sigma _W^2$. The source $S$ is finally recovered as $\hat S$ at the Access Point(AP). Every CRSN node first compresses its observation, then transmit it to AP over the MAC using independently generated channel codes. This is a multiterminal source coding system, and can be classified as the CEO problem. 
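As an illustration of how (10) can be evaluated in practice, the sketch below (again ours, not taken from the paper; parameter names such as `E_sample`, `T_slot` and `p_H0` are illustrative) returns the average AmOS power for a target effective transmission probability $p_t$, including the low-$p_t$ regime below the range (11) in which no spectrum sensing is needed.

```python
# Minimal sketch (illustrative): average AmOS power of Eq. (10) as a function
# of the effective transmission probability p_t, valid for p_t < p(H_0).
import numpy as np
from scipy.stats import norm

def amos_power(p_t, P_E, sigma_s2, sigma_u2, p_H0, E_sample, T_slot):
    gamma = sigma_s2 / sigma_u2
    # lower end of the validity range (11): below it no sensing is required
    lo = (1.0 - norm.sf(np.sqrt(2.0 * gamma + 1.0) * norm.isf(1.0 - P_E))) * p_H0
    if p_t <= lo:
        return 0.0
    # invert Eq. (8): sqrt(N) needed to reach the target p_t
    sqrtN = (norm.isf(1.0 - p_t / p_H0)
             - np.sqrt(2.0 * gamma + 1.0) * norm.isf(1.0 - P_E)) / gamma
    return (sqrtN ** 2) * E_sample / T_slot
```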
For the symmetric Gaussian CEO problem, the $K$ nodes rate-distortion function [@R-D2] is: $$R_{source} \left( D \right) = \frac{L}{2}\log _2 \left( {\frac{{\left( {\frac{{\sigma _S^2 }}{D}} \right)^{\frac{1}{K}} }}{{1 - \frac{{\sigma _W^2 }}{K}\left( {\frac{1}{D} - \frac{1}{{\sigma _S^2 }}} \right)}}} \right) \label{eqn16}$$ Note that we will only use the Gaussian source for illustration later. For other sources, explicit form of the rate-distortion function hasn’t been derived. However, the outer bound can be obtained, which is exactly in the form of (12) [@R-D2]. The outer bound represents the worst case, which means for a given source variance $\sigma _S^2$, the Gaussian sources are the most difficult to compress. We assume that the communication channel of interest is AWGN channel. According to the Shannon Channel Capacity Theorem: $$R_{channel} \le W\log _2 \left\{ {1 + \frac{P}{{N_0 W}}} \right\} \label{eqn17}$$ where $W$ is the channel bandwidth, and $N_0$ is the unilateral noise power spectral density. The energy for correctly delivering of every bit of source information is: $$E_{bit} = \frac{P}{R} = N_0 W\frac{{2^{\frac{{R_{channel} }}{W}} - 1}}{{R_{channel} }} \label{eqn18}$$ Thus, the average AppOS power consumption can be expressed as: $$\begin{array}{l} P_{_{AppOS}} = p'_t E_{bit} R_{channel} \\ = p'_t N_0 W\left( {2^{\frac{{R_{channel} }}{W}} - 1} \right) \\ \end{array}\label{eqn19}$$ We should point out that the source is encoded at rate $R_{source}$. And $R_{source}$ is determined by the distortion $D$, the number of nodes $K$, the variance of source $\sigma _S^2$ and the variance of noise $\sigma _W^2$, regardless of the PU activity. However, only a fraction of $p_t$ throughout the time domain can be used for effective transmission. Therefore, in order to offset the slots forbidden for transmission, the channel coding rate should be higher than source coding rate: $$\begin{array}{l} R_{channel} = \frac{{R_{source} }}{{p_t }} \\ = \frac{L}{{2p_t }}\log _2 \left( \frac{{\left( {\frac{{\sigma _S^2 }}{D}} \right)^{\frac{1}{K}} }}{{1 - \frac{{\sigma _W^2 }}{K}\left( {\frac{1}{D} - \frac{1}{{\sigma _S^2 }}} \right)}} \right) \\ \end{array} \label{eqn20}$$ From (8), (15) and (16), we formulate the average AppOS power with respect to $p_t$: $$\begin{array}{l} P_{_{AppOS}} \left( {p_t} \right) = \left( {p_t + P_E p({\rm H}_1 )} \right)N_0 W \\ \times \left( {\left( \frac{{\left( {\frac{{\sigma _S^2 }}{D}} \right)^{\frac{1}{K}} }}{{1 - \frac{{\sigma _W^2 }}{K}\left( {\frac{1}{D} - \frac{1}{{\sigma _S^2 }}} \right)}} \right)^{\frac{L}{{2p_t W}}} - 1} \right) \\ \end{array} \label{eqn21}$$ *Proposition 1:* $P_{_{AppOS}} \left( {p_t}\right)$ is a monotonically decreasing function. *Proof: See Proof of Proposition 1 in Appendix B* The result can be confusing at the first glance, since we may intuitively think that the AppOS power would grow with the transmission probability. However, this is not the case. Now we provide a heuristic understanding. If the transmission probability $p_t$ is very low, the channel coding rate in the transmitting slots has to be very high to make up for those silent slots. According to (14), the transmission becomes less power efficient. Therefore, for certain distortion and source coding rate, the average AppOS power decreases with $p_t$. 
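A corresponding sketch for the average AppOS power in (17) is given below (illustrative only; the helper and its argument names are ours and it simply evaluates the closed-form expression). Evaluating it on a grid of $p_t$ values gives a quick numerical check of the monotone decrease claimed in Proposition 1.

```python
# Minimal sketch (illustrative): average AppOS power of Eq. (17).
# D is the distortion bound, K the number of nodes, L the source symbol rate,
# W the channel bandwidth and N0 the noise power spectral density.
import numpy as np

def appos_power(p_t, D, K, L, W, N0, sigma_S2, sigma_W2, P_E, p_H1):
    # compression term of the symmetric Gaussian CEO rate, Eq. (12)
    C = (sigma_S2 / D) ** (1.0 / K) / (1.0 - (sigma_W2 / K) * (1.0 / D - 1.0 / sigma_S2))
    p_tx = p_t + P_E * p_H1          # total transmitting fraction p'_t, cf. Eq. (8)
    return p_tx * N0 * W * (C ** (L / (2.0 * p_t * W)) - 1.0)
```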
Joint Power Consumption Model ----------------------------- On the one hand, if we allocate more power for AmOS, we are more confident about the status of the primary user, therefore we can grasp more opportunities for transmission. On the other hand, delivering the information of the target source to the AP also requires energy; the more power we allocate to AppOS, the higher source and channel coding rate we can achieve. Under the condition that power is constrained in CRSN node, we face a dilemma on how to balance the two tasks. The effective transmission probability $p_t$ is the key parameter that naturally connects the two sensing tasks. From (10) and (17), the total power consumption can be modeled as a function of $p_t$: $$\begin{array}{l} P_{total} \left( {p_t} \right) \\ = \left( {\frac{{Q^{ - 1} \left( {1 - \frac{{p_t }}{{p({\rm H}_0 )}}} \right) - \sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right)}}{{\sigma _s^2 /\sigma _u^2 }}} \right)^2 \times \frac{{E_{sample} }}{T} + \\ \left( {p_t + P_E p({\rm H}_1 )} \right)N_0 W\left( {\left( \frac{{\left( {\frac{{\sigma _S^2 }}{D}} \right)^{\frac{1}{K}} }}{{1 - \frac{{\sigma _W^2 }}{K}\left( {\frac{1}{D} - \frac{1}{{\sigma _S^2 }}} \right)}} \right)^{\frac{L}{{2p_t W}}} - 1} \right) \\ \end{array}\label{eqn22}$$ *Proposition 2:* When the probability of false alarm $P_{FA}<\frac{1}{2}$, $P_{total} \left( {p_t} \right)$ is a convex function with respect to $p_t$. That is to say, we can obtain the minimal total power consumption and a unique power efficient allocation solution for the CRSN node, if $P_{FA}$ falls into this range. *Proof: See Proof of Proposition 2 in Appendix A* *Theorem 1:* Under our slotted sensing and transmission scheme, there is always a minimal total power consumption and corresponding optimal power allocation scheme for the CRSN to achieve certain distortion constraint. *Proof: See Proof of Theorem 1 in Appendix B* We end this section by summarizing the above results. In the cases when $P_{FA}<\frac{1}{2}$, we know $P_{total} \left( {p_t} \right)$ is convex from *Proposition 1*. We can thus design efficient search algorithm to find the optimal power consumption. Otherwise, *Theorem 1* shows that, though the function is not convex, we can still find the optimal power consumption through exhaustive search, and calculate the corresponding power allocation scheme. Simulation Result ================= To validate the analysis of the proposed energy efficient Joint Source and Channel Sensing scheme, we present several numerical results. We use Matlab as our simulator. For all scenarios, we set the PU occupation rate to be $0.3$, which means the PU is active with this probability. The max miss detection probability in **AS4** is $0.1$; the energy consumed per sample in spectrum sensing is $E_{sample}=0.1$mW; the source is of unit variance, i.e. $\sigma _S^2 = 1$; the symbol rate of the source is $L = 1$M bauds; the distortion is constrained to be 0.1. There are $K=10$ nodes and the bandwidth of the considered AWGN channel is $W = 5$MHz. ![Average AmOS power under different PU SNR.[]{data-label="Fig5"}](Fig5){width="50.00000%"} ![Average AppOS power under different source SNR.[]{data-label="Fig6"}](Fig6){width="50.00000%"} ![Average total power under different PU SNR and source SNR.[]{data-label="Fig7"}](Fig7){width="50.00000%"} From Fig.5, we find that the average AmOS power increases with the effective allowed transmission probability, i.e. 
the more we pay on spectrum sensing, the more chances we obtain for transmission. We can see from Fig.6 that the average AppOS power drops as transmission probability increases, and this is consistent with the analysis of *Proposition 1*. Fig.5 and Fig.6 also show that as the spectrum environment and monitored source become noisier, the corresponding AmOS and AppOS power consumption increase. Finally, Fig.7 shows that there is a unique valley point in every curve, which corresponds to the optimal total power. Any other transmission probability and power allocation scheme will result in a higher total power consumption. In Fig.7, when the source SNR is $10$dB and the PU SNR is $-15$dB, the optimal $p_t$ is $0.42$, and the optimal total power is $4.8$W. $5.1\%$ of the power should be allocated to AmOS to achieve optimality. Conclusion ========== In this paper, we introduced a novel concept of Joint Source and Channel Sensing for Cognitive Radio Sensor Networks, which seeks to deliver the application source information to the access point in a most power efficient manner. We presented a specific slotted sensing and transmission scheme. By exploiting the relation between AmOS and AppOS tasks, we modeled their power consumption properly and jointly analyzed them. We proved that optimal power consumption and corresponding power allocation scheme exist for fixed distortion requirement. Finally, we present simulation results to support our analysis. Appendix A: Proof of Proposition 2 ================================== *Proof:* The former part of (18) can be viewed as a composite function $h\left( {p_t } \right){\rm{ }} = {\rm{ }}g\left( {f\left( {p_t } \right)} \right)$, where $f\left( {p_t } \right) = Q^{ - 1} \left( {1 - \frac{{p_t }}{{p({\rm{H}}_0 )}}} \right)$, and $$g\left( x \right) = \left( {\frac{{x - \sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right)}}{{\sigma _s^2 /\sigma _u^2 }}} \right)^2 \times \frac{{E_{sample} }}{T}$$ Since$f\left( {p_t } \right) - \sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right) = \sqrt N \frac{{\sigma _s^2 }}{{\sigma _u^2 }} > 0$, and all other parameters in $g(x)$ are non-negative, $g(x)$ is a convex and non-decreasing function. According to the property of inverse Q function, $f(p_t)$ is convex as long as $p_t > \frac{1}{2}p\left( {H_0 } \right)$, which is equivalent to $P_{FA} < \frac{1}{2}$. Now that $f\left( \cdot \right)$ and $g\left( \cdot \right)$ are convex functions and $g\left( \cdot \right)$ is non-decreasing, then the former part $h(x) = g(f(x))$ is convex. The latter part of (18) can prove to be convex through its second order derivative: $$\begin{array}{l} \frac{{\partial ^2 P_{_{AppOS}} }}{{\partial p_t ^2 }} = \\ \frac{{LN_0 \ln \left( C \right)C^{\frac{L}{{2p_t W}}} \left( {L\ln \left( C \right)\left( {p_t + P_E p\left( {H_1 } \right)} \right) + 4p_t WP_E p\left( {H_1 } \right)} \right)}}{{4p_t^4 W}} \\ \end{array}$$ where $C = \frac{{\left( {\frac{{\sigma _S^2 }}{D}} \right)^{\frac{1}{K}} }}{{1 - \frac{{\sigma _W^2 }}{K}\left( {\frac{1}{D} - \frac{1}{{\sigma _S^2 }}} \right)}} >1$, and all other parameters are positive. Obviously (20) is positive, thus the latter part of (18) is also convex. The sum power $P_{total} \left( {p_t} \right)$, as the sum of two convex functions, is convex. 
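As a practical aside (not part of the original proof), the convexity established above means that the optimal transmission probability can be located by a simple one-dimensional search. The sketch below reuses the illustrative helpers `amos_power` and `appos_power` from the previous sketches and minimizes $P_{total}(p_t)$ over a grid of the feasible range $(0, p({\rm H}_0))$; a ternary search would also work since the objective is convex when $P_{FA}<\frac{1}{2}$.

```python
# Minimal sketch (illustrative): grid search for the p_t minimizing
# P_total(p_t) = P_AmOS(p_t) + P_AppOS(p_t); reuses amos_power / appos_power.
import numpy as np

def optimal_pt(params, n_grid=2000):
    p_H0 = params["p_H0"]
    grid = np.linspace(1e-4, p_H0 - 1e-4, n_grid)   # feasible range (0, p(H_0))
    totals = [
        amos_power(pt, params["P_E"], params["sigma_s2"], params["sigma_u2"],
                   p_H0, params["E_sample"], params["T_slot"])
        + appos_power(pt, params["D"], params["K"], params["L"], params["W"],
                      params["N0"], params["sigma_S2"], params["sigma_W2"],
                      params["P_E"], 1.0 - p_H0)
        for pt in grid
    ]
    i = int(np.argmin(totals))
    return grid[i], totals[i]
```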
Appendix B: Proof of Theorem 1 ============================== *Proof:* The derivative of $P_{_{AppOS}}$ is: $$\begin{array}{l} P'_{_{AppOS}} \left( {p_t } \right) = \\ - \frac{{\left( {\left( {L\left( {1 + p\left( {H_1 } \right)} \right)\ln C - 2p_t W} \right)C^{\frac{L}{{2p_t W}}} + 2p_t W} \right)N_0 }}{{2p_t }} \\ \end{array}$$ After observing (21), we can easily find that $P'_{_{AppOS}} \left( {0^+}\right)=-\infty$ and $P'_{_{AppOS}} \left( {+\infty}\right)=0$. Given that (24) is positive, we can conclude that $P'_{_{AppOS}} \left( {p_t}\right)<0$, and $P_{_{AppOS}} \left( {p_t}\right)$ is monotonically decreasing. Thus *Proposition 1* is proved. When $p_t$ falls in the range of (11), $$\begin{array}{l} P_{_{AmOS}} ^\prime \left( {p_t } \right) = \frac{{2E_{sample} }}{T}Q^{ - 1} \left( {1 - \frac{{p_t }}{{p({\rm{H}}_0 )}}} \right)^\prime \times \\ \left( {\frac{{Q^{ - 1} \left( {1 - \frac{{p_t }}{{p({\rm{H}}_0 )}}} \right) - \sqrt {\frac{{2\sigma _s^2 }}{{\sigma _u^2 }} + 1} Q^{ - 1} \left( {1 - P_E } \right)}}{{\sigma _s^2 /\sigma _u^2 }}} \right) \\ \end{array}$$ It can be verified from (21) and (22) that $$\begin{array}{l} P'_{total} \left( {0^ + } \right) = P'_{_{AmOS}} \left( {0^ + } \right) + P'_{_{AppOS}} \left( {0^ + } \right) \\ = 0 + \left( { - \infty } \right) = - \infty \\ \end{array}$$ Since $Q^{ - 1} \left( 0 \right)^\prime = + \infty $, we get: $$\begin{array}{l} P'_{total} \left( {p\left( {H_0 } \right)^ - } \right) \\ = P'_{_{AmOS}} \left( {p\left( {H_0 } \right)^ - } \right) + P'_{_{AppOS}} \left( {p\left( {H_0 } \right)^ - } \right) = + \infty \\ \end{array}$$ \(23) and (24) show that the continuous function $P_{total} \left( {p_t} \right)$ decreases sharply at the left end and increases sharply at the right end. Thus, there is a minimal total power consumption point within the range of $p_t$, and we can calculate the optimal $P_{_{AmOS}}$ and $P_{_{AppOS}}$ respectively. [1]{} Goh H.G., Kae Hsiang Kwong, Chong Shen, Michie C., and Andonovic, I. “CogSeNet: A Concept of Cognitive Wireless Sensor Network," in *Proc. IEEE CCNC*, pp.1-2, Jan. 2010. Zahmati A.S., Hussain S., Fernando X., and Grami A., “Cognitive Wireless Sensor Networks: Emerging topics and recent challenges,", in *Proc. IEEE TIC-STH* pp.593-596 Sept. 2009. Akan O., Karli O., Ergul O., and M. Haardt, “Cognitive radio sensor networks," in *IEEE Network*, vol.23, no.4, pp.34-40 July 2009. Vijay G., Bdira E., and Ibnkahla M. “Cognitive approaches in Wireless Sensor Networks: A survey," *Proc. QBSC*, pp.177-180, May 2010. Gastpar M., “A lower bound to the AWGN remote rate-distortion function," *Proc. IEEE/SSP*, pp.1176-1181, 2005. Oohama Y., “Rate-distortion theory for Gaussian multiterminal source coding systems with several side informations at the decoder," in *IEEE Trans. Info. Theory*,vol.51, no.7, pp.2577-2593, July 2005. Jain A., Gunduz D., Kulkarni S.R., Poor H.V., and Verdu S., “Energy-distortion tradeoffs in multiple-access channels with feedback," in *Proc. IEEE ITW*, pp.1-5, Jan 2010. Jain A., Gunduz D., Kulkarni S.R., Poor H.V., and Verdu S., “Energy efficient lossy transmission over sensor networks with feedback," in *Proc. IEEE ICASSP*, pp.5558-5561 March 2010. Maleki S., Pandharipande A., and Leus G., “Energy-Efficient Distributed Spectrum Sensing for Cognitive Sensor Networks," in *Sensors Journal, IEEE*, pp.1-1 no.99 June 2010. Hang Su, and Xi Zhang, “Energy-Efficient Spectrum Sensing for Cognitive Radio Networks," in *Proc. IEEE ICC*, pp.1-5 May 2010. Y.C. Liang, Y.H. 

--- abstract: 'Learning representations from relative similarity comparisons, often called ordinal embedding, has gained increasing attention in recent years. Most existing methods are batch methods designed mainly within the convex optimization framework, e.g., the projected gradient descent method. However, they are generally time-consuming, because a singular value decomposition (SVD) is commonly required during the update, especially when the data size is very large. To overcome this challenge, we propose a stochastic algorithm called SVRG-SBB, which has the following features: (a) it is SVD-free via dropping convexity, with good scalability owing to the use of a stochastic algorithm, i.e., stochastic variance reduced gradient (SVRG), and (b) it uses an adaptive step size obtained by introducing a new stabilized Barzilai-Borwein (SBB) method, since the original version designed for convex problems might fail for the considered stochastic *non-convex* optimization problem. Moreover, we show that the proposed algorithm converges to a stationary point at a rate $\mathcal{O}(\frac{1}{T})$ in our setting, where $T$ is the number of total iterations. Numerous simulations and real-world data experiments are conducted to show the effectiveness of the proposed algorithm in comparison with state-of-the-art methods; in particular, it attains much lower computational cost with good prediction performance.' author: - | Ke Ma^1,2^, Jinshan Zeng^3,4^, Jiechao Xiong^5^, Qianqian Xu^1^, **\ ^1^ State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences\ ^2^ School of Cyber Security, University of Chinese Academy of Sciences\ ^3^ School of Computer Information Engineering, Jiangxi Normal University\ ^4^ Department of Mathematics, Hong Kong University of Science and Technology ^5^ Tencent AI Lab\ {make, xuqianqian, caoxiaochun}@iie.ac.cn, [email protected]\ [email protected], [email protected], [email protected]** bibliography: - 'aaai18.bib' title: | Stochastic Non-convex Ordinal Embedding with\ Stabilized Barzilai-Borwein Step Size --- Introduction ============ Ordinal embedding aims to learn a representation of data objects as points in a low-dimensional space such that the distances among these points agree with a set of relative similarity comparisons as well as possible. Relative comparisons are often collected by workers who are asked to answer the following question: *“Is the similarity between object $i$ and $j$ larger than the similarity between $l$ and $k$?"* The feedback of individuals provides a set of quadruplets, *i.e.*, $\{(i,j,l,k)\}$, which can be treated as the supervision for ordinal embedding. Without prior knowledge, the relative similarity comparisons may involve all objects, and the number of potential quadruplets can be $\mathcal{O}(n^4)$. The ordinal embedding problem was first studied by [@Shepard1962a; @Shepard1962b; @Kruskal1964a; @Kruskal1964b] in the psychometric community. In recent years, it has drawn a lot of attention in machine learning [@jamieson2011low; @53e99af7b7602d97023851bf; @2015arXiv150102861A; @NIPS2016_6554], statistical ranking [@McFee:2011:LMS:1953048.1953063; @kevin2011active], artificial intelligence [@Heikinheimo2013TheCA; @503], information retrieval [@7410580], and computer vision [@wah2014similarity; @wilberKKB2015concept], among other areas.
One of the typical methods for the ordinal embedding problem is the well-known Generalized Non-Metric Multidimensional Scaling (GNMDS) [@agarwal2007generalized], which aims at finding a low-rank Gram (or kernel) matrix $\mathbf{G}$ in Euclidean space such that the pairwise distances between the embeddings of the objects in the Reproducing Kernel Hilbert Space (RKHS) satisfy the relative similarity comparisons. As GNMDS uses the hinge loss to model the relative similarity between the objects, it neglects the information provided by the satisfied constraints when searching for the underlying structure in the low-dimensional space. To alleviate this issue, Crowd Kernel Learning (CKL) was proposed by [@tamuz2011adaptiive] via employing a scale-invariant loss function. However, the objective function used in CKL only considers the constraints which are strongly violated. Later, [@vandermaaten2012stochastic] proposed the Stochastic Triplet Embedding (STE), which jointly penalizes the violated constraints and rewards the satisfied constraints by using the logistic loss function. Note that the three aforementioned typical methods are based on convex formulations and employ the projected gradient descent method together with the singular value decomposition (SVD) to obtain the embedding. However, the huge number of comparisons and the computational complexity of the SVD significantly inhibit their use in large-scale and online applications. Structure Preserving Embedding (SPE) [@Shaw:2009:SPE:1553374.1553494] and Local Ordinal Embedding (LOE) [@Terada2014LocalOE] embed unweighted nearest neighbor graphs into Euclidean spaces with convex and non-convex objective functions. The nearest-neighbor adjacency matrix can be transformed into ordinal constraints, but such adjacency information is not normally available in scenarios that involve relative comparisons. With this limitation, SPE and LOE are not suitable for ordinal embedding based on quadruplet or triple comparisons. In contrast to the kernel-learning or convex formulations of ordinal embedding, the aforementioned methods have analogous non-convex counterparts. The non-convex formulations directly obtain the embedding instead of the Gram matrix. Batch gradient descent is not suitable for solving these large-scale ordinal embedding problems because of the expense of computing full gradients in each iteration. Stochastic gradient descent (SGD) is a common choice in this situation, as it exploits cheap stochastic gradients to achieve fast computation per iteration. In [@ghadimi2013stochastic], the $\mathcal{O}(\frac{1}{\sqrt{T}})$ convergence rate of SGD for the stochastic non-convex optimization problem was established, in the sense of convergence to a stationary point, where $T$ is the total number of iterations. As SGD converges slowly due to the inherent variance of the stochastic gradients, the stochastic variance reduced gradient (SVRG) method was proposed in [@rie2013accelerating] to accelerate SGD. For strongly convex functions, the linear convergence of SVRG with Option-II was established in [@rie2013accelerating], and later the linear convergence rates of SVRG with Option-I and of SVRG incorporating the Barzilai-Borwein (BB) step size were shown in [@NIPS2016_6286]. In the non-convex case, the $\mathcal{O}(\frac{1}{T})$ convergence rates of SVRG, in the sense of convergence to a stationary point, were shown in [@pmlr-v48-reddi16; @pmlr-v48-allen-zhua16] under certain conditions.
Although the BB step size has been incorporated into SVRG and its effectiveness has been shown in [@NIPS2016_6286] for the strongly convex case, it might not work when applied to some stochastic non-convex optimization problems. Actually, in our later simulations, we found that the absolute value of the original BB step size is unstable when applied to the stochastic non-convex ordinal embedding problem studied in this paper (see Figure \[fig:step\](a)). The absolute value of the original BB step size varies dramatically with respect to the epoch number. Such a phenomenon arises mainly because, without strong convexity, the denominator of the BB step size can be very close to 0, so that the BB step size blows up. This motivates us to investigate new stable and adaptive step size strategies for SVRG when applied to the stochastic non-convex ordinal embedding problem. In this paper, we introduce a new adaptive step size strategy called the stabilized BB (SBB) step size, obtained by adding another positive term to the absolute value of the denominator of the original BB step size so as to overcome its instability, and we then propose a new stochastic algorithm called SVRG-SBB, which incorporates the SBB step size for fast solution of the considered non-convex ordinal embedding model. In summary, our main contributions are as follows: - We propose a non-convex framework for the ordinal embedding problem by considering the optimization problem with respect to the original embedding variable rather than its Gram matrix. By exploiting this idea, we get rid of the positive semi-definite (PSD) constraint on the Gram matrix, and thus our proposed algorithm is SVD-free and has good scalability. - The introduced SBB step size can overcome the instability of the original BB step size when the original BB step size does not work. More importantly, the proposed SVRG-SBB algorithm outperforms most of the state-of-the-art methods, as shown by numerous simulations and real-world data experiments, in the sense that SVRG-SBB often has better generalization performance and significantly reduces the computational cost. - We establish the $O(\frac{1}{T})$ convergence rate of SVRG-SBB in the sense of convergence to a stationary point, where $T$ is the total number of iterations. Such a convergence result is comparable with the best existing convergence results in the literature. Stochastic Ordinal Embedding ============================ A. Problem Description ---------------------- There is a set of $n$ objects $\{o_1,\dots,o_n\}$ in an abstract space $\mathbf{O}$. We assume that a certain but unknown dissimilarity function $\xi:\mathbf{O}\times\mathbf{O}\rightarrow\mathbb{R}^{+}$ assigns the dissimilarity value $\xi_{ij}$ to a pair of objects $(o_i,o_j)$. With the dissimilarity function $\xi$, we can define the ordinal constraint $(i,j,l,k)$ from a set $\mathcal{P}\subset[n]^4$, where $$\mathcal{P}=\{(i,j,l,k)\ |\ \text{there exist }o_i,o_j,o_k,o_l\text{ such that }\xi_{ij}<\xi_{lk}\}$$ and $[n]$ denotes the set $\{1,\dots,n\}$. Our goal is to obtain representations of $\{o_1,\dots,o_n\}$ in the Euclidean space $\mathbb{R}^{d}$, where $d$ is the desired embedding dimension.
The embedding $\mathbf{X}\in\mathbb{R}^{n\times d}$ should preserve the ordinal constraints in $\mathcal{P}$ as much as possible, which means $$(i,j,l,k)\in\mathcal{P} \Leftrightarrow \xi_{ij} < \xi_{lk} \Leftrightarrow d^2_{ij}(\mathbf{X}) < d^2_{lk}(\mathbf{X})$$ where $d^2_{ij}(\mathbf{X})=\|\mathbf{x}_i-\mathbf{x}_j\|^2$ is the squared Euclidean distance between $\mathbf{x}_i$ and $\mathbf{x}_j$, and $\mathbf{x}_i$ is the $i^{th}$ row of $\mathbf{X}$. Let $\mathbf{D}=\{d^2_{ij}(\mathbf{X})\}$ be the distance matrix of $\mathbf{X}$. There are some existing methods for recovering $\mathbf{X}$ given ordinal constraints on the distance matrix $\mathbf{D}$. It is known that $\mathbf{D}$ is determined by the Gram matrix $\mathbf{G} = \mathbf{X}\mathbf{X}^T = \{g_{ij}\}^{n}_{i,j=1}$ as $ d^2_{ij}(\mathbf{X}) = g_{ii}-2g_{ij}+g_{jj}, $ and $$\mathbf{D} = \textit{diag}(\mathbf{G})\cdot\mathbf{1}^T-2\mathbf{G}+\mathbf{1}\cdot\textit{diag}(\mathbf{G})^T$$ where $\textit{diag}(\mathbf{G})$ is the column vector composed of the diagonal of $\mathbf{G}$ and $\mathbf{1}^T=[1,\dots,1]$. As $\text{rank}(\mathbf{G})\leq\min(n, d)$ and $d\ll n$ always holds, these methods [@agarwal2007generalized; @tamuz2011adaptiive; @vandermaaten2012stochastic] can be generalized as a semidefinite program (SDP) with a low-rank constraint, $$\label{eq:1} \underset{\mathbf{G}\in\mathbb{R}^{n\times n}}{\min} \ \ L(\mathbf{G})+\lambda\cdot\text{tr}(\mathbf{G}) \quad \text{s.t.} \quad \mathbf{G}\succeq 0$$ where $ L(\mathbf{G})=\frac{1}{|\mathcal{P}|}\sum_{p\in\mathcal{P}}l_p(\mathbf{G}) $ is a convex function of $\mathbf{G}$ which satisfies $$l_p(\mathbf{G}): \left\{ \begin{matrix} > 0,\ & d^2_{ij}(\mathbf{X}) > d^2_{lk}(\mathbf{X})\\ \leq 0,\ & \text{otherwise,} \end{matrix} \right.$$ and $\text{tr}(\mathbf{G})$ is the trace of the matrix $\mathbf{G}$. To obtain the embedding $\mathbf{X}\in\mathbb{R}^{n\times d}$, projected gradient descent is performed. The basic idea of the projected gradient descent method is as follows: a batch gradient descent step over all $p\in\mathcal{P}$ is first used to update the Gram matrix $\mathbf{G}$, $${\mathbf{G}}'_{t} = \mathbf{G}_{t-1}-\eta_{t}(\nabla L(\mathbf{G}_{t-1})+\lambda \mathbf{I})$$ where $t$ denotes the current iteration and $\eta_t$ is the step size; then ${\mathbf{G}}'_{t}$ is projected onto the positive semi-definite (PSD) cone $\mathbb{S}_+$, $ \mathbf{G}_{t} = \Pi_{\mathbb{S}_+}({\mathbf{G}}'_{t}); $ and finally, once the iterates converge, the embedding $\mathbf{X}$ is obtained by projecting $\mathbf{G}$ onto the subspace spanned by its largest $d$ eigenvectors via an SVD. B. Stochastic Non-convex Ordinal Embedding ------------------------------------------ Although the SDP (\[eq:1\]) is a convex optimization problem, this approach has some disadvantages: (i) the projection onto the PSD cone $\mathbb{S}_+$, which is performed by an expensive SVD owing to the absence of any prior knowledge of the structure of $\mathbf{G}$, is a computational bottleneck of the optimization; and (ii) the desired dimension of the embedding $\mathbf{X}$ is $d$, so we would like the Gram matrix $\mathbf{G}$ to satisfy $\text{rank}(\mathbf{G})\leq d$. If $\text{rank}(\mathbf{G})\gg d$, the number of degrees of freedom of $\mathbf{G}$ is much larger than that of $\mathbf{X}$, which leads to over-fitting. In that case, even though $\mathbf{G}$ is a globally optimal solution of (\[eq:1\]), the subspace spanned by its largest $d$ eigenvectors will produce a less accurate embedding.
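To make the cost structure of this convex pipeline concrete, the following is a minimal NumPy sketch of one projected-gradient update and of the final spectral step. It is only an illustration of the scheme described above, with the loss gradient left as an abstract callable; it is not the specific implementation used by any of the cited methods.

```python
import numpy as np

def psd_project(G):
    """Projection onto the PSD cone S_+: the expensive eigen-decomposition step."""
    w, V = np.linalg.eigh((G + G.T) / 2.0)          # symmetrize, then eigendecompose
    return (V * np.clip(w, 0.0, None)) @ V.T        # keep only non-negative eigenvalues

def projected_gradient_step(G_prev, grad_L, lam, eta):
    """One update G_t = Pi_{S_+}( G_{t-1} - eta * (grad L(G_{t-1}) + lam * I) )."""
    n = G_prev.shape[0]
    return psd_project(G_prev - eta * (grad_L(G_prev) + lam * np.eye(n)))

def embedding_from_gram(G, d):
    """Final step: project G onto its top-d eigenvectors, so that X X^T approximates G."""
    w, V = np.linalg.eigh(G)
    top = np.argsort(w)[::-1][:d]
    return V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
```

The eigendecomposition inside `psd_project` is the $O(n^3)$ operation identified above as the bottleneck, which is exactly what the non-convex formulation introduced next avoids.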
We can tune the regularization parameter $\lambda$ to force $\{\mathbf{G}_t\}$ to be low-rank, and cross-validation is the most commonly used technique for doing so, which incurs extra computational cost. In summary, projection and parameter tuning render gradient descent methods computationally prohibitive for learning the embedding $\mathbf{X}$ from the ordinal information $\mathcal{P}$. To overcome these challenges, we exploit non-convex and stochastic optimization techniques for the ordinal embedding problem. To avoid projecting the Gram matrix $\mathbf{G}$ onto the PSD cone $\mathbb{S}_{+}$ and tuning the parameter $\lambda$, we directly optimize $\mathbf{X}$ and propose the unconstrained optimization problem of learning the embedding $\mathbf{X}$, $$\label{eq:2} \underset{\mathbf{X}\in\mathbb{R}^{n\times d}}{\min}\ F(\mathbf{X}):=\frac{1}{|\mathcal{P}|}\ \underset{p\in\mathcal{P}}{\sum}\ f_p(\mathbf{X})$$ where $f_p(\mathbf{X})$ is a function of $$\triangle_p = d^2_{ij}(\mathbf{X})-d^2_{lk}(\mathbf{X}),\ p=(i,j,l,k),$$ which satisfies $$f_p(\mathbf{X}): \left\{ \begin{matrix} \leq 0,\ & \triangle_p\leq 0\\ > 0,\ & \text{otherwise.} \end{matrix} \right.$$ The loss function $f_p(\mathbf{X})$ can be chosen as the hinge loss [@agarwal2007generalized] $$\label{eq:hinge} f_p(\mathbf{X}) = \max\{0, 1+\triangle_p\},$$ the scale-invariant loss [@tamuz2011adaptiive] $$\label{eq:scale-invariant} f_p(\mathbf{X}) = \log\frac{d^2_{lk}(\mathbf{X})+\delta}{d^2_{ij}(\mathbf{X})+d^2_{lk}(\mathbf{X})+2\delta},$$ where $\delta\neq 0$ is a scalar which avoids degeneracy and preserves numerical stability, the logistic loss [@vandermaaten2012stochastic] $$\label{eq:logistic} f_p(\mathbf{X}) = \log(1+\exp(\triangle_p)),$$ or the loss obtained by replacing the Gaussian kernel in (\[eq:logistic\]) by the Student-$t$ kernel with degree $\alpha$ [@vandermaaten2012stochastic] $$\label{eq:student} f_p(\mathbf{X}) = -\log\frac{\left(1+\frac{d^2_{ij}(\mathbf{X})}{\alpha}\right)^{-\frac{\alpha+1}{2}}}{\left(1+\frac{d^2_{ij}(\mathbf{X})}{\alpha}\right)^{-\frac{\alpha+1}{2}}+\left(1+\frac{d^2_{lk}(\mathbf{X})}{\alpha}\right)^{-\frac{\alpha+1}{2}}}.$$ Since (\[eq:2\]) is an unconstrained optimization problem, it is obvious that the SVD and the parameter $\lambda$ are avoided. Moreover, instead of batch methods like the gradient descent method, we use a fast stochastic gradient descent algorithm like SVRG to solve the non-convex problem (\[eq:2\]). SVRG with Stabilized BB Step Size ================================= A. Motivation ------------- One open issue in stochastic optimization is how to choose an appropriate step size for SVRG in practice. The common practice is either to use a constant step size, a diminishing step size to enforce convergence, or a step size tuned empirically, which can be time consuming. Recently, [@NIPS2016_6286] proposed to use the Barzilai-Borwein (BB) method to automatically compute step sizes in SVRG for strongly convex objective functions, as follows: $$\label{eq:bb_step} \eta_{s} = \frac{1}{m}\frac{\|\tilde{\mathbf{X}}^{s}-\tilde{\mathbf{X}}^{s-1}\|^2_F}{\text{vec}(\tilde{\mathbf{X}}^{s}-\tilde{\mathbf{X}}^{s-1})^T\text{vec}(\mathbf{g}^s-\mathbf{g}^{s-1})},$$ where $\tilde{\mathbf{X}}^{s}$ is the $s$-th iterate of the outer loop of SVRG and $\mathbf{g}^{s} = \nabla F(\tilde{\mathbf{X}}^s)$. However, if the objective function $F$ is non-convex, the denominator of (\[eq:bb\_step\]) might be close to 0 or even negative, which makes the BB method fail.
For example, Figure \[fig:step\](a) shows that in simulations one can observe the instability of the absolute value of the original BB step size (called SBB$_0$ henceforth) in non-convex problems. Due to this issue, the original BB step size might not be suitable for the non-convex ordinal embedding problem. B. Stabilized BB step size -------------------------- An intuitive way to overcome this flaw of the BB step size is to add another positive term to the absolute value of the denominator of the original BB step size, which leads to the stabilized Barzilai-Borwein (SBB) step size introduced here: $$\label{eq:rbb_step} \begin{aligned} & \eta_{s} &=&\ \ \frac{1}{m}\cdot\left\|\tilde{\mathbf{X}}^{s}-\tilde{\mathbf{X}}^{s-1}\right\|^2_F\\ & &\times& \ \ \left(\left|\text{vec}(\tilde{\mathbf{X}}^{s}-\tilde{\mathbf{X}}^{s-1})^T\text{vec}(\mathbf{g}^s-\mathbf{g}^{s-1})\right|\right.\\ & &+&\ \ \left.\epsilon\left\|\tilde{\mathbf{X}}^{s}-\tilde{\mathbf{X}}^{s-1}\right\|^2_F\right)^{-1}, \quad \text{for some} \ \epsilon>0. \end{aligned}$$ Using the SBB step size, the SVRG-SBB algorithm is presented in Algorithm \[alg:svrg-bb\]. Actually, as shown by our later theorem (i.e., Theorem \[svrg\_bb\_nonconvex\]), if the Hessian of the objective function $\nabla^2 F(X)$ is nonsingular and the magnitudes of its eigenvalues are lower bounded by some positive constant $\mu$, then we can take $\epsilon=0$, which recovers the SBB$_0$ step size defined above. Even if no information about the Hessian of the objective function is available in practice, the SBB$_\epsilon$ step size with $\epsilon>0$ is simply a more conservative version of the SBB$_0$ step size. From (\[eq:rbb\_step\]), if the gradient $\nabla F$ is Lipschitz continuous with constant $L>0$, then the SBB$_\epsilon$ step size can be bounded as follows: $$\begin{aligned} \label{eq:bound-rbb} \frac{1}{m(L+\epsilon)} \leq \eta_k \leq \frac{1}{m\epsilon},\end{aligned}$$ where the lower bound follows from the $L$-Lipschitz continuity of $\nabla F$, and the upper bound follows directly from the specific form of (\[eq:rbb\_step\]). If, furthermore, $\nabla^2 F(X)$ is nonsingular and the magnitudes of its eigenvalues have a lower bound $\mu>0$, then the bound for SBB$_0$ becomes $$\begin{aligned} \label{eq:bound-rbb0} \frac{1}{m L} \leq \eta_k \leq \frac{1}{m \mu}.\end{aligned}$$ As shown in Figure \[fig:step\] (b), the SBB$_\epsilon$ step size with a positive $\epsilon$ remains stable in regimes where the SBB$_0$ step size is unstable and varies dramatically. Moreover, the SBB$_\epsilon$ step size usually changes significantly only during the first few epochs and then quickly becomes very stable. This is mainly because an epoch of SVRG-SBB contains many iterations, so the algorithm may already be close to a stationary point after a single epoch; from the second epoch on, the SBB$_\epsilon$ step sizes are roughly the inverse of the sum of the local curvature of the objective function and the parameter $\epsilon$. C. Convergence Results ---------------------- In this subsection, we establish the convergence rate of SVRG-SBB, as stated in the following theorem. \[svrg\_bb\_nonconvex\] Let $\{\{{\bf X}_t^s\}_{t=1}^m\}_{s=1}^S$ be a sequence generated by Algorithm \[alg:svrg-bb\]. Suppose that $F$ is smooth and that $\nabla F$ is Lipschitz continuous with Lipschitz constant $L>0$ and bounded.
For any $\epsilon>0$, if $$\begin{aligned} \label{Eq:cond-m} m > \max \left\{ \frac{L^2}{\epsilon}\left(1+\frac{2L}{\epsilon}\right), 1+ \sqrt{1+\frac{8L^3}{\epsilon}}\right\} \cdot \epsilon^{-1},\end{aligned}$$ then for the output ${\bf X}_{\mathrm{out}}$ of Algorithm \[alg:svrg-bb\], we have $$\begin{aligned} \label{eq:rate} \mathbb{E}[\|\nabla F({\bf X}_{\mathrm{out}})\|^2] \leq \frac{F(\tilde{\bf X}^0)-F({\bf X}^*)}{T \cdot \gamma_S},\end{aligned}$$ where ${\bf X}^*$ is an optimal solution of , $T = m \cdot S$ is the total number of iterations, $\gamma_S$ is some positive constant satisfying $$\gamma_S \geq \min_{0\leq s \leq S-1} \left\{ \eta_s \left[ \frac{1}{2} - \eta_s\left(1+4(m-1)L^3\eta_s^2\right)\right]\right\},$$ and $\{\eta_s\}_{s=0}^{S-1}$ are SBB step sizes specified in . If further the Hessian $\nabla^2 F(X)$ exists and $\mu$ is the lower bound of the magnitudes of eigenvalues of $\nabla^2 F(X)$ for any bounded $X$, then the convergence rate still holds for SVRR-SBB with $\epsilon$ replaced by $\mu+\epsilon$. In addition, if $\mu>0$, then we can take $\epsilon=0$, and still holds for SVRR-SBB$_0$ with $\epsilon$ replaced by $\mu$. Theorem \[svrg\_bb\_nonconvex\] is an adaptation of [@pmlr-v48-reddi16 Theorem 2] via noting that the used SBB step size specified in satisfies . The proof of this theorem is presented in supplementary material. Theorem \[svrg\_bb\_nonconvex\] shows certain non-asymptotic rate of convergence of the Algorithm \[alg:svrg-bb\] in the sense of convergence to a stationary point. Similar convergence rates of SVRG under different settings have been also shown in [@pmlr-v48-reddi16; @pmlr-v48-allen-zhua16]. Note that the Lipschitz differentiability of the objective function is crucial for the establishment of the convergence rate of SVRG-SBB in Theorem \[svrg\_bb\_nonconvex\]. In the following, we give a lemma to show that a part of aforementioned objective functions (\[eq:scale-invariant\]), (\[eq:logistic\]) and (\[eq:student\]) in the ordinal embedding problem are Lipschitz differentiable. Considering the limited space of this paper, the readers are refered to ([]()) for detailed proofs. \[lemma:1\] The ordinal embedding functions (\[eq:scale-invariant\]), (\[eq:logistic\]) and (\[eq:student\]) are Lipschitz differentiable for any bounded variable $X$. Experiments {#section:experiment} =========== In this section, we conduct a series of simulations and real-world data experiments to demonstrate the effectiveness of the proposed algorithms. Three models including *GNMDS*, *STE* and *TSTE* are taken into consideration. Our source code could be found on the web[^1]. A. Simulations -------------- \[tabl:1\] We start with a small-scale synthetic experiment to show how the methods perform in an idealized setting, which provides sufficient ordinal information in noiseless case.\ \ **Settings.** The synthesized dataset consists of $100$ points $\{\mathbf{x}_i\}_{i=1}^{100}\subset\mathbb{R}^{10}$, where $\mathbf{x}_i\sim\mathcal{N}(\mathbf{0}, \frac{1}{20}\mathbf{I})$, where $\mathbf{I}\in \mathbb{R}^{10\times 10}$ is the identity matrix. The possible similarity triple comparisons are generated based on the Euclidean distances between $\{\mathbf{x}_i\}$. As [@NIPS2016_6554] has proved that the Gram matrix $\mathbf{G}$ can be recovered from $\mathcal{O}(pn\log n)$ triplets, we randomly choose $|\mathcal{P}|=10,000$ triplets as the training set and the rest as test set. 
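The data-generation step just described can be written down compactly. The following is a minimal sketch; the random seed, variable names and the exact triplet orientation convention are illustrative assumptions rather than details taken from the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dim, n_train = 100, 10, 10_000

# Ground-truth points x_i ~ N(0, I/20) and their pairwise squared Euclidean distances.
X_true = rng.normal(scale=np.sqrt(1.0 / 20.0), size=(n, dim))
D2 = ((X_true[:, None, :] - X_true[None, :, :]) ** 2).sum(-1)

# All triplets (i, j, k) with distinct indices such that d(i, j) < d(i, k).
triplets = [(i, j, k)
            for i in range(n) for j in range(n) for k in range(n)
            if i != j and i != k and j != k and D2[i, j] < D2[i, k]]

# Random split: 10,000 training triplets, the remainder held out for testing.
order = rng.permutation(len(triplets))
train = [triplets[t] for t in order[:n_train]]
test = [triplets[t] for t in order[n_train:]]

def generalization_error(X, held_out):
    """Fraction of held-out triplets violated by an embedding X (lower is better)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    violated = sum(d2[i, j] >= d2[i, k] for i, j, k in held_out)
    return violated / len(held_out)
```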
The regularization parameter and step size settings for the convex formulations follow the default settings of the STE/TSTE implementation[^2], so we do not choose the step size by line search or by the halving heuristic for the convex formulations. The embedding dimension is simply fixed to $10$, because the effect of different embedding dimensions has already been discussed in the original papers on GNMDS, STE and TSTE.\ \ **Evaluation Metrics.** The metrics used to evaluate the various algorithms are the generalization error and the running time. As the embedding $\mathbf{X}$ learned from a partial triple comparison set $\mathcal{P}\subset[n]^3$ should generalize to unknown triplets, the percentage of held-out triplets satisfied by the embedding $\mathbf{X}$ is used as the main quality metric. The running time is the time an algorithm spends while its training error is still larger than $0.15$.\ \ **Competitors.** We evaluate both the convex and the non-convex formulations of three objective functions (i.e., GNMDS, STE and TSTE). We set two baselines: ($1$) the convex objective functions, whose results are denoted by “convex”, and ($2$) the non-convex objective functions solved by batch gradient descent, denoted by “ncvx batch”. We compare the performance of SVRG-SBB$_\epsilon$ with SGD, SVRG with a fixed step size (called SVRG for short henceforth), and the batch gradient descent methods. As SVRG and its variant SVRG-SBB$_\epsilon$ perform $2m+|\mathcal{P}|$ (sub)gradient evaluations in each epoch, the batch and SGD solutions are evaluated with the same number of (sub)gradient evaluations as SVRG. In Figure \[fig:synthetic\], the $x$-axis is the computational cost measured by the number of gradient evaluations divided by the total number of triple-wise constraints $|\mathcal{P}|$. The generalization error is the result of $50$ trials with different initial $\mathbf{X}_0$. For each epoch, the median generalization error over the 50 trials is plotted together with the \[0.25, 0.75\] confidence interval. The experiment results are shown in Figure \[fig:synthetic\] and Table \[tabl:1\].\ \ **Results.** From Figure \[fig:synthetic\], the following phenomena can be observed. First, the algorithm SVRG-SBB$_0$ is unstable during the first few epochs for all three models and later becomes very stable. The eventual performance of SVRG-SBB$_0$ and that of SVRG-SBB$_\epsilon$ are almost the same in all three cases. Second, compared to the batch methods, all the stochastic methods, including SGD, SVRG and SVRG-SBB$_{\epsilon}$ ($\epsilon =0$ or $\epsilon>0$), converge fast during the first few epochs and quickly reach admissible results with relatively small generalization error. This is one of our main motivations for using the stochastic methods. In particular, for all three models, SVRG-SBB$_\epsilon$ outperforms all the other methods in the sense that it not only converges fastest but also achieves almost the best generalization error. Moreover, the advantage of SVRG-SBB$_\epsilon$ in terms of CPU time can also be observed from Table \[tabl:1\]. Specifically, the speedup of SVRG-SBB$_\epsilon$ over SVRG is about 4 times for all three models. Table \[tabl:1\] shows the computational complexity achieved by SGD, SVRG-SBB$_\epsilon$ and batch gradient descent for the convex and non-convex objective functions.
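For reference, the SBB$_\epsilon$ step size of (\[eq:rbb\_step\]) that drives SVRG-SBB in these runs, embedded in a generic SVRG outer/inner loop, can be sketched as follows. This is a schematic reconstruction rather than the released MATLAB code; the snapshot rule of the inner loop and the choice of the first-epoch step size are assumptions.

```python
import numpy as np

def sbb_step_size(X_curr, X_prev, g_curr, g_prev, m, eps):
    """Stabilized BB step size; eps > 0 gives SBB_eps, eps = 0 gives SBB_0."""
    dX = (X_curr - X_prev).ravel()
    dG = (g_curr - g_prev).ravel()
    sq = dX @ dX
    return sq / (m * (abs(dX @ dG) + eps * sq))

def svrg_sbb(grad_full, grad_sample, X0, n_samples, m, n_epochs, eps, eta0=1e-2):
    """Generic SVRG with the SBB step size; grad_sample(X, p) is the gradient of one f_p."""
    rng = np.random.default_rng(0)
    X_tilde, g_tilde = X0.copy(), grad_full(X0)
    X_prev = g_prev = None
    eta = eta0                                   # used only before the first SBB update
    for _ in range(n_epochs):
        if X_prev is not None:
            eta = sbb_step_size(X_tilde, X_prev, g_tilde, g_prev, m, eps)
        X = X_tilde.copy()
        for _ in range(m):
            p = rng.integers(n_samples)
            v = grad_sample(X, p) - grad_sample(X_tilde, p) + g_tilde   # variance-reduced gradient
            X -= eta * v
        X_prev, g_prev = X_tilde, g_tilde
        X_tilde, g_tilde = X, grad_full(X)       # snapshot at the last inner iterate (an assumption)
    return X_tilde
```

Here `grad_sample(X, p)` would be the gradient of one of the per-comparison losses above, e.g., of the hinge loss (\[eq:hinge\]) for a sampled triplet or quadruplet.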
All computation is done using MATLAB$^\text{\textregistered{}}$ R2016b, on a desktop PC with Windows$^\text{\textregistered{}}$ $7$ SP$1$ $64$ bit, with $3.3$ GHz Intel$^\text{\textregistered{}}$ Xeon$^\text{\textregistered{}}$ E3-1226 v3 CPU, and $32$ GB $1600$ MHz DDR3 memory. It is easy to see that for all objective functions, SVRG-SBB$_\epsilon$ gains speed-up compared to the other methods. Besides, we notice that the convex methods could be effective when $n$ is small as the projection operator will not be the bottleneck of the convex algorithm. [c||cc||cc||cc]{} & & &\ & MAP& Precision@40 & MAP & Precision@40 & MAP & Precision@40\ \ cvx & 0.2691 & 0.3840 & 0.2512 & 0.3686 & 0.2701 & 0.3883\ ncvx Batch & 0.3357 & 0.4492 & 0.3791 & 0.4914 & 0.3835 & 0.4925\ ncvx SGD & 0.3245 & 0.4379 & 0.3635 & 0.4772 & 0.3819 & 0.4931\ ncvx SVRG & 0.3348 & 0.4490 & *0.3872* & *0.4974* & *0.3870* & *0.4965*\ ncvx SVRG-SBB$_0$ & **0.3941** & **0.5040** & 0.3700 & 0.4836 & 0.3550 & 0.4689\ ncvx SVRG-SBB$_\epsilon$ & *0.3363* & *0.4500* & **0.3887** & **0.4981** & **0.3873** & **0.4987**\ \ cvx & 0.2114 & 0.3275 & 0.1776 & 0.2889 & 0.1989 & 0.3190\ ncvx Batch & 0.2340 & 0.3525 & 0.2252 & 0.3380 & 0.2297 & 0.3423\ ncvx SGD & 0.3369 & 0.4491 & 0.2951 & 0.4125 & 0.2390 & 0.3488\ ncvx SVRG & 0.3817 & 0.4927 & 0.3654 & 0.4804 & 0.3245 & 0.4395\ ncvx SVRG-SBB$_0$ & **0.3968** & **0.5059** & **0.3958** & **0.5054** & *0.3895* & **0.5002**\ ncvx SVRG-SBB$_\epsilon$ & *0.3940* & *0.5036* & *0.3921* & *0.5012* & **0.3896** & *0.4992*\ \ ncvx Batch & 0.2268 & 0.3470 & 0.2069 & 0.3201 & 0.2275 & 0.3447\ ncvx SGD & 0.2602 & 0.3778 & 0.2279 & 0.3415 & 0.2402 & 0.3514\ ncvx SVRG & 0.3481 & 0.4617 & 0.3160 & 0.4332 & 0.2493 & 0.3656\ SVRG-SBB$_0$ &**0.3900** & **0.4980** & **0.3917** & **0.5018** & **0.3914** & *0.5007*\ ncvx SVRG-SBB$_\epsilon$ & *0.3625* & *0.4719* & *0.3845* & *0.4936* & *0.3897* & **0.5013**\ \[tabl:3\] B. Image Retrieval on SUN397 ---------------------------- Here, we apply the proposed SVRG-SBB algorithm for a real-world dataset, i.e., SUN 397, which is generally used for the image retrieval task. In this experiment, we wish to see how the learned representation characterizes the “relevance” of the same image category and the “discrimination” of different image categories. Hence, we use the image representation obtained by ordinal embedding for image retrieval.\ \ **Settings.** We evaluate the effectiveness of the proposed stochastic non-convex ordinal embedding method for visual search on the SUN397 dataset. SUN$397$ consists of around 108K images from $397$ scene categories. In SUN$397$, each image is represented by a $1,600$-dimensional feature vector extracted by principle component analysis (PCA) from $12,288$-dimensional Deep Convolution Activation Features [@Gong2014]. We form the training set by randomly sampling $1,080$ images from $18$ categories with $60$ images in each category. Only the training set is used for learning an ordinal embedding and a nonlinear mapping from the original feature space to the embedding space whose dimension is $p=18$. The nonlinear mapping is used to predict the embedding of images which do not participate in the relative similarity comparisons. We use Regularized Least Square and Radial basis function kernel to obtain the nonlinear mapping. The test set consists of $720$ images randomly chose from $18$ categories with $40$ images in each category. 
We use ground truth category labels of training images to generate the triple-wise comparisons without any error. The ordinal constraints are generated like [@7410580]: if two images $i,\ j$ are from the same category and image $k$ is from the other categories, the similarity between $i$ and $j$ is larger than the similarity between $i$ and $k$, which is indicated by a triplet $(i,j,k)$. The total number of such triplets is $70,000$. Errors are then synthesized to simulate the human error in crowd-sourcing. We randomly sample $5\%$ and $10\%$ triplets to exchange the positions of $j$ and $k$ in each triplet $(i,j,k)$.\ \ **Evaluation Metrics.** To measure the effectiveness of various ordinal embedding methods for visual search, we consider three evaluation metrics, *i.e.*, precision at top-K positions (Precision@K), recall at top-K positions (Recall@K), and Mean Average Precision (MAP). Given the mapping feature $\mathbf{X}=\{\mathbf{x}_1, \mathbf{x}_2,\dots,\mathbf{x}_{720}\}$ of test images and chosen an image $i$ belonging to class $c_i$ as a query, we sort the other images according to the distances between their embeddings and $\mathbf{x}_i$ in an ascending order as $\mathcal{R}_i$. True positives ($\text{TP}^K_i$) are images correctly labeled as positives, which involve the images belonging to $c_i$ and list within the top K in $\mathcal{R}_i$. False positives ($\text{FP}^K_i$) refer to negative examples incorrectly labeled as positives, which are the images belonging to $c_l(l\neq i)$ and list within the top K in $\mathcal{R}_i$. True negatives ($\text{TN}^K_i$) correspond to negatives correctly labeled as negatives, which refer to the images belonging to $c_l(l\neq i)$ and list after the top K in $\mathcal{R}_i$. Finally, false negatives ($\text{FN}^K_i$) refer to positive examples incorrectly labeled as negatives, which are relevant to the images belonging to class $c_i$ and list after the top K in $\mathcal{R}_i$. We are able to define Precision@K and Recall@K used in this paper as: $ \text{Precision@}K = \frac{1}{n}\sum_i^{n}p_i^K=\frac{1}{n}\sum_i^{n}\frac{\text{TP}^K_i}{\text{TP}^K_i+\text{FP}^K_i} $ and $ \text{Recall@}K = \frac{1}{n}\sum_i^{n}r_i^K=\frac{1}{n}\sum_i^{n}\frac{\text{TP}^K_i}{\text{TP}^K_i+\text{FN}^K_i}. $ Precision and recall are single-valued metrics based on the whole ranking list of images determined by the Euclidean distances among the embedding $\{\mathbf{x}_i\}_{i=1}^n$. It is desirable to also consider the order in which the images from the same category are embedded. By computing precision and recall at every position in the ranked sequence of the images for query $i$, one can plot a precision-recall curve, plotting precision $p_i(r)$ as a function of recall $r_i$. Average Precision(AP) computes the average value of $p_i(r)$ over the interval from $r_i=0$ to $r_i=1$: $\text{AP}_i = \int_{0}^1 p_i(r_i)dr_i$, which is the area under precision-recall curve. This integral can be replaced with a finite sum over every position $q$ in the ranked sequence of the embedding: $\text{AP}_i = \sum_{q=1}^{40} p_i(q)\cdot\triangle r_i(q)$, where $\triangle r_i(q)$ is the change in recall from items $q-1$ to $q$. The MAP used in this paper is defined as $\text{MAP} = \frac{1}{n}\sum_{i=1}^{n}\text{AP}_i$.\ \ **Results.** The experiment results are shown in Table \[tabl:3\] and Figure \[fig:sun\]. 
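Before turning to the numbers, note that these retrieval metrics can be computed directly from each query's ranked relevance list. The following is a minimal sketch; the exact counting conventions (in particular what enters `n_relevant`) are assumptions.

```python
import numpy as np

def precision_at_k(rel, k):
    """Precision@K for one query; rel is the ranked boolean relevance list."""
    return float(np.asarray(rel, dtype=float)[:k].mean())

def average_precision(rel, n_relevant, max_rank=40):
    """AP_i = sum_q p_i(q) * delta_r_i(q); delta_r is 1/n_relevant at each relevant position."""
    rel = np.asarray(rel, dtype=bool)[:max_rank]
    hit_ranks = np.flatnonzero(rel) + 1                   # 1-based ranks of relevant items
    precisions = np.cumsum(rel)[hit_ranks - 1] / hit_ranks
    return float(precisions.sum() / n_relevant)

def mean_average_precision(rel_lists, n_relevant, max_rank=40):
    """MAP = average of AP_i over all queries."""
    return float(np.mean([average_precision(r, n_relevant, max_rank) for r in rel_lists]))
```

For the split described above, `rel` for a query would mark the other images of its category among the 720 ranked test images, and `n_relevant` would be the corresponding per-class count.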
As shown in Table \[tabl:3\] and Figure \[fig:sun\] with $K$ varying from $40$ to $100$, we observe that non-convex SVRG-SBB$_\epsilon$ consistently achieves the superior Precision@K, Recall@K and MAP results comparing against the other methods with the same gradient calculation. The results of GNMDS illustrate that SVRG-SBB$_\epsilon$ is more suitable for non-convex objective functions than the other methods. Therefore, SVRG-SBB$_\epsilon$ has a very promising potential in practice, because it generates appropriate step sizes automatically while running the algorithm and the result is robust. Moreover, under our setting and with small noise, all the ordinal embedding methods achieve the reasonable results for image retrieval. It illustrates that high-quality relative similarity comparisons can be used for learning meaningful representation of massive data, thereby making it easier to extract useful information in other applications. Conclusions =========== In this paper, we propose a stochastic non-convex framework for the ordinal embedding problem. We propose a novel stochastic gradient descent algorithm called SVRG-SBB for solving this non-convex framework. The proposed SVRG-SBB is a variant of SVRG method incorporating with the so-called stabilized BB (SBB) step size, a new, stable and adaptive step size introduced in this paper. The main idea of the SBB step size is adding another positive term in the absolute value of the denominator of the original BB step size such that SVRG-SBB can overcome the instability of the original BB step size when applied to such non-convex problem. A series of simulations and real-world data experiments are implemented to demonstrate the effectiveness of the proposed SVRG-SBB for the ordinal embedding problem. It is surprising that the proposed SVRG-SBB outperforms most of the state-of-the-art methods in the perspective of both generalization error and computational cost. We also establish the $O(1/T)$ convergence rate of SVRG-SBB in terms of the convergence to a stationary point. Such convergence rate is comparable to the existing best convergence results of SVRG in literature. Acknowledgment {#acknowledgment .unnumbered} ============== The work of Ke Ma is supported by National Key Research and Development Plan (No.2016YFB0800403), National Natural Science Foundation of China (No.U1605252 and 61733007). The work of Jinshan Zeng is supported in part by the National Natural Science Foundation of China (No.61603162), and the Doctoral start-up foundation of Jiangxi Normal University. The work of Qianqian Xu was supported in part by National Natural Science Foundation of China (No. 61672514), and CCF-Tencent Open Research Fund. Yuan Yao’s work is supported in part by Hong Kong Research Grant Council (HKRGC) grant 16303817, National Basic Research Program of China (No. 2015CB85600, 2012CB825501), National Natural Science Foundation of China (No. 61370004, 11421110001), as well as grants from Tencent AI Lab, Si Family Foundation, Baidu BDI, and Microsoft Research-Asia. [^1]: <https://github.com/alphaprime/Stabilized_Stochastic_BB> [^2]: <http://homepage.tudelft.nl/19j49/ste/Stochastic_Triplet_Embedding.html>
--- abstract: 'A negative differential resistance (NDR) in nanotransport is often ascribed to electron correlations. We present a simple example revealing that finite electrode bandwidths and an energy-dependent electrode density of states can cause a significant NDR, which may occur even in uncorrelated systems. So, special care is needed in assessing the role of electron correlations in the NDR.' author: - Ioan Bâldea - Horst Köppel title: '**Sources of negative differential resistance in electric nanotransport**' --- The fact that the current-voltage ($I$-$V$) characteristics of dc transport can exhibit a negative differential resistance (NDR) in systems described within a single-particle picture, and that the NDR is therefore not necessarily related to electron correlations, is well known in semiconductor physics.[@Frensley:91] However, in the nanophysics community the NDR in the $I$-$V$ curve is often ascribed to (presumably strong) electron correlations. In fact, some calculations performed on simple but nontrivial models of correlated electrons, like the interacting resonant level model, found no NDR effect far away from resonance,[@Mehta:06; @Mehta:07] while other calculations revealed a more [@Schmitteckert:08] or less [@Doyon:07; @Nishino:09] pronounced NDR effect at resonance. At the end of this note, we shall return to the NDR effect within the interacting resonant level model. Beforehand — and this is the main aim of the present work — we want to emphasize that other, more common sources of the NDR are relevant for nanotransport as well. Therefore, special care is needed if one attempts to ascribe the NDR to electron correlations. The naive “argument” behind the confusion that the NDR is an electron correlation effect seems to be the following. Within the Landauer approach to transport in uncorrelated systems, the current resulting from the imbalance between the source and drain chemical potentials $\mu_S = \varepsilon_F + eV_{sd}/2$ and $\mu_D = \varepsilon_F - eV_{sd}/2$ is expressed as an integral of the transmission coefficient $T(\varepsilon)$ over energies from $\varepsilon = \mu_D$ to $\varepsilon = \mu_S$. An NDR cannot occur because the current increases monotonically, since the integrand is positive \[$T(\varepsilon)\geq 0$\] and the integration range widens as the voltage $V_{sd}$ becomes higher. To illustrate that this is not the case, let us consider a two-terminal setup (Fig. \[fig:setup\]), consisting of a nanosystem \[quantum dot(s) or molecule(s)\] linked to semi-infinite leads (source and drain) at zero temperature. For simplicity, the two leads are assumed to have the same bandwidth $4t$ and the same coupling $\tau$ to (say) the dot. By gradually raising the source-drain voltage $V_{sd}$ starting from $V_{sd}=0$, the drain current $I_{sd}$ will first progressively increase, because the energy window $\Delta E$ of the (elastic) electron tunneling processes allowed by Pauli’s principle becomes broader (Fig. \[fig:setup\]a). However, increasing $V_{sd}$ further, beyond half of the electrode bandwidth ($e V_{sd}^{\ast} \equiv 2 t$), will diminish this energy window (Fig. \[fig:setup\]b), and this will be accompanied by a current reduction, which becomes more and more pronounced as the electrode band edge is approached. For $e V_{sd} \geq 4t$, elastic tunneling is no longer possible, and the current is completely blocked ($I_{sd}=0$).
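The counting behind this argument can be made quantitative in one line: for two bands of width $4t$ rigidly shifted by $\pm eV_{sd}/2$ (the geometry of Fig. \[fig:setup\]), the Pauli-allowed elastic window has width $\min(eV_{sd}, 4t-eV_{sd})$ as long as this quantity is positive. A minimal numerical sketch follows; the reduced units ($e=1$, $t=1$) are an illustrative choice.

```python
import numpy as np

def elastic_window(eV, t=1.0):
    """Width of the Pauli-allowed elastic tunneling window between two bands of
    width 4t whose centers are shifted by +eV/2 and -eV/2 (zero outside the overlap)."""
    eV = np.asarray(eV, dtype=float)
    return np.clip(np.minimum(eV, 4.0 * t - eV), 0.0, None)

V = np.linspace(0.0, 5.0, 6)
print(elastic_window(V))   # grows up to eV = 2t, then shrinks, and vanishes for eV >= 4t
```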
This fact that the current $I_{sd}$ should diminish as $V_{sd}$ exceeds $V_{sd}^{\ast}$ and is completely suppressed above the band edge ($4 t$) applies for a general two-terminal setup for a sufficiently weak hybridization $\Gamma_0 \equiv 2 \tau^2/t$. ![\[fig:setup\] (Color online) Schematical representation of a typical two-terminal setup. By gradually increasing the source-drain voltage $V_{sd}$ the energy window $\Delta E$ of the allowed elastic tunneling processes (a) increases for $ e V_{sd} < 2 t$, but (b) beyond the point $e V_{sd} = 2t$ (electrode half-bandwidth) it decreases. Elastic tunneling cannot occur for $e V_{sd} \geq 4 t$.](fig1a.eps){width="50.00000%"} ![\[fig:setup\] (Color online) Schematical representation of a typical two-terminal setup. By gradually increasing the source-drain voltage $V_{sd}$ the energy window $\Delta E$ of the allowed elastic tunneling processes (a) increases for $ e V_{sd} < 2 t$, but (b) beyond the point $e V_{sd} = 2t$ (electrode half-bandwidth) it decreases. Elastic tunneling cannot occur for $e V_{sd} \geq 4 t$.](fig1b.eps){width="50.00000%"} To make the analysis more specific, let us consider a point contact (noninteracting resonant level) model, wherein the nanosystem consists of a single nondegenerate energy level $\varepsilon_g$ linked to one-dimensional semi-infinite electrodes. The second-quantized Hamiltonian reads $$\begin{aligned} H & = & -t \sum_{l\leq -1} \left (c_{l}^{\dagger} c_{l-1} + h.c.\right) + \mu_S \sum_{l\leq -1} c_{l}^{\dagger} c_{l} \nonumber \\ & & -t \sum_{l\geq 1} \left (c_{l}^{\dagger} c_{l+1} + h.c.\right) + \mu_D \sum_{l\geq 1} c_{l}^{\dagger} c_{l} \label{eq-ham}\\ & & + \varepsilon_g c_{0}^\dagger c_{0} - \tau \left( c_{-1}^\dagger c_{0} + c_{1}^\dagger c_{0} + h.c. \right) \ . \nonumber\end{aligned}$$ As usual, we set $t = 1$ and $\varepsilon_F = 0$. We assume $\varepsilon_g \geq 0$ (n-type conduction) for simplicity, but because model (\[eq-ham\]) possesses particle-hole symmetry, one can replace $\varepsilon_g$ by $\vert\varepsilon_g\vert$ below. The electrode-dot coupling $\tau$ yields well known expressions of the embedding self-energies $ \Sigma_x(\varepsilon) = \Delta_x(\varepsilon) - i\Gamma_x(\varepsilon)/2 $ ($x=S,D$), where [@Caroli:71; @Nitzan:01] $$\begin{aligned} \displaystyle \Delta_x(\varepsilon) & = & \Delta(\varepsilon - \mu_x); \Gamma_x(\varepsilon) = \Gamma(\varepsilon - \mu_x); \nonumber \\ \Delta(\varepsilon) & = & \frac{\tau^2\varepsilon}{2 t^2}; \Gamma(\varepsilon) = \frac{\tau^2}{t^2} \sqrt{4 t^2 - \varepsilon^2} \ \theta(2 t - \vert \varepsilon\vert) . \label{eq-Sigma-x}\end{aligned}$$ They can be inserted into the Dyson equation $$G^{-1}(\varepsilon) = \varepsilon - \varepsilon_g - \Sigma_S (\varepsilon) - \Sigma_D(\varepsilon)$$ to obtain the retarded Green function $G(\varepsilon)$ of the embedded dot. 
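In numerical form, the embedding self-energies of Eq. (\[eq-Sigma-x\]) and the dot Green function obtained from the Dyson equation read as follows. This is only a sketch: the expressions are transcribed literally as written above (including the linear-in-$\varepsilon$ form of $\Delta$), with $t=1$ and an illustrative value of $\tau$.

```python
import numpy as np

t, tau = 1.0, 0.1                       # lead hopping and lead-dot coupling (t = 1 units)

def Delta(e):                           # real part of the embedding self-energy
    return tau**2 * e / (2.0 * t**2)

def Gamma(e):                           # broadening, nonzero only inside the band |e| < 2t
    e = np.asarray(e, dtype=float)
    inside = np.abs(e) < 2.0 * t
    return np.where(inside, (tau**2 / t**2) * np.sqrt(np.clip(4.0 * t**2 - e**2, 0.0, None)), 0.0)

def Sigma(e, mu):                       # Sigma_x(e) = Delta(e - mu_x) - i * Gamma(e - mu_x) / 2
    return Delta(e - mu) - 0.5j * Gamma(e - mu)

def G_dot(e, eps_g, mu_S, mu_D):        # retarded Green function of the embedded dot
    return 1.0 / (e - eps_g - Sigma(e, mu_S) - Sigma(e, mu_D))
```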
With the aid of the latter, the electric current can be expressed as (electron spin is disregarded) $$\begin{aligned} \displaystyle I_{sd} & = & \frac{e}{h}\int_{\mu_D}^{\mu_S} d\,\varepsilon T(\varepsilon) = \frac{e}{h}\int_{\mu_D}^{\mu_S} d\,\varepsilon \Gamma_S(\varepsilon) \Gamma_D(\varepsilon) \vert G(\varepsilon)\vert^2 , \nonumber \\ & = & \frac{e}{h}\int_{\mu_D}^{\mu_S} d\,\varepsilon \frac{\Gamma_D(\varepsilon) \Gamma_S(\varepsilon) } { \left[\varepsilon - \varepsilon_g - \overline{\Delta}(\varepsilon))\right]^2 + \overline{\Gamma}(\varepsilon)^2/4 } \ , \label{eq-I}\end{aligned}$$ where $\overline{\Gamma}(\varepsilon) \equiv \Gamma_D(\varepsilon) + \Gamma_S(\varepsilon)$ and $\overline{\Delta}(\varepsilon) \equiv \Delta_D(\varepsilon) + \Delta_S(\varepsilon)$. $I$-$V$ characteristics computed exactly by means of Eq. (\[eq-I\]) at resonance ($\varepsilon_g = 0$) are depicted by the thick lines in Fig. \[fig:on\]. These curves show that, indeed, the current is suppressed as the bias approaches the bandwidth and disappears beyond $e V_{sd} > 4t$. Away from resonance ($\varepsilon_g \neq 0$), a new aspect is visible in Fig. \[fig:off\]a. The current vanishes even below the bandwidth $4 t$. Practically, the suppression is complete at $V_{sd} = 4 t - \varepsilon_g$; beyond this value, the $I$-$V$ curves only exhibit negligible tails of widths $\sim \Gamma_0 = 2\tau^2/t$. On the other side, the exact $I$-$V$ characteristics of Figs. \[fig:on\] and \[fig:off\]a reveal that the current decreases well before reaching the value $V_{sd}^{\ast}=2 t/e$, which one could expect from Fig. \[fig:setup\]. This demonstrates that the finite bandwidth effect discussed above is only *one* reason why the NDR should occur. ![\[fig:on\] (Color online) $I$-$V$ curves at resonance ($\varepsilon_g =\varepsilon_F = 0$) for $\tau = 0.05; 0.1; 0.2; 0.4$ computed exactly (thick lines) and within approximation (i) described in the text (thin lines). Current $I_{sd}$ in units $ I_{sd}^{s} = \pi e\Gamma_0/h$.](fig2.eps){width="50.00000%"} ![\[fig:off\](Color online) $I$-$V$ curves out of resonance for $\tau = 0.1$ ($\Gamma_0=0.02$) computed (a) exactly and (b) within approximation (i) described in the text for $\varepsilon_g =0; 0.2; 0.4; 0.6; 0.8; 1$ (values increasing downwards). Current in units $et/h$.](fig3a.eps){width="50.00000%"} $ $\ ![\[fig:off\](Color online) $I$-$V$ curves out of resonance for $\tau = 0.1$ ($\Gamma_0=0.02$) computed (a) exactly and (b) within approximation (i) described in the text for $\varepsilon_g =0; 0.2; 0.4; 0.6; 0.8; 1$ (values increasing downwards). Current in units $et/h$.](fig3b.eps){width="50.00000%"} Significant physical insight can be gained by examining three limits of Eq. (\[eq-I\]): \(i) One can approximate the embedding energies by their values at $\varepsilon = \mu_x$ ($\Sigma_{S,D} \simeq -i\Gamma_0/2$) in the *whole* integration range, which means to simply ignore the $\theta$ step functions in Eq. (\[eq-Sigma-x\]). One then gets the current $$\displaystyle I_{sd}^{low} = \frac{e\Gamma_0}{h} % \frac{e}{h} \Gamma_0 \left( \arctan\frac{e V - 2\varepsilon_g}{2\Gamma_0} + \arctan\frac{e V + 2\varepsilon_g}{2\Gamma_0} \right) . \label{eq-wbl}$$ As this amounts to assume that the electrode bandwidth is the largest energy scale (more precisely, for $ V_{sd}, \varepsilon_g, \tau \ll t$), Eq. (\[eq-wbl\]) is usually referred to as the wide band limit. 
\(ii) Next, one can compute the current using the electrode density of states (DOS) $\Gamma_x$ for $\varepsilon = \mu_x$, but unlike above, considering the Heaviside $\theta$ functions in Eq. (\[eq-Sigma-x\]) $$\displaystyle I_{sd}^{fb} = \frac{e \Gamma_0}{h\left(1-\tau^2/t^2\right)} \left( \arctan\frac{\Lambda_{+}}{2\Gamma_0} + \arctan\frac{\Lambda_{-}}{2\Gamma_0} \right) , \label{eq-fb}$$ where $\Lambda_{\pm} \equiv \left[\min(e V_{sd}, 4 t - e V_{sd}) \pm 2 \varepsilon_g\right] \times \left(1 - \tau^2/t^2\right)$. Similar to approximation (i), the electrode DOS is assumed constant, but the fact that the electrode bandwidths are *finite* (the main physical aspect underlying Fig. \[fig:setup\]) is taken into account by this approximation. \(iii) Because the main contribution to the integral in Eq. (\[eq-I\]) comes from the pole of the Green function of the isolated dot, one can use the embedding energies calculated at $\varepsilon = \varepsilon_g$. In fact, this approximation yields very accurate $I$-$V$ curves, which are not shown because they could be hardly distinguished from the exact curves within the drawing accuracy of Figs. \[fig:on\], \[fig:off\]a, \[fig:exact-vs-approx\], and \[fig:fb\]. More instructive is however to furthermore assume that the voltage $V_{sd}$ is sufficiently high and extend the integration in Eq. (\[eq-I\]) from $-\infty$ to $+\infty$. The result is $$\displaystyle I_{sd}^{high} = \frac{e}{\hbar} \frac{\Gamma(\varepsilon_g - eV/2) \Gamma(\varepsilon_g + eV/2)} {\Gamma(\varepsilon_g - eV/2) + \Gamma(\varepsilon_g + eV/2)} . \label{eq-be}$$ $I$-$V$ curves in the limit (i) are depicted in Figs. \[fig:on\] (thin lines), \[fig:off\]b, and \[fig:exact-vs-approx\]. They show a monotonically increasing current, which exhibits a step at $e V_{sd} \simeq 2\varepsilon_g$ of width $\delta V_{sd}$ increasing with $\tau$ and rapidly saturates at an $\varepsilon_g$-independent value $I_{sd}^{s}=\pi e\Gamma_0/h$. Such curves are usually shown in textbooks, and this feeds the lore of the absent NDR in uncorrelated systems. What is wrong with the naive argument against the NDR in uncorrelated systems is that the transmission is *not* independent of $V_{sd}$. The $V_{sd}$-dependence enters via the electrode densities of states $\Gamma_{S,D}$ \[cf. Eq. (\[eq-Sigma-x\])\]. On one side, this dependence is considered by the $\theta$ functions of Eq. (\[eq-Sigma-x\]), which diminish the window of allowed tunneling processes. Approximation (ii) that accounts for this yields two qualitatively correct results: an NDR beyond $V_{sd}^{\ast}$, where the predicted $I$-$V$ curve exhibits a cusp (Fig. \[fig:fb\]) and a vanishing current for $e V_{sd} \geq 4t$. Quantitatively, the NDR onset (at $V_{sd} = V_{sd}^{\ast}$) is unsatisfactory; compare these approximate curves (label $fb$) with the exact ones in Figs. \[fig:exact-vs-approx\] and \[fig:fb\]. The NDR occurs well below the point predicted by this approximation. On the other side, not only the $\theta$ functions, but also the $\varepsilon$-dependence of the electrode DOS \[the square roots in Eq. (\[eq-Sigma-x\])\] is important. It is this fact that makes the finite bandwidth argument incomplete. The $\varepsilon$-dependence of $\Gamma_{S,D}$ is accounted for within approximation (iii). The comparison with the exact curves (Fig. 
\[fig:exact-vs-approx\]) reveals an excellent agreement at sufficiently higher voltages (as assumed within this approximation) and demonstrates that, to describe quantitatively the NDR, one has to consider both the allowed energy window, which is finite, and the energy dependence of the electrode DOS. In Fig. \[fig:exact-vs-approx\], we present exact $I$-$V$ characteristics from Eq. (\[eq-I\]) along with those computed within the three aforementioned approximations, Eqs. (\[eq-wbl\]), (\[eq-fb\]), and (\[eq-be\]). ![\[fig:exact-vs-approx\] (Color online) $I$-$V$ curves for $\tau = 0.1$ computed exactly and within the approximations described in the text: (a) at resonance $\varepsilon_d = 0$ and (b) out of resonance, $\varepsilon_d = 0.2$. Current in units $et/h$. Labels as in Eqs. (\[eq-wbl\]), (\[eq-fb\]), and (\[eq-be\]).](fig4a.eps){width="50.00000%"} $ $\ ![\[fig:exact-vs-approx\] (Color online) $I$-$V$ curves for $\tau = 0.1$ computed exactly and within the approximations described in the text: (a) at resonance $\varepsilon_d = 0$ and (b) out of resonance, $\varepsilon_d = 0.2$. Current in units $et/h$. Labels as in Eqs. (\[eq-wbl\]), (\[eq-fb\]), and (\[eq-be\]).](fig4b.eps){width="50.00000%"} As visible there, approximation (i) is accurate for lower voltages, while approximation (iii) is accurate for higher voltages. The crossover occurs at a voltage $V_{sd}^{NDR}$, which can be identified with the NDR onset. This value can be obtained by equating $$\label{eq-V-ndr} I_{sd}^{low}(V_{sd}^{NDR}) = I_{sd}^{high}(V_{sd}^{NDR}) .$$ Curves for $V_{sd}^{NDR}$ are presented in Fig. \[fig:V-cross\]. They show that for situations not very far away from resonance and sufficiently weak electrode-dot couplings $\tau$, $V_{sd}^{NDR}$ is considerably smaller than the value $e V_{sd}^{\ast} = 2 t$ expected from the finite bandwidth argument. The significant departure of the NDR onset predicted exactly and within approximation (ii) is also clearly depicted in Fig. \[fig:fb\]. For smaller $\tau$’s one can deduce an analytical estimate ($c\simeq 4$) $$\label{eq-ndr-onset} V_{sd}^{NDR} \simeq 2\varepsilon_g + c (t\tau^2)^{1/3} .$$ ![\[fig:fb\] (Color online) $I$-$V$ curves on resonance ($\varepsilon_g = 0$) for the three electrode-dot couplings $\tau$ specified in the inset computed exactly and within approximation (ii) described in the text (label $fb$). Notice that the latter exhibit a cusp at $ e V_{sd} = 2 t$ that marks the NDR onset in this approximation, which can be substantially higher than the exact NDR onset.](fig5.eps){width="50.00000%"} ![\[fig:V-cross\] (Color online) Curves for the NDR onset voltage $V_{sd}^{NDR}$ computed from Eq. (\[eq-V-ndr\]) for several level energies $\varepsilon_g$. Notice that for smaller electrode-dot couplings $\tau$ and not too far away from resonance, $V_{sd}^{NDR}$ is significantly smaller than 2 (half of electrode’s bandwidth).](fig6.eps){width="50.00000%"} Interesting for nanotransport are the electron level(s) not too misaligned with electrode’s Fermi level; otherwise, as illustrated by the curve for $\varepsilon_g = t$ in Fig. \[fig:off\]a, the current is very small. Therefore, the results on $V_{sd}^{NDR}$ expressed by Eq. (\[eq-ndr-onset\]) and Fig. \[fig:V-cross\] are perhaps the most relevant ones from an experimental perspective. At resonance and realistic parameters ($t\simeq 1$eV, $\tau \simeq 1$meV [@Goldhaber-GordonNature:98]), Eq. (\[eq-ndr-onset\]) yields $V_{sd}^{NDR} \simeq 40$meV. 
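Eq. (\[eq-V-ndr\]) defines $V_{sd}^{NDR}$ only implicitly, but it is straightforward to solve numerically. The sketch below works in reduced units ($e=\hbar=1$, so $h=2\pi$, and $t=1$) and uses the expressions (\[eq-wbl\]) and (\[eq-be\]) quoted above; the root bracket is an ad hoc choice suited to weak coupling not too far from resonance.

```python
import numpy as np
from scipy.optimize import brentq

t, tau = 1.0, 0.1
Gamma0 = 2.0 * tau**2 / t

def Gamma(e):                              # electrode-induced broadening (zero outside the band)
    return (tau**2 / t**2) * np.sqrt(4.0 * t**2 - e**2) if abs(e) < 2.0 * t else 0.0

def I_low(V, eps_g):                       # wide-band-limit current, units e = hbar = 1
    return (Gamma0 / (2.0 * np.pi)) * (np.arctan((V - 2.0 * eps_g) / (2.0 * Gamma0))
                                       + np.arctan((V + 2.0 * eps_g) / (2.0 * Gamma0)))

def I_high(V, eps_g):                      # high-voltage current
    gS, gD = Gamma(eps_g - V / 2.0), Gamma(eps_g + V / 2.0)
    return gS * gD / (gS + gD) if gS + gD > 0.0 else 0.0

def V_ndr(eps_g):                          # crossover voltage where I_low = I_high
    f = lambda V: I_low(V, eps_g) - I_high(V, eps_g)
    return brentq(f, 0.05, 4.0 * t - 2.0 * eps_g - 0.05)

print(V_ndr(0.0), V_ndr(0.2))              # close to the estimate 2*eps_g + 4*(t*tau**2)**(1/3)
```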
Based on this estimate, we argue that the NDR discussed here can be observed. On one side, correlations are important only at much lower voltages; in single-electron transistors,[@Goldhaber-GordonNature:98] the relevant scale is the Kondo temperature $T_K$ ($e V_{sd} \alt k_BT_K \alt 0.1$meV). For voltages of tens of mV, correlation effects (e. g., Kondo’s) are supprressed; the present uncorrelated limit is justifiable. On the other side, the estimated NDR onset voltages ($\sim 10$mV) are much lower than the electrode bandwidth ($\sim 1$eV), and a material damage prior to the NDR onset can be ruled out. For Si-based SETs, the material can support even much higher values, $V_{sd}\sim 1$V.[@Fujiwara] So, we hope that the present estimate will stimulate experimentalists to search NDR effects at moderate $V_{sd}$. Again quite relevant for experiments, the NDR onset can be controlled by tuning the level’s energy $\varepsilon_g$ with the aid of a gate potential. Gating methods were routinely employed for nanosystems in the past[@Goldhaber-GordonNature:98] and recently also in molecular transport.[@Reed:09] In (weakly-correlated) molecules, the level $\varepsilon_g$ would be either the highest occupied molecular orbital (HOMO)[@Reed:09] or the lowest unoccupied molecular orbital (LUMO, as in Fig. \[fig:setup\]), depending on which is closer to $\varepsilon_F$. There, $\tau \sim 1$eV and $\vert\varepsilon_g\vert \sim 1$eV.[@Reed:09] So, the NDR-onset \[cf. Eq. (\[eq-V-ndr\]) and Fig. \[fig:V-cross\]\] is expected at $V_{sd}$-values of a few eV, slightly higher than used in experiment.[@Reed:09] The present analysis can be extended without difficulty to nanosystems/molecules with several “active” electron levels. As long as these levels $\varepsilon_{g 1}, \varepsilon_{g 2},\ldots$ are well separated energetically and the hybridization is weak enough (a different situation can also be encountered, see Ref. ), they manifest themselves as current steps at the voltages $e V_{sd} \approx 2 \varepsilon_{g 1}, 2 \varepsilon_{g 2}, \ldots$. However, even in this case the finite electrode bandwidth and the energy dependence of the electrode DOS remain possible important sources of an NDR. Similar to other situations encountered in nanotransport,[@Baldea:2008b; @Baldea:2009c] we believe that the results for uncorrelated systems are instructive and could also be useful to correctly interpret the nanotransport in correlated systems. In the present concrete case, they could help to unravel the physical origin of the NDR. In the light of the present analysis, it is plausible to ascribe an NDR as an electron correlation effect in cases where the NDR was found within calculations to a correlated nanosystem carried out within the wide band limit. This is, e. g., the case of Refs.  and , where a weaker NDR effect was obtained at resonance at stronger Coulomb contact interactions. As suggested by Fig. \[fig:off\], the farther away from resonance, the more is the NDR onset pushed towards higher voltages ($e V_{sd}^{NDR} > 2 \vert\varepsilon_g\vert$). The values of $V_{sd}$ chosen in the figures shown in Ref.  do not belong to this range and the absence of an NDR could be related to this fact. Unlike the wide (infinite) band limit assumed in the aforementioned references, a discrete model of the electrodes, with a finite bandwidth $4 t$, *exactly* as in Eq. (\[eq-ham\]), has been utilized for the numerical calculations of Ref.  at resonance. The $I$-$V$ curves reported there exhibit a pronounced NDR effect. 
However, in view of the finite bandwidth assumed in that work, attributing this effect to electron correlations at rather high voltages requires special care. We believe that in order to interpret this effect reliably, one should first carefully subtract the contribution to the NDR due to the finite bandwidth and the energy-dependent electrode DOS discussed above. The financial support for this work from the Deutsche Forschungsgemeinschaft is gratefully acknowledged. [10]{} W. R. Frensley, Rev. Mod. Phys. [**63**]{}, 215 (1991). P. Mehta and N. Andrei, Phys. Rev. Lett. [**96**]{}, 216802 (2006). P. Mehta, S. Chao, and N. Andrei, cond-mat/0703426. E. Boulat, H. Saleur, and P. Schmitteckert, Phys. Rev. Lett. [**101**]{}, 140601 (2008). A. Nishino, T. Imamura, and N. Hatano, Phys. Rev. Lett. [**102**]{}, 146803 (2009). B. Doyon, Phys. Rev. Lett. [**99**]{}, 076806 (2007). C. Caroli, R. Combescot, P. Nozières, and D. Saint-James, J. Phys. C: Solid State Physics [**4**]{}, 916 (1971). A. Nitzan, Ann. Rev. Phys. Chem. [**52**]{}, 681 (2001). D. Goldhaber-Gordon, H. Shtrikman, D. Mahalu, D. Abusch-Magder, U. Meirav, and M. A. Kastner, Nature [**391**]{}, 156 (1998). H. Liu, T. Fujisawa, H. Inokawa, Y. Ono, A. Fujiwara, and Y. Hirayama, Appl. Phys. Lett. [**92**]{}; A. Fujiwara (private communication). H. Song, Y. Kim, Y. H. Jang, H. Jeong, M. A. Reed, and T. Lee, Nature [**462**]{}, 1039 (2009). M. C. Toroker and U. Peskin, J. Phys. B: Atom. Mol. Opt. Phys. [**42**]{}, 044013 (2009). I. Bâldea and H. Köppel, Phys. Rev. B [**78**]{}, 115315 (2008). I. Bâldea and H. Köppel, Phys. Rev. B [**80**]{}, 165301 (2009).
--- abstract: 'Within the idealized scheme of a 1-dimensional Frenkel-Kontorova-like model, a special “quantized” sliding state was found for a solid lubricant confined between two periodic layers [@Vanossi06]. This state, characterized by a nontrivial geometrically fixed ratio of the mean lubricant drift velocity $\langle v_{\rm cm}\rangle$ and the externally imposed translational velocity $v_{\rm ext}$, was understood as due to the kinks (or solitons), formed by the lubricant due to incommensurability with one of the substrates, pinning to the other sliding substrate. A quantized sliding state of the same nature is demonstrated here for a substantially less idealized 2-dimensional model, where atoms are allowed to move perpendicularly to the sliding direction and interact via Lennard-Jones potentials. Clear evidence for quantized sliding at finite temperature is provided, even with a confined solid lubricant composed of multiple (up to 6) lubricant layers. Characteristic backward lubricant motion produced by the presence of “anti-kinks” is also shown in this more realistic context.' address: - ' $^1$Dipartimento di Fisica and CNR-INFM, Università di Milano, Via Celoria 16, 20133 Milano, Italy ' - ' $^2$International School for Advanced Studies (SISSA) and CNR-INFM Democritos National Simulation Center, Via Beirut 2-4, I-34014 Trieste, Italy ' - | $^3$CNR-INFM National Research Center S3 and Department of Physics,\ University of Modena and Reggio Emilia, Via Campi 213/A, 41100 Modena, Italy - ' $^4$International Centre for Theoretical Physics (ICTP), P.O.Box 586, I-34014 Trieste, Italy ' author: - 'Ivano Eligio Castelli$^1$, Nicola Manini$^{1,2}$, Rosario Capozza$^3$, Andrea Vanossi$^3$, Giuseppe E. Santoro$^{2,4}$, and Erio Tosatti$^{2,4}$' date: 'January 31, 2008' title: 'Role of transverse displacements for a quantized-velocity state of the lubricant' --- Introduction ============ The problem of lubricated friction is a fascinating one, both from the fundamental point of view and for applications. Lubricants range from thick fluid layers to a few or even single mono-layers, often in a solid or quasi-solid phase (boundary lubrication). In the present work, we address the effects of the lattice parameter mismatch between the solid boundary lubricant and the two confining crystalline surfaces. In general, perfect inter-atomic matching tends to produce locking, while sliding is always favored by “defective” lines (misfit dislocations), which can be introduced precisely by incommensuration of the lubricant and the sliding substrate lattice parameters. In our 3-length scale slider-lubricant-slider confined geometry, this lattice mismatch may give rise to a very special “quantized” sliding regime, where the mean lubricant sliding velocity is fixed to an exact fraction of the relative substrate sliding velocity. This velocity fraction, in turn, is a simple function of the lubricant “coverage” with respect to the less mismatched of the two substrate surfaces [@Vanossi06; @Manini07PRE]. This special sliding mode was discovered and analyzed in detail in a very idealized 1-dimensional (1D) Frenkel-Kontorova (FK)-like model [@Vanossi06]: the plateau mechanism was interpreted in terms of solitons, or kinks (the 1D version of misfit dislocations), being produced by the mismatch of the lubricant periodicity to that of the more commensurate substrate, with these kinks being rigidly dragged by the other, more mismatched, substrate. 
In the present work, we investigate the presence of similar velocity plateaus associated with solitonic mechanisms in a more realistic geometry: a 2-dimensional (2D) $x-z$ model of Lennard-Jones (LJ) solid lubricant. The 2D model {#model:sec} ============ We represent the sliding crystalline substrates by two rows of equally-spaced “atoms”. Between these two rigid layers, we insert $N_{\rm p}$ identical lubricant atoms, organized in $N_{\rm layer}$ layers (see Fig. \[model:fig\] where $N_{\rm layer}=5$). While the mutual positions of the top and bottom substrate atoms are fixed, the lubricant atoms move under the action of pairwise LJ potentials $$\label{LJpotential} \Phi_a(r)=\epsilon_a \left[\left(\frac{\sigma_a}{r}\right)^{12} -2\left(\frac{\sigma_a}{r}\right)^6\right]$$ describing the mutual interactions between them, and with the substrate atoms as well. To avoid long-range tails, we set a cutoff radius at $r=r_{\rm c}= 2.5\,\sigma_a$, where $\Phi_a\left(r_{\rm c} \right) \simeq -8.2\,\cdot 10^{-3}\,\epsilon_a$. For the two substrates and the lubricant we assume three different kinds of atoms, and characterize their mutual interactions as truncated-LJ potentials ($\Phi_{\rm bp}$, $\Phi_{\rm pp}$ and $\Phi_{\rm tp}$ refer to interaction energies for the bottom-lubricant, lubricant-lubricant, and top-lubricant interactions, respectively) with the following LJ radii $\sigma_a$ $$\label{sigma} \sigma_{\rm tp}=a_{\rm t}\,,\qquad \sigma_{\rm bp}=a_{\rm b}\,,\quad {\rm and} \ \sigma_{\rm pp}=a_{\rm 0} \,,$$ which, for simplicity, are set to coincide with the fixed spacings $a_{\rm t}$ and $a_{\rm b}$ between neighboring substrate atoms, and the average $x$-separation $a_{\rm 0}$ of two neighboring lubricant atoms, respectively. This restriction is only a matter of convenience, and is not essential to the physics we are describing. The choice of slightly different values of $\sigma_{\rm tp}$ and $\sigma_{\rm bp}$ does not affect the lubricant-to-substrate density ratios, which are the crucial ingredient driving the “quantized” sliding state we address here: accordingly, very similar results are observed. If, however, the LJ radii were taken much larger or smaller than the corresponding lattice parameters, then undesired phenomena could occur, such as lubricant atoms squeezing in between the substrate layers and escaping confinement altogether. The three different periodicities $a_{\rm t}$, $a_{\rm 0}$ and $a_{\rm b}$ define two independent dimensionless ratios: $$\label{ratiotb} \lambda_{\rm t}=\frac{a_{\rm t}}{a_{\rm 0}}\,,\qquad \lambda_{\rm b}=\frac{a_{\rm b}}{a_{\rm 0}}\,,$$ the latter of which we take closer to unity, $\max(\lambda_{\rm b},\lambda_{\rm b}^{-1})<\lambda_{\rm t}$, so that the lubricant is closer in registry to the bottom substrate than to the top. For simplicity, we fix the same LJ interaction energy $\epsilon_{\rm tp} =\epsilon_{\rm pp} =\epsilon_{\rm bp} =\epsilon$ for all pairwise coupling terms. We also assume the same mass $m$ for all particles. We take $\epsilon$, $a_{\rm 0}$, and $m$ as energy, length, and mass units. This choice defines a set of “natural” model units for all physical quantities, for instance, velocities are measured in units of $\epsilon^{1/2} \, m^{-1/2}$. In the following, all mechanical quantities are expressed implicitly in the respective model units. 
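A minimal numerical sketch of this truncated pair potential may help fix the conventions (the routine below simply implements Eq. (\[LJpotential\]) with a sharp cut at $r_{\rm c}=2.5\,\sigma_a$ and no energy shift, an assumption consistent with the residual value $\Phi_a(r_{\rm c})\simeq -8.2\cdot 10^{-3}\,\epsilon_a$ quoted above; it is not the original simulation code):

```python
import numpy as np

def lj_truncated(r, eps_a=1.0, sigma_a=1.0, rc_factor=2.5):
    """Pair potential of Eq. (LJpotential), simply cut off at r_c = 2.5*sigma_a."""
    r = np.asarray(r, dtype=float)
    phi = eps_a * ((sigma_a / r) ** 12 - 2.0 * (sigma_a / r) ** 6)
    return np.where(r <= rc_factor * sigma_a, phi, 0.0)

# Consistency checks against the values quoted in the text (model units):
print(lj_truncated(1.0))   # -1.0: the minimum, -eps_a, sits at r = sigma_a
print(lj_truncated(2.5))   # ~ -8.2e-3: residual well depth right at the cutoff radius
```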
The interaction with the other lubricant particles and with the slider particles produces a total force on the $j$-th lubricant particle $$\begin{aligned} \label{Fj} \vec F_j&=& - \sum_{i=1}^{N_{\rm t}}\frac{\partial}{\partial \vec r_j} \Phi_{\rm tp}\!\left(|\vec r_j-\vec r_{{\rm t}\,i}|\right) \\ \nonumber && - \sum_{j'=1 \atop j'\ne j}^{N_{\rm p}} \frac{\partial}{\partial \vec r_j} \Phi_{\rm pp}\!\left(|\vec r_j-\vec r_{j'}|\right) -\sum_{i=1}^{N_{\rm b}} \frac{\partial}{\partial \vec r_j} \Phi_{\rm bp}\!\left(|\vec r_j-\vec r_{{\rm b}\,i}|\right) ,\end{aligned}$$ where $\vec r_{{\rm t}\,i}$ and $\vec r_{{\rm b}\,i}$ are the positions of the $N_{\rm t}$ top and $N_{\rm b}$ bottom atoms. By convention, we select the frame of reference where the bottom layer is immobile. The top layer moves rigidly at a fixed horizontal velocity $v_{\rm ext}$, and can also move vertically (its inertia equals the total mass $N_{\rm t}m$ of its atoms) under the joint action of the external vertical force $-F$ applied to each particle in that layer plus that due to the interaction with the particles in the lubricant layer: $$\label{xztop} r_{{\rm t}\,i\,x}(t)= i\,a_{\rm t}+v_{\rm ext}\,t \,,\qquad r_{{\rm t}\,i\,z}(t)= r_{{\rm t}\,z}(t) \,,$$ where the equation governing $r_{{\rm t}\,z}$ is $$\begin{aligned} \label{zztop} N_{\rm t}m \,\ddot r_{{\rm t}\,z} &=&\!-\! \sum_{i=1}^{N_{\rm t}}\sum_{j=1}^{N_{\rm p}} \frac{\partial \Phi_{\rm tp}}{\partial r_{{\rm t}\,i\,z}} \!\left(|\vec r_{{\rm t}\,i}-\vec r_j|\right)-\!N_{\rm t}F \,.\end{aligned}$$ To simulate finite temperature in this driven model, we use a standard implementation of the Nosé-Hoover thermostat chain [@nose-hoover; @Martyna92], rescaling particle velocities with respect to the instantaneous lubricant center of mass (CM) velocity $v_{\rm cm}$. The Nosé-Hoover chain method is described by the following equations [@Martyna92]: $$\begin{aligned} \label{nose-eq} m\,\ddot {\vec r}_j&=& \vec F_j-\xi_1 m\,(\dot{\vec r}_j-\vec v_{\rm cm}) \,,\\ \dot \xi_1&=&\frac{1}{Q_1} \left(\sum_{j=1}^{N_p} \left|\dot{\vec r}_j-\vec v_{\rm cm}\right|^2 -gK_BT\right)-\xi_1\xi_2 \,,\\ \dot \xi_i&=&\frac{1}{Q_i} \left(Q_{i-1}\xi^2_{i-1}-K_BT\right)-\xi_i\xi_{i+1} \,,\\ \dot {\xi_M}&=&\frac{1}{Q_M} \left(Q_{M-1}\xi^2_{M-1}-K_BT\right) \,.\end{aligned}$$ The thermostat chain acts equally on all lubricant particles $j=1,... N_{\rm p}$. The $M=3$ thermostats are characterized by the effective “mass” parameters $Q_1=N_{\rm p}$, $Q_2=Q_3=1$; the coefficient $g=2\left( N_{\rm p}-1\right)$ fixes the correct equipartition; the auxiliary variables $\xi_i$ ($i=1,...\, M$) keep the kinetic energy of the lubricant close to its classical value $N_{\rm p}K_BT$ (measured in units of the LJ energy $\epsilon$). We integrate the ensuing equations of motion within an $x$-periodic box of size $L=N_{\rm p} a_0$, by means of a standard fourth-order Runge-Kutta method [@NumericalRecipes]. We note that the Nosé-Hoover thermostat is not generally well defined for a forced system in dynamical conditions. However, it can be assumed to work at least approximately for an adiabatically moving system, where the Joule heat is a small quantity [@Evans85]. 
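For concreteness, the thermostatted equations of motion can be collected into a single derivative routine suitable for the Runge-Kutta integration mentioned above. The following minimal sketch (not the original code; particle masses are set to the model unit $m=1$, $M=3$ thermostats, and the force routine is left abstract) assembles the right-hand sides of Eqs. (\[nose-eq\]):

```python
import numpy as np

def nose_hoover_rhs(r, v, xi, forces, kT, Q=None):
    """Right-hand sides of Eqs. (nose-eq) for N_p lubricant particles (m = 1).

    r, v   : (N_p, 2) arrays of positions and velocities
    xi     : (3,) array of thermostat variables xi_1, xi_2, xi_3
    forces : callable returning the (N_p, 2) array of total forces F_j of Eq. (Fj)
    kT     : target temperature in units of the LJ energy epsilon
    """
    n_p = len(r)
    if Q is None:
        Q = np.array([float(n_p), 1.0, 1.0])     # Q_1 = N_p, Q_2 = Q_3 = 1
    g = 2 * (n_p - 1)                            # equipartition coefficient
    v_cm = v.mean(axis=0)                        # instantaneous lubricant CM velocity
    dv = forces(r) - xi[0] * (v - v_cm)          # m r_j'' = F_j - xi_1 m (v_j - v_cm)
    kin = np.sum((v - v_cm) ** 2)                # sum_j |v_j - v_cm|^2
    dxi = np.empty(3)
    dxi[0] = (kin - g * kT) / Q[0] - xi[0] * xi[1]
    dxi[1] = (Q[0] * xi[0] ** 2 - kT) / Q[1] - xi[1] * xi[2]
    dxi[2] = (Q[1] * xi[1] ** 2 - kT) / Q[2]     # last thermostat of the chain
    return v, dv, dxi                            # (dr/dt, dv/dt, dxi/dt)
```

These derivatives can be fed to any standard fourth-order Runge-Kutta stepper, together with the rigid top-layer motion of Eqs. (\[xztop\]) and (\[zztop\]).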
We usually start off the dynamics (for a single lubricant layer) from equally-spaced lubricant particles at height $r_{i\,z}=a_{\rm b}$ and with the top layer at height $r_{{\rm t}\,z}= a_{\rm b}+a_{\rm t}$, but we also considered different initial conditions: after an initial transient, sometimes extending over several hundred time units, the sliding system reaches its dynamical stationary state. For many layers we start off with lubricant particles at perfect triangular lattice sites, and the top slider correspondingly raised. In the numerical simulations, adiabatic variation of the external driving velocity is considered and realized by changing $v_{\rm ext}$ in small steps, letting the system evolve at each step for a time long enough for all transient stresses to relax. We compute accurate time-averages of the physical quantities of interest by averaging over a simulation time in excess of a thousand time units, starting after the transient is over. At higher temperature, fluctuations of all physical quantities around their mean values increase, thus requiring even longer simulation times to obtain well-converged averages. The plateau dynamics {#plateaudynamics:sec} ==================== We study here the model introduced in Sect. \[model:sec\], firstly for a single lubricant layer and then for a thicker multilayer of $N_{\rm layer}=2\dots\ 8$. In all cases, we consider complete layers, realizing an essentially crystalline configuration at the given temperature, assumed to be well below the melting temperature. We focus our attention on the dragging of kinks and on the ensuing exact velocity-quantization phenomenon. We expect that, as in previous studies of the idealized 1D model [@Vanossi06; @Manini07PRE; @Manini07extended; @Santoro06; @Cesaratto07; @Vanossi07Hyst; @Vanossi07PRL; @Vanossi08TribInt; @Manini08Erice], the ratio $w=\langle v_{\rm cm}\rangle /v_{\rm ext}$ of the lubricant CM $x$ velocity to the externally imposed sliding speed $v_{\rm ext}$ should stay pinned to an exact geometrically determined plateau value, while the model parameters, such as $v_{\rm ext}$ itself or temperature $T$ or load $F$, are made to vary over wide ranges. In detail, the plateau velocity ratio $$\label{wplat} w_{\rm plat}= \frac{\langle v_{\rm cm}\rangle}{v_{\rm ext}}= \frac{\frac 1{a_0} - \frac 1{a_{\rm b}} }{\frac 1{a_0}}= \frac{\lambda_{\rm b} -1 } {\lambda_{\rm b}}= 1-\frac{1}{\lambda_{\rm b}}$$ is a function solely of the kink linear density, determined by the excess linear density of lubricant atoms with respect to that of the bottom substrate, thus of the length ratio $\lambda_{\rm b}$, see Eq. (\[ratiotb\]). The ratio $\frac{a_0^{-1} - a_{\rm b}^{-1} }{a_0^{-1}}$ represents precisely the fraction $N_{\rm kink}/(N_{\rm p}/N_{\rm layer})$ of kink defects in each lubricant layer. The top length ratio $\lambda_{\rm t}$, assumed to be much more different from 1, plays a different but crucial role, since it sets the kink coverage $\Theta= N_{\rm kink}/N_{\rm t}= \left(1-\lambda_{\rm b}^{-1}\right) \lambda_{\rm t}$. Assuming that the 1D mapping to the FK model sketched in Ref. [@Vanossi07PRL] makes sense also in the present richer geometry, the coverage ratio $\Theta$ should affect the pinning strength of kinks to the top corrugation, thus the robustness of the velocity plateau. We shall try to find out if $\Theta$ assumes a similar role in the 2D model in Sec. \[doublelayers:sec\] below. 
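The two geometrical quantities just introduced are straightforward to evaluate; a minimal numerical sketch (simply restating Eq. (\[wplat\]) and the definition of $\Theta$; the specific length ratios are illustrative and anticipate the values discussed next) reads:

```python
from fractions import Fraction

def plateau_ratio(lambda_b):
    """Quantized velocity ratio w_plat = 1 - 1/lambda_b of Eq. (wplat)."""
    return 1 - 1 / Fraction(lambda_b)

def kink_coverage(lambda_b, lambda_t):
    """Kink coverage Theta = N_kink/N_t = (1 - 1/lambda_b)*lambda_t."""
    return plateau_ratio(lambda_b) * Fraction(lambda_t)

# Kink geometry, lambda_b = 29/25 (4 kinks per 29 lubricant atoms in each layer):
print(plateau_ratio(Fraction(29, 25)))                   # 4/29, i.e. w_plat ~ 0.138
# Anti-kink geometry, lambda_b = 21/25: the drift ratio becomes negative:
print(plateau_ratio(Fraction(21, 25)))                   # -4/21, i.e. w_plat ~ -0.190
# A top spacing with lambda_t = 29/4 would correspond to full kink matching:
print(kink_coverage(Fraction(29, 25), Fraction(29, 4)))  # Theta = 1
```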
In the present work we consider mainly a geometry of nearly full commensuration of the lubricant to the bottom substrate, $\lambda_{\rm b}$ near unity: in particular we set $\lambda_{\rm b}=29/25=1.16$, which produces merely $4$ kinks for every $29$ lubricant particles in each layer. This value of $\lambda_{\rm b}$ produces a good kink visibility, but it is not in any sense special. We also investigate the plateau dynamics for an anti-kink configuration $\lambda_{\rm b}=21/25=0.84$. Even for a $\lambda_{\rm b}$ value significantly distinct from unity, such as the golden mean $(1+\sqrt 5)/2\simeq 1.62$, we have evidence of perfect plateau sliding. The present model allows us to address for the first time the nontrivial issue of the survival of the quantized plateau even for a somewhat more realistic 2D dynamics, for several interposed lubricant layers and for finite temperature. Single lubricant layer {#monolayer:sec} ----------------------- Figure \[vextmonotheta1:fig\] reports the time-averaged horizontal velocity $\langle v_{\rm cm}\rangle$ of the single-layer lubricant CM, as a function of the velocity $v_{\rm ext}$ of a fully commensurate top layer ($\Theta=1$) for three different temperatures of the system. The velocity ratio $w=\langle v_{\rm cm}\rangle /v_{\rm ext}$ is generally a nontrivial function of $v_{\rm ext}$, showing wide flat plateaus and regimes of continuous evolution. The plateau velocity perfectly matches the ratio $w_{\rm plat}$ of Eq. (\[wplat\]). The plateau extends over a wide range of external driving velocities, up to a critical velocity $v_{\rm crit}$, whose precise value is obtained by ramping $v_{\rm ext}$ adiabatically; beyond $v_{\rm crit}$, the lubricant leaves the plateau speed and tends to become pinned to the (better matched) bottom layer. On the small-$v_{\rm ext}$ side of the plateau, despite error bars indicating increasing uncertainty in the determination of $w$, data are consistent with a plateau dynamics extending all the way to the static limit $v_{\rm ext}\rightarrow 0$, as in the 1D model [@Manini07PRE]. As temperature increases, $\langle v_{\rm cm}\rangle$ tends to deviate slightly from the perfect plateau value. At the highest temperature considered, $k_BT=0.5\,\epsilon$, near the melting point of the LJ solid at zero pressure [@Ranganathan92], no plateau is seen in the simulations. Finite-size scaling, Fig. \[vextmonotheta1:fig\](a), shows little size effect on the plateau, and in particular on its boundary edge $v_{\rm crit}$. The specific roles of the two substrates in the dynamical plateau state are illustrated by a snapshot of the plateau-state atomic coordinates and potentials at an arbitrary time, shown in Fig. \[equi:fig\]. The bottom layer produces a potential energy whose iso-levels are sketched in the upper panel of Fig. \[equi:fig\]: with its near-matching corrugation, this potential profile is responsible for the creation of kinks, like in the simple 1D model [@Vanossi06; @Manini07PRE]. A kink is visible as a local compression of the lubricant atoms trapped in the same minimum of the bottom substrate potential. In the quantized-velocity state, kinks pin to the minima of the top potential (and slide with it at $v_{\rm ext}$), as illustrated in the lower panel of Fig. \[equi:fig\]. We observe precise velocity quantization also as the downward load $F$ applied to the top layer is changed in magnitude. 
At larger temperature, where thermal fluctuations tend to destabilize the quantized velocity, calculations show that the quantized-velocity state benefits from higher loads $F$. Two lubricant layers and role of kink coverage {#doublelayers:sec} ---------------------------------------------- We now repeat the simulations of the previous Section by considering a doubled number of lubricant particles in the same box size. Even when starting from arbitrary geometries, the lubricant atoms eventually arrange themselves in a regular double layer, a stripe of a triangular lattice. After a transient, a quantized-velocity plateau develops, showing essentially the same features as described for a single layer in Fig. \[vextmonotheta1:fig\], with a clear depinning transition at a critical velocity $v_{\rm crit}$ remarkably close to that of a single layer. In this plateau state, we can still identify kinks in the lubricant layer adjacent to the bottom potential, while the other layer shows weaker $x$-spacing modulations. The vertical displacements of both layers are induced by the interactions with both the top- and the bottom-layer atoms. The matching of the number of kinks to the number of top-atoms $\Theta=N_{\rm kink}/N_{\rm t}=1$ is clearly very favorable for kink dragging, thus for the plateau phenomenon. It is important to investigate situations where this strong commensuration is missing. As an example of lesser commensuration, we consider $5$, rather than $4$, particles in the top chain, thus producing a coverage ratio $\Theta = \frac{4}{5}=0.8$, still commensurate, but weakly so. Figure \[vextcrit:fig\] shows that a perfect plateau occurs also for $\Theta=\frac{4}{5}$, whether the lubricant is a mono- or a bi-layer, and apparently this less commensurate configuration produces an even more robust quantized-velocity state, at least for $N_{\rm layer}=1$. We note however that this increased stability may be an artifact of having increased the total load $N_t F$, and thus the applied “pressure” on the lubricant. It is instructive to study how the depinning point $v_{\rm crit}$ varies when the ratio of commensuration $\Theta$ varies. We study this evolution at fixed $\lambda_{\rm b}$, thus fixed density of solitons, while the number of surface atoms changes in the top substrate. Figure \[vcrit:fig\] reports the depinning velocity $v_{\rm crit}$, always evaluated through an adiabatic increase of $v_{\rm ext}$, as a function of the number $N_{\rm t}$ of top-layer atoms, or the inverse commensuration ratio $1/\Theta=N_{\rm t}/N_{\rm kink}$. One mono-layer shows a monotonically increasing depinning velocity $v_{\rm crit}$, characterized by a sudden increase at the fully matching coverage $\Theta=1$. For even larger $N_{\rm t}\sim N_{\rm b}$ (not shown), eventually the kinks cannot lock into the much finer oscillation of the top potential energy, and we find a weakening of the plateau regime. Lubricant multi-layer {#Multilayers:sec} --------------------- We now come to investigate the role of $N_{\rm layer}$ on the dynamically pinned state. Figure \[multi:fig\] shows the dependence of the critical velocity $v_{\rm crit}$ on the number $N_{\rm layer}$ of layers of the confined lubricant in the strong-pinning condition characterized by $\Theta=1$, $F=25$, and $T=0.001$. For up to $N_{\rm layer}=6$ layers we find quite robust perfect velocity plateaus, with a remarkably weak dependence of $v_{\rm crit}$ on $N_{\rm layer}$. 
Figure \[equimulti5:fig\] shows that little or no sign of kinks (horizontal displacements) is visible above the two lowermost layers near the bottom. However, vertical corrugations of the lubricant propagate from bottom to top, corresponding to kinks. These vertical displacements are the mediating agents that transmit the tendency of the kinks to pin to the top-layer corrugations, giving rise to the observed perfect velocity quantization, at least for small $v_{\rm ext}$. For a further increase in $N_{\rm layer}$, this $z$-displacement mechanism becomes rapidly ineffective, as evident in Fig. \[equimulti9:fig\]: the vertical corrugations induced by the substrates reach into the solid lubricant for about $4$ layers, while inner layers, such as the $5$th layer of Fig. \[equimulti9:fig\], remain essentially flat, thus not supporting the dynamic pinning. In the unpinned state, the top-chain slides over the upper lubricant layer, but the deformation it induces propagates only through a few superficial layers and cannot drag the kinks created by the bottom potential. Even in the large-$v_{\rm ext}$ unpinned state, the relative positions of lubricant atoms are essentially ordered, and show neither defects nor a liquid configuration, due to the low temperature considered, confinement [@Klein98; @Persson99], and full commensuration. Anti-kinks {#antikink:sec} ---------- Previous 1D work showed the surprising phenomenon of [*backward*]{} lubricant sliding corresponding to the dragging of anti-kinks [@Vanossi06; @Manini07PRE]. We now set a reversed condition of quasi-commensuration of the chain to the $a_{\rm b}$ substrate, $\lambda_{\rm b}=21/25=0.84$, which produces a [*negative*]{} $x$-density of kinks $-4/21\,a_0^{-1}\simeq-0.190\,a_0^{-1}$. This condition in fact produces, instead of local compressions, local dilations of the chain, classifiable as [*anti-kinks*]{}, alternating with in-register regions. The anti-kinks again pin, like kinks did, to the corrugations of the top substrate, which drag them along at full speed $v_{\rm ext}$. As anti-kinks are basically missing particles, like holes in semiconductors, they carry a negative mass. Their rightward motion produces a net leftward motion of the lubricant: the lubricant chain moves in the [*opposite direction*]{} with respect to the top layer [@Vanossi06]. Figure \[antikink:fig\] displays a clear reversed-velocity plateau for both one layer and two layers, thus confirming this mechanism. The perfect plateau is comparatively weaker than the plateau produced by $\lambda_{\rm b}>1$, as seen from the fact that it ends at a smaller $v_{\rm ext}$. Discussion and conclusions ========================== Within the idealized scheme of a simple 1D FK-like model, a special “quantized” sliding state was found for a solid lubricant confined between two periodic layers [@Manini07extended]. This state, characterized by a nontrivial geometrically fixed ratio of the mean lubricant drift velocity $\langle v_{\rm cm}\rangle$ and the externally imposed translational velocity $v_{\rm ext}$, was understood as due to the rigid dragging of kinks (or solitons), formed by the lubricant due to incommensuration with one of the substrates, pinning to the other sliding substrate. In the present work, a quantized sliding state of the same nature is demonstrated for a substantially less idealized 2D model of boundary lubrication, where atoms are allowed to move perpendicularly to the sliding direction and interact via LJ potentials. 
We find perfect plateaus, at the same geometrically determined velocity ratio $w_{\rm plat}$ as observed in the simple 1D model for varied driving speed $v_{\rm ext}$, not only at low temperatures but also for temperatures not too far from the melting point of the LJ lubricant, whether the model solid lubricant consists of a single layer or of several layers. An increased load $F$ tends to benefit the plateau state at higher temperatures. The velocity plateau, as a function of $v_{\rm ext}$, ends at a critical velocity $v_{\rm crit}$, and for $v_{\rm ext}>v_{\rm crit}$ the lubricant moves at a speed which is generally lower than that of the plateau state. In fact, by cycling $v_{\rm ext}$, the layer sliding velocity exhibits hysteretic phenomena around $v_{\rm crit}$, which we shall investigate in detail in future work. The unpinning velocity $v_{\rm crit}$ is linked to the commensuration $\Theta$ of kinks to the upper slider period: the fully matching coverage $\Theta=1$ marks a sudden rise of $v_{\rm crit}$. A clear plateau dynamics is demonstrated even for a confined solid lubricant composed of several (up to $N_{\rm layer}=6$) lubricant layers: the strength of the plateau (measured by $v_{\rm crit}$) is a generally decreasing function of the number of layers. The striking backward lubricant motion produced by the presence of “anti-kinks” is again recovered in this more realistic context. The present work focuses on ordered configurations: both substrates are perfect crystals and the lubricant retains the configuration of a strained crystalline solid. The dynamical depinning speed $v_{\rm crit}$, which we usually find to be of the order of a few model units (corresponding to $\sim 10^3$ m/s for realistic choices of the model parameters), is very large compared to typical sliding velocities investigated in experiments. This suggests that sliding at a dynamically quantized velocity is likely to be extremely robust. In experiments, depinning from the quantized sliding state is likely to be associated with mechanisms such as disorder or boundary effects, rather than with excessive driving speed. The role of disorder and defects both in the substrate [@Guerra07] and in the lubricant will be the object of future investigation. The stick-slip phenomena and other features of the dynamical properties will also require further study. Acknowledgments {#acknowledgments .unnumbered} =============== This research was partially supported by PRRIITT (Regione Emilia Romagna), Net-Lab “Surfaces & Coatings for Advanced Mechanics and Nanomechanics” (SUP&RMAN). Work in SISSA was supported through PRIN 2006022847, and Iniziativa Trasversale Calcolo Parallelo INFM-CNR. References {#references .unnumbered} ==========
--- abstract: 'Lack of knowledge about the detailed many-particle motion on the microscopic scale is a key issue in any theoretical description of a macroscopic experiment. For systems at or close to thermal equilibrium, statistical mechanics provides a very successful general framework to cope with this problem. Far from equilibrium, only very few quantitative and comparably universal results are known. Here, a new quantum mechanical prediction of this type is derived and verified against various experimental and numerical data from the literature. It quantitatively describes the entire temporal relaxation towards thermal equilibrium for a large class (in a mathematically precisely defined sense) of closed many-body systems, whose initial state may be arbitrarily far from equilibrium.' author: - Peter Reimann title: 'Typical fast thermalization processes in closed many-body systems' --- In a macroscopic object, which is spatially confined and unperturbed by the rest of the world, every single atom exhibits an essentially unpredictable, chaotic motion ad infinitum, yet the system as a whole seems to approach in a predictable and often relatively simple manner some steady equilibrium state. Paradigmatic examples are compound systems, parts of which are initially hotter than others, or a simple gas in a box, streaming through a little hole into an empty second box. While such equilibration and thermalization phenomena are omnipresent in daily life and extensively observed in experiments, they entail some very challenging fundamental questions: Why are the macroscopic phenomena reproducible though the microscopic details are irreproducible in any real experiment? How can the irreversible tendency towards macroscopic equilibrium be reconciled with the basic laws of physics, implying a perpetual and essentially reversible motion on the microscopic level? Such fundamental issues are widely considered as still not satisfactorily understood [@tas98; @pop06; @gol06b; @rig08; @gem09; @eis15]. Within the realm of classical mechanics, they go back to Maxwell, Boltzmann, and many others [@skl93]. Their quantum mechanical treatment was initiated by von Neumann [@neu29] and is presently attracting renewed interest [@gol10a; @gol10b; @gol10c; @rei15], e.g., in the context of imitating thermal equilibrium by single pure states due to such fascinating phenomena as concentration of measure [@pop06; @pop06a; @mul11], canonical typicality [@gol06b; @sug07; @rei07; @bar09; @sug12], or eigenstate thermalization [@deu91; @sre94; @rig08; @neu12; @rig12; @ike13; @beu14; @ste14; @gol15c]. Numerically, scrutinizing ultracold atom experiments [@cra08a; @tro12; @gri12; @per14] and unraveling the relations between thermalization, integrability, and many-body localization are among the current key issues [@rig08; @rig09b; @rig09a; @san10a; @pal10; @bri10; @gog11; @ban11]. Analytically, essential equilibration and thermalization properties of closed many-body systems or of subsystems thereof were deduced from first principles under increasingly weak assumptions about the initial disequilibrium, the system Hamiltonian, and the observables [@tas98; @neu29; @gol10a; @gol10b; @gol10c; @rei15; @cra08b; @rei08; @lin09; @rei10; @sho11; @rei12a; @rei12b; @sho12]. In particular, groundbreaking results regarding pertinent relaxation time scales have been obtained in [@sho12; @cra12; @gol13; @mon13; @mal14; @gol15; @gol15b]. 
Of foremost relevance for our present study is the work of the Bristol collaboration [@mal14], showing, among others, that all two-outcome measurements, where one of the projectors is of low rank, equilibrate as fast as they possibly can without violating the time-energy uncertainty relation. A second recent key result is due to Goldstein, Hara, and Tasaki [@gol15; @gol15b], demonstrating that most systems closely approach an overwhelmingly large, so-called equilibrium Hilbert-subspace on the extremely short Boltzmann time scale ${t_\mathrm{B}}:=h/{k_\mathrm{B}}T$. A more detailed account of pertinent previous works is provided as Supplementary Note 1. Here, we will further extend these findings in two essential respects: Instead of upper bounds for some suitably defined characteristic time scale, as in [@mal14; @gol15; @gol15b], the entire temporal relaxation will be approximated in the form of an equality. As an even more decisive generalization of [@mal14; @gol15; @gol15b], we will admit largely arbitrary observables. Finally, and actually for the first time within the realm of the above mentioned analytical approaches [@tas98; @neu29; @gol10a; @gol10b; @gol10c; @rei15; @cra08b; @rei08; @lin09; @rei10; @sho11; @rei12a; @rei12b; @sho12; @cra12; @gol13; @mon13; @mal14; @gol15; @gol15b], we will compare our predictions with various experimental as well as numerical data from the literature. In fact, most of those data have not been quantitatively explained by any other analytical theory before. Adopting a “typicality approach” similar in spirit to random matrix theory [@gol10a; @gol10b; @gol10c; @rei15], our result covers the vast majority (in a suitably defined mathematical sense) of initial conditions, observables, and system Hamiltonians. On the other hand, many commonly considered observables and initial conditions actually seem to be rather special in that they are close to or governed by a hidden conserved quantity and therefore thermalize “untypically slowly”. RESULTS {#results .unnumbered} ======= Setup {#setup .unnumbered} ----- Employing textbook quantum mechanics, we consider time-independent Hamiltonians $H$ with eigenvalues $E_n$ and eigenvectors $|n\rangle$ on a Hilbert space ${{\cal H}}$ of large (but finite) dimensionality ${D}\gg 1$. As usual, system states (pure or mixed) are described by density operators $\rho:{{\cal H}}\to{{\cal H}}$ and observables by Hermitian operators $A:{{\cal H}}\to{{\cal H}}$ with matrix elements $\rho_{mn}:=\langle m|\rho|n\rangle$ and $A_{mn}:=\langle m |A| n \rangle$, respectively. Expectation values are given by $\langle A\rangle_{\!\rho}:= {\mbox{Tr}}\{\rho A\}$ and the time evolution by $\rho(t)={{\cal U}_t}\rho(0){{\cal U}_t}^\dagger$ with propagator ${{\cal U}_t}:=e^{-iHt/{\hbar}}$, yielding $$\begin{aligned} \langle A\rangle_{\!\rho(t)} = \sum_{m,n=1}^{{D}} \rho_{mn}(0) A_{nm} \, e^{i(E_n-E_m)t/{\hbar}} \ . \label{10}\end{aligned}$$ The main examples are closed many-body systems with a macroscopically well defined energy, i.e., all relevant eigenvalues $E_1$,...,$E_{{D}}$ are contained in some microcanonical energy window $[E-\Delta E,E]$, where $\Delta E$ is small on the macroscopic but large on the microscopic scale. For systems with $f \gg 1$ degrees of freedom, ${D}$ is then exponentially large in $f$ [@gol10a; @rei10]. Accordingly, the relevant Hilbert space ${{\cal H}}$ is spanned by the eigenvectors $\{|n\rangle\}_{n=1}^{{D}}$ and is sometimes also named energy shell or active Hilbert space, see, e.g., Refs. 
[@neu29; @gol10a; @gol10b; @gol10c; @rei15] and Supplementary Note 2 for more details. Analytical results {#analytical-results .unnumbered} ------------------ Our main players are the three Hermitian operators $H$ (Hamiltonian), $A$ (observable), and $\rho(0)$ (initial state), each with its own eigenvalues (spectrum) and eigenvectors (basis of ${{\cal H}}$). In the following, the three spectra will be considered as arbitrary but fixed, while the eigenbases will be randomly varied relatively to each other. More precisely, all unitary transformations $U :{{\cal H}}\to{{\cal H}}$ between the eigenbases of $H$ and $A$ are considered as equally likely (Haar distributed [@neu29; @gol10a; @gol10b; @gol10c]), while the basis of $\rho(0)$ relatively to that of $A$ is arbitrary but fixed. (Equivalently, we could let “rotate” $H$ relatively to $\rho(0)$ while keeping $A$ fixed relatively to $\rho(0)$). In particular, the initial expectation value $\langle A\rangle_{\!\rho(0)}$ can be chosen arbitrary but then remains fixed ($U$-independent). It is only for times $t>0$ that the randomness of the unitary $U$ also randomizes (via $H$) the further temporal evolution of $\rho(t)$ and thus of $\langle A\rangle_{\!\rho(t)}$. The basic idea behind this randomization of $U$ is akin to random matrix theory [@gol10a; @gol10b; @gol10c; @rei15], namely to derive an approximation for $\langle A\rangle_{\!\rho(t)}$ which applies to the overwhelming majority of all those randomly sampled $U$’s, hence it typically should apply also to the particular (non-random) $U$ of the actual system of interest. A more detailed justification of this “typicality approach” will be provided in section “Typicality of thermalization”. Since $A_{mn}$ refers to the basis of $H$, these matrix elements depend on $U$, and likewise for $\rho_{mn}(0)$ (the explicit formulae are provided in “Methods: Basic matrices”). Indicating averages over $U$ by the symbol ${\left[}\cdots{\right]_{U}}$ and exploiting that all basis transformations $U$ are equally likely, it follows for symmetry reasons that ${\left[}\rho_{nn}(0) A_{nn}{\right]_{U}}$ must be independent of $n$. Likewise, ${\left[}\rho_{mn}(0)A_{nm}{\right]_{U}}$ must be independent of $m$ and $n$ for all $m\not = n$. We thus can conclude that for any $n$ $$\begin{aligned} \!\! \!\! \!\! \!\! {D}\, {\left[}\rho_{nn}(0)A_{nn}{\right]_{U}}& = & {\left[}\sum_{k=1}^{D}\rho_{kk}(0)A_{kk}{\right]_{U}}\label{20}\end{aligned}$$ and that for any $m\not=n$ $$\begin{aligned} \!\! \!\! \!\! \!\! {D}({D}\!\! & - & \!\! 1)\, {\left[}\rho_{mn}(0) A_{nm}{\right]_{U}}= {\left[}\sum_{j\not=k} \rho_{jk}(0)A_{kj}{\right]_{U}}\nonumber \\ & = & {\left[}\sum_{j,k=1}^{D}\rho_{jk}(0)A_{kj}{\right]_{U}}-{\left[}\sum_{k=1}^{D}\rho_{kk}(0)A_{kk}{\right]_{U}}\ . \label{30}\end{aligned}$$ Defining the auxiliary density operator ${\omega}$ via the matrix elements ${\omega}_{mn} := \delta_{mn} \rho_{nn}{(0)}$, equation (\[20\]) can be rewritten as ${\left[}{\mbox{Tr}}\{{\omega}A\}{\right]_{U}}$. Working in a reference frame where only $H$ (and thus ${\omega}$) changes with $U$, but not $A$ and $\rho(0)$, implies ${\left[}{\mbox{Tr}}\{{\omega}A\}{\right]_{U}}={\mbox{Tr}}\{{\left[}{\omega}{\right]_{U}}A\}$. With ${\rho_{\mathrm{av}}}:= {\left[}{\omega}{\right]_{U}}$ it follows that $$\begin{aligned} {\left[}\rho_{nn}(0)A_{nn}{\right]_{U}}= {\mbox{Tr}}\{{\rho_{\mathrm{av}}}A\}/{D}=\langle A\rangle_{\!{\rho_{\mathrm{av}}}}/{D}\label{50}\end{aligned}$$ for arbitrary $n$. 
Likewise, equation (\[30\]) yields $$\begin{aligned} {\left[}\rho_{mn}(0) A_{nm}{\right]_{U}}= \frac{\langle A\rangle_{\!\rho(0)} -\langle A\rangle_{\!{\rho_{\mathrm{av}}}} }{{D}({D}- 1)} \label{60}\end{aligned}$$ for arbitrary $m\not = n$. Upon separately averaging in equation (\[10\]) the summands with $m=n$ and those with $m\not =n$ over $U$, and then exploiting equations (\[50\]) and (\[60\]) one readily finds that $$\begin{aligned} {\left[}\langle A\rangle_{\!\rho(t)}{\right]_{U}}& = & \langle A\rangle_{\!{\rho_{\mathrm{av}}}} + F(t)\, \left\{ \langle A\rangle_{\!\rho(0)} - \langle A\rangle_{\!{\rho_{\mathrm{av}}}}\right\} \label{70} \\ F(t) & := & \frac{D}{D-1}\left(|\phi(t)|^2-\frac{1}{D}\right) \label{80}\end{aligned}$$ where $\phi(t)$ is the Fourier transform of the spectral density from Ref. [@cra12] (see also [@gol15b; @zni11; @mon14]) $$\begin{aligned} \phi(t) :=\frac{1}{{D}}\sum_{n=1}^{{D}}e^{i E_n t/{\hbar}} \ . \label{90}\end{aligned}$$ The following results can be derived in principle along similar lines (symmetry arguments being one key ingredient), but since the actual details are quite tedious, they are postponed to “Methods”. As a first result, one obtains $$\begin{aligned} \langle A\rangle_{\!{\rho_{\mathrm{av}}}}= \langle A\rangle_{\!{\rho_{\mathrm{mc}}}} + \frac{\langle A\rangle_{\!\rho(0)} -\langle A\rangle_{\!{\rho_{\mathrm{mc}}}} }{{D}+1} \ , \label{100}\end{aligned}$$ where ${\rho_{\mathrm{mc}}}:=I/{D}$ is the microcanonical density operator and $I$ the identity on ${{\cal H}}$. As a second result, one finds for the statistical fluctuations $$\begin{aligned} \xi(t):= \langle A\rangle_{\!\rho(t)} - {\left[}\langle A\rangle_{\!\rho(t)}{\right]_{U}}\label{110}\end{aligned}$$ the estimate $$\begin{aligned} {\left[}\xi^2(t){\right]_{U}}= {{\cal O}}({\Delta_{\! A}}^2{\mbox{Tr}}\{\rho^2(0)\}/{D}) \label{120}\end{aligned}$$ for arbitrary $t$, where ${\Delta_{\! A}}$ is the range of $A$, i.e., the difference between the largest and smallest eigenvalues of $A$. Since averaging over $U$ and integrating over $t$ are commuting operations, equation (\[120\]) implies that $$\begin{aligned} {\left[}\frac{1}{t_2-t_1}\int_{t_1}^{t_2} \xi^2(t)\, dt {\right]_{U}}= {{\cal O}}\left(\frac{{\Delta_{\! A}}^2{\mbox{Tr}}\{\rho^2(0)\}}{{D}}\right) \label{130}\end{aligned}$$ for arbitrary $t_2>t_1$. Considering $t$ in equation (\[120\]) as arbitrary but fixed, equation (\[110\]) and $D \gg 1$ imply (obviously or by exploiting Chebyshev’s inequality [@tas98; @gol10a; @lin09; @sho12; @rei12a]) that $\langle A\rangle_{\!\rho(t)}$ is practically indistinguishable from the average in (\[70\]) for the vast majority of all unitaries $U$. Indeed, the fraction (normalized Haar measure) of exceptional $U$’s is unimaginably small for typical macroscopic systems with, say, $f\approx 10^{23}$ degrees of freedom, since ${D}$ in (\[120\]) is exponentially large in $f$ (see below equation (\[10\])). Likewise, considering an arbitrary but fixed time interval $[t_1,t_2]$ in equation (\[130\]), it follows for all but a tiny fraction of $U$’s that the time average over $\xi^2(t)$ on the left hand side of (\[130\]) must be unimaginably small, and hence also the integrand $\xi^2(t)$ itself must be exceedingly small for the overwhelming majority of all $t\in [t_1,t_2]$. Accordingly, $\langle A\rangle_{\!\rho(t)}$ must remain extremely close to (\[70\]) simultaneously for all those $t\in [t_1,t_2]$. 
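For modest dimensionalities $D$, the average (\[70\]) and the smallness of the fluctuations (\[120\]) can also be checked by brute-force sampling of Haar-distributed unitaries. The following minimal sketch (not part of the derivation; the spectrum, the observable, and the initial state are arbitrary illustrative choices, and $\hbar=1$) does exactly that:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(d):
    """Haar-distributed d x d unitary via QR decomposition of a complex Gaussian matrix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2.0)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

D = 100
E = np.sort(rng.uniform(0.0, 1.0, D))       # energy levels within a microcanonical window
a = np.linspace(-1.0, 1.0, D)               # eigenvalues of the observable A
A = np.diag(a)                              # work in the (fixed) eigenbasis of A
rho0 = np.zeros((D, D), dtype=complex)
rho0[-1, -1] = 1.0                          # pure initial state far from equilibrium
A_init, A_mc = a[-1], a.mean()              # <A>_rho(0) and <A>_mc = Tr{A}/D

def expval(t, U):
    """<A>_rho(t) for H = U diag(E) U^dagger, cf. equation (10)."""
    Ut = (U * np.exp(-1j * E * t)) @ U.conj().T          # propagator exp(-iHt)
    return np.real(np.trace(Ut @ rho0 @ Ut.conj().T @ A))

ts = np.linspace(0.0, 40.0, 41)
n_samples = 100
avg = np.zeros_like(ts)
for _ in range(n_samples):                  # brute-force average over Haar unitaries
    U = haar_unitary(D)
    avg += np.array([expval(t, U) for t in ts])
avg /= n_samples

phi = np.array([np.mean(np.exp(1j * E * t)) for t in ts])      # equation (90)
F = (D / (D - 1.0)) * (np.abs(phi) ** 2 - 1.0 / D)             # equation (80)
A_av = A_mc + (A_init - A_mc) / (D + 1.0)                      # equation (100)
prediction = A_av + F * (A_init - A_av)                        # equation (70)
print(np.max(np.abs(avg - prediction)))     # small; limited only by the sampling noise
```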
Due to equation (\[100\]) and ${D}\gg 1$, we furthermore can safely approximate $\langle A\rangle_{\!{\rho_{\mathrm{av}}}}$ in (\[70\]) by $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$. Altogether, we thus can conclude that in very good approximation $$\begin{aligned} \langle A\rangle_{\!\rho(t)} & = & \langle A\rangle_{\!{\rho_{\mathrm{mc}}}} + F(t)\, \left\{\langle A\rangle_{\!\rho(0)} - \langle A\rangle_{\!{\rho_{\mathrm{mc}}}}\right\} \label{140}\end{aligned}$$ for the vast majority of unitaries $U$ and times $t$. As detailed in “Methods”, the neglected corrections in (\[140\]) consist of a systematic ($U$-independent) part, which is bounded in modulus by ${\Delta_{\! A}}/(D^2-1)$ for all $t$, and a random ($U$-dependent) part (namely $\xi(t)$), whose typical order of magnitude is ${\Delta_{\! A}}\sqrt{{\mbox{Tr}}\{\rho^2(0)\}/D}$ (for most $U$ and $t$, cf. equations (\[120\]), (\[130\])), i.e., $\xi(t)$ is dominating by far (note that $1\geq{\mbox{Tr}}\{\rho^2(0)\}\geq{\mbox{Tr}}\{{\rho_{\mathrm{mc}}}^2\}=1/{D}$). Moreover, the correlations of $\xi(t)$ decay on time scales comparable to those governing $F(t)$. These are our main formal results. In the rest of the paper we discuss their physical content. Basic properties of $F(t)$ {#basic-properties-of-ft .unnumbered} -------------------------- Equation (\[90\]) implies that $\phi(0)=1$, $\phi(-t)=\phi^\ast (t)$, and $|\phi(t)| \leq 1$. With equation (\[80\]) and $D\gg 1$ it follows that in very good approximation $$F(t) = |\phi(t)|^2 \ , \label{150}$$ and thus $$\begin{aligned} F(0)=1 \, , \ 0\leq F(t)\leq 1 \, , \ F(-t)=F(t) \ . \label{160}\end{aligned}$$ Indicating averages over all $t \geq 0$ by an overbar, one can infer from equations (\[90\]) and (\[150\]) that $\overline{F(t)}=\sum_k d_k^2/{D}^2$, where $k$ labels the eigenspaces of $H$ with mutually different eigenvalues and $d_k$ denotes their dimensions. Since $\sum_k d_k={D}$ we thus obtain $\overline{F(t)} \leq \max_k(d_k/{D})$. Excluding extremely large multiplicities (degeneracies) of energy eigenvalues, it follows that the time average $\overline{F(t)}$ is negligibly small and hence [@tas98; @gol10a; @lin09; @sho12; @rei12a] that $F(t)$ itself must be negligibly small for the overwhelming majority of all sufficiently large $t$, symbolically indicated as $$\begin{aligned} F(t\to\infty ){\rightsquigarrow}0 \ . \label{170}\end{aligned}$$ Note that there still exist arbitrarily large exceptional $t$’s owing to the quasi-periodicity of $\phi(t)$ implied by (\[90\]). We also emphasize that our main result (\[140\]) itself admits arbitrary degeneracies of $H$. As an example, we focus on the microcanonical setup introduced below equation (\[10\]) and on not too large times, so that (\[90\]) is well approximated by $$\begin{aligned} \phi(t)=\int_{E-\Delta E}^{E} \rho(x)\, e^{ixt/{\hbar}}\, dx \ , \label{180}\end{aligned}$$ where $\rho(x)$ represents the (smoothened and normalized) density of energy levels $E_n$ in the vicinity of the reference energy $x$. If the level density is constant throughout the energy window $[E-\Delta E,E]$, we thus obtain with (\[150\]) $$\begin{aligned} F(t)=\frac{\sin^2(\Delta E\,t/2{\hbar})}{(\Delta E\, t/2{\hbar})^2} \ . \label{190}\end{aligned}$$ Next, we recall Boltzmann’s entropy formula $S(x)={k_\mathrm{B}}\ln(\Omega(x))$, where $\Omega(x)$ counts the number of $E_n$’s below $x$ and ${k_\mathrm{B}}$ is Boltzmann’s constant. Hence, $\Omega'(x)$ must be proportional to the level density $\rho(x)$ from above. 
Furthermore, $T:=1/S'(E)$ is the usual microcanonical temperature of a system with energy $E$ at thermal equilibrium. A straightforward expansion then yields the approximation $\rho(E-y)=c\, e^{-y/{k_\mathrm{B}}T}$ for $y \geq 0$, where $c$ is fixed via $\int_{E-\Delta E}^{E} \rho(x)\, dx=1$. The omitted higher order terms are safely negligible for all $y\geq 0$ and systems with $f\gg 1$ degrees of freedom, see also [@mon14]. With equations (\[150\]) and (\[180\]) one thus finds $$\begin{aligned} F(t)= \frac{1-2\alpha\cos(\Delta E\, t/\hbar)+\alpha^2} {(1-\alpha)^2[1+({k_\mathrm{B}}T\, t/{\hbar})^2]} \ , \label{200}\end{aligned}$$ where $\alpha:=e^{-\Delta E/{k_\mathrm{B}}T}$. For $\Delta E\ll {k_\mathrm{B}}T$, one recovers (\[190\]) and for $\Delta E\gg {k_\mathrm{B}}T$ one obtains $$\begin{aligned} F(t)=\frac{1}{1+({k_\mathrm{B}}T\, t/{\hbar})^2} \ . \label{210}\end{aligned}$$ Typicality of thermalization {#typicality-of-thermalization .unnumbered} ---------------------------- Equations (\[140\]) and (\[170\]) imply thermalization in the sense that the expectation value $\langle A\rangle_{\!\rho(t)}$ becomes (for most $U$) practically indistinguishable from the microcanonical average $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ for the overwhelming majority of all sufficiently large $t$. Exceptional $t$’s are, for instance, due to quantum revivals, which, in turn, are apparently closely related to the quasi-periodicities of $F(t)$. Our assumption that energy eigenvalues must not be extremely highly degenerate (see above equation (\[170\])) is similar to Refs. [@cra12; @gol13; @mal14; @gol15; @gol15b] but considerably weaker than the corresponding premises in most other related works [@tas98; @neu29; @rei08; @gol10a; @gol10b; @gol10c; @lin09; @rei10; @sho11; @sho12; @rei12a; @rei12b; @rei15]. The usual time inversion invariance on the fundamental, microscopic level [@skl93] is maintained by (\[140\]) due to (\[160\]). Surprisingly, and in accordance with the second law of thermodynamics, the latter symmetry persists even if it is broken in the microscopic quantum dynamics, e.g., by an external magnetic field! By propagating $\rho(0)$ backward in time (with respect to one particular $U$) and taking the result as new initial state, one may easily tailor [@rei10] examples of the very rare $U$’s and $t$’s which notably deviate from the typical behavior (\[140\]). Equivalently, one may back-propagate $A$ instead of $\rho(0)$ (Heisenberg picture). Note that $S$ and $T$ were introduced below equation (\[190\]) not in the sense of associating some entropy and temperature to the non-equilibrium states $\rho(t)$, but rather as a convenient level-counting tool. However, we now can identify them [*a posteriori*]{} with the pertinent entropy and temperature after thermalization. The randomization via $U$ (see section “Analytical results”) can be viewed in two ways: Either one considers $\rho(0)$, $A$, and the spectrum of $H$ as arbitrary but fixed, while the eigenbasis of $H$ is sampled from a uniform distribution (Haar measure). Or one considers $H$ and the spectra of $\rho(0)$ and $A$ as arbitrary but fixed and randomizes the eigenvectors of $A$ and $\rho(0)$. In doing so, a key point is that the relative orientation of the eigenbases of $\rho(0)$ and $A$ can be chosen arbitrarily but then is kept fixed. 
Indeed, it is well known [@mal14; @rei15] that for “most” such orientations the expectation values $\langle A\rangle_{\!\rho(0)}$ and $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ are practically indistinguishable, i.e., an initial $\langle A\rangle_{\!\rho(0)}$ far from equilibrium requires a careful fine-tuning of $\rho(0)$ relatively to $A$. In reality, there is usually nothing random in the actual physical systems one has in mind. Hence, results like (\[140\]), which (approximately) apply to the overwhelming majority of unitaries $U$, should be physically interpreted according to the common lore of random matrix theory [@gol10a; @gol10b; @rei15], namely as to apply practically for sure to a concrete system under consideration, unless there are particular reasons to the contrary. Such reasons arise, for instance, when $A$ is known to be a conserved quantity, implying a common eigenbasis of $A$ and $H$, i.e., the basis transformations $U$ must indeed be very special. Furthermore, this non-typicality is structurally stable against sufficiently small perturbations of $A$ and/or $H$ so that the eigenvectors remain “almost aligned” (each eigenvector of $A$ mainly overlaps with one or a few eigenvectors of $H$) and hence $A$ remains “almost conserved” (almost commuting with $H$). Analogous non-typical $U$’s are expected when $\rho(0)$ is known to be (almost) conserved (commuting with $H$). Further well-known exceptions are integrable systems, for which thermalization in the above sense may be absent for certain $\rho(0)$ and $A$ [@rig08; @rig09a] (but not for others [@rig12]), systems exhibiting many-body localization [@pal10; @gog11], or trivial cases with non-interacting subsystems (see also Supplementary Note 2). Our present focus is different: Taking thermalization for granted, is the temporal relaxation well approximated by equation (\[140\])? Typical fast relaxation and prethermalization {#typical-fast-relaxation-and-prethermalization .unnumbered} --------------------------------------------- Equation (\[210\]) is governed by the Boltzmann time ${t_\mathrm{B}}:=h/{k_\mathrm{B}}T$, amounting to ${t_\mathrm{B}}\approx 10^{-13}\,$s at room temperature. Equation (\[200\]) gives rise to comparably short time scales, unless the temperature is exceedingly low or the energy window $\Delta E$ is unusually small. Such relaxation times are much shorter than commonly observed in real systems [@cra12; @mal14; @gol15; @gol15b]. Moreover, the temporal decay is typically non-exponential (see e.g. (\[190\])-(\[210\])), again in contrast to the usual findings. This seems to imply that typical experiments correspond to non-typical unitaries $U$. Plausible explanations are as follows: To begin with, the above predicted typical relaxation times are so short that they simply could not be observed in most experiments. Second (or as a consequence), the usual initial conditions and/or observables are indeed quite “special” with respect to the prominent role of almost conserved quantities (see previous section), in particular “local descendants” of globally conserved quantities like energy, charge, particle numbers, etc.: Examples are the amount of energy, charge etc. within some subdomain of the total system, or, more generally, local densities, whose content within a given volume can only change via transport currents through the boundaries of that volume. 
As a consequence, the global relaxation process becomes “unusually slow” if the densities between macroscopically separated places need to equilibrate (small surface-to-volume ratio), or if there exists a natural “bottleneck” for their exchange (weakly interacting subsystems). Put differently, our present theory is meant to describe the very rapid relaxation towards local equilibrium, but not any subsequent global equilibration. Only if there exists a clear-cut time-scale separation between these two relaxation steps (or if there is no second step at all) can we hope to quantitatively capture the first step by our results. Conversely, the time-scale separation usually admits some Markovian approximation for the second step, yielding an exponential decay whose time scale still depends on many details of the system. Natural further generalizations include the closely related concepts of hindered equilibrium, quasi-equilibrium (metastability), and, above all, prethermalization [@ber04; @moe08; @gri12], referring, e.g., to a fast partial thermalization within a certain subset of modes, (quasi-)particles, or other generalized degrees of freedom. (As in [@ber04], we do not adopt here the additional requirement [@moe08] that the almost conserved quantities originate from a weak perturbation of an integrable system.) In short, our working hypothesis is that the theory (\[140\]) describes the temporal relaxation of $\langle A\rangle_{\!\rho(t)}$ for any given pair $(\rho(0),A)$ unless one of them is exceptionally close to or in some other way slowed down by an (almost) conserved quantity. Comparison with experimental results {#comparison-with-experimental-results .unnumbered} ------------------------------------ We focus on experiments in closed many-body systems in accordance with the above general requirements. In comparing them with our theory (\[140\]), we furthermore assume that the (pre-)thermalized system occupies a microcanonical energy window with some (effective) temperature $T$ and $\Delta E\gg {k_\mathrm{B}}T$, so that (\[210\]) applies. Finally, the initial and asymptotic values $\langle A\rangle_{\!\rho(0)}$ and $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ in (\[140\]) are either obvious or will be estimated from the measurements, hence no further knowledge about the often quite involved details of the experimental observables will be needed! Fig. \[fig1\] demonstrates the very good agreement of the theory with the rapid initial prethermalization of a coherently split Bose gas, observed by the Schmiedmayer group in Ref. [@gri12]. In Fig. \[fig2\], the theory is compared with the pump-probe experiment by the Bigot group from Ref. [@gui02]. The finite widths of the pump and the probe laser pulses are roughly accounted for by convolving equation (\[140\]) with a Gaussian of $35$fs FWHM (Full Width at Half Maximum). In Ref. [@gui02], the FWHM of the pump pulse is estimated as $20$fs and the combined FWHM for both pulses as $22$fs, implying a FWHM of $9$fs for the probe pulse. The latter value seems quite optimistic to us. A second “excuse” for our slightly larger FWHM value of $35$fs is that the tails of the experimental pulse shape may be considerably broader than those of a Gaussian with the same FWHM (see, e.g., Fig. 2c in the supplemental material of Ref. [@gie15]). 
Finally, the convolution of (\[140\]) with a Gaussian represents a rather poor “effective description” in the first place: Our entire theoretical approach becomes strictly speaking invalid when the duration of the perturbation becomes comparable to the thermalization time. A similar comparison with the pump-probe experiments by Faure et al. from Ref. [@fau13] is presented in Fig. \[fig3\]. As before, we adopted a FWHM of $100$fs, slightly larger than the estimate of $76$fs in [@fau13]. Due to the above-mentioned fundamental limitations of our theory for such rather large FWHM values, the temperatures adopted in Fig. \[fig3\] should still be considered as quite crude estimates. Apart from that, Fig. \[fig3\] nicely confirms the predicted temperature dependence from (\[210\]). We close with three remarks: First, Refs. [@gui02; @fau13] also implicitly confirm our prediction that the essential temporal relaxation (encapsulated by $F(t)$ in (\[140\])) is generically the same for different observables. Second, similar pump-probe experiments abound in the literature, but usually the pulse widths are too large for our purposes. Third, the temporal relaxation in Figs. \[fig1\]-\[fig3\] has also been investigated numerically, but closed analytical results have not been available before [@gri12; @fau13]. Comparison with numerical results {#comparison-with-numerical-results .unnumbered} --------------------------------- Fig. \[fig4\] illustrates the very good agreement of our theory with Rigol’s numerical findings from Ref. [@rig09a], both for an integrable and a non-integrable example. A similar agreement is found for all other parameters and also for an analogous hardcore boson model examined in Refs. [@rig09b; @rig09a]. On the other hand, a second observable considered in Ref. [@rig09a], deriving from the momentum distribution function, exhibits in all cases a significantly slower and also qualitatively different temporal relaxation. According to the discussion in section “Typical fast relaxation and prethermalization”, it is quite plausible that the latter observable is indeed “non-typical” in view of the fact that it represents a conserved quantity for fermions with $V=\tau'=V'=0$ [@rig09a]. In Fig. \[fig5\] we compare our theory with the simulations of a different one-dimensional electron model by Thon et al. from Ref. [@tho04]. In doing so, the pertinent temperature $T$ has been estimated as follows: The textbook Sommerfeld expansion for $N$ electrons in a one-dimensional box yields $E=E_0[1+(3\pi^2/8)({k_\mathrm{B}}T/{E_\mathrm{F}})^2]$, where $E$ is their total energy, $E_0=(1/3)N{E_\mathrm{F}}$ the ground state energy, ${E_\mathrm{F}}=(\pi\hbar N/gL)^2/2m$ the Fermi energy, $L$ the box length, $m$ the electron mass, and $g:=2s+1=2$ ($s=1/2$ for electrons). Assuming that the pulse acts solely on the small well implies $N=16$, $L\simeq 15$nm [@tho04], and $E-E_0\simeq 0.045$eV (see Fig. 8a in [@tho04]). Altogether, we thus obtain $T\simeq 170$K. The remnant “fluctuations” of the numerical data in Figs. \[fig4\] and \[fig5\] can be readily explained as finite particle number effects (see Fig. 4 in [@rig09a] and Fig. 10 in [@tho04]), and their temporal correlations are as predicted below equation (\[140\]). The seemingly rather strong fluctuations in Fig. \[fig5\] are deceptive, since the systematic changes themselves are very small. Next we turn to the numerical findings for a qubit in contact with a spin bath by the Trauzettel group from Ref. [@het15]. 
The agreement with our theory in Fig. \[fig6\] is as good as it possibly can be for such a rather small dimensionality of $D=2^7$. Indeed, the remaining differences nicely confirm the predictions below equation (\[140\]), regarding both their typical order of magnitude ${\Delta_{\! A}}\sqrt{{\mbox{Tr}}\{\rho^2(0)\}/{D}} =1\cdot\sqrt{2^{-6}/2^7} \simeq 0.01$ and their temporal correlations (where we exploited that ${\mbox{Tr}}\{\rho^2(0)\}=2^{-6}$ for the particular initial condition $\rho(0)$ adopted in Fig. \[fig6\]). Our final example is Bartsch and Gemmer’s random matrix model from Ref. [@bar09]. Referring to the notation and definitions in the caption of Fig. \[fig7\], one readily sees that the considered observable $A$ is a conserved quantity for the unperturbed Hamiltonian ($\lambda =0$). In agreement with our discussion in section “Typical fast relaxation and prethermalization”, $A$ is therefore still “almost conserved” for small $\lambda$ and indeed exhibits a slow, exponential decay towards $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}=0$ (see Fig. 1a in [@bar09]). Upon increasing $\lambda$, one recovers the much faster, non-exponential decay of our present theory (see Fig. 1b in [@bar09]). Unfortunately, the $\lambda$-value $1.77\cdot 10^{-3}$ from Fig. 1b of [@bar09] is still somewhat too small, and the eigenvalues $E_1,...,E_{6000}$ are no longer available (as confirmed by the authors). Therefore, we repeated the numerics from [@bar09] on our own for $\lambda = 7\cdot 10^{-3}$. The resulting agreement with (\[140\]) in Fig. \[fig7\] is very good, and the temporal correlations of the deviations as well as their typical order of magnitude ${\Delta_{\! A}}\sqrt{{\mbox{Tr}}\{\rho^2(0)\}/{D}}=2\cdot\sqrt{1/6000}\simeq 0.03$ are as predicted below (\[140\]). We close with two remarks: First, there is no fit parameter in any of the above examples apart from $\langle A\rangle_{\!\rho(0)}$ in Fig. \[fig4\] and $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ in Figs. \[fig4\] and \[fig5\]. Second, especially in the case of the integrable model in Fig. \[fig4\], one may question whether the considered system exhibits thermalization in the first place, as is tacitly assumed in equation (\[140\]). In Supplementary Note 2 we argue that (\[140\]) is indeed expected to remain valid in such cases if $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ is replaced by the pertinent non-thermal long-time asymptotics (which, in turn, is estimated from the numerical data in Fig. \[fig4\]). Discussion {#discussion .unnumbered} ========== Our main result (\[140\]) implies thermalization in the sense that a generic non-equilibrium system with a macroscopically well defined energy becomes practically indistinguishable from the corresponding microcanonical ensemble for the overwhelming majority of all sufficiently late times. Apart from the concrete initial and long-time expectation values (i.e. $\langle A\rangle_{\!\rho(0)}$ and $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ in (\[140\])), the temporal relaxation (i.e. $F(t)$ in (\[140\])) depends only on the spectrum of the Hamiltonian within the pertinent interval of non-negligibly populated energy eigenstates, but not on any further details of the initial condition or the observable. This represents one of the rare instances of a general quantitative statement about systems far from equilibrium.
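As a simple, self-contained illustration of this last point (not taken from any of the works cited above), the following toy simulation propagates two completely unrelated observables under Hamiltonians sharing one and the same (arbitrarily chosen) spectrum and Haar-typical eigenvectors; after the rescaling implied by (\[140\]), the two relaxation curves coincide up to small fluctuations whose scale is set by ${\Delta_{\! A}}\sqrt{{\mbox{Tr}}\{\rho^2(0)\}/{D}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d0 = 2**7, 2**6             # dimension and rank of rho(0), so Tr{rho^2(0)} = 2**-6

def haar(D):
    # Haar-distributed unitary from the phase-fixed QR decomposition of a Gaussian matrix
    z = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

E = np.sort(rng.uniform(0.0, 50.0, D))      # arbitrary toy spectrum (hbar = 1)
times = np.linspace(0.0, 1.0, 300)

def normalized_relaxation(a_eigvals):
    """Tr{rho(t) A}, rescaled so that it starts at 1 and thermalizes towards 0."""
    U = haar(D)                              # typical orientation of A relative to H
    order = np.argsort(a_eigvals)[::-1]
    Psi = U[:, order[:d0]]                   # rho(0): equal mixture of the d0 eigenvectors
    A = U @ np.diag(a_eigvals) @ U.conj().T  #   of A with the largest eigenvalues
    A_mc, A_0 = a_eigvals.mean(), a_eigvals[order[:d0]].mean()
    curve = []
    for t in times:
        Phi = np.exp(-1j * E * t)[:, None] * Psi
        curve.append(np.real(np.sum(Phi.conj() * (A @ Phi), axis=0)).mean())
    return (np.array(curve) - A_mc) / (A_0 - A_mc)

c1 = normalized_relaxation(np.linspace(-1.0, 1.0, D))           # "smooth" observable
c2 = normalized_relaxation(np.sign(np.linspace(-1.0, 1.0, D)))  # two-valued observable

print("max |c1 - c2| over the time window      :", np.max(np.abs(c1 - c2)))
print("unrescaled scale Delta_A*sqrt(Tr rho^2/D):", 2.0 * np.sqrt(2.0**-6 / D))
# the first number comes out at a few times the second, as expected after dividing each
# curve by <A>_rho(0) - <A>_mc (which is smaller than Delta_A) and comparing two curves
```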
The theory agrees very well with a wide variety of experimental and numerical results from the literature (though none of them was originally conceived for the purpose of such a comparison). We are in fact not aware of any other quantitative analytical explanation of those data comparable to ours. Indeed, the usual paradigm to identify and then analytically quantify the main physical mechanisms seems almost hopeless here. In a sense, our present approach thus amounts to a new paradigm: There is no need of any further “explanations” since the observed behavior is expected with overwhelming likelihood from the very beginning, i.e., unless there are special [*a priori*]{} reasons to the contrary. Similarly as in [@mal14; @gol15; @gol15b; @cra12], generic thermalization is found to happen extremely quickly (unless the system’s energy or temperature is exceedingly low). Moreover, the temporal decay is typically non-exponential. A main prediction of our theory is that these features should in fact be very common (at least in the form of prethermalization), but often they are unmeasurably fast or they have simply not been looked for so far. Conversely, most of the usually considered observables and initial conditions are actually quite “special”, namely exceptionally slow, “almost conserved” quantities. A better understanding of those principally untypical but practically very common thermalization processes remains an open problem [@mal14; @gol15; @gol15b]. METHODS {#methods .unnumbered} ======= Basic matrices {#basic-matrices .unnumbered} -------------- According to section “Analytical results”, the unitary $U$ represents the basis transformation between the eigenvectors $|n\rangle$ ($n=1,...,{D}$) of the Hamiltonian $H$ and those of the observable $A$. Denoting the eigenvalues of $A$ by $\lambda_\nu$ and the eigenvectors by $|\psi_\nu\rangle$ ($\nu=1,...,{D}$), the matrix elements of $U$ are thus $U_{n\nu}:=\langle n|\psi_\nu\rangle$. Accordingly, the matrix elements of $\rho(0)$ in the basis of $H$ are related to those in the basis of $A$ via $$\begin{aligned} \rho_{mn}(0) = \sum_{\mu,\nu=1}^{{D}} U_{m\mu} \, \rho_{\mu\nu}\, U_{n\nu}^\ast \ , \label{5b}\end{aligned}$$ where $\rho_{\mu\nu}:=\langle \psi_\mu|\rho(0)|\psi_\nu\rangle$. Similarly, the matrix elements of $A$ satisfy $$\begin{aligned} A_{mn} = \sum_{\xi=1}^{{D}} U_{m\xi}\, \lambda_\xi \, U_{n\xi}^\ast \label{6b}\end{aligned}$$ and hence $$\begin{aligned} \rho_{mn}(0)A_{nm} = \sum_{\mu,\nu,\xi=1}^{{D}} \rho_{\mu\nu}\, \lambda_\xi \, U_{m\mu} U_{n\nu}^\ast U_{n\xi} U_{m\xi}^\ast \ . \label{10b}\end{aligned}$$ As announced below equation (\[30\]), we work (without loss of generality) in a reference frame (or reference basis of ${{\cal H}}$) so that only $H$ (and thus $|n\rangle$) depends on $U$, while $A$ and $\rho(0)$ (and thus $|\psi_\nu\rangle$) are independent of $U$. Hence, $\rho_{\mu\nu}$ and $\lambda_\xi$ on the right hand side of equations (\[5b\])-(\[10b\]) are independent of $U$. Derivation of equation (\[100\]) {#derivation-of-equation-100 .unnumbered} -------------------------------- As a simple first exercise, let us average equation (\[10b\]) over all uniformly (Haar) distributed unitaries $U$, as specified in section “Analytical results”. Since the factors $\rho_{\mu\nu}\lambda_\xi$ on the right hand side are independent of $U$, we are left with averages over the $U$ matrix elements. Such averages have been evaluated repeatedly and often independently of each other in the literature, see e.g. 
[@bro81; @col06; @gem09; @bro96], a key ingredient being symmetry arguments due to the invariance of the Haar measure under arbitrary unitary transformations. Particularly convenient for our present purposes is the formalism adopted by Brouwer and Beenakker, see Ref. [@bro96] and further references therein. The general structure of such averages is provided by equation (2.2) in [@bro96], reading $$\begin{aligned} & & {\left[}U_{a_1b_1} \ldots U_{a_mb_m} U^\ast_{\alpha_1\beta_1} \ldots U^\ast_{\alpha_n \beta_n} {\right]_{U}}= \nonumber \\ & & =\delta_{mn} \sum\limits_{P,P'}V_{P,P'} \prod_{j=1}^n \delta_{a_j\alpha_{P(j)}} \delta_{b_j\beta_{P'(j)}} \ . \label{20b}\end{aligned}$$ Quoting verbatim from Ref. [@bro96], “the summation is over all permutations $P$ and $P'$ of the numbers $1,...,n$. The coefficients $V_{P,P'}$ depend only on the cycle structure of the permutation $P^{-1}P'$. Recall that each permutation of $1,...,n$ has a unique factorization in disjoint cyclic permutations (“cycles”) of lengths $c_1,....,c_k$ (where $n=\sum_{j=1}^k c_j$). The statement that $V_{P,P'}$ depends only on the cycle structure of $P^{-1}P'$ means that $V_{P,P'}$ depends only on the lengths $c_1,...,c_k$ of the cycles in the factorization of $P^{-1}P'$. One may therefore write $V_{c_1,...,c_k}$ instead of $V_{P,P'}$.” The explicit numerical values of all $V_{c_1,...,c_k}$ with $n \leq 5$ are provided by the columns “CUE” of Tables II and IV in [@bro96]. Further remarks: The labels $m$ and $n$ in (\[20b\]) have nothing to do with those in (\[10b\]). Equation (\[20b\]) equals zero unless $m=n$. Every label $a_j$ must have a “partner”, i.e., its value must coincide with one of the $\alpha_j$’s, and vice versa, since otherwise the product over the Kronecker delta’s $\delta_{a_j\alpha_{P(j)}}$ in (\[20b\]) would be zero for all $P$’s. Note that some $a_j$’s may assume the same value, but then an equal number of $\alpha_j$’s also must assume that value. Likewise, every $b_j$ needs a “partner” among the $\beta_j$’s, and vice versa. Adopting the abbreviation $$\begin{aligned} X_{mn}:={\left[}\rho_{mn}(0)A_{nm} {\right]_{U}}\label{30b}\end{aligned}$$ and the renamings $a_1:=m$, $a_2:=n$, $b_1:=\mu$, $b_2:=\xi$, $b_3:=\nu$, equation (\[10b\]) yields $$\begin{aligned} X_{a_1a_2}=\sum_{b_1,b_2,b_3} \rho_{b_1 b_3} \lambda_{b_2} {\left[}U_{a_1 b_1} U_{a_2 b_2} U_{a_1 b_2}^\ast U_{a_2 b_3}^\ast {\right]_{U}}\ . \label{40b}\end{aligned}$$ The connection with (\[20b\]) is established via the identifications $\alpha_1:=a_1$, $\alpha_2:=a_2$, $\beta_1:=b_2$, $\beta_2:=b_3$. Therefore, if $b_1\not=b_2$ then the only potential “partner” of $b_1$ is $\beta_2$, and only if their values coincide, i.e. $b_3=b_1$, the corresponding summands may be non-zero. The same conclusion can be drawn if $b_1=b_2$. We thus can rewrite (\[40b\]) with (\[20b\]) as $$\begin{aligned} X_{a_1a_2}=\sum_{b_1,b_2} \rho_{b_1 b_1} \lambda_{b_2} \sum\limits_{P,P'} V_{P,P'} \prod_{j=1}^2 \delta_{a_j a_{P(j)}} \delta_{b_j\beta_{P'(j)}} \label{50b}\end{aligned}$$ where $\beta_1=b_2$ and $\beta_2=b_1$. There are two permutations of the numbers $1,2$, namely the identity and one, which exchanges $1$ and $2$. 
Denoting them as $P_1$ and $P_2$, respectively, and observing that $\beta_j=b_{P_2(j)}$, equation (\[50b\]) can be rewritten as $$\begin{aligned} X_{a_1a_2} & = & \sum\limits_{k=1}^2 \prod_{j=1}^2 \delta_{a_j a_{P_k(j)}} \sum\limits_{l=1}^2 V_{P_k,P_l} S_l \label{60b} \\ S_l & := & \sum_{b_1,b_2} \rho_{b_1 b_1} \lambda_{b_2} \prod_{j=1}^2 \delta_{b_j b_{P_2(P_l(j))}} \label{70b}\end{aligned}$$ For $l=1$ the two Kronecker delta’s in (\[70b\]) both require that $b_1=b_2$ and hence $$\begin{aligned} S_1 =\sum_{b_1} \rho_{b_1b_1} \lambda_{b_1} = {\mbox{Tr}}\{\rho(0)A\} \ . \label{80b}\end{aligned}$$ The last equality can be verified by evaluating the trace in the eigenbasis of $A$, see above equation (\[5b\]). In the same way, one finds that $$\begin{aligned} \!\! S_2 = \sum_{b_1,b_2} \rho_{b_1 b_1} \lambda_{b_2} = {\mbox{Tr}}\{\rho(0)\} {\mbox{Tr}}\{A\} = {D}{\mbox{Tr}}\{{\rho_{\mathrm{mc}}}A\} \, . \label{90b}\end{aligned}$$ In the last equation, we exploited that ${\mbox{Tr}}\{\rho(0)\}=1$ and ${\rho_{\mathrm{mc}}}:=I/{D}$, see below equation (\[100\]). Observing that the two Kronecker delta’s in (\[60b\]) equal one if $k=1$ or if $k=2$ and $a_1=a_2$, the overall result is $$\begin{aligned} X_{a_1a_2} & = & \langle A\rangle_{\! \rho(0)} ( V_{P_1,P_1}+\delta_{a_1 a_2} V_{P_2,P_1}) \nonumber \\ & + & {D}\langle A\rangle_{\! {\rho_{\mathrm{mc}}}} ( V_{P_1,P_2}+\delta_{a_1 a_2} V_{P_2,P_2} ) \ , \label{100b}\end{aligned}$$ where, as usual, $\langle A\rangle_{\!\rho(0)}:={\mbox{Tr}}\{\rho(0)A\}$ and $\langle A\rangle_{\! {\rho_{\mathrm{mc}}}}:={\mbox{Tr}}\{{\rho_{\mathrm{mc}}}A\}$. Finally, the coefficients $V_{P_k,P_l}$ are evaluated as explained below equation (\[20b\]): If $k=l$ then $P_l^{-1}P_k=P_1$ factorizes in two cycles of lengths $c_1=c_2=1$, i.e. $V_{P_k,P_l}=V_{c_1,c_2}=V_{1,1}$. Likewise, if $k\not =l$ then $P_l^{-1}P_k=P_2$ consists of one cycle with $c_1=2$, i.e. $V_{P_k,P_l}=V_{2}$. Referring to columns “CUE” and rows “$n=2$” of Tables II and IV in Ref. [@bro96] yields $V_{1,1}=1/({D}^2-1)$ and $V_2=-1/[{D}({D}^2-1)]$. Returning to the original labels $m$ and $n$ in equation (\[30b\]), we thus can rewrite (\[100b\]) as $$\begin{aligned} \!\! X_{mn} & = & \langle A\rangle_{\! \rho(0)}\frac{{D}- \delta_{mn}}{{D}({D}^2-1)} + \langle A\rangle_{\! {\rho_{\mathrm{mc}}}}\frac{{D}\delta_{mn} -1}{{D}^2 - 1} \, . \label{110b}\end{aligned}$$ As a consequence, we can infer from equations (\[50\]) and (\[30b\]) that $\langle A\rangle_{\!{\rho_{\mathrm{av}}}}={D}X_{nn}$ and with (\[110b\]) that $$\begin{aligned} \!\! \langle A\rangle_{\!{\rho_{\mathrm{av}}}} & = & \langle A\rangle_{\! \rho(0)}\frac{1}{{D}+ 1} + \langle A\rangle_{\! {\rho_{\mathrm{mc}}}}\frac{{D}}{{D}+ 1} \ . \label{120b}\end{aligned}$$ Hence, one readily recovers equation (\[100\]). A relation remarkably similar to our present equation (\[100\]), albeit in a quite different physical context, has been previously obtained also in Ref. [@ols12] (see equation (2) therein). Derivation of equation (\[120\]) {#derivation-of-equation-120 .unnumbered} -------------------------------- Without any doubt, there are much faster ways to obtain equations (\[110b\]) or (\[120b\]). 
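For instance, equation (\[120b\]) is readily confirmed by brute-force sampling of Haar-distributed unitaries. The following minimal sketch (a numerical cross-check only, not an alternative derivation; the dimension $D=30$ and the eigenvalue choices are arbitrary) averages the diagonal sum $\sum_n\rho_{nn}(0)A_{nn}$, which equals ${D}X_{nn}$ and hence $\langle A\rangle_{\!{\rho_{\mathrm{av}}}}$, over random $U$ and compares with the right hand side of (\[120b\]):

```python
import numpy as np

rng = np.random.default_rng(1)
D, samples = 30, 20000

def haar(D):
    # Haar-random unitary via the phase-fixed QR decomposition of a Gaussian matrix
    z = rng.normal(size=(D, D)) + 1j * rng.normal(size=(D, D))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

lam = np.linspace(-1.0, 1.0, D)              # eigenvalues of A (arbitrary choice)
rhoA = np.zeros((D, D)); rhoA[-1, -1] = 1.0  # rho(0) = projector onto the largest eigenvalue
A_0 = lam[-1]                                # <A>_rho(0)
A_mc = lam.mean()                            # <A>_mc = Tr{A}/D

acc = 0.0
for _ in range(samples):
    U = haar(D)
    AH = U @ np.diag(lam) @ U.conj().T       # A in the eigenbasis of H, cf. eq. (6b)
    rhoH = U @ rhoA @ U.conj().T             # rho(0) in the eigenbasis of H, cf. eq. (5b)
    acc += np.real(np.sum(np.diagonal(rhoH) * np.diagonal(AH)))

print("Monte-Carlo estimate of <A>_rho_av:", acc / samples)
print("prediction of eq. (120b)          :", A_0 / (D + 1) + D * A_mc / (D + 1))
```

For the parameters above both numbers come out close to $1/(D+1)\simeq 0.032$, the residual difference being the Monte-Carlo sampling error.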
The advantage of our present way is that it can be readily adopted without any conceptual differences (albeit the actual calculations become more lengthy) to more demanding cases like $$\begin{aligned} {\left[}\xi^2(t){\right]_{U}}= {\left[}\langle A\rangle_{\!\rho(t)}^2{\right]_{U}}- {\left[}\langle A\rangle_{\!\rho(t)}{\right]_{U}}^2 \ , \label{125b}\end{aligned}$$ see equation (\[110\]). To evaluate the last term in (\[125b\]), we recast equation (\[70\]) with (\[80\]) and (\[100\]) into the form $$\begin{aligned} \!\!\!\! {\left[}\langle A\rangle_{\!\rho(t)}{\right]_{U}}& = & F_0(t)\, \langle A\rangle_{\!\rho(0)} + \bar F_0(t) \langle A\rangle_{\!{\rho_{\mathrm{mc}}}} + R_1(t) \label{130b} \\ R_1(t) & := & \bar F_0(t)\frac{\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}-\langle A\rangle_{\!\rho(0)}}{{D}^2-1} \label{140b} \\ \bar{F}_0(t) & := & 1-F_0(t) \label{150b} \\ F_0(t) & := & \frac{1}{D^2} \sum_{m,n=1}^{{D}}e^{i (E_n-E_m) t/{\hbar}} = |\phi(t)|^2 \ , \label{160b}\end{aligned}$$ where $\phi(t)$ is defined in equation (\[90\]). Similarly as in equation (\[160\]), one sees that $F_0(t),\,\bar{F}_0(t)\in[0,1]$ for all $t$. Denoting by ${\lambda_\mathrm{max}}$ and ${\lambda_\mathrm{min}}$ the largest and smallest among the eigenvalues $\lambda_1,...,\lambda_{{D}}$ of $A$, the range of $A$ is defined as ${\Delta_{\! A}}:={\lambda_\mathrm{max}}-{\lambda_\mathrm{min}}$. Furthermore, we can and will add a constant to $A$ so that ${\lambda_\mathrm{min}}= -{\lambda_\mathrm{max}}$ without any change in the final conclusions below. It readily follows that $|\lambda_\nu|\leq{\Delta_{\! A}}/2$ for all $\nu$ and hence that $$\begin{aligned} |\langle A^{\kappa} \rangle_{\!\rho}| \leq ({\Delta_{\! A}}/2)^{\kappa} \label{170b}\end{aligned}$$ for arbitrary density operators $\rho$ and $\kappa\in{\mathbb N}$. We thus can infer from equation (\[140b\]) that $$\begin{aligned} |R_1(t)|\leq {\Delta_{\! A}}/({D}^2-1) \ . \label{180b}\end{aligned}$$ Likewise, one finds upon squaring equation (\[130b\]) that $$\begin{aligned} \!\!\!\! \!\!\!\! {\left[}\langle A\rangle_{\!\rho(t)}{\right]_{U}}^2 & = & ( F_0(t)\, \langle A\rangle_{\!\rho(0)} + \bar F_0(t) \langle A\rangle_{\!{\rho_{\mathrm{mc}}}} )^2 + R_2(t) \label{190b} \\ |R_2(t)| & \leq & 3 {\Delta_{\! A}}^2/({D}^2-1) \ . \label{200b}\end{aligned}$$ Turning to the first term on the right hand side of (\[125b\]), one can infer, similarly as in (\[30b\]), (\[40b\]), from (\[10\]) and (\[10b\]) that $$\begin{aligned} \!\!\!\! \!\!\!\! & & {\left[}\langle A\rangle_{\!\rho(t)}^2{\right]_{U}}= \!\!\!\! \sum_{a_1,...,a_4} \!\!e^{i(E_{a_1}-E_{a_2}+E_{a_3}-E_{a_4})t/\hbar} X_{a_1 ... a_4} \label{210b} \\ \!\!\!\! \!\!\!\! & & X_{a_1 ... a_4}:=\sum_{b_1,...,b_6} \rho_{b_1 b_5} \lambda_{b_2} \rho_{b_3 b_6} \lambda_{b_4} \nonumber \\ \!\!\!\! \!\!\!\! & & \qquad \qquad \times {\left[}U_{a_1 b_1} \ldots U_{a_4 b_4} U_{a_1 \beta_1}^\ast \ldots U_{a_4 \beta_4}^\ast {\right]_{U}}\ , \label{220b}\end{aligned}$$ with $\beta_1:=b_2$, $\beta_2:=b_5$, $\beta_3:=b_4$, $\beta_4:=b_6$. Similarly as below equation (\[40b\]) it follows that only those summands may be non-zero, for which $b_1$ and $b_3$ have “partners” among $\beta_2$ and $\beta_4$, and vice versa. This condition can be satisfied in two ways: (i) $b_5=b_1$ and $b_6=b_3$. (ii) $b_5=b_3$ and $b_6=b_1$ and $b_1 \not = b_3$. The latter condition is due to the fact that the case $b_1=b_3$ is already covered by (i). 
Exploiting (\[20b\]) and with the abbreviation $\vec a:=(a_1,...,a_4)$ and likewise for $\vec b$, $\vec \beta$ etc., we thus obtain $$\begin{aligned} \!\!\!\! \!\!\!\! & & X_{\vec a}= X^{(i)}_{\vec a}+X^{(ii)}_{\vec a} \label{230b} \\ \!\!\!\! \!\!\!\! & & X^{(i)}_{\vec a}:=\sum_{\vec b} \rho_{b_1 b_1} \lambda_{b_2} \rho_{b_3 b_3} \lambda_{b_4} \nonumber \\ \!\!\!\! \!\!\!\! & & \qquad \qquad \times \sum\limits_{P,P'} V_{P,P'} \prod_{j=1}^4 \delta_{a_j a_{P(j)}} \delta_{b_j\beta^{(i)}_{P'(j)}} \label{240b} \\ \!\!\!\! \!\!\!\! & & X^{(ii)}_{\vec a}:=\sum_{\vec b,b_1\not=b_3} \rho_{b_1 b_3} \lambda_{b_2} \rho_{b_3 b_1} \lambda_{b_4} \nonumber \\ \!\!\!\! \!\!\!\! & & \qquad \qquad \times \sum\limits_{P,P'} V_{P,P'} \prod_{j=1}^4 \delta_{a_j a_{P(j)}} \delta_{b_j\beta^{(ii)}_{P'(j)}} \ , \label{250b}\end{aligned}$$ where $\vec \beta^{(i)}:=(b_2,b_1,b_4,b_3)$ and $\vec \beta^{(ii)}:=(b_2,b_3,b_4,b_1)$. There are $4!=24$ permutations $P$ of the numbers $1,2,3,4$. Adopting the shorthand notation $[P(1)P(2)P(3)P(4)]$ to explicitly specify a given $P$, these $24$ permutations are: $$\begin{aligned} & & \!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! P_1=[1234] ,\, P_2=[2134] ,\, P_3=[3214] ,\, P_4=[4231], \nonumber \\ & & \!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! P_5=[1324] ,\, P_6=[1432] ,\, P_7=[1243] ,\, P_8=[2143], \nonumber \\ & & \!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! P_9=[3412] ,\, P_{10}\!=\![4321] ,\, P_{11}\!=\![1342] ,\, P_{12}\!=\![1423], \nonumber \\ & & \!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! P_{13}\!=\![3241] ,\, P_{14}\!=\![4213] ,\, P_{15}\!=\![2431] ,\, P_{16}\!=\![4132], \nonumber \\ & & \!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! P_{17}\!=\![2314] ,\, P_{18}\!=\![3124] ,\, P_{19}\!=\![2341] ,\, P_{20}\!=\![2413], \nonumber \\ & & \!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! P_{21}\!=\![3421] ,\, P_{22}\!=\![3142] ,\, P_{23}\!=\![4312] ,\, P_{24}\!=\![4123]. \nonumber\end{aligned}$$ Observing that $\beta^{(i)}_j=b_{P_8(j)}$ and $\beta^{(ii)}_j=b_{P_{19}(j)}$ for all $j=1,...,4$, it is quite straightforward but very arduous to explicitly carry out the sums over $P'$ and $\vec b$ in (\[240b\]), (\[250b\]) and the sum over $\vec a$ in (\[210b\]), yielding $$\begin{aligned} \!\!\!\! \!\!\!\! & & {\left[}\langle A\rangle_{\!\rho(t)}^2{\right]_{U}}=\sum_{k=1}^{24} f_k(t)\, T(P_k) \ , \label{270b}\end{aligned}$$ where the functions $f_k(t)$ are given by $$\begin{aligned} \!\!\!\! \!\!\!\! & & f_1(t)=D^4F_0^2(t) \ , \nonumber \\ \!\!\!\! \!\!\!\! & & f_2(t)=f_4(t)=f_5(t)=f_7(t)=D^3F_0(t) \ , \nonumber \\ \!\!\!\! \!\!\!\! & & f_3(t)=f_6^\ast(t)=D^3 [\phi(t)]^2[\phi(2t)]^\ast \ , \nonumber \\ \!\!\!\! \!\!\!\! & & f_8(t)=f_{10}(t)=D^2 \ , \nonumber \\ \!\!\!\! \!\!\!\! & & f_9(t)=D^2F_0(2t) \ , \nonumber \\ \!\!\!\! \!\!\!\! & & f_k(t)=D^2F_0(t)\ \mbox{for $k=11,...,18$} \ , \nonumber \\ \!\!\!\! \!\!\!\! & & f_k(t)=D \ \mbox{for $k=19,...,24$} \ , \label{280b}\end{aligned}$$ and the coefficients $T(P)$ are given by $$\begin{aligned} \!\!\!\! \!\!\!\! & & T(P) = D^2\langle A\rangle^2_{\!{\rho_{\mathrm{mc}}}} ( V_{P,P_{8}}+V_{P,P_{24}} {\mbox{Tr}}\{\rho^2(0)\} ) \nonumber \\ \!\!\!\! \!\!\!\! & & \ \ + D\langle A^2\rangle_{\!{\rho_{\mathrm{mc}}}} ( V_{P,P_{10}} {\mbox{Tr}}\{\rho^2(0)\} +V_{P,P_{19}} ) \nonumber \\ \!\!\!\! \!\!\!\! & & \ \ + D\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}\langle A\rangle_{\!\rho(0)} ( V_{P,P_{2}}+V_{P,P_{7}}+V_{P,P_{20}}+V_{P,P_{22}} ) \nonumber \\ \!\!\!\! \!\!\!\! 
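Since the required coefficients $V_{P_k,P_l}$ depend only on the cycle structure of $P_l^{-1}P_k$ (see (\[300b\]) below), the corresponding bookkeeping over all $24^2$ pairs is conveniently automated. The following short sketch (pure combinatorics, no physics input) reproduces the grouping into the five cycle types that enter there:

```python
from collections import Counter

# the permutations P_1,...,P_24 of (1,2,3,4), in exactly the order listed above
P = [(1,2,3,4),(2,1,3,4),(3,2,1,4),(4,2,3,1),(1,3,2,4),(1,4,3,2),
     (1,2,4,3),(2,1,4,3),(3,4,1,2),(4,3,2,1),(1,3,4,2),(1,4,2,3),
     (3,2,4,1),(4,2,1,3),(2,4,3,1),(4,1,3,2),(2,3,1,4),(3,1,2,4),
     (2,3,4,1),(2,4,1,3),(3,4,2,1),(3,1,4,2),(4,3,1,2),(4,1,2,3)]

def compose(p, q):               # (p o q)(j) = p(q(j)), entries are 1-based
    return tuple(p[q[j] - 1] for j in range(4))

def inverse(p):
    inv = [0, 0, 0, 0]
    for j, pj in enumerate(p, start=1):
        inv[pj - 1] = j
    return tuple(inv)

def cycle_type(p):               # lengths of the disjoint cycles, largest first
    seen, lengths = set(), []
    for start in range(1, 5):
        if start not in seen:
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = p[j - 1]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

# cycle type of P_l^{-1} P_k for every pair (k,l); each of the 24 permutations occurs
# exactly 24 times, so the counts are 24 x (1, 6, 3, 8, 6) for the five cycle types
types = Counter(cycle_type(compose(inverse(Pl), Pk)) for Pk in P for Pl in P)
print(types)   # (1,1,1,1): 24, (2,1,1): 144, (2,2): 72, (3,1): 192, (4,): 144
```

Together with the values listed in (\[300b\]) below, this immediately yields every coefficient $V_{P_k,P_l}$ that enters (\[270b\]).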
& & \ \ + D\langle A\rangle_{\!{\rho_{\mathrm{mc}}}} {\mbox{Tr}}\{\rho^2(0)A\} ( V_{P,P_{12}}+V_{P,P_{14}}+V_{P,P_{16}}+V_{P,P_{18}} ) \nonumber \\ \!\!\!\! \!\!\!\! & & \ \ + \langle A\rangle^2_{\!\rho(0)} ( V_{P,P_{1}}+V_{P,P_{9}} ) \nonumber \\ \!\!\!\! \!\!\!\! & & \ \ + {\mbox{Tr}}\{[\rho(0)A]^2\} ( V_{P,P_{3}}+V_{P,P_{6}} ) \nonumber \\ \!\!\!\! \!\!\!\! & & \ \ + \langle A^2\rangle_{\!\rho(0)} ( V_{P,P_{11}}+V_{P,P_{13}}+V_{P,P_{15}}+V_{P,P_{17}} ) \nonumber \\ \!\!\!\! \!\!\!\! & & \ \ + {\mbox{Tr}}\{\rho^2(0)A^2\} ( V_{P,P_{4}}+V_{P,P_{5}}+V_{P,P_{21}}+V_{P,P_{23}} ) \ . \label{290b}\end{aligned}$$ To explicitly evaluate (\[270b\])-(\[290b\]), we still need the coefficients $V_{P_k,P_l}$ for all $k,l\in\{1,...,24\}$. They are obtained as explained below equation (\[20b\]): Defining $j=j(k,l)$ implicitly via $P_{j}=P_l^{-1}P_k$, one finds by factorizing each $P_j$ into its disjoint cycles and exploiting Tables II and IV of Ref. [@bro96] that $V_{P_k,P_l}$ is given by $$\begin{aligned} V_{1,1,1,1} & = & {D}^{-4}\ \ \mbox{for $j=1$}, \nonumber \\ V_{2,1,1} & = & -{D}^{-5}\ \ \mbox{for $j=2,...,7$}, \nonumber \\ V_{2,2} & = & {D}^{-6}\ \ \mbox{for $j=8,...,10$}, \nonumber \\ V_{3,1} & = & 2\, {D}^{-6}\ \ \mbox{for $j=11,...,18$}, \nonumber \\ V_{4} & = & -5\, {D}^{-7}\ \ \mbox{for $j=19,...,24$}, \label{300b}\end{aligned}$$ up to correction factors of the form $1+{{\cal O}}({D}^{-2})$ on the right hand side of each of those relations. One thus is left with finding $P_{j}=P_l^{-1}P_k$ for all $24^2$ pairs $(k,l)$. To mitigate this daunting task, we have restricted ourselves to those summands in (\[270b\]) which are at least of the order $D^{-1}$. Along these lines, one finally recovers with equations (\[125b\]), (\[170b\]), and (\[190b\]) the result (\[120\]). Derivation of equation (\[140\]) {#derivation-of-equation-140 .unnumbered} -------------------------------- While the essential steps in deriving equation (\[140\]) have been outlined already in the main text, we still have to provide the details of the statements below (\[140\]): Our first observation is that $R_1(t)$ in equation (\[130b\]) amounts to the systematic ($U$-independent) part of the omitted corrections in (\[140\]) and equation (\[180b\]) to the bound announced below (\[140\]). By means of a straightforward (but again very tedious) generalization of the calculations from the preceding subsection one finds that $$\begin{aligned} {\left[}\xi(t)\xi(s){\right]_{U}}= C(t,s)\frac{{\Delta_{\! A}}^2{\mbox{Tr}}\{\rho^2(0)\}}{{D}} +{{\cal O}}\left(\frac{{\Delta_{\! A}}^2}{{D}^2}\right) \label{320b}\end{aligned}$$ where $C(t,s)$ has the following six properties: First, $C(t,s)=C(s,t)=C(-t,-s)$ for all $t,s$. Second, $|C(t,s)|\leq 9$ for all $t,s$. Third, $C(t,0)=0$ for all $t$. Fourth, $C(t,s) {\rightsquigarrow}0$ for $|t-s|\to\infty$, cf. equation (\[170\]). Fifth, $C(t,s){\rightsquigarrow}F(t-s) \langle (A-\langle A\rangle_{\!{\rho_{\mathrm{mc}}}})^2 \rangle_{\!{\rho_{\mathrm{mc}}}}$ for $t,s\to\infty$. Sixth, given $s$, the behavior of $C(t,s)$ as a function of $t$ is roughly comparable to that of $F(t-s)$ for most $t$. Though we did not explicitly evaluate the last term in (\[320b\]), closer inspection of its general structure shows that it can be bounded in modulus by $c\, {\Delta_{\! A}}^2/{D}^2$ for some $c$ which is independent of $t,s,{D},A,\rho(0),H$. Moreover, there is no indication of any fundamental structural differences in comparison with the leading and next-to-leading order terms, which we did evaluate. 
In other words, the last term in (\[320b\]) is expected to satisfy properties analogous to those mentioned below equation (\[320b\]). Recalling that the purity ${\mbox{Tr}}\{\rho^2(0)\}$ satisfies the usual bounds $1\geq{\mbox{Tr}}\{\rho^2(0)\}\geq{\mbox{Tr}}\{{\rho_{\mathrm{mc}}}^2\}=1/{D}$, we thus recover the properties of $\xi(t)$ announced below equation (\[140\]). [70]{} Tasaki, H. From quantum dynamics to the canonical distribution: general picture and rigorous example. [*Phys. Rev. Lett.*]{} [**80**]{}, 1373-1376 (1998) Popescu, S., Short, A. J., & Winter, A. Entanglement and the foundations of statistical mechanics. [*Nature Phys.*]{} [**2**]{}, 754-758 (2006) Goldstein, S., Lebowitz, J. L., Tumulka, R., & Zhangì, N. Canonical typicality. [*Phys. Rev. Lett.*]{} [**96**]{}, 050403 (2006) Rigol, M., Dunjko, V., & Olshanii, M. Thermalization and its mechanism for generic isolated quantum systems. [*Nature*]{} [**452**]{}, 854-858 (2008) Gemmer, J., Michel, M., & Mahler, G. [*Quantum Thermodynamics*]{}, 2nd edition, Springer, Berlin, Heidelberg, 2009 Eisert, J., Friesdorf, M., & Gogolin, C. Quantum many-body systems out of equilibrium. [*Nature Phys.*]{} [**11**]{}, 124-130 (2015) Sklar, L. [*Physics and Chance*]{}, Cambridge University Press, 1993 von Neumann, J. Beweis des Ergodensatzes und des H-Theorems in der neuen Mechanik. [*Z. Phys.*]{} [**57**]{}, 30-70 (1929) \[English translation by Tumulka, R. Proof of the ergodic theorem and the H-theorem in quantum mechanics. [*Eur. Phys. J. H*]{} [**35**]{}, 201-237 (2010)\] Goldstein, S., Lebowitz, J. L., Tumulka, R., & Zhangì, N. Long-time behavior of macroscopic quantum systems: commentary accompanying the english translation of John von Neumann’s 1929 article on the quantum ergodic theorem. [*Eur. Phys. J. H*]{} [**35**]{}, 173-200 (2010) Goldstein, S., Lebowitz, J. L., Mastrodonato, C., Tumulka, R., & Zhangì, N. Approach to thermal equilibrium of macroscopic quantum systems. [*Phys. Rev. E*]{} [**81**]{}, 011109 (2010) Goldstein, S., Lebowitz, J. L., Mastrodonato, C., Tumulka, R., & Zhangì, N. Normal typicality and von Neumann’s quantum ergodic theorem. [*Proc. R. Soc. A*]{} [**466**]{}, 3203-3224 (2010) Reimann, P. Generalization of von Neumann’s approach to thermalization. [*Phys. Rev. Lett.*]{} [**115**]{}, 010403 (2015) Popescu, S., Short, A. J., & Winter, A. The foundations of statistical mechanics from entanglement: Individual states vs. averages. Preprint at http://arxiv.org/abs/quant-ph/0511225 (2005) Müller, M. P., Gross, D., & Eisert, J. Concentration of measure for quantum states with a fixed expectation value. [*Commun. Math. Phys.*]{} [**303**]{}, 785-824 (2011) Sugita, A. On the basis of quantum statistical mechanics. [*Nonlinear Phenom. Complex Syst.*]{} [**10**]{}, 192-195 (2007) Reimann, P. Typicality for generalized microcanonical ensembles. [*Phys. Rev. Lett.*]{} [**99**]{}, 160404 (2007) Bartsch, C. & Gemmer, J. Dynamical typicality of quantum expectation values. [*Phys. Rev. Lett.*]{} [**102**]{}, 110403 (2009) Sugiura, S. & Shimizu, A. Thermal pure quantum states at finite temperature. [*Phys. Rev. Lett.*]{} [**108**]{}, 240401 (2012) Deutsch, J. M. Quantum statistical mechanics in a closed system. [*Phys. Rev. A*]{} [**43**]{}, 2046-2049 (1991) Srednicki, M. Chaos and quantum thermalization. [*Phys. Rev. E*]{} [**50**]{}, 888-901 (1994) Neuenhahn, C. & Marquardt, F. Thermalization of interacting fermions and delocalization in Fock space. [*Phys. Rev. E*]{} [**85**]{}, 060101(R) (2012) Rigol, M. 
& Srednicki, M. Alternatives to eigenstate thermalization. [*Phys. Rev. Lett.*]{} [**12**]{}, 110601 (2012) Ikeda, T. N., Watanabe, Y., & Ueda, M. Finite-size scaling analysis of the eigenstate thermalization hypothesis in a one-dimensional interacting Bose gas. [*Phys. Rev. E*]{} [**87**]{}, 012125 (2013) Beugeling, W., Moessner, R., & Haque, M. Finite-size scaling of eigenstate thermalization. [*Phys. Rev. E*]{} [**89**]{}, 042112 (2014) Steinigeweg, R., Khodja, A., Niemeyer, H., Gogolin, C., & Gemmer, J. Pushing the limits of the eigenstate thermalization hypothesis towards mesoscopic quantum systems. [*Phys. Rev. Lett.*]{} [**112**]{}, 130403 (2014) Goldstein, S., Huse, D. A., Lebowitz, J. L., & Tumulka, R., Thermal equilibrium of a macroscopic quantum system in a pure state. [*Phys. Rev. Lett.*]{} [**115**]{}, 100402 (2015) Cramer, M., Flesch, A., McCulloch, I. P., Schollwöck, U., & Eisert, J. Exploring local quantum many-body relaxation by atoms in optical superlattices. [*Phys. Rev. Lett.*]{} [**101**]{}, 063001 (2008) Trotzky, S., Chen, Y.-A., Flesch, A., McCulloch, I. P., Schollwöck, U. Eisert, J., & Bloch, I. Probing the relaxation towards equilibrium in an isolated strongly correlated one-dimensional Bose gas. [*Nature Phys.*]{} [**8**]{}, 325-330 (2012) Gring, M. et al. Relaxation and prethermalization in an isolated quantum system. [*Science*]{} [**337**]{}, 1318-1322 (2012) Pertot, D. et al. Relaxation dynamics of a Fermi gas in an optical superlattice. [*Phys. Rev. Lett.*]{} [**113**]{}, 170403 (2014) Rigol, M. Breakdown of thermalization in finite one-dimensional systems. [*Phys. Rev. Lett.*]{} [**103**]{}, 100403 (2009) Rigol, M. Quantum quenches and thermalization in one-dimensional fermionic systems. [*Phys. Rev. A*]{} [**80**]{}, 053607 (2009) Santos, L. F. & Rigol, M. Onset of quantum chaos in one-dimensional bosonic and fermionic systems and its relation to thermalization. [*Phys. Rev. E*]{} [**81**]{}, 036206 (2010) Pal, A. & Huse, D. A. Many-body localization phase transitions. [*Phys. Rev. E*]{} [**82**]{}, 174411 (2010) Brioli, G., Kollath, C., & Läuchli, A. Effect of rare fluctuations on the thermalization of isolated quantum systems. [*Phys. Rev. Lett.*]{} [**105**]{}, 250401 (2010) Gogolin, C., Müller, M., & Eisert, J. Absence of thermalization in nonintegrable systems. [*Phys. Rev. Lett.*]{} [**106**]{}, 040401 (2011) Banuls, M. C., Cirac, J. I., & Hastings, M. B. Strong and weak thermalization of infinite non-integrable quantum systems. [*Phys. Rev. Lett.*]{} [**106**]{}, 050405 (2011) Cramer, M., Dawson, C. M., Eisert, J., & Osborne, T. J. Exact relaxation in a class of non-equilibrium quantum lattice systems. [*Phys. Rev. Lett.*]{} [**100**]{}, 030602 (2008) Reimann, P. Foundation of statistical mechanics under experimentally realistic conditions. [*Phys. Rev. Lett.*]{} [**101**]{}, 190403 (2008) Linden, N., Popescu, S., Short, A. J., & Winter, A. Quantum mechanical evolution towards equilibrium. [*Phys. Rev. E*]{} [**79**]{}, 061103 (2009) Reimann, P. Canonical thermalization. [*New J. Phys.*]{} [**12**]{}, 055027 (2010) Short, A. J. Equilibration of quantum systems and subsystems. [*New J. Phys.*]{} [**13**]{}, 053009 (2011) Reimann, P. & Kastner, M. Equilibration of macroscopic quantum systems. [*New J. Phys.*]{} [**14**]{}, 043020 (2012) Reimann, P. Equilibration of isolated macroscopic quantum systems under experimentally realistic conditions. [*Phys. Scr.*]{} [**86**]{}, 058512 (2012) Short, A. J. & Farrelly, T. C. 
Quantum equilibration in finite time. [*New J. Phys.*]{} [**14**]{}, 013063 (2012) Cramer, M. Thermalization under randomized local Hamiltonians. [*New. J. Phys.*]{} [**14**]{}, 053051 (2012) Goldstein, S., Hara, T., & Tasaki, H. Time scales in the approach to equilibrium of macroscopic quantum systems. [*Phys. Rev. Lett.*]{} [**111**]{}, 140401 (2013) Monnai, T. Generic evaluation of relaxation time for quantum many body systems: analysis of system size dependence. [*J. Phys. Soc. Jpn.*]{} [**82**]{}, 044006 (2013) Malabarba, A. S. L., Garcia-Pintos, L. P., Linden, N., Farrelly, T. C., & Short, A. J. Quantum systems equilibrate rapidly for most observables. [*Phys. Rev. E*]{} [**90**]{}, 012121 (2014) Goldstein, S., Hara, T., & Tasaki, H. Extremely quick thermalization in a macroscopic quantum system for a typical nonequilibrium subspace. [*New. J. Phys.*]{} [**17**]{}, 045002 (2015) Goldstein, S., Hara, T., & Tasaki, H. The approach to equilibrium in a macroscopic quantum system for a typical nonequilibrium subspace. Preprint at http://arxiv.org/abs/1402.3380 (2014) Znidaric, M., Pineda, C., & Garcia-Mata, I. Non-Markovian behavior of small and large complex quantum systems. [*Phys. Rev. Lett.*]{} [**107**]{}, 080404 (2011) Monnai, T. General relaxation time of the fidelity for isolated quantum thermodynamic systems. [*J. Phys. Soc. Jpn.*]{} [**82**]{}, 044006 (2014) Berges, J., Borsányi, Sz., & Wetterich, C. Prethermalization. [*Phys. Rev. Lett.*]{} [**93**]{}, 142002 (2004) Moeckel, M., & Kehrein, S. Interaction quench in the Hubbard model. [*Phys. Rev. Lett.*]{} [**100**]{}, 175702 (2008) Guidoni, L., Beaurepaire, E., & Bigot, J.-Y. Magneto-optics in the ultrafast regime: Thermalization of spin populations in ferromagnetic films. [*Phys. Rev. Lett.*]{} [**89**]{}, 017401 (2002) Gierz, I. et al. Tracking primary thermalization events in graphene with photoemission at extreme time scales. [*Phys. Rev. Lett.*]{} [**115**]{}, 086803 (2015) Beaurepaire, E., Merle, J.-C., Daunois, A., & Bigot, J.-Y. Ultrafast spin dynamics in ferromagnetic Nickel. [*Phys. Rev. Lett.*]{} [**76**]{}, 4250-4253 (1996) J. Faure et al. Direct observation of electron thermalization and electron-phonon coupling in photoexcited bismuth. [*Phys. Rev. B*]{} [**88**]{}, 075120 (2013) Papalazarou, E., et al. Supplemental Material of [*Phys. Rev. Lett.*]{} [**108**]{}, 256808 (2012) Thon, A., et al. Photon-assisted tunneling versus tunneling of excited electrons in metal-insulator-metal junctions. [*Appl. Phys. A*]{} [**78**]{}, 189-199 (2004) Klamroth, T. Laser-driven electron transfer through metal-insulator-metal contacts: Time-dependent configuration interaction singles calculations for a jellium model. [*Phys. Rev. B*]{} [**68**]{}, 245421 (2003) Hetterich, D., Fuchs, M., & Trauzettel, B. Equilibration in closed quantum systems: Application to spin qubits. [*Phys. Rev. B*]{} [**92**]{}, 155314 (2015) Brody, T. A., Flores, J., French, J. B., Mello, P. A., Pandey, A., & Wong, S. S. M. Random-matrix physics: spectrum and strength fluctuations. [*Rev. Mod. Phys.*]{} [**53**]{}, 385-480 (1981), Sect. VIII.B. Collins, B. & Sniady, P. Integration with respect to the Haar measure on unitary, orthogonal and symplectic group. [*Commun. Math. Phys.*]{} [**264**]{}, 773-795 (2006) Brouwer, P. W., & Beenakker, C. W. J. Diagrammatic method of integration over the unitary group, with applications to quantum transport in mesoscopic systems. [*J. Math. 
Phys.*]{} [**37**]{}, 4904-4934 (1996) Olshanii, M., Jacobs, K., Rigol, M., Dunjko, V., Kennard, H., & Yurovsky V. A. An exactly solvable model for the integrability-chaos transition in rough quantum billiards. [*Nat. Commun.*]{} [**3:641**]{} (2012) \ I am indebted to Walter Pfeiffer and Thomas Dahm for numerous enlightening discussions. I also thank all authors of Refs. [@bar09; @rig09a; @tho04; @het15] for providing the raw data of their published works, in particular Christian Bartsch, Marcos Rigol, Tillmann Klamroth, and Daniel Hetterich. This project was supported by DFG-Grant RE1344/7-1. SUPPLEMENTARY INFORMATION {#supplementary-information .unnumbered} ========================= Supplementary Note 1 {#a2 .unnumbered} ==================== This note provides a brief account of those previous analytical findings which exhibit some appreciable similarity to ours. Ref. \[1\] (see Supplementary References) considers the convergence (for most times) towards some steady state, which is in general different from the microcanonical ensemble. Furthermore, the focus is either on two-outcome measurements, where one of the projectors is of low rank, or the initial state must be an eigenvector of the considered observable. Finally, a role more or less similar to our present randomization via $U$ is played by the assumption that the initial state must be spread over very many energy levels. Within this setting, general upper bounds are obtained for some suitably defined equilibration time scale (as opposed to the (approximate) equality (13) for the entire temporal behavior). Apart from these quite significant differences, the essential conclusions are analogous to ours, namely an extremely rapid relaxation for all above mentioned two-outcome measurements of low rank, as well as for most observables if the initial state is an eigenvector of the observable. Ref. \[2\] focuses on subsystem-plus-bath compounds, the total Hilbert space being a collection of many smaller units (e.g. due to a local Hamiltonian on a lattice), and on separable initial states. Under these premises, upper bounds for the subsystem’s temporal relaxation are derived, which exhibit some (limited) similarities to our present findings, including the prediction of typically very fast relaxation processes. Under the additional assumptions that in the latter setup the subsystem is a single qubit, the initial state of the qubit as well as the considered observable are given by Pauli matrices (or the identity), and the environment is in a pure initial state, similar findings as in our present work have been obtained in Ref. \[3\]. Note that those initial states of the qubit are not very physical and that their linear superposition is not admitted in the findings of \[3\] due to the non-linearity of the problem. Refs. \[4,5\] focus on macroscopic observables with a concomitant projector $P_{\rm{neq}}$ onto a very small subspace of the “energy shell” ${{\cal H}}$ so that any (normalized) state $|\psi\rangle\in{{\cal H}}$ with $\langle\psi | P_{\rm{neq}} |\psi\rangle \ll 1$ represents thermal equilibrium. Denoting, similarly as in our present approach, by $U$ the transformation between the bases of the “observable” $P_{\rm{neq}}$ and the Hamiltonian $H$, it is then shown that most $U$ result in an extremely quick thermalization for any initial pure state $|\psi(0)\rangle\in{{\cal H}}$. 
Similarly as in \[1\] (see above), this conclusion is based on an upper (but arguably rather tight) estimate for the actual temporal relaxation and on similar assumptions about the energy level density $\rho(x)$ as in equations (17)-(20). In Ref. \[6\] it is shown that the vast majority of all pure states featuring a common expectation value of some generic observable at a given time will yield very similar expectation values of the same observable at any later time. While in our present approach, $\rho(0)$ and $A$ are kept fixed relative to each other and $U$ randomizes their constellation relative to the Hamiltonian $H$, in Ref. \[6\] the pair $A$ and $H$ is kept fixed, while $\rho(0)$ is randomly sampled under the additional constraint that it is a pure state with a preset (arbitrary but fixed) expectation value $\langle A\rangle_{\!\rho(0)}$. Moreover, no quantitative statements about how $\langle A\rangle_{\!\rho(t)}$ actually evolves in time have been obtained in \[6\]. Ref. \[7\] suggests fairly rough relaxation time estimates by exploiting three quite drastic [*a priori*]{} assumptions. One of them postulates that the relaxation is monotonic in time, which can in fact not be generally true, see equation (19) and Fig. 6. Apart from that, the obtained estimates are roughly comparable to ours. Supplementary Note 2 {#a3 .unnumbered} ==================== This note compiles some additional remarks and extensions, ordered according to their appearance in the main text. ### Regarding section “Setup” {#regarding-section-setup .unnumbered} 1\. In the present paper, we mainly have in mind the examples mentioned below equation (1), i.e., ${{\cal H}}$ represents some microcanonical “energy shell” of a closed many-body system. But similarly as in Refs. \[8-10\], our main result (13) is actually valid for the more general setup outlined above equation (1), i.e., ${{\cal H}}$ may also represent a more abstract type of “active Hilbert space”. For instance, this may be of interest for autonomous systems with few degrees of freedom in the context of semiclassical chaos when the initial state is “spread” over many energy levels. 2\. [*A priori*]{}, the pertinent Hilbert space of a many-body system is not a microcanonical energy shell ${{\cal H}}$, nor are the Hamiltonian, observables, and system states given by Hermitian operators $H$, $A$, and $\rho(t)$ on ${{\cal H}}$ right from the beginning. Rather, the system originally “lives” in a much larger Hilbert space ${{\cal H}}'$ and the Hamiltonian, observables, and system states are given by Hermitian operators $H'$, $A'$, and $\rho'(t)$ on ${{\cal H}}'$. How to go over from the original (primed) to the reduced (unprimed) setup is not very difficult \[11-15\], but also not entirely obvious: Similarly as in the main text, we denote by $E_n$ and $|n\rangle$ the eigenvalues and eigenvectors of $H'$, where $n$ runs from $1$ to infinity or to some finite upper limit (dimension of ${{\cal H}}'$). Likewise, the corresponding matrix elements of $\rho'(t)$ are denoted as $\rho'_{mn}(t):=\langle m|\rho'(t)|n\rangle$. The key point consists in our assumption below equation (1) that the system exhibits a well-defined macroscopic energy, i.e., that there exists a microcanonical energy window $I:=[E-\Delta E,E]$ so that the level populations $\rho'_{nn}(0)$ are negligibly small for energies $E_n$ outside the interval $I$. Moreover, we can and will assume that the labels $n$ and the integer ${D}$ are chosen so that $E_n\in I \Leftrightarrow n\in\{1,...,{D}\}$.
Next, we denote by ${{\cal H}}$ the subspace spanned by $\{|n\rangle\}_{n=1}^{D}$, by $P:=\sum_{n=1}^{D}|n\rangle\langle n|$ the projector onto ${{\cal H}}$, and by $H:=PH'P$, $A:=PA'P$, $\rho(t):=P\rho'(t)P$ the corresponding “restrictions” or “projections” of the original operators. With $|\rho'_{mn}|^2\leq \rho'_{mm} \rho'_{nn}$ (Cauchy-Schwarz inequality) and the above approximation $\rho'_{nn}(0)=0$ for $n>{D}$, it follows that $\rho'_{mn}(0)=0$ if $m>{D}$ or $n>{D}$ and hence that $\rho(0)=\rho'(0)$. Since $P$ commutes with $H'$ and thus with ${{\cal U}'_t}:=e^{-iH't/{\hbar}}$, the original time evolution $\rho'(t)={{\cal U}'_t}\rho'(0)({{\cal U}'_t})^\dagger$ implies that $\rho(t)=\rho'(t)$ for all $t$, and with $P^2=P$ it follows that $\rho(t)={{\cal U}_t}\rho(0){{\cal U}_t}^\dagger$, where ${{\cal U}_t}:=e^{-iHt/{\hbar}}$. Exploiting the cyclic invariance of the trace and $P^2=P$ finally yields ${\mbox{Tr}}\{\rho'(t) A'\}={\mbox{Tr}}\{\rho(t) A\}$ for all $t$. So far, the basic operators $H$, $A$, $\rho(t)$ and their descendants ${{\cal U}_t}$ and ${\mbox{Tr}}\{\rho(t) A\}$ are strictly speaking still defined on ${{\cal H}}'$ but it is trivial to reinterpret them as being defined on ${{\cal H}}$. In particular, the eigenvalues and eigenvectors of $H : {{\cal H}}\to {{\cal H}}$ are now given by $\{E_n\}_{n=1}^{D}$ and $\{|n\rangle \}_{n=1}^{D}$, respectively. While the connection between $H$ and $H'$ and between $\rho(t)$ and $\rho'(t)$ is thus rather trivial, the eigenvalues and eigenvectors of $A$ are in general quite different from those of $A'$. Nevertheless, all original (primed) expectation values are correctly recovered within the reduced (unprimed) formalism. ### Regarding section “Analytical results” {#regarding-section-analytical-results .unnumbered} 3\. A natural intuitive guess is that $\rho_{nn}(0)$ and $A_{nn}$ should be essentially independent of each other in the sense that ${\left[}\rho_{nn}(0)A_{nn}{\right]_{U}}$ can be approximated by ${\left[}\rho_{nn}(0){\right]_{U}}{\left[}A_{nn}{\right]_{U}}$. If so, one could readily conclude from equations (2) and (4) that $\langle A\rangle_{\!{\rho_{\mathrm{av}}}}=\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$, which is nothing else than the leading order approximation of equation (9). In other words, our guess seems right, the essence of equation (9) is intuitively quite obvious, and the last term in equation (9) must be due to weak correlations between $\rho_{nn}(0)$ and $A_{nn}$ for non-equilibrium initial conditions $\langle A\rangle_{\!\rho(0)}$. ### Regarding section “Basic properties of $F(t)$” {#regarding-section-basic-properties-of-ft .unnumbered} 4\. The essential prerequisite in approximating equation (8) by (17) is that $t/\hbar$ must be much smaller than the inverse mean level distance. Since the energy levels are extremely dense for typical many-body systems, the approximation applies for all experimentally realistic times $t$. However, the quasi-periodicities of $F(t)$ for extremely large $t$, inherited from $\phi(t)$ via (7) and (8), usually get lost. ### Regarding section “Typicality of thermalization” {#regarding-section-typicality-of-thermalization .unnumbered} 5\. 
By similar methods as in the derivation of our main result (13), one can show \[11\] that for the overwhelming majority of unitaries $U$ the diagonal matrix elements $A_{nn}$ remain very close to their mean value ${\left[}A_{nn}{\right]_{U}}=\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$, a property also known under the name eigenstate thermalization hypothesis (ETH) \[12-14\]. It is tempting to argue that a violation of ETH indicates an “untypical” case and hence also (13) will be violated. However, there is no reason why the extremely small subset of $U$’s which violate (13) has any relevant overlap with the extremely small subset of $U$’s which violate ETH. In other words, we expect that equation (13) still applies to the vast majority of ETH-violating systems, i.e., provided their initial condition $\rho(0)$ is still sufficiently “typical” to guarantee thermalization. Numerical examples of such cases are provided, e.g., by Ref. \[15\]. Vice versa, the findings about typicality of ETH and thermalization from \[11,12,16-19\] are expected to remain valid even when (13) is violated (thus including cases which do not thermalize as rapidly as predicted by (13)). An analogous consideration applies to the “level populations” $\rho_{nn}(0)$: They must be negligible outside the microcanonical energy window $[E-\Delta E,E]$, but inside the window they may still be distributed quite “untypically”. 6\. More abstractly speaking, in order to realize simultaneously an untypical $U$ and a far from equilibrium $\langle A\rangle_{\!\rho(0)}$, one generally expects that the eigenbases of both $A$ and $\rho(0)$ must be fine-tuned relatively to a given $H$. As a consequence, one expects untypically strong correlations between $A_{nn}$ and $\rho_{nn}(0)$ (see also paragraph 3. above). This is confirmed, e.g., by the numerical examples in Refs. \[14,20\] and is also closely related to the ideas proposed by Peres in Ref. \[21\]. 7\. To further scrutinize the untypical $U$’s, we consider subsets $S_a$ consisting of all $U$’s with the extra property that $\langle A\rangle_{\! {\omega}}=a$, where ${\omega}$ is defined below equation (3). One readily sees that for any given $a$-value, the set $S_a$ still entails the necessary symmetries so that equations (2)-(8) remain valid when re-defining ${\left[}\cdots{\right]_{U}}$ as the restricted average over all $U\in S_a$. On the other hand, equation (9) is replaced by $\langle A\rangle_{\! {\rho_{\mathrm{av}}}}=a$, implied by $\langle A\rangle_{\! {\omega}}=a$ for all $U\in S_a$. Finally, one expects, analogously as in equations (10)-(12), that the fluctuations about the average behavior are typically small for most $U \in S_a$. While a rigorous proof seems very difficult, the intuitive argument is that the subset $S_a$ can be represented as a manifold of fantastically large dimensionality (just one dimension less that for the unrestricted set of all $U$’s due to the extra constraint $\langle A\rangle_{\! {\omega}}=a$). Hence, a similar concentration of measure phenomenon is expected in both cases. Analogously as in (13), the overall conclusion is that $$\begin{aligned} \langle A\rangle_{\!\rho(t)} & = & a + F(t)\, \left\{\langle A\rangle_{\!\rho(0)} - a \right\} \nonumber\end{aligned}$$ should be satisfied in very good approximation for the vast majority of all times $t$ and unitaries $U\in S_a$. Upon comparison with (13) one sees that if $a$ notably differs from $\langle A\rangle_{\!{\rho_{\mathrm{mc}}}}$ then most $U\in S_a$ are untypical. 
On the other hand, any given untypical $U$ is contained in one of the subsets $S_a$ and is thus generically expected to satisfy the above approximation. Remarkably, the time dependence is governed by the same function $F(t)$ for all $a$. These considerations justify our comparison of the theory (in the above generalized version) with the integrable model in Fig. 4. In turn, the good agreement with the numerical results in Fig. 4 supports the above arguments. Supplementary References {#supplementary-references .unnumbered} ------------------------ : \[1\] Malabarba, A. S. L., Garcia-Pintos, L. P., Linden, N., Farrelly, T. C., & Short, A. J. Quantum systems equilibrate rapidly for most observables. [*Phys. Rev. E*]{} [**90**]{}, 012121 (2014) : \[2\] Cramer, M. Thermalization under randomized local Hamiltonians. [*New. J. Phys.*]{} [**14**]{}, 053051 (2012) : \[3\] Znidaric, M., Pineda, C., & Garcia-Mata, I. Non-Markovian behavior of small and large complex quantum systems. [*Phys. Rev. Lett.*]{} [**107**]{}, 080404 (2011) : \[4\] Goldstein, S., Hara, T., & Tasaki, H. Extremely quick thermalization in a macroscopic quantum system for a typical nonequilibrium subspace. [*New. J. Phys.*]{} [**17**]{}, 045002 (2015) : \[5\] Goldstein, S., Hara, T., & Tasaki, H. The approach to equilibrium in a macroscopic quantum system for a typical nonequilibrium subspace. Preprint at http://arxiv.org/abs/1402.3380 (2014) : \[6\] Bartsch, C. & Gemmer, J. Dynamical typicality of quantum expectation values. [*Phys. Rev. Lett.*]{} [**102**]{}, 110403 (2009) : \[7\] Monnai, T. Generic evaluation of relaxation time for quantum many body systems: analysis of system size dependence. [*J. Phys. Soc. Jpn.*]{} [**82**]{}, 044006 (2013) : \[8\] Popescu, S., Short, A. J., & Winter, A. Entanglement and the foundations of statistical mechanics. [*Nature Phys.*]{} [**2**]{}, 754-758 (2006) : \[9\] Popescu, S., Short, A. J., & Winter, A. The foundations of statistical mechanics from entanglement: Individual states vs. averages. Preprint at http://arxiv.org/abs/quant-ph/0511225 (2005) : \[10\] Linden, N., Popescu, S., Short, A. J., & Winter, A. Quantum mechanical evolution towards equilibrium. [*Phys. Rev. E*]{} [**79**]{}, 061103 (2009) : \[11\] von Neumann, J. Beweis des Ergodensatzes und des H-Theorems in der neuen Mechanik. [*Z. Phys.*]{} [**57**]{}, 30-70 (1929) $[$English translation by Tumulka, R. Proof of the ergodic theorem and the H-theorem in quantum mechanics. [*Eur. Phys. J. H*]{} [**35**]{}, 201-237 (2010)$]$ : \[12\] Goldstein, S., Lebowitz, J. L., Tumulka, R., & Zhangì, N. Long-time behavior of macroscopic quantum systems: commentary accompanying the english translation of John von Neumann’s 1929 article on the quantum ergodic theorem. [*Eur. Phys. J. H*]{} [**35**]{}, 173-200 (2010) : \[13\] Goldstein, S., Lebowitz, J. L., Mastrodonato, C., Tumulka, R., & Zhangì, N. Approach to thermal equilibrium of macroscopic quantum systems. [*Phys. Rev. E*]{} [**81**]{}, 011109 (2010) : \[14\] Goldstein, S., Lebowitz, J. L., Mastrodonato, C., Tumulka, R., & Zhangì, N. Normal typicality and von Neumann’s quantum ergodic theorem. [*Proc. R. Soc. A*]{} [**466**]{}, 3203-3224 (2010) : \[15\] Reimann, P. Generalization of von Neumann’s approach to thermalization. [*Phys. Rev. Lett.*]{} [**115**]{}, 010403 (2015) : \[16\] Deutsch, J. M. Quantum statistical mechanics in a closed system. [*Phys. Rev. A*]{} [**43**]{}, 2046-2049 (1991) : \[17\] Srednicki, M. Chaos and quantum thermalization. [*Phys. Rev. 
E*]{} [**50**]{}, 888-901 (1994) : \[18\] Rigol, M., Dunjko, V., & Olshanii, M. Thermalization and its mechanism for generic isolated quantum systems. [*Nature*]{} [**452**]{}, 854-858 (2008) : \[19\] Rigol, M. & Srednicki, M. Alternatives to eigenstate thermalization. [*Phys. Rev. Lett.*]{} [**12**]{}, 110601 (2012) : \[20\] Gogolin, C., Müller, M., & Eisert, J. Absence of thermalization in nonintegrable systems. [*Phys. Rev. Lett.*]{} [**106**]{}, 040401 (2011) : \[21\] Peres, A. Ergodicity and mixing in quantum theory I. [*Phys. Rev. A*]{} [**30**]{}, 504-509 (1984)
------------- IPPP/02/72 DCPT/02/144 ------------- [**$B\to \gamma e\nu$ Transitions from QCD Sum Rules\ on the Light-Cone**]{} [Patricia Ball[^1] and Emi Kou[^2]]{} IPPP, Department of Physics, University of Durham, Durham DH1 3LE, UK\ [**Abstract:\ **]{} Introduction ============ With the $B$ factories BaBar and Belle running “full steam”, $B$ physics has entered the era of precision measurements. The quality and precision of experimental data call for a corresponding match in theoretical precision, in particular with regard to the analysis of nonleptonic decays. Notwithstanding the fact that a complete solution of the problem appears as elusive as ever, progress has been made in the description of nonleptonic $B$ decays, in the heavy quark limit, which were shown to be amenable to perturbative QCD (pQCD) factorization [@BBNS1; @BBNS2]. In this framework, $B$ decay amplitudes are decomposed into a “soft” part, for instance a weak decay form factor, or, in general, an intrinsically nonperturbative quantity that evades further breakdown into factorizable components, and a “hard” part which can be neatly described in a factorized form in terms of a convolution of a hard perturbative kernel, depending only on collinear momenta, with one or more hadron distribution amplitudes that describe the (collinear) momentum distribution of partons inside the hadron. Factorization was shown to hold, for certain $B$ decay channels, to all orders in perturbation theory [@softcoll], but to date could not be extended to include contributions that are suppressed by powers of the $b$ quark mass. For the important channel $B\to K\pi$, in particular, it was found that factorization breaks down for one specific class of power-suppressed corrections which are expected to be numerically relevant [@BBNS2]. Reliable alternative methods for calculating QCD effects in nonleptonic $B$ decays are scarce, and little is known about the generic size of power-suppressed corrections for instance in $B\to\pi\pi$ [@alex], one of the “benchmark” channels for measuring the angle $\alpha$ of the CKM unitarity triangle. There is, however, one channel that can be treated both in pQCD factorization and by an alternative method, QCD sum rules: the $B\to\gamma \ell\nu_\ell$ transition. It has been shown recently [@CS; @LPW] that $B\to\gamma$ is indeed accessible to collinear factorization, in contrast to previous findings [@Kor] which indicated the need to include also transverse degrees of freedom in the convolution. On the other hand, the $B\to\gamma$ transition can also be investigated in the framework of QCD sum rules on the light-cone [@KSW; @AB]. The crucial point here is that the photon is not treated as an exactly pointlike object with standard EM couplings, but that, in addition to that “hard” component, it also features a “soft” hadronic component whose contribution to the decay amplitude must not be neglected. The “soft” component is related to the probability of a real photon to dissociate, at small transverse separation, into partons and resembles in many ways a (massless) transversely polarized vector meson; like it, it can be described by a Fock-state expansion in terms of distribution amplitudes of increasing twist. An analysis of these distribution amplitudes, including terms up to twist-4, has recently been completed [@BBK] and comes in handy for an update and extension of the previous QCD sum rules analyses of $B\to\gamma$.
Such a reanalysis is the subject of this letter and we include in particular one-loop radiative corrections to the contribution of the leading twist-2 distribution amplitude to $B\to\gamma$. In the framework of pQCD factorization, the “soft” component of the photon leads to formally power-suppressed contributions and is hence neglected in Refs. [@CS; @LPW; @Kor]. As we shall show in this letter, this suppression is not effective numerically due to the large value of the matrix element governing its strength: the magnetic susceptibility of the quark condensate. Our findings indicate that such contributions are likely to be nonnegligible also in other channels involving photon emission, notably $B\to K^*\gamma$ and $B\to \rho\gamma$, which are treated in [@BFS; @Bosch]. We also compare the QCD sum rule for the hard part of the $B\to\gamma$ amplitude with the pQCD result and derive a sum rule for $\lambda_B$, the first negative moment of the $B$ meson distribution amplitude. Definition of Relevant Quantities and Outline of Calculation {#sec2} ============================================================ First of all, let us define the form factors that describe the $B\to\gamma$ transition: $$\label{eq1} \frac{1}{\sqrt{4\pi\alpha}}\, \langle \gamma(\epsilon^*,q)|\bar{u}\gamma_{\mu}(1-\gamma_5)b |B^-(p_B)\rangle =\\ -F_V\,\epsilon_{\mu\nu\rho\sigma}\epsilon^{*\nu}v^{\rho}q^{\sigma} +iF_A[\epsilon^*_{\mu}(v\cdot q)-q_{\mu}(\epsilon^*\cdot v)],$$ where $\epsilon^{*}$ and $q$ are the polarisation and momentum vectors of the photon, respectively, and $v=p_B/m_B$ is the four-velocity of the $B$ meson. The definition of the form factors $F_{A,V}$ in (\[eq1\]) is exactly the same as in Ref. [@CS].[^3] The form factors depend on $p^2=(p_B-q)^2$ or, equivalently, on the photon energy $E_\gamma = (m_B^2-p^2)/(2m_B)$. $p^2$ varies in the physical region $0\leq p^2\leq m_B^2$, which corresponds to $0\leq E_\gamma\leq m_B/2$. The starting point for the calculation of the form factors from QCD sum rules is the correlation function $$\begin{aligned} \label{eq:corr} \Pi_{\mu}(p,q)&=&i\int d^4x e^{ipx} \frac{1}{\sqrt{4\pi\alpha}}\langle\gamma(\epsilon^*,q)| T\{\bar{u}(x)\gamma_{\mu}(1-\gamma_5)b(x)\bar{b}(0)i\gamma_5u(0)\} |0\rangle \nonumber \\ &=&-\Pi_{V}\epsilon_{\mu\nu\rho\sigma}\epsilon^{*\nu}p^{\rho}q^{\sigma} +i\Pi_{A}[\epsilon^*_{\mu}(p\cdot q) - q_{\mu}(\epsilon^*\cdot p)]+\dots\end{aligned}$$ The dots denote contact terms, which appear for pointlike photons; for their discussion we refer to Ref. [@contact]. The treatment of the soft component of the photon involves nonlocal operators, as we shall discuss below, and gauge-invariance of $\Pi_\mu$ is realized explicitly, without contact terms, by working in the background field method. The method of QCD sum rules [@SVZ] exploits the fact that the correlation function contains information on the form factors in question: expressing $\Pi_{V(A)}$ via a dispersion relation, one has $$\label{disper} \Pi_{V(A)}=\frac{f_Bm_BF_{V(A)}}{m_b(m_B^2-p_B^2)} +\int^{\infty}_{s_0}\,\frac{ds}{s-p_B^2}\,\rho_{V(A)}(s,p^2),$$ where the first term on the r.h.s. is the contribution of the ground state $B$ meson to the correlation function, featuring the form factors we want to calculate, and the second term includes all other states coupling to the pseudoscalar current $\bar{b}i\gamma_5u$, above the threshold $s_0$. $f_B$ is the decay constant of the $B$ meson, defined as $\langle B|\bar{b}i\gamma_5u|0\rangle =f_Bm_B^2/m_b$.
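For quick reference when translating between the two equivalent variables used throughout, the relation $E_\gamma = (m_B^2-p^2)/(2m_B)$ is trivially inverted; a short helper (with the rounded PDG-like value $m_B=5.279\,$GeV inserted purely for orientation):

```python
M_B = 5.279  # GeV, B-meson mass (rounded PDG value, used here only for orientation)

def E_gamma(p2):      # photon energy in GeV for given p^2 = (p_B - q)^2 in GeV^2
    return (M_B**2 - p2) / (2.0 * M_B)

def p2_of(E):         # inverse relation
    return M_B**2 - 2.0 * M_B * E

print("physical region: 0 <= p^2 <=", round(M_B**2, 2),
      "GeV^2,  0 <= E_gamma <=", round(M_B / 2, 2), "GeV")
print("E_gamma = 1 GeV corresponds to p^2 =", round(p2_of(1.0), 2), "GeV^2")
```

In particular, the restriction to $E_\gamma>1\,$GeV invoked below corresponds to $p^2\lesssim 17\,$GeV$^2$.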
The form factors $F_{A,V}$ are obtained, in principle, by equating the dispersion relation (\[disper\]) to $\Pi_{A,V}$ calculated for Euclidean $p_B^2$ by means of an operator product expansion. The sum over higher states, the 2nd term on the r.h.s. of (\[disper\]), is evaluated using quark-hadron duality, which means that the hadronic spectral density $\rho_{V(A)}$ is replaced by its perturbative equivalent. In order to reduce the model-dependence associated with that procedure, one subjects the whole expression to a Borel-transformation, which results in an exponential suppression of the continuum of higher states: $$B_{M^2}\Pi_{V(A)} = \frac{f_Bm_BF_{V(A)}}{m_b}\,e^{-m_B^2/M^2} + \int_{s_0}^\infty ds\,\rho^{\mbox{\scriptsize pert}}_{V(A)}(s) e^{-s/M^2}. \label{SRs}$$ The relevant parameters of the sum rule are then $M^2$, the Borel parameter, and $s_0$, the continuum threshold, and our results for the form factors will depend (moderately) on these parameters. As already mentioned in the introduction, the photon does not only have EM pointlike couplings to the quarks, but also soft nonlocal ones, and we write $F_{A,V} = F_{A,V}{^{\mbox{\scriptsize hard}}}+ F_{A,V}{^{\mbox{\scriptsize soft}}}$. The separation between these two components is of course scheme-dependent and we define them in the $\overline{\rm MS}$ scheme. For the pointlike component of the photon, $\Pi_\mu$ is just a three-point correlation function, described by the triangle diagrams shown in Fig.$\,$\[fig:1\]. These diagrams, and also the leading nonperturbative correction which is proportional to the quark condensate, have been calculated in Refs. [@KSW; @AB], and we confirm the results; we will come back to the hard contributions in Sec. \[sec:5\]. The calculation of radiative corrections to the three-point function, although highly desirable, is beyond the scope of this letter whose main emphasis is on calculating the soft photon contributions which become relevant for a certain configuration of virtualities, namely $m_b^2-p_B^2\geq O(\Lambda_{\rm QCD}m_b)$ and $m_b^2-p^2\geq O(\Lambda_{\rm QCD}m_b)$, i.e. $E_\gamma\gg \Lambda_{\rm QCD}$. In this regime, the integral in Eq.$\,$(\[disper\]) is dominated by light-like distances and can be expanded around the light-cone: $$\label{eq:3} \Pi_{V(A)}(p_B^2,E_\gamma) = Q_u \chi(\mu_F) \langle\bar u u\rangle(\mu_F) \sum_n \int_0^1 du\, \phi^{(n)}(u;\mu_{F}) T_{V(A)}^{(n)}(u;p_B^2,E_\gamma;\mu_{F}).$$ This is a factorization formula expressing the correlation function as a convolution of genuinely nonperturbative and universal distribution amplitudes (DA) $\phi^{(n)}$ with process-dependent hard kernels $T^{(n)}$, to be calculated in perturbation theory; the overall factor $Q_u \chi \langle\bar u u\rangle$ ($Q_u=2/3$) is pulled out for later convenience. In the above equation, $n$ labels the twist of operators and $\mu_{F}$ is the factorization scale. The restriction on $E_\gamma$ implies that $F{^{\mbox{\scriptsize soft}}}_{A,V}$ cannot be calculated for all photon energies; to be specific, we restrict ourselves to $E_\gamma>1\,$GeV. The factorization formula holds if the infrared divergencies occurring in the calculation of $T^{(n)}$ are such that they can be absorbed into the universal distribution amplitudes $\phi^{(n)}$ and if the convolution integral converges. We find that this is indeed the case, at least for the leading twist-2 contribution and to first order in $\alpha_s$. 
We also would like to stress that the above factorization formula has got nothing to do with the pQCD factorized expression for $F_{A,V}{^{\mbox{\scriptsize hard}}}$ derived in [@CS; @LPW] (although we will discuss the relevance of our findings as compared to theirs in Sec. 4) and that it is valid for arbitrary values of the $b$ quark mass — there is no need to restrict oneself to the heavy quark limit. Let us now define the photon DAs that enter Eq.$\,$(\[eq:3\]). As mentioned above, we shall work in the background field gauge, which means that the expression (\[disper\]), formulated for an outgoing photon, gets replaced by a correlation function of the time-ordered product of two currents in the vacuum, which is populated by an arbitrary EM field configuration $F_{\alpha\beta}$, $$\Pi_F^{\mu}(p,q)=i\int d^4x e^{ipx} \frac{1}{\sqrt{4\pi\alpha}}\langle 0| T\{\bar{u}(x)\gamma^{\mu}(1-\gamma_5)b(x)\bar{b}(0)i\gamma_5u(0)\} |0\rangle_F,$$ where the subscript $F$ indicates that an EM background field $B_\mu$ is included in the action. The calculation is explicitly gauge-invariant, and it is only in the final step that we select one specific field configuration corresponding to an outgoing photon with momentum $q$: $$F_{\alpha\beta}(x)\to -i (\epsilon^*_\alpha q_\beta - \epsilon^*_\beta q_\alpha) e^{iqx}.$$ Following Ref. [@BBK], we define the leading-twist DA as the vacuum expectation value of the nonlocal quark-antiquark operator with light-like separations, in the EM background field configuration $F_{\alpha\beta}$: $$\langle 0|\bar q(z)\sigma_{\alpha\beta}[z,-z]_F q(-z) |0\rangle_F = e_q\, \chi\, \langle \bar q q\rangle \int\limits_0^1 \!du\, F_{\alpha\beta}(-\xi z) \phi_\gamma(u)\,, \label{T2}$$ where $z^2=0$ and $[z,-z]_F$ is the path-ordered gauge-factor $$[z,-z]_F ={\rm P}\!\exp\left\{i\!\!\int_0^1\!\! dt\,2z_\mu [g A^\mu((2t-1)z)+e_q B^\mu((2t-1)z)]\right\},$$ including both the gluon field $A_\mu=A_\mu^a \lambda^a/2$ and the EM background field $B_\mu$. $e_q$ is the electric charge of the light quark $q$, e.g. $e_u=2/3 \sqrt{4\pi\alpha}$, $\langle \bar q q\rangle$ the quark condensate and $\chi$ its magnetic susceptibility – defined via the local matrix element $\langle 0|\bar q\sigma_{\alpha\beta} q |0\rangle_F = \,e_q\, \chi\, \langle \bar q q\rangle \,F_{\alpha\beta}$, which implies the normalization $\int_0^1 du\phi_\gamma(u)=1$. $\phi_\gamma$ can be expanded in terms of contributions of increasing conformal spin (cf. [@BBKT] for a detailed discussion of the conformal expansion of DAs): $$\phi_\gamma(u,\mu) = 6u(1-u) \left[1+ \sum_{n=2,4,\ldots}^\infty \phi_n(\mu) C^{3/2}_n(2u-1)\right], \label{T2DA}$$ where $C_n^{3/2}$ are Gegenbauer polynomials and $0\leq u\leq 1$ is the collinear momentum fraction carried by the quark. The usefulness of the conformal expansion lies in the fact that the Gegenbauer moments $\phi_n(\mu)$ renormalize multiplicatively in LO perturbation theory. At higher twist, there exists a full plethora of photon DAs, which we refrain from defining in full detail, but refer the reader to the discussion in Ref. [@BBK], in particular Sec. 4.1. The task is now to calculate the hard kernels $T^{(n)}$, to $O(\alpha_s)$ in leading-twist and tree-level for higher twist. As for the former, the relevant diagrams are shown in Fig. \[fig:2\]. The diagrams are IR divergent, which divergence can be absorbed into the DA $\chi\langle\bar u u \rangle \phi_\gamma$; we have checked that this is indeed the case. 
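The conformal expansion (\[T2DA\]) is straightforward to handle numerically. A minimal sketch (Python with scipy, purely illustrative; the value $\phi_2=0.2$ is an assumed input, anticipating the $\rho_\perp$-inspired estimate used in the numerics below) checks that the asymptotic DA is normalized to unity and that adding a $C_2^{3/2}$ term does not change the normalization:

```python
from scipy.integrate import quad
from scipy.special import gegenbauer

def phi_gamma(u, phi_2=0.0):
    """Twist-2 photon DA of Eq. (T2DA), truncated after the n=2 Gegenbauer term."""
    C2 = gegenbauer(2, 1.5)                     # C_2^{3/2}(x)
    return 6.0 * u * (1.0 - u) * (1.0 + phi_2 * C2(2.0 * u - 1.0))

for phi_2 in (0.0, 0.2):                        # asymptotic DA, and an illustrative phi_2
    norm, _ = quad(phi_gamma, 0.0, 1.0, args=(phi_2,))
    print(f"phi_2 = {phi_2}: normalization = {norm:.6f}")    # prints 1.000000 in both cases
```

That the $n\geq 2$ terms drop out of the norm is just the orthogonality of the Gegenbauer polynomials with respect to the weight $u(1-u)$, which makes the normalization condition $\int_0^1 du\,\phi_\gamma(u)=1$ compatible with arbitrary Gegenbauer moments.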
A critical point in pQCD calculations involving heavy particles is the possibility of soft divergencies, which manifest themselves as divergence of the $u$ integration at the endpoints; the success of the approach advocated in [@BBNS1; @BBNS2] relies precisely on the fact that it yields a factorization formula for nonleptonic decays where these divergencies are absent in the heavy quark limit (but come back at order $1/m_b$). We find that there are no soft divergencies in our case[^4], which reiterates what has been found for other heavy-light transitions, cf. [@BBB; @BB98; @BZ]. We thus confirm the factorization formula (\[eq:3\]) to $O(\alpha_s)$ at twist-2. Explicit expressions for the spectral densities of the diagrams are too bulky to be given here; they can be obtained from the authors. At this point we only note that, to leading-twist accuracy, $\Pi_A \equiv \Pi_V$. As for the higher-twist contributions, we obtain the following results: $$\begin{aligned} \Pi_A&=& -Q_uf_{3\gamma}\int^1_0du \frac{\bar{\psi}^{(v)}(u)}{s^2} m_b +\frac{1}{2}Q_u\langle \bar{u}u\rangle \int^1_0du \frac{\bar{h}(u)}{s^2}\left[1+\frac{2m^2}{s}\right] \\ &&-Q_u\langle \bar{u}u\rangle \int^1_0dv\int\mathcal{D}\underline{\alpha} \frac{S(\underline{\alpha})}{\tilde{s}^2}(1-2v) +\frac{1}{6}Q_u\langle \bar{u}u\rangle \int^1_0dv\int\mathcal{D} \underline{\alpha} \frac{\tilde{S}(\underline{\alpha})}{\tilde{s}^2} \nonumber \\ &&-2Q_u\langle \bar{u}u\rangle \int^1_0dv\int\mathcal{D}\underline{\alpha} \frac{p\cdot q}{\tilde{s}^3} \left[\bar{T}_1(\underline{\alpha})-(1-2v)\bar{T}_2 (\underline{\alpha})+(1-2v)\bar{T}_3(\underline{\alpha})-\bar{T}_4 (\underline{\alpha})\right], \nonumber \\ \Pi_V &=&\frac{1}{2}Q_uf_{3\gamma}\int^1_0du \frac{\psi^{(a)}(u)}{s^2} m_b \\ &&-Q_u\langle \bar{u}u\rangle \int^1_0dv\int\mathcal{D}\underline{\alpha} \frac{S(\underline{\alpha})}{\tilde{s}^2} +\frac{1}{6}Q_u\langle \bar{u}u\rangle \int^1_0dv\int\mathcal{D} \underline{\alpha} \frac{\tilde{S}(\underline{\alpha})}{\tilde{s}^2}(1-2v) \nonumber \\ &&-2Q_u\langle \bar{u}u\rangle \int^1_0dv\int\mathcal{D}\underline{\alpha} \frac{p\cdot q}{\tilde{s}^3}\left[\bar{T}_1(\underline{\alpha})-\bar{T}_2 (\underline{\alpha})+(1-2v)\bar{T}_3(\underline{\alpha})-(1-2v)\bar{T}_4 (\underline{\alpha})\right],\nonumber\end{aligned}$$ where $s=m_b^2-(p+uq)^2$ and $\tilde{s}=m_b^2-(p+\zeta q)^2$ with $\zeta=(\alpha_{\bar{q}}-\alpha_q+(2v-1)\alpha_g+1)/2$. Definitions and explicit expressions for the higher twist DAs $\bar{\psi}^{(v)}(u)$ and $\psi^{(a)}(u)$ (two-particle twist-3), $\bar{h}(u)$ (two-particle twist-4), $S(\underline{\alpha })$, $\tilde{S}(\underline{\alpha })$ (three-particle twist-3) and $\bar{T}_i(\underline{\alpha })$ (three-particle twist-4) can be found in [@BBK]. Note that in contrast to the leading-twist results, $\Pi_V^{\mbox{\scriptsize higher twist}} \neq \Pi_A^{\mbox{\scriptsize higher twist}}$. Numerical Results for $F_{A,V}{^{\mbox{\scriptsize soft}}}$ =========================================================== Before presenting numerical values for the soft part of the form factors, we first discuss the numerical input to the sum rule (\[SRs\]). 
As for the photon DA, we use the values and parametrizations derived in [@BBK], notably for the normalization of the twist-2 matrix element (\[T2\]): $$(\chi\langle\bar u u \rangle)(1\,\mbox{GeV}) = -(0.050\pm 0.015)\,\mbox{GeV}.$$ As discussed in [@BBK], there is no conclusive evidence for $\phi_\gamma$ to differ significantly from its asymptotic form, so we set $$\phi_\gamma(u) = 6 u (1-u).$$ The remaining hadronic matrix elements characterizing higher-twist DAs are detailed in [@BBK]. Note that we evaluate scale-dependent quantities at the factorization scale $\mu_F^2 = m_B^2 - m_b^2$ [@bel]; the dependence of the form factors on $\mu_F$ is very small, as all numerically sizeable contributions are now available to NLO in QCD, which ensures good cancellation of the residual scale dependence.

As for the remaining parameters occurring in (\[SRs\]), we have the sum rule specific parameters $M^2$ and $s_0$, that is, the Borel parameter and the continuum threshold, respectively. In addition, the sum rule depends on $m_b$, the $b$ quark mass, and $f_B$, the leptonic decay constant of the $B$. $f_B$ can in principle be measured from the decay $B\to \ell\bar\nu_\ell$, which, due to the expected smallness of its branching ratio, BR$\,\sim O(10^{-6})$, has, up to now, escaped experimental detection. $f_B$ is one of the best-studied observables in lattice simulations with heavy quarks; the current world-average from unquenched calculations with two dynamical quarks is $f_B = (200\pm 30)\,$MeV [@fBlatt]. It can also be calculated from QCD sum rules: the most recent determinations [@fBSR] include $O(\alpha_s^2)$ corrections and find $(206\pm 20)\,$MeV and $(197\pm 23)\,$MeV, respectively. For consistency, we do not use these results, but replace $f_B$ in (\[SRs\]) by its QCD sum rule to $O(\alpha_s)$ accuracy, including the dependence on $s_0$ and $M^2$, and use the corresponding “optimum” ranges of continuum threshold and Borel parameter also in evaluating the Borel-transformed correlation function $\Pi_{V(A)}$, i.e. the l.h.s. of (\[SRs\]). For the $b$ quark mass, we use an average over recent determinations of the $\overline{\rm MS}$ mass, $\overline{m}_{b,\overline{\rm MS}}(\overline{m}_b) = (4.22\pm 0.08)\,$GeV [@bquark; @latmasses], which corresponds to the one-loop pole-mass $m_{b,{\rm 1L-pole}}=(4.60\pm 0.09)\,$GeV. With these values we find $f_B =(192\pm 22)\,$MeV (the error only includes variation with $m_b$ and $M^2$, at optimized $s_0$), in very good agreement with both lattice and QCD sum rules to $O(\alpha_s^2)$ accuracy. For $m_b = (4.51,4.60,4.69)\,$GeV the optimized $s_0$ are $(34.5,34.0,33.5)\,$GeV$^2$, and the relevant range of $M^2$ is (4.5–8)$\,$GeV$^2$.

In Fig. \[figure2\] we plot the different contributions to $F_{V,A}{^{\mbox{\scriptsize soft}}}(0)$ as a function of $M^2$, for $m_b=4.6\,$GeV, $s_0 = 34\,$GeV$^2$ and the central value of $\chi\langle\bar u u\rangle$. It is evident that the sum rule is dominated by twist-2 contributions and that both radiative corrections and higher-twist terms are well under control. Note also the minimal sensitivity to $M^2$, which indicates a “well-behaved” sum rule. Varying $m_b$, $s_0$ and the other input parameters within the ranges specified above, we find $$\label{007} F{^{\mbox{\scriptsize soft}}}_A(0) = 0.07\pm 0.02, \qquad F_V{^{\mbox{\scriptsize soft}}}(0) = 0.09\pm 0.02.$$ As mentioned before, the above results are obtained using the asymptotic form of the twist-2 photon DA.
Although there is presently no evidence for nonzero values of higher Gegenbauer moments, the $\phi_n$ in (\[T2DA\]), it may be illustrative to estimate their possible impact on the form factors. As a guideline for numerics, we choose $\phi^\gamma_2$ to be equal to $\phi_2^{\rho_\perp}$, its analogue for the transversely polarized $\rho$ meson, as determined in [@BB96]: $\phi^{\rho_\perp}_2(1\,{\rm GeV}) = 0.2\pm 0.1$. In Fig. \[figure4\] we plot the twist-2 contribution to the form factors obtained with the asymptotic $\phi_\gamma$ and the corrections induced by nonzero $\phi_2^\gamma(\mu_F)$. It is clear that the effect is moderate and at most about 20%. It is also interesting to note that positive values of $\phi_2^\gamma$, which are in accordance with assuming $\rho$ meson dominance for the photon, increase the form factor, which means that our results are likely to be an [*underestimate*]{} of the soft contributions rather than the contrary.

[Figure: fig3new.eps]

[Figure: figphi2labeled.eps]

[Figure: fig4new.eps]

In Fig. \[figure3\], we show the dependence of $F_{A,V}{^{\mbox{\scriptsize soft}}}(p^2)$ on the momentum transfer $p^2$, including the variation of all input parameters in their respective ranges. The form factors can be fitted by the following formula: $$F_{A(V)}{^{\mbox{\scriptsize soft}}}(p^2)=\frac{F_{A(V)}{^{\mbox{\scriptsize soft}}}(0)} {1-a_{A{(V)}}\left(p^2/m_B^2\right)+b_{A(V)} \left(p^2/m_B^2\right)^2}\,. \label{interpolation}$$ The fit parameters $F{^{\mbox{\scriptsize soft}}}_{A(V)}(0)$, $a_{A(V)}$ and $b_{A(V)}$ for each curve in the figure are given in Tab. \[table1\]. The above formula fits the full sum rule results to 1% accuracy for $0<p^2<17\,{\rm GeV}^2$, which corresponds to $1\,{\rm GeV}< E_\gamma< m_B/2$. Note that the uncertainty induced by $M^2$ and $s_0$ is very small: up to $5\% $ for $F_V$ and up to $10\% $ for $F_A$. The main theoretical uncertainty comes from $\chi\langle\bar{q}q\rangle$.

  Input parameters                                     (a)        (b)        (c)
  -------------------------------------------------- ---------- ---------- ----------
  $m_b$ \[GeV\]                                        4.69       4.60       4.51
  $s_0\ \mbox{[GeV}^2]$                                33.5       34.0       34.5
  $M^2\ \mbox{[GeV}^2]$                                8          6          5
  $\chi\langle\bar{u}u\rangle(\mu=1\,$GeV) \[GeV\]     $-0.035$   $-0.050$   $-0.065$
  Fit parameters                                       (a)        (b)        (c)
  $F_A{^{\mbox{\scriptsize soft}}}(0)$                 0.057      0.072      0.093
  $a_A$                                                1.97       1.97       1.95
  $b_A$                                                1.18       1.15       1.08
  $F_V{^{\mbox{\scriptsize soft}}}(0)$                 0.071      0.088      0.110
  $a_V$                                                2.09       2.08       2.05
  $b_V$                                                1.25       1.22       1.16

  : [ Input parameter sets for Fig. \[figure3\] and fit parameters for Eq. (\[interpolation\]). ]{}[]{data-label="table1"}

Calculation of $\lambda_B$ – Comparison to pQCD {#sec:5}
===============================================

As mentioned above, the $B\to\gamma$ form factors have also been calculated in the framework of pQCD factorization [@Kor; @CS; @LPW]. This approach employs the limit $m_b\to\infty$ (HQL), in which the soft contributions vanish. The form factors $F{^{\mbox{\scriptsize hard}}}_{A(V),{\mbox{\scriptsize HQL}}}$ are equal and at tree level given by $$\label{FApQCD} F_{A,{\mbox{\scriptsize HQL}}}{^{\mbox{\scriptsize hard}}}(E_\gamma) \equiv F_{V,{\mbox{\scriptsize HQL}}}{^{\mbox{\scriptsize hard}}}(E_\gamma) = \frac{f_B m_B Q_u}{2\sqrt{2} E_\gamma} \int\limits_0^\infty dk_+\,\frac{\Phi_+^B(k_+)}{k_+} =: \frac{f_B m_B Q_u}{2 E_\gamma}\,\frac{1}{\lambda_B},$$ where the light-cone DA $\Phi_+^B$ of the $B$ meson depends on the momentum of the light spectator quark, $k_+$.
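For a rough numerical orientation, the right-hand form of (\[FApQCD\]) is trivial to evaluate at tree level once a value of $\lambda_B$ is assumed. A minimal sketch (Python, purely illustrative; $f_B\simeq 0.2\,$GeV and $m_B=5.28\,$GeV are inserted by hand, and the two values of $\lambda_B$ merely bracket the estimates discussed in this section):

```python
def F_hard_tree(E_gamma, lam_B, f_B=0.2, m_B=5.28, Q_u=2.0 / 3.0):
    """Tree-level pQCD form factor, right-hand form of Eq. (FApQCD); all scales in GeV."""
    return f_B * m_B * Q_u / (2.0 * E_gamma * lam_B)

m_B = 5.28
for lam_B in (0.35, 0.6):                 # illustrative choices of lambda_B, in GeV
    print(lam_B, round(F_hard_tree(m_B / 2.0, lam_B), 2))
# lam_B = 0.6 GeV reproduces the tree-level value 0.22 quoted in the comparison below
```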
The calculation of radiative corrections to this formula has been the subject of a certain controversy, cf. Refs. [@Kor; @CS; @LPW]. The parameter $\lambda_B$ does not scale with $m_b$ in the HQL and hence is of natural size $O(\Lambda_{\mbox{\scriptsize QCD}})$; it has been quoted as $\lambda_B = 0.3\,$GeV [@BBNS1] and $\lambda_B = (0.35\pm 0.15)\,$GeV [@CS], but without calculation. Although presently any statement about the numerical size of $\lambda_B$ appears slightly precarious, since $\Phi_B^+$ and hence $\lambda_B$ depend in a yet unknown way on the factorization scale $\mu_F$, we nonetheless venture to present the (to the best of our knowledge) first calculation of $\lambda_B$. To that purpose, we recall that the hard contribution to $F_{A,V}$ can be obtained, in the QCD sum rule approach, from the local contributions to the correlation function $\Pi_{A,V}$, Eq. (\[eq:corr\]), which, to leading order in perturbation theory, correspond to the diagrams shown in Fig. \[fig:1\] and have been calculated in [@KSW]. In order to extract $\lambda_B$ via (\[FApQCD\]) from the local QCD sum rule for $F{^{\mbox{\scriptsize hard}}}_{A,V}$, we first have to find its HQL. QCD sum rules in the heavy quark limit have actually been studied in quite some detail, cf. [@BBBD], with the following result for the scaling relations of the sum rule specific parameters $M^2$ and $s_0$: $$\label{scaling} M^2\to 2 m_b\tau,\quad s_0\to m_b^2 + 2 m_b \omega_0.$$ Applying these relations to the sum rule for $F{^{\mbox{\scriptsize hard}}}_{V,A}$ and using (\[FApQCD\]), we obtain the following expression: $$\label{lamma} e^{-\bar\Lambda/\tau}\,\frac{f_B^2 m_B^2}{m_bE_\gamma}\,\frac{1}{\lambda_B} = \frac{3}{\pi^2 E_\gamma} \int_0^{\omega_0} d\omega \,\omega \, e^{-\omega/\tau}.$$ We note that the factor $1/E_\gamma$ on the r.h.s.  arises automatically in the HQL of the correlation function. $\bar\Lambda$ is the binding energy of the $b$ quark in the $B$ meson, $\bar\Lambda = m_B - m_b$. On the l.h.s. of (\[lamma\]), the expression $f_B^2 m_B^2/m_b$ still contains $1/m_b$ corrections; in the rigorous HQL we have $f_B^2 m_B^2/m_b \to f_{\mbox{\scriptsize stat}}^2$. $f_{\mbox{\scriptsize stat}}$ is known both from lattice calculations and QCD sum rules; in the same spirit that we applied for calculating $F_{A,V}{^{\mbox{\scriptsize soft}}}$, we replace, for the numerical evaluation of $\lambda_B$, $f_{\mbox{\scriptsize stat}}^2$ by its sum rule [@BBBD]: $$\label{fstat} f_{\mbox{\scriptsize stat}}^2 e^{-\bar\Lambda/\tau} = \frac{3}{\pi^2} \int_0^{\omega_0} d\omega\,\omega^2 e^{-\omega/\tau},$$ where we have suppressed (small) condensate contributions. Combining (\[lamma\]) and (\[fstat\]), we find $$\label{lambda} \lambda_B = \frac{\displaystyle \int_0^{\omega_0} d\omega\,\omega^2 e^{-\omega/\tau}}{\displaystyle \int_0^{\omega_0} d\omega\,\omega e^{-\omega/\tau}}.$$ This is our sum rule for $\lambda_B$, which, admittedly, is the first rather than the last word in the story of how to calculate $\lambda_B$. It can and should be improved by including both radiative (which will also settle the issue of scale dependence) and nonperturbative corrections. 
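The ratio (\[lambda\]) involves only elementary integrals and is easily evaluated numerically. A minimal sketch (Python; the value $\omega_0=1\,$GeV and the range of $\tau$ quoted here anticipate the determination in the next paragraph and are used purely for illustration):

```python
from math import exp
from scipy.integrate import quad

def lambda_B(tau, omega_0=1.0):
    """Eq. (lambda): ratio of moments of w*exp(-w/tau) on [0, omega_0]; all scales in GeV."""
    num, _ = quad(lambda w: w**2 * exp(-w / tau), 0.0, omega_0)
    den, _ = quad(lambda w: w * exp(-w / tau), 0.0, omega_0)
    return num / den

for tau in (0.5, 0.75, 1.0):
    print(f"tau = {tau} GeV: lambda_B = {lambda_B(tau):.3f} GeV")
# gives roughly 0.54-0.61 GeV across this range, consistent with the estimate quoted below
```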
For fixing the sum rule parameters $\omega_0$ and $\tau$, we exploit the fact that $\bar\Lambda = m_B-m_b \approx 0.7\,$GeV is known; the corresponding sum rule reads $$\bar\Lambda = \frac{\displaystyle\int_0^{\omega_0} d\omega\,\omega^3 e^{-\omega/\tau}}{\displaystyle \int_0^{\omega_0} d\omega\,\omega^2 e^{-\omega/\tau}},$$ which can be derived from (\[fstat\]) by taking one derivative in $1/\tau$. None of the sum rules (\[lamma\]), (\[fstat\]), (\[lambda\]) is “well-behaved” in the sense that none of them features a stability plateau in $\tau$, since the perturbative term is not counterbalanced by a nonperturbative one. Nonetheless, taking $0.5\,\mbox{GeV}<\tau<1\,\mbox{GeV}$ as indicated by our preferred range of $M^2$ used above and the scaling laws (\[scaling\]), we find $\omega_0 = 1\,$GeV and then, from (\[lambda\]), $$\lambda_B \approx 0.57\,{\rm GeV}.$$ Given the neglect of $O(\alpha_s)$ corrections and nonperturbative terms, it is difficult to attribute an error to that number. For this reason, we check whether our result is compatible with the local duality approximation, which corresponds to the limit $\tau\to\infty$. From (\[lamma\]) and (\[fstat\]) we then obtain the interesting relation $$\lambda_B = \frac{8}{9}\,\bar\Lambda,$$ which for $m_b = 4.6\,$GeV implies $\lambda_B = 0.6\,$GeV. We thus conclude that the value of $\lambda_B$ is set by $\bar\Lambda$ rather than $\Lambda_{\mbox{\scriptsize QCD}}$ and depends strongly on the actual value of $m_b$; at present, nothing meaningful can be said about the error associated with $\lambda_B$ and we thus quote as our final result $$\lambda_B = 0.6\,{\rm GeV}.$$

We are now in a position to compare the numerical size of the pQCD result $F{^{\mbox{\scriptsize hard}}}_{A(V),{\mbox{\scriptsize HQL}}}$ to $F_{A(V)}{^{\mbox{\scriptsize soft}}}$. At tree level, we have $F_{A(V),{\mbox{\scriptsize HQL}}}{^{\mbox{\scriptsize hard}}}(E_\gamma=m_B/2)=0.22,$ according to (\[FApQCD\]). Notwithstanding the additional hadronic uncertainties involved at NLO in QCD, we employ the models for $\Phi_B^+$ advocated in [@CS] to obtain $F_{A(V),{\mbox{\scriptsize HQL}}}^{\mbox{\scriptsize hard},NLO}(E_\gamma = m_B/2)=0.21\pm 0.01$ for $\mu^2_F = m_B^2-m_b^2$, our choice of the factorization scale. This number has to be compared to (\[007\]), the soft contributions. We conclude that $F_{A,V}{^{\mbox{\scriptsize soft}}}/F{^{\mbox{\scriptsize hard}}}_{A(V),{\mbox{\scriptsize HQL}}}\approx 0.3$ at maximum photon energy, so that the parametric scaling $F_{A,V}{^{\mbox{\scriptsize soft}}}\sim O(1/m_b)$ is numerically relaxed.

Summary and Conclusions
=======================

The relevance of $B$ physics for extracting information on weak interaction parameters and new physics is limited by our lack of knowledge of nonperturbative QCD. Recent progress in describing the notoriously difficult nonleptonic decays in perturbative QCD factorization has raised hopes that a sufficiently accurate solution to the problem is around the corner. As much as this is a highly desirable goal, it is nonetheless necessary to critically examine the theoretical uncertainty of the method, which, at least at present, is set by the restriction to the heavy quark limit. A direct [*theoretical*]{} test of pQCD factorization in nonleptonic decays is currently not feasible, and any significant [*experimental*]{} deviation of measured decay rates or CP asymmetries from pQCD predictions is as likely to be attributed to new physics effects as to uncertainties in the predictions themselves.
An indirect theoretical test becomes possible, however, in the admittedly phenomenologically not very attractive channel $B\to\gamma e\nu$, where alternative methods of calculation exist and allow one to assess effects suppressed by powers of $m_b$. In this letter, we have calculated corrections to the hard pQCD form factors, which are parametrically suppressed by one power of $m_b$. These “soft” corrections are induced by photon emission at large distances and involve the hadronic structure of the photon. We have also presented the first calculation of $\lambda_B$, the first negative moment of the $B$ meson distribution amplitude, a very relevant parameter for pQCD calculations. The calculation is admittedly rather crude, but amenable to improvement. Comparing the numerical size of the pQCD result to the soft contributions, we found that the latter are indeed sizeable, $O(30\%)$. This result implies an immediate caveat for pQCD analyses involving photon emission, in particular $B\to K^*\gamma$ and $B\to\rho\gamma$, e.g. [@BFS; @Bosch]. In a wider sense, it also adds a possible question mark to pQCD analyses of purely hadronic $B$ decays and emphasizes the relevance of power-suppressed corrections to the heavy quark limit.

[**Acknowledgements**]{}\ We are grateful to C. Sachrajda for useful discussions.

[99]{}

M. Beneke [*et al.*]{}, Phys. Rev. Lett. [**83**]{} (1999) 1914 \[arXiv:hep-ph/9905312\]; Nucl. Phys. B [**591**]{} (2000) 313 \[arXiv:hep-ph/0006124\].

M. Beneke [*et al.*]{}, Nucl. Phys. B [**606**]{} (2001) 245 \[arXiv:hep-ph/0104110\].

C.W. Bauer, D. Pirjol and I.W. Stewart, Phys. Rev. Lett. [**87**]{} (2001) 201806 \[arXiv:hep-ph/0107002\].

A. Khodjamirian, Nucl. Phys. B [**605**]{} (2001) 558 \[arXiv:hep-ph/0012271\];\ A. Khodjamirian, T. Mannel and P. Urban, arXiv:hep-ph/0210378.

S. Descotes-Genon and C.T. Sachrajda, arXiv:hep-ph/0209216.

E. Lunghi, D. Pirjol and D. Wyler, arXiv:hep-ph/0210091.

G.P. Korchemsky, D. Pirjol and T.M. Yan, Phys. Rev. D [**61**]{} (2000) 114510 \[arXiv:hep-ph/9911427\].

A. Khodjamirian, G. Stoll and D. Wyler, Phys. Lett. B [**358**]{} (1995) 129 \[arXiv:hep-ph/9506242\].

A. Ali and V.M. Braun, Phys. Lett. B [**359**]{} (1995) 223 \[arXiv:hep-ph/9506248\].

P. Ball, V.M. Braun and N. Kivel, arXiv:hep-ph/0207307 (to appear in NPB).

M. Beneke, T. Feldmann and D. Seidel, Nucl. Phys. B [**612**]{} (2001) 25 \[arXiv:hep-ph/0106067\].

S.W. Bosch and G. Buchalla, Nucl. Phys. B [**621**]{} (2002) 459 \[arXiv:hep-ph/0106081\].

A. Khodjamirian and D. Wyler, arXiv:hep-ph/0111249.

M.A. Shifman, A.I. Vainshtein and V.I. Zakharov, Nucl. Phys. B [**147**]{} (1979) 385; Nucl. Phys. B [**147**]{} (1979) 448.

P. Ball [*et al.*]{}, Nucl. Phys. B [**529**]{} (1998) 323 \[arXiv:hep-ph/9802299\].

E. Bagan, P. Ball and V.M. Braun, Phys. Lett. B [**417**]{} (1998) 154 \[arXiv:hep-ph/9709243\].

P. Ball and V.M. Braun, Phys. Rev. D [**58**]{} (1998) 094016 \[arXiv:hep-ph/9805422\].

P. Ball and R. Zwicky, JHEP [**0110**]{} (2001) 019 \[arXiv:hep-ph/0110115\].

V.M. Belyaev [*et al.*]{}, Phys. Rev. D [**51**]{} (1995) 6177 \[hep-ph/9410280\].

C.W. Bernard, Nucl. Phys. Proc. Suppl. [**94**]{} (2001) 159 \[hep-lat/0011064\].

M. Jamin and B.O. Lange, Phys. Rev. D [**65**]{} (2002) 056005 \[arXiv:hep-ph/0108135\];\ A.A. Penin and M. Steinhauser, Phys. Rev. D [**65**]{} (2002) 054006 \[arXiv:hep-ph/0108110\].

A.H. Hoang, Phys. Rev. D [**61**]{} (2000) 034005 \[hep-ph/9905550\];\ A. Pineda, JHEP [**0106**]{} (2001) 022 \[hep-ph/0105008\].

V. Lubicz, Nucl. Phys. Proc. Suppl.
[**94**]{} (2001) 116 \[hep-lat/0012003\]. P. Ball and V.M. Braun, Phys. Rev. D [**54**]{} (1996) 2182 \[arXiv:hep-ph/9602323\]. E. Bagan [*et al.*]{}, Phys. Lett. B [**278**]{} (1992) 457. [^1]: [email protected] [^2]: [email protected] [^3]: The difference in sign in the 1st term on the r.h.s. is due to our conventions for the epsilon tensor: we define Tr$[\gamma_{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma}\gamma_5]= 4i\epsilon_{\mu\nu\rho\sigma}$, in contrast to [@CS], where a different sign was chosen. [^4]: Note that this statement applies to the [*full*]{} correlation function with finite $m_b$, not only to the heavy quark limit.
--- abstract: '**Abstract**: Formulas of $\pi(x)$-fine structure are presented.' author: - Lubomir Alexandrov title: | **The Eratosthenes Progression\ $p_{k+1}=\pi^{-1}(p_k), k=0,1,2,...,~p_0 \in N \setminus P$\ Determines An Inner Prime Number Distribution Law [@MTCP]** ---

Let $\pi(p_n)$ denote the index $\# n$ of the prime $p_n \in P=\{2,3,5,7,11,13...\}~~(\pi(p_n)=n)$ and let $\pi^{-1}(n)$ denote the prime $p_n$ with index $\# n~~ (\,\, \pi^{-1}(n)=p_n; \,\, \pi^{-1}(n) \equiv \mbox{Prime}[n], \, Mma) $. For the sets of primes (“Eratosthenes rays”) $$\label{eq1} r_{p_0} = \{ p_{k+1} = \pi^{-1} ( p_k ): p_0 \in N, k=0,1,2,... \}$$ the assertions $$\label{eq2} \bigcap_{p_0 \in \overline{C}} r_{p_0} = \O, ~~~ \bigcup_{p_0 \in P} r_{p_0} \subset \bigcup_{p_0 \in \overline{C}} r_{p_0},~~~ \bigcup_{p_0 \in N} r_{p_0} \subset \bigcup_{p_0 \in \overline{C}} r_{p_0},~~~ P = \bigcup_{p_0 \in \overline{C}} r_{p_0},$$ (where $\overline{C} = N \setminus P=\{ 1 \} \bigcup C$, $N$ is the set of naturals and $C$ is the set of composites) are true [@Sofia1964; @NonAs].

From (\[eq2\]) we obtain matrix representations of the prime and natural numbers: $^2P = \{ r_{p_0} \}_{p_0 \in \overline{C}} = \{ p_{\mu \nu} \}~~~~~~ {\rm and}~~~~~~^2N=\{\overline{C}, \, \,^2P \}.$ The left upper corner of matrix $^2N$ looks like [@NonAs], p. 18 (a short computational sketch reproducing these rows is given after the references): $^2 N = \left [ \begin{array}{ccccccccccc} 1 & 2 & 3 & 5 & 11 & 31 & 127 & 709 & 5381 & 52771 \ldots \\ 4 & 7 & 17 & 59 & 277 & 1787 & 15299 & \ldots & & \\ 6 & 13 & 41 & 179 & 1063 & 8527 & \dots & & & \\ 8 & 19 & 67 & 331 & 2221 & 19577 & \ldots & & & \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & & & \end{array}\right ].$

Let $\pi(n',n''),~ n', n'' \in N$ denote the number of primes within a given interval $(n', n'')$. The elements of matrices $^2P$ and $^2N$ satisfy the relations: $~~p \in ~^2P_{-1} \Longleftrightarrow \pi(p) \in P, ~~~~~ p \in P_1 \Longleftrightarrow \pi(p) \in \overline{C}$,                where $^2P_{-1} =~^2P~\setminus P_1,~ P_1 = {\rm column}({p_{\mu 1}})$; $$\label{eq3} \left . \begin{array}{l} \hspace{-8mm}\pi(p_{\mu 1},0) = p_{\mu 0} -1, \quad \mu \geq 1, \, p_{\mu 0} \in \overline C ; \\ \hspace{-8mm}\pi(p_{\mu \nu_1},p_{\mu \nu_2})=p_{\mu, \nu_2 -1} - p_{\mu, \nu_1 -1} - 1,~~ \mu, \nu_1 \geq 1,~ \nu_2=\nu_1+\alpha,~ \alpha=1,2,3...; \\ \hspace{-8mm}\pi(p_{\mu_1 \nu_1},p_{\mu_2 \nu_2})=| p_{\mu_1, \nu_1 -1} - p_{\mu_2, \nu_2 -1}| - 1,~~ \mu_i, \nu_i \geq 1,~ i=1,2,~\mu_1\neq \mu_2. \end{array} \right \}$$

The differences $p_{p_0,k+1}-p_{p_0,k} > p_{p_0,k}(\ln{p_{p_0,k}}-1),~~ k=0,1,2,...,~ p_0=2, p_0 \in C$ are monotonically increasing along the rows of $^2P$, and the explicit law (\[eq1\]) of the $\pi(x)$–fine structure, together with its corollaries (\[eq3\]), is valid. From this it follows that one can construct the prime number spiral and web in the plane $R^2$, re-creating the prime number distribution in detail. Applications of these results to the identification problem of new transactinides, as well as to some current quantum physics and molecular biology problems, are suggested.

[99]{}

Lubomir Alexandrov, [*The Eratosthenes progression $p_{k+1}=\pi^{-1}(p_k), \\ k=0,1,2,...,~p_0 \in N \setminus P$ determines an inner prime number distribution law*]{},\ Second Int. Conf. “Modern Trends in Computational Physics”, July 24-29, 2000, Dubna, Russia, Book of Abstracts, p. 19.

Lubomir Alexandrov, [*Multiple Eratostenes sieve and the prime number distribution on the plane*]{}, Sofia, 1964, unpublished.
Lubomir Alexandrov, [*On the nonasymptotic prime number distribution*]{}, math.NT/9811096, lanl, 1998.
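The Eratosthenes rays (\[eq1\]) and the rows of the matrix $^2N$ displayed above are easy to reproduce by direct computation. A minimal sketch (Python with sympy, whose function prime(n) plays the role of $\pi^{-1}(n) \equiv \mbox{Prime}[n]$; purely illustrative):

```python
from sympy import prime

def eratosthenes_ray(p0, length=6):
    """Iterate p_{k+1} = pi^{-1}(p_k) = prime(p_k), starting from p0 (Eq. (eq1))."""
    ray = [p0]
    for _ in range(length - 1):
        ray.append(prime(ray[-1]))
    return ray

# rows of ^2N start at p0 in N \ P = {1} U C = {1, 4, 6, 8, ...}
for p0 in (1, 4, 6, 8):
    print(p0, eratosthenes_ray(p0))
# 1 [1, 2, 3, 5, 11, 31]
# 4 [4, 7, 17, 59, 277, 1787]
# 6 [6, 13, 41, 179, 1063, 8527]
# 8 [8, 19, 67, 331, 2221, 19577]
```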
--- abstract: 'For any $g\geq 3$ we show that the pull-backs of the Mumford Morita Miller classes of the moduli space ${{\mathcal{M}}}_g$ of curves of genus $g$ to a component of a stratum of projective abelian differentials over ${{\mathcal{M}}}_g$ vanish. We deduce that strata are affine.' author: - Ursula Hamenstädt date: 'January 9, 2020' title: On the cohomology of strata of abelian differentials --- [^1] Introduction ============ For $g\geq 3$ the *moduli space* ${{\mathcal{M}}}_g$ of complex curves of genus $g$ is a complex orbifold. It is the quotient of *Teichmüller space* ${{\mathcal{T}}}_g$ of genus $g$ under the action of the *mapping class group* ${\rm Mod}(S_g)$. The following question can be found in [@FL08], see also [@HL98] for a motivation. Does ${{\mathcal{M}}}_g$ admit a stratification with all strata affine subvarieties of codimension $\leq g-1$? Here a variety is affine if it does not contain any complete proper subvariety. The complement of an irreducible effective ample divisor in the Deligne Mumford compactification $\overline{{\mathcal{M}}}_g$ of ${{\mathcal{M}}}_g$ is affine. This was used by Fontanari and Looijenga to show that the complement of the *Thetanull* divisor in ${{\mathcal{M}}}_g$ parameterizing curves with an effective even theta characteristic is affine for every $g\geq 4$ (Proposition 2.1 of [@FL08]). They also show that the answer to the question is yes for all $g\leq 5$. Another approach towards an answer to this question which is closer to our viewpoint is due to Chen [@Ch19]. The main goal of this article is to give some additional evidence that the answer to the above question is affirmative. To this end consider the *Hodge bundle* over ${{\mathcal{M}}}_g$ whose fiber over a complex curve $X$ is just the $g$-dimensional vector space of holomorphic one-forms on $X$. The projectivization $P:{{\mathcal{P}}}\to {{\mathcal{M}}}_g$ of the Hodge bundle admits a natural stratification whose strata consist of projective differentials with the same number and multiplicities of zeros. These strata need not be connected, but the number of connected components is at most 3 [@KtZ03]. The *tautological ring* of ${{\mathcal{M}}}_g$ is the subring of the rational cohomology ring of ${{\mathcal{M}}}_g$ generated by the *Mumford Morita Miller* classes $\kappa_k\in H^{2k}({{\mathcal{M}}}_g,\mathbb{Q})$ (see [@M87] and [@Lo95] for a comprehensive discussion of these classes). Denote by $\psi$ the Chern class of the universal line bundle over the fibers of ${{\mathcal{P}}}$. The following is the main result of this article. \[strata\] Let ${{\mathcal{Q}}}\subset {{\mathcal{P}}}$ be a component of a stratum of projective abelian differentials; then $P^*\kappa_k\vert {{\mathcal{Q}}}=\psi\vert {{\mathcal{Q}}}=0$ for all $k\geq 1$. The projective Hodge bundle extends to the Deligne Mumford compactification $\overline{{\mathcal{M}}}_g$ of ${{\mathcal{M}}}_g$. Teichmüller curves in ${{\mathcal{Q}}}$ extend to complete curves in the closure of ${{\mathcal{Q}}}$ in this extension which violate the statement of Theorem \[strata\] for $k=1$. Since all but the first Mumford Morita Miller classes vanish on ${{\mathcal{M}}}_3$ [@Lo95], Theorem \[strata\] for two strata in $g=3$ is due to Looijenga and Mondello [@LM14]. As an application of Theorem \[strata\], we give a positive answer to a question of Chen [@Ch19]. \[affinecor\] Components of strata of projective abelian differentials are affine. 
The article [@Ch19] contains some partial results in the direction of Corollary \[affinecor\], using a completely different approach. It also contains a complete analysis of the case $g=3$. Additional related results can be found in [@FL08] and [@Mo17].

Corollary \[affinecor\] can be used to construct a new stratification of ${{\mathcal{M}}}_g$ with all strata affine subvarieties of codimension $\leq g-2$ in the cases $g=3$ and $g=4$. We also obtain some information for arbitrary $g$. Namely, the closure in ${{\mathcal{P}}}$ of the component $\mathbb{P}{{\mathcal{H}}}(2,\dots,2)^{\rm odd}$ of the stratum of projective abelian differentials with all zeros of order two and odd spin structure projects onto ${{\mathcal{M}}}_{g}$. The closure in ${{\mathcal{P}}}$ of the component $\mathbb{P}{{\mathcal{H}}}(2,\dots,2,4)^{\rm odd}$ of projective differentials with a single zero of order 4, all remaining zeros of order 2 and odd spin structure projects to a divisor ${{\mathcal{D}}}$ in ${{\mathcal{M}}}_{g}$. Similarly, the union of the closures in ${{\mathcal{P}}}$ of the components of strata of projective differentials with all zeros of even order, odd spin structure and either at least one zero of order at least 6 or at least two zeros of order at least 4 projects to a divisor ${{\mathcal{D}}}_2$ in ${{\mathcal{D}}}$ (see Section \[oddspin\] for details). Corollary \[affinecor\] is used to show

\[nocomplete\] The locus ${{\mathcal{M}}}_{g}-{{\mathcal{D}}}$ is affine, and ${{\mathcal{D}}}-{{\mathcal{D}}}_2$ does not contain a complete curve.

As another consequence of Theorem \[strata\], we obtain an alternative proof of the following well known result (a proof is for example contained in [@FL08]). For its formulation, the *hyperelliptic locus* in ${{\mathcal{M}}}_g$ is the subset of all curves which are hyperelliptic, that is, they admit a degree two branched cover over $\mathbb{C}P^1$.

\[hyperelliptic\] The hyperelliptic locus in ${{\mathcal{M}}}_g$ is affine.

There exists a natural fiber bundle ${{\mathcal{C}}}\to {{\mathcal{M}}}_g$, the so-called *universal curve*, whose fiber over a complex curve $X$ is just $X$. A *surface bundle* $E$ with base a simplicial complex $B$ and fiber a closed surface $S_g$ of genus $g$ is the pull-back of the universal curve by a continuous map $f:B\to {{\mathcal{M}}}_g$, a so-called *classifying map*. Homotopic maps give rise to homeomorphic surface bundles. Denote by $\nu^*$ the *vertical cotangent bundle* of a surface bundle $\Pi:E\to B$, that is, the cotangent bundle of the fibers. The bundle $\nu^*$ admits a natural structure of a complex line bundle over $E$ whose Chern class $c_1(\nu^*)\in H^2(E,\mathbb{Z})$ (or, alternatively, its Euler class) is defined. The main tool for the proof of Theorem \[strata\] is an analysis of the Poincaré dual of the cohomology class $c_1(\nu^*)$ in such a surface bundle $E$. Along the way we give a purely topological proof of the following result of Korotkin and Zograf [@KZ11]. For its formulation, let ${{\mathcal{P}}}_1\subset {{\mathcal{P}}}\to {{\mathcal{M}}}_g$ be the divisor in the projectivized Hodge bundle consisting of projective abelian differentials with at least one zero which is not simple. The projectivized Hodge bundle is a Poincaré duality space, so we can ask for the dual of ${{\mathcal{P}}}_1$, viewed as a homology class relative to the boundary of the Deligne Mumford compactification of ${{\mathcal{P}}}$.
\[korotkinzograf\] The class in $H^2({{\mathcal{P}}},\mathbb{Q})$ which is dual to the divisor ${{\mathcal{P}}}_1$ equals $2P^*\kappa_1 -(6g-6)\psi$.

The work [@KZ11] also contains a computation of the dual of the extension of ${{\mathcal{P}}}_1$ to the projective Hodge bundle over the Deligne Mumford compactification of ${{\mathcal{M}}}_g$ which we do not duplicate here. The methods of proof in [@KZ11] stem from mathematical physics. An algebraic geometric proof of Theorem \[korotkinzograf\] is due to Chen [@Ch13].

The organization of this article is as follows. In Section \[branchedmultisection\], we introduce branched multisections of a surface bundle over a surface. We use the Hodge bundle over ${{\mathcal{M}}}_g$ to show that for every surface bundle over a surface, the Poincaré dual of the Chern class $c_1(\nu^*)$ of the vertical cotangent bundle can be represented by a branched multisection. In Section \[signatureasintersection\] we give a topological proof of Theorem \[korotkinzograf\] which builds on the results from Section \[branchedmultisection\]. Variations of the ideas which go into the proof are used in Section \[oddspin\] to show Theorem \[strata\], Corollary \[nocomplete\] and Corollary \[hyperelliptic\].

Branched multi-sections {#branchedmultisection}
=======================

Let ${{\mathcal{M}}}_g$ be the moduli space of curves of genus $g\geq 2$. This is a complex orbifold. The moduli space of curves of genus $g$ with a single marked point (puncture) is the *universal curve* ${{\mathcal{C}}}\to {{\mathcal{M}}}_g$, a fiber bundle (in the orbifold sense) over ${{\mathcal{M}}}_g$ whose fiber over the point $X\in {{\mathcal{M}}}_g$ is just the complex curve $X$. The moduli spaces ${{\mathcal{M}}}_g$ and ${{\mathcal{C}}}$ are quotients of the *Teichmüller spaces* ${{\mathcal{T}}}_g$ and ${{\mathcal{T}}}_{g,1}$ of *marked* complex curves of genus $g$ and of marked complex curves of genus $g$ with one marked point, respectively, under the corresponding mapping class groups ${\rm Mod}(S_g)$ and ${\rm Mod}(S_{g,1})$. The marked point forgetful map induces a surjective [@FM12] homomorphism $\Theta:{\rm Mod}(S_{g,1})\to {\rm Mod}(S_g)$. This homomorphism fits into the *Birman exact sequence* $$\label{birman} 1\to \pi_1(S_g)\to {\rm Mod}(S_{g,1})\xrightarrow{\Theta} {\rm Mod}(S_g)\to 1.$$ Any surface bundle $E$ over a surface $B$ with fiber genus $g$ is a topological manifold which can be represented as the pull-back of ${{\mathcal{C}}}$ by a continuous map $\phi:B\to {{\mathcal{M}}}_g$, called a *classifying map* for $E$. Up to homeomorphism, the bundle only depends on the homotopy class of $\phi$. Equivalently, it only depends on the conjugacy class of the induced *monodromy homomorphism* $\phi_*=\rho:\pi_1(B)\to {\rm Mod}(S_g)$. As a consequence, we may choose the map $\phi$ to be smooth (as a map between orbifolds). Then $\Pi:E\to B$ is a smooth fiber bundle. In the remainder of this section we always assume that this is the case.
\[sectionexists\] The surface bundle $E\to B$ has a section if and only if there is a lift $\tilde \rho:\pi_1(B)\to {\rm Mod}(S_{g,1})$ of the monodromy homomorphism $\rho:\pi_1(B)\to {\rm Mod}(S_g)$. If $\sigma:B\to E$ is a section and if $x\in B$ is an arbitrarily chosen point, then the image under $\sigma$ of any based loop $\alpha$ at $x$ is a based loop at $\sigma(x)$. Via the classifying map $\phi:B\to {{\mathcal{M}}}_g$, this loop defines a lift of the element $\phi_*\alpha\in {\rm Mod}(S_g)$ to ${\rm Mod}(S_{g,1})$. As this construction is compatible with group multiplication, it defines a lift of $\rho$ to ${\rm Mod}(S_{g,1})$. Vice versa, let us assume that the monodromy homomorphism $\rho:\pi_1(B)\to {\rm Mod}(S_g)$ admits a lift $\tilde \rho: \pi_1(B)\to {\rm Mod}(S_{g,1})$. Since both $B$ and ${{\mathcal{C}}}$ are classifying spaces for their orbifold fundamental groups, there exists a smooth map $F:B\to {{\mathcal{C}}}$ with monodromy $F_*=\tilde \rho$. Then the projection of $F$ to a map $f:B\to {{\mathcal{M}}}_g$ induces the homomorphism $\rho:\pi_1(B)\to {\rm Mod}(S_g)$. As $\rho$ is the monodromy of a classifying map for $E$, the surface bundle defined by $f$ is diffeomorphic to $E$. Since the map $F$ is a lift of $f$ to ${{\mathcal{C}}}$ by construction, it defines a section of $E$. Recall that a *branched covering* of a surface $\Sigma$ over a surface $B$ is a finite to one surjective map $f:\Sigma\to B$ such that there exists a finite set $A\subset B$, perhaps empty, with the property that $f\vert f^{-1}(B-A)$ is an ordinary covering projection. We will use the following generalization of the notion of a section. \[branchedmulti\] A *branched multi-section of degree $d$* of a surface bundle $\Pi:E\to B$ is defined to be a smooth injective immersion $f:\Sigma\to E$ where $\Sigma$ is a (not necessarily connected) closed surface and such that $\Pi\circ f:\Sigma\to B$ is a branched covering of degree $d$. Any section of a surface bundle is a branched multisection as in Definition \[branchedmulti\]. A *multisection* of $E$ is a smooth injective immersion $f:\Sigma\to E$ so that $\Pi\circ f$ is an unbranched covering. Surface bundles may or may not admit multisections, although it is difficult to construct examples of surface bundles which do not admit multisections. Some evidence for the existence of examples is a result of Chen and Salter [@CS18], inspired by earlier work of Mess: For $g\geq 5$ and $m\geq 1$, there does not exist a finite index subgroup of ${\rm Mod}(S_g)$ which admits a lift to ${\rm Mod}(S_{g,m})$. In particular, the Birman exact sequence (\[birman\]) does not virtually split. Complete complex curves in ${{\mathcal{M}}}_g$ constructed from complete intersections provided examples of surface bundles over surfaces so that the image of the monodromy homomorphism is a finite index subgroup of ${\rm Mod}(S_g)$. As this monodromy homomorphism necessarily has a large kernel, such surface bundles may admit sections in spite of [@CS18]. From now on we assume that all surfaces are oriented. Then the image $f(\Sigma)$ of a branched multisection $f:\Sigma\to E$ is a cycle in $E$ which defines a homology class $[f(\Sigma)]\in H_2(E,\mathbb{Z})$. Recall from the introduction that the vertical cotangent bundle $\nu^*$ of $E$ is the cotangent bundle of the fibers of the surface bundle $\Pi:E\to B$. This is a smooth complex line bundle on $E$. The main goal of this section is to show. 
\[branchedpoincare\] A surface bundle over a surface admits a branched multisection whose homology class is Poincaré dual to the Chern class $c_1(\nu^*)\in H^2(E,\mathbb{Z})$ of the vertical cotangent bundle $\nu^*$.

Our strategy is to construct explicitly a cycle in $E$ representing the Poincaré dual of $c_1(\nu^*)$ using the *moduli space of abelian differentials*. We begin by introducing the objects we need. The moduli space of abelian differentials for a surface $S_g$ of genus $g$ is the complement of the zero section in the Hodge bundle ${{\mathcal{H}}}\to {{\mathcal{M}}}_g$ over the moduli space of curves. A holomorphic one-form on a Riemann surface $X$ of genus $g$ has precisely $2g-2$ zeros counted with multiplicity. Denote as in the introduction by $P:{{\mathcal{P}}}\to {{\mathcal{M}}}_g$ the projectivized Hodge bundle over ${{\mathcal{M}}}_g$ and let ${{\mathcal{P}}}_1<{{\mathcal{P}}}$ be the closure of the subspace of all projective holomorphic one-forms with at least one zero which is not simple. Then ${{\mathcal{P}}}_1$ is a complex subvariety of ${{\mathcal{P}}}$ of complex codimension one. More explicitly, the set of all projective abelian differentials with precisely one zero of order two and $2g-4$ zeros of order one is a smooth complex suborbifold of ${{\mathcal{P}}}$ of complex codimension one which is contained in ${{\mathcal{P}}}_1$ by definition. Its complement in ${{\mathcal{P}}}_1$ is a complex subvariety ${{\mathcal{P}}}_2$ of codimension one which is a union of strata of smaller dimension. Here by a smooth complex orbifold we mean an orbifold with a finite orbifold cover which is a complex manifold. The Hodge bundle ${{\mathcal{H}}}$ is a complex vector bundle of rank $g\geq 3$ (in the orbifold sense) and hence the fiber of its sphere subbundle ${{\mathcal{S}}}$ is a sphere of real dimension $2g-1\geq 3$. Let $Q:{{\mathcal{S}}}\to {{\mathcal{P}}}$ be the natural projection.

Let $\Pi:E\to B$ be a surface bundle over a surface $B$ with fibre $S_g$. We may assume that $E$ is defined by a smooth map $\phi:B\to {{\mathcal{M}}}_g$. In particular, each fibre $\Pi^{-1}(x)$ has a complex structure which depends smoothly on $x$. We have (see Lemma 2.7 of [@H19] for a proof of this standard fact).

\[lift\] There exists a smooth map $\theta:B\to {{\mathcal{S}}}$ such that $P\circ Q\circ \theta=\phi$.

As $Q^{-1}{{\mathcal{P}}}_2\subset {{\mathcal{S}}}$ is of real codimension 4, by transversality we may assume that $Q(\theta(B))$ is disjoint from ${{\mathcal{P}}}_2$. We may furthermore assume that $Q(\theta(B))$ intersects the divisor ${{\mathcal{P}}}_1$ transversely in isolated smooth points. For each $x\in B$ let $\delta(x)\subset \Pi^{-1}(x)$ be the set of zeros of the holomorphic one-form $\theta(x)$ on the Riemann surface $x$, counted with multiplicities. Then $\delta(x)$ is a divisor of degree $2g-2$ on $\Pi^{-1}(x)$ which defines the canonical bundle of $x$. Write $\Delta(x)$ to denote the unweighted set $\delta(x)$, that is, the support of the divisor $\delta(x)$. This set consists of either $2g-2$ or $2g-3$ points, and points $x\in B$ so that the cardinality of $\Delta(x)$ equals $2g-3$ are isolated. Moreover, $$\Delta=\cup_{x\in B}\Delta(x)$$ is a closed subset of $E$. Call a point $y\in \Delta$ *singular* if it is a double zero of the abelian differential $\theta(\Pi(y))$. A point in $\Delta$ which is not singular is called *regular*. By the choice of the map $\theta$, the set $A\subset \Delta$ of singular points of $\Delta$ is finite.
Moreover, if $y\in \Delta$ is singular, then $y$ is the only singular point in $\Pi^{-1}(\Pi(y))$. We have

\[cycle\] There exists a closed oriented surface $\Sigma$ and an injective continuous map $f:\Sigma\to E$ with image $\Delta$ which is smooth on $\Sigma-f^{-1}(A)$. The map $\Pi\circ f:\Sigma\to B$ is a branched covering of degree $2g-2$, branched at $f^{-1}(A)$, and each branch point has branch index 2.

The zeros of an abelian differential depend smoothly on the differential. Thus if $x\in B$ and if $y\in \Delta(x)$ is a simple zero of the differential $\theta(x)$, then there is a neighborhood $U$ of $y$ in $E$ such that the intersection $U\cap \Delta$ is diffeomorphic to a disk and that the restriction of the projection $\Pi$ to $U\cap \Delta$ is a diffeomorphism onto a neighborhood of $x$ in $B$. In particular, if $A\subset \Delta$ is the finite set of double zeros of the differentials $\theta(x)$ $(x\in B)$, then the restriction of the projection $\Pi:E\to B$ to $\Delta-\Pi^{-1}(\Pi(A))$ is a $(2g-2)$-sheeted covering of $B-\Pi(A)$.

Choose a triangulation $T$ of the base surface $B$ into $k>0$ triangles such that each of the finitely many points of $\Pi(A)$ is a vertex of the triangulation and that no triangle contains more than one of these points. Then each of the triangles of $T$ has precisely $2g-2$ preimages under the map $\Pi\vert \Delta$. In particular, the preimage of $T$ in $\Delta$ defines a decomposition of $\Delta$ into $(2g-2)k$ triangles with disjoint interiors. Each edge of such a triangle is adjacent to an edge of precisely one other triangle. But this just means that $\Delta$ is a topological surface (see [@Hat01] for a nice exposition of this fact). The orientation of $B$ pulls back to an orientation of $\Delta$. As a consequence, there exists a closed oriented surface $\Sigma$ and an injective continuous map $f:\Sigma\to \Delta$ so that $\Pi\circ f$ is a branched covering of degree $2g-2$. Using again the fact that zeros of abelian differentials depend smoothly on the differential, the map $f$ can be chosen to be smooth away from the preimage of points in $A$. The lemma now follows from the fact that the restriction of the map $\Pi\circ f$ to $\Sigma -f^{-1}(\Pi^{-1}(\Pi(A)))$ is a covering of degree $2g-2$, and a point in $\Pi(A)$ has precisely $2g-3$ preimages under $\Pi\circ f$.

Lemma \[cycle\] does not explicitly state that the map $f$ is a branched multisection as this requires that $f$ is a smooth immersion. To show that we may assume that this is indeed the case, we have to analyze the map $f$ near the points in $f^{-1}(A)$. This is carried out in the next lemma. In its statement and later on, we allow ourselves to modify the map $\theta:B\to {{\mathcal{S}}}$ by a smooth homotopy which also changes the classifying map $P\circ Q\circ \theta$.

\[branchedcover\] The map $f:\Sigma\to \Delta$ is a branched multisection. At each point $y\in A$ which corresponds to a positive (or negative) intersection point of $Q\theta(B)$ with ${{\mathcal{P}}}_1$, the oriented tangent plane of $f(\Sigma)$ at $y$ equals the oriented tangent plane of the fibre (or the tangent plane of the fibre with the reversed orientation).

Choose a complex structure for $B$. Let $x\in B$ be such that $Q\theta(x)$ is a positive transverse intersection point with ${{\mathcal{P}}}_1$.
Then up to changing $\theta$ with a homotopy supported in a small neighborhood of $x$, we may assume that there is a neighborhood $W$ of $x$ in $B$ such that the restriction of $\theta$ to $W$ is holomorphic as a map into the complex orbifold ${{\mathcal{H}}}\supset {{\mathcal{S}}}$. Note that since $\theta$ is holomorphic in $W$ and the projection ${{\mathcal{H}}}\to {{\mathcal{M}}}_g$ is holomorphic, the local surface bundle $\Pi^{-1}(W)\subset E$ is a complex manifold. Let $y\in A\subset \Delta$ be the double zero of the abelian differential $\theta(x)$. Choose holomorphic coordinates $(u,v)\in \mathbb{C}^2$ on a neighborhood of $y$ in the complex surface $\Pi^{-1}(W)$ so that in these coordinates, the projection $\Pi:E\to B$ is the projection $(u,v)\to v$. We may assume that the range of the coordinates $(u,v)$ contains a set of the form $U\times V$ for open disks $U,V\subset \mathbb{C}$ centered at $0$, that the singular point $y$ corresponds to the origin $0$ and that $y$ is the only singular point in $U\times V$. Moreover, we may assume that for every $0\not=v\in V$ the intersection $(U\times \{v\})\cap \Delta$ consists of precisely two points. Any holomorphic one-form $\omega$ on a Riemann surface $X$ can be represented in a holomorphic local coordinate $z$ on $X$ in the form $\omega(z)=h(z)dz$ with a holomorphic function $h$. Zeros of $\omega$ of order one or two correspond to zeros of the function $h$ of the same order. Thus up to a biholomorphic coordinate change and perhaps decreasing the sizes of the sets $U$ and $V$, we may expand the holomorphic differentials $\theta(v)$ $(v\in V)$ on the coordinate disk $U$ as a power series about the point $0$. This expansion is of the form $$\theta(v)(u)=(\sum_{n=0}^\infty a_n(v)u^n)du$$ with holomorphic functions $a_n:V\to \mathbb{C}$ which satisfy $a_0(0)=a_1(0)=0$ and $a_2(0)=1$. By transversality, we also may assume that $a_0^\prime(0)\not=0$. Apply the holomorphic implicit function theorem to the equation $q(u,v)=\sum_{n=0}^\infty a_n(v)u^n=0$ at the point $v=u=0$. This is possible since $a_0^\prime(0)\not=0$ by assumption. We find that locally near $v=u=0$, the solution of this equation is a complex curve which is tangent at $v=u=0$ to the fiber $v\equiv 0$. (A model case is the family $\theta(v)(u)=(v+u^2)du$). As this complex curve is contained in the topological surface $\Delta$, we conclude that $\Delta$ is smooth near the point $y$. Moreover, the map $f$ maps the tangent plane of $\Sigma$ at $f^{-1}(y)$ orientation preserving onto the tangent plane at $y$ of the fiber. The above reasoning extends to the case that $\theta(x)$ is a negative transverse intersection point with ${{\mathcal{P}}}_1$. Namely, in this case we may assume that the map $\theta$ is antiholomorphic near $x$ in suitable complex coordinates. We then can write $\theta$ as a composition of a holomorphic map with complex conjugation. To summarize, up to perhaps modifying $\theta$ and hence $\Pi\circ \theta=\phi$ with a smooth homotopy, we may assume that the map $f$ is a smooth embedding which is tangent to the fibers exactly at the double zeros of the differentials $\theta(x)$ $(x\in B)$. Moreover, the differential of $f$ at the preimage of such a double zero preserves the orientation if and only if $\theta(x)$ is a positive transverse intersection point with ${{\mathcal{P}}}_1$. Then $f:\Sigma\to \Delta$ has all the properties stated in the lemma. The next observation is the last remaining step for the proof of Theorem \[branchedpoincare\]. 
As $\Sigma$ is a compact oriented surface, the map $f:\Sigma\to E$ defines a second homology class $\delta\in H_2(E,\mathbb{Z})$. We have \[firstchern\] The homology class $\delta\in H_2(E,\mathbb{Z})$ is Poincaré dual to $c_1(\nu^*)$. Recall that the set $A\subset \Delta$ of singular points of $\Delta $ is finite, and that $\Delta\subset E$ is a smoothly embedded surface. Thus to show that the homology class $\delta$ is Poincaré dual to $c_1(\nu^*)$, in view of the fact that every second integral homology class in $E$ can be represented by a smooth map from a closed oriented surface (for smoothness, note that $E$ is a classifying space for its fundamental group and recall the proof of Lemma \[sectionexists\]), it suffices to show the following. Let $M$ be a smooth closed oriented surface and let $\alpha:M\to E$ be a smooth map such that $\alpha(M)$ intersects $\Delta$ transversely in finitely many regular points. Then the number of such intersection points, counted with sign (and multiplicity, however the multiplicity is one by assumption on transversality) equals the degree of the pull-back line bundle $\alpha^*(\nu^*)$ on $M$. That this holds true can be seen as follows. For each $z\in E-\Delta$, the restriction of the holomorphic one-form $\theta(\Pi(z))$ to the tangent space of the fiber of $E\to B$ at $z$ does not vanish and hence it defines a nonzero element $\beta(z)$ of the fiber of $\nu^*$ at $z$. Associating to $z$ the linear functional $\beta(z)$ defines a trivialization of $\nu^*$ on $E-\Delta$. We claim that at each regular point $y\in \Delta$, the restriction of this trivialization to the oriented fiber $\Pi^{-1}(\Pi(y))$ has rotation number $1$ about $y$ with respect to a trivialization which extends across $y$. However, this is equivalent to the statement that the divisor on the Riemann surface $\Pi^{-1}(\Pi(y))$ defined by $\theta(\Pi(y))$ defines the holomorphic cotangent bundle of $\Pi^{-1}(\Pi(y))$. Namely, the holomorphic one-form $\theta(\Pi(y))$ on the surface $\Pi^{-1}(\Pi(y))$ defines an euclidean metric on $\Pi^{-1}(\Pi(y))-\Delta$ which extends to a singular euclidean metric on all of $\Pi^{-1}(\Pi(y))$. As $y$ is a simple zero of the abelian differential $\theta(\Pi(y))$ by assumption, it is a four-pronged singular point for this singular euclidean metric. Now let $D\subset \Pi^{-1}(\Pi(y))$ be a small disk about $y$ not containing any other point of $\Delta$, with boundary $\partial D$. Choose a nowhere vanishing vector field $\xi$ on $\partial D$ with $\theta(\Pi(y))(\xi)>0$. As $y$ is a four-pronged singular point, the rotation number of $\xi$ with respect to a vector field which extends to a trivialization of the tangent bundle of $D$ equals $-1$. By duality, the rotation number of the trivialization of the cotangent bundle $\nu^*$ of $\Pi^{-1}(\Pi(y))-\Delta$ defined by $\theta(\Pi(y))$ on $\partial D$ with respect to a trivialization of the cotangent bundle which extends to all of $D$ equals $1$. Now if $M$ is a closed oriented surface and if $\alpha:M\to E$ is a smooth map which intersects $\Delta$ transversely in finitely many regular points, then via modifying $\alpha$ with a smooth homotopy we may assume that the following holds true. Let $u\in M$ be such that $\alpha(u)\in \Delta$. Then $\alpha$ maps a neighborhood of $u$ in $M$ diffeomorphically to a neighborhood of $\alpha(u)$ in the fibre $\Pi^{-1}(\Pi\alpha(u))$. 
As the restriction of $\nu^*$ to $E-\Delta$ admits a natural trivialization, the same holds true for the pull-back of $\nu^*$ to $M-\alpha^{-1}(\Delta)$. Furthermore, for each $u\in M$ with $\alpha(u)\in \Delta$, the induced trivialization of the pull-back bundle $\alpha^*(\nu^*)$ on $M-\alpha^{-1}(\Delta)$ has rotation number $1$ or $-1$ at $u$ with respect to a trivialization of $\alpha^*\nu^*$ on a neighborhood of $u$. Here the sign depends on whether $\alpha$, viewed as a diffeomorphism from a neighborhood of $u$ in $M$ onto a neighborhood of $\alpha(u)$ in $\Pi^{-1}(\Pi(u))$, is orientation preserving or orientation reversing. This just means that the degree of the line bundle $\alpha^*\nu^*$ on $M$ equals the number of intersection points of $\alpha(M)$ with $\Delta$ counted with signs (and multiplicities; by the assumption on $\alpha$, these multiplicities are all one). In other words, the degree of the line bundle $\alpha^*(\nu^*)$ on $M$ equals the intersection number $\alpha(M)\cdot \Delta$. The Poincaré dual of ${{\mathcal{P}}}_1$ {#signatureasintersection} ======================================== In this section we apply the results of Section \[branchedmultisection\] to represent the signature of a surface bundle as an intersection number. This yields a purely topological proof of Theorem \[korotkinzograf\] [@KZ11]. The projectivized Hodge bundle $P:{{\mathcal{P}}}\to {{\mathcal{M}}}_g$ extends to a bundle over the Deligne Mumford compactification $\overline{{\mathcal{M}}}_g$ of ${{\mathcal{M}}}_g$ which we denote by $P:\overline{{\mathcal{P}}}\to \overline{{\mathcal{M}}}_g$. A standard spectral sequence argument shows that the second rational cohomology group of $\overline{{\mathcal{P}}}$ is generated by the pull-back of the second rational cohomology group of $\overline{{\mathcal{M}}}_g$ together with the cohomology class $\psi$ of the universal line bundle of the fibre (see Lemma 1 of [@KZ11] for details). Since $\overline{{\mathcal{P}}}$ is a Poincaré duality space, there is a cohomology class $\eta\in H^2(\overline{{\mathcal{P}}},\mathbb{Q})$ which is Poincaré dual to the closure $\overline{{\mathcal{P}}}_1$ of ${{\mathcal{P}}}_1$. The class $\eta$ can be expressed as a rational linear combination of the class $\psi$ and the pull-back of a set of generators of $H^2(\overline{{\mathcal{M}}}_g,\mathbb{Q})$. Such a set of generators consists of the first Chern class $\lambda$ of the Hodge bundle as well as the Poincaré duals $\delta_j$ $(0\leq j\leq \lfloor g/2\rfloor)$ of the boundary divisors. Here $\delta_0$ is dual to the divisor of stable curves with a single non-separating node, and for $1\leq j\leq g/2$, the class $\delta_j$ is dual to the divisor of stable curves with a node separating the stable curve into a curve of genus $j$ and a curve of genus $g-j$. Korotkin and Zograf calculated this linear combination (the formula before Remark 2 on p.456 of [@KZ11]). \[kappaone\] $$\eta=24P^*\lambda-(6g-6)\psi-P^*(2\delta_0-3\sum_{j=1}^{\lfloor g/2\rfloor} \delta_j).$$ Another proof of Theorem \[kappaone\] using tools from algebraic geometry is due to Chen [@Ch13]. In view of the identity $\kappa_1=12\lambda$ on ${{\mathcal{M}}}_g$ [@HM98], the formula in Theorem \[korotkinzograf\] is a special case of Theorem \[kappaone\] for which we present below a topological argument. Let as before $\Pi:E\to B$ be a surface bundle over a surface, defined by a smooth map $\phi:B\to {{\mathcal{M}}}_g$. 
Let $\theta:B\to {{\mathcal{S}}}$ be a lift of $\phi$ to the sphere subbundle of the Hodge bundle and let $\delta\in H_2(E,\mathbb{Z})$ be the Poincaré dual of the Chern class $c_1(\nu^*)$ of the vertical cotangent bundle defined by $\theta$ in Proposition \[firstchern\]. We have \[selfinter2\] $c_1(\nu^*)\cup c_1(\nu^*)(E)=\delta\cdot \delta= c_1(\nu^*)(\delta)$. Both equations follow from Poincaré duality for the bundle $E$. The first Mumford Morita Miller class $\kappa_1\in H^2({{\mathcal{M}}}_g,\mathbb{Q})$ is defined as follows [@M87]. Let as before ${{\mathcal{C}}}\to {{\mathcal{M}}}_g$ be the universal curve and let $c_1(\nu^*)$ be the first Chern class of the relative dualizing sheaf of ${{\mathcal{C}}}$. On fibres over smooth points $x\in {{\mathcal{M}}}_g$, the relative dualizing sheaf is just the sheaf of sections of the vertical cotangent bundle $\nu^*$. Then $$\kappa_1=\Pi_*(c_1(\nu^*)\cup c_1(\nu^*))$$ where $\Pi_*$ is the Gysin push-forward map obtained by integration over the fiber. In particular, for any smooth map $\phi:B\to{{\mathcal{M}}}_g$ which defines the surface bundle $E\to B$ we have $$\label{kappasigma} \kappa_1(\phi(B))=c_1(\nu^*)\cup c_1(\nu^*)(E)$$ and hence \[tau\] $\phi^*\kappa_1(B)=c_1(\nu^*)(\delta).$ Lemma \[branchedcover\] is used to show \[intersection1\] $Q\theta(B)\cdot {{\mathcal{P}}}_1=2\delta\cdot\delta=2c_1(\nu^*)(\delta)$; in particular, the restriction of the class $\eta$ to ${{\mathcal{P}}}\vert {{\mathcal{M}}}_g$ satisfies $$\eta=2P^*\kappa_1+ a\psi=24 P^*\lambda +a\psi$$ for some $a\in \mathbb{Q}$. Using the notations from Section \[branchedmultisection\], let $\Delta\subset E$ be the set of zeros of the differentials in $\theta(B)$ and let $A\subset \Delta$ be the set of singular points. By Lemma \[branchedcover\], there is a closed oriented surface $\Sigma$ and a smooth embedding $f:\Sigma\to E$ so that $f(\Sigma)=\Delta$. The map $\Pi\circ f$ is a covering branched at the points in $A$. Each such branch point has branch index two, and the index is positive (or negative) if the corresponding intersection point of $Q\theta(B)$ with ${{\mathcal{P}}}_1$ is a positive (or negative) intersection point. Since $A$ is precisely the set of branch points of the map $\Pi\circ f$, the Hurwitz formula shows that the tangent bundle of $\Sigma$ can be represented in the form $f^*(\Pi\vert \Delta)^*(TB)\otimes H^{-1}$ where $H$ is the line bundle on $\Sigma$ with divisor $f^{-1}(A)$. Then the normal bundle $N$ of $f(\Sigma)$ can be written as $N=\nu\otimes H^+\otimes (H^-)^{-1}$ where $H^+$ corresponds to the intersection points of $Q\theta(B)$ with ${{\mathcal{P}}}_1$ of positive intersection index, and $H^-$ corresponds to the points with negative intersection index. This implies that the self-intersection number in $E$ of the surface $f(\Sigma)\subset E$ equals $c_1(f^*\nu)(\Sigma)+b$ where $$b=Q\theta(B)\cdot {{\mathcal{P}}}_1$$ is the number of branch points of $\Pi\circ f$, counted with sign. By Poincaré duality (see Corollary \[selfinter2\]), we have $$c_1(\nu^*)(\delta)=\delta\cdot \delta= c_1(\nu)(\delta)+b=-c_1(\nu^*)(\delta)+b$$ and hence $b=2c_1(\nu^*)(\delta)$. Together with Corollary \[tau\] and the fact that $\kappa_1=12\lambda$ as classes in $H^2({{\mathcal{M}}}_g,\mathbb{Q})$ [@HM98], this completes the proof of the lemma. By Lemma \[intersection1\], for the proof of Theorem \[kappaone\] we are left with calculating the coefficient $a\in \mathbb{Q}$ in the expression in Lemma \[intersection1\]. 
First note that in the case $g=2$, we have $\lambda=0$ [@HM98] and $$\eta=2P^*\kappa_1-6\psi=-6\psi.$$ Namely, the complex dimension of the Hodge bundle equals 2 and hence the fibre of the bundle ${{\mathcal{P}}}\to {{\mathcal{M}}}_2$ over the moduli space of genus 2 complex curves is just $\mathbb{C}P^1$. A *Weierstrass point* on a genus 2 complex curve $X$ is a double zero of a holomorphic one-form on $X$, and there are no other double zeros of any holomorphic one-forms on $X$. Now $X$ has precisely $6=-3\chi(S_2)$ Weierstrass points and hence the intersection number of the fibre of the bundle ${{\mathcal{P}}}\to {{\mathcal{M}}}_2$ with the divisor ${{\mathcal{P}}}_1$ equals $6$. As the evaluation on $\mathbb{C}P^1$ of the Chern class of the universal bundle on $\mathbb{C}P^1$ equals $-1$, the formula in the theorem follows from Poincaré duality. For arbitrary $g\geq 3$ choose a complex curve $X\in {{\mathcal{M}}}_g$ which admits an unbranched cover of degree $d$ onto a curve $Y\in {{\mathcal{M}}}_2$. Note that $d=g-1$, as the Euler characteristics of an unbranched cover satisfy $2-2g=d(2-2\cdot 2)$. The projective line $\mathbb{C}P^1$ of projective holomorphic one-forms on $Y$ pulls back to a projective line of projective holomorphic one-forms on $X$. The preimage of a projective differential with simple zeros is a differential with only simple zeros, but the preimage of a differential with a double zero is a holomorphic differential with $d$ double zeros. As an abelian differential $q$ with a double zero at a point $p$ can be deformed to a differential with two simple zeros with a deformation which keeps $q$ fixed outside of an arbitrarily small neighborhood of $p$, this complex line of projective abelian differentials can be deformed to a complex line of differentials with at most one double zero each without changing the total intersection index. As a consequence, the intersection number with ${{\mathcal{P}}}_1$ of this lifted sphere equals $6d=6g-6$. This completes the proof of Theorem \[korotkinzograf\]. The second cohomology group of strata {#oddspin} ===================================== In this final section we apply the results from Section \[branchedmultisection\] to prove Theorem \[strata\], Corollary \[affinecor\], Corollary \[nocomplete\] and Corollary \[hyperelliptic\]. We begin with the proof of Theorem \[strata\]. Thus let ${{\mathcal{Q}}}\subset {{\mathcal{P}}}$ be a component of a stratum of projective abelian differentials on a surface of genus $g\geq 3$ where as before, $P:{{\mathcal{P}}}\to {{\mathcal{M}}}_g$ is the projectivized Hodge bundle. For $k\geq 0$ the cohomology group $H^{2k}({{\mathcal{Q}}},\mathbb{Q})$ can be thought of as the space of linear functionals on the $2k$-th rational homology group of ${{\mathcal{Q}}}$. As $H_{2k}({{\mathcal{Q}}},\mathbb{Q})=H_{2k}({{\mathcal{Q}}},\mathbb{Z})\otimes \mathbb{Q}$, it suffices to evaluate for $k\geq 1$ the class $P^*\kappa_k\in H^{2k}({{\mathcal{Q}}},\mathbb{Q})$ on a class in $H_{2k}({{\mathcal{Q}}},\mathbb{Z})$. Such a homology class can be represented as the image of a finite simplicial complex $B$ of homogeneous dimension $2k$ with $H_{2k}(B,\mathbb{Z})= \mathbb{Z}$ by a continuous map $\phi:B\to {{\mathcal{Q}}}$. We may furthermore assume that $B$ is a Poincaré duality space. In the case $k=1$, we may assume that $B$ is a closed oriented surface and that $\phi$ is smooth. The pull-back of the universal curve by the map $P\circ \phi:B\to {{\mathcal{M}}}_g$ is a surface bundle $\Pi:E\to B$. 
Since the image of $B$ under $\phi$ is contained in the component ${{\mathcal{Q}}}$, the zeros of the projective differentials in $\phi(B)$ define a multisection $\Delta$ of $E$. A component of this multisection only contains zeros of the same order. Thus the multisection $\Delta$ has at least as many components as the number of different orders of zeros of the differentials in ${{\mathcal{Q}}}$. The universal line bundle over the fibers of ${{\mathcal{P}}}$ pulls back via the map $(\phi\circ \Pi)^*$ to a line bundle $\tau$ on $E$. In the proof of the following lemma and later on, we always denote by $\zeta^*$ the dual of a line bundle $\zeta$, or, equivalently, the inverse of $\zeta$ in the group of all complex line bundles on $E$. In particular, $\nu^*$ denotes the vertical cotangent bundle of $E$. We have \[trivial\] Let $\Delta_j$ be a component of $\Delta$ with the property that the order of the zeros of the points in $\Delta_j$ equals $k_j$. Then $\nu^*\vert \Delta_j=\tau^{\otimes (k_j+1)}\vert \Delta_j$. The lemma is well known, but we were not able to locate it in the literature, so we provide a proof. A point $y\in \Delta_j$ is a zero of order $k_j$ of a projective holomorphic one-form on the fiber of $E$ through $y$. The fiber $\tau_y$ of $\tau$ at $y$ can be identified with the complex line of holomorphic one-forms in this projective class. As $y$ is a zero of this holomorphic one-form of order $k_j$, there are local coordinates $z$ on $\Pi^{-1}(\Pi(y))$ near $y$, with $y$ corresponding to $z=0$, such that a nonzero differential in $\tau_y$ can locally near $y$ be written in the form $az^{k_j}dz$ for some $a\in \mathbb{C}^*$. This differential then defines a singular euclidean metric near $y$, which has a cone point of cone angle $2\pi(k_j+1)$ at $y$. A tangent vector $Y\in T_y\Pi^{-1}(\Pi(y))$ of the fiber of $E$ through $y$ defines a $\mathbb{C}$-valued functional $\beta_Y$ on $\tau_y$ by associating to a differential $\omega\in \tau_y$ the *complex* length of $Y$ with respect to the singular euclidean metric defined by $\omega$, that is, we distinguish real and imaginary part of this length, and we distinguish the orientation. If $0\not=Y$ then we have $\beta_{aY}=\beta_Y$ for some $a\in \mathbb{C}$ if and only if $a=e^{2\pi i \ell/(k_j+1)}$ for some $\ell\in \mathbb{Z}$. As a consequence, $Y\to \beta_Y$ defines a nonzero element in the fiber of the bundle $\nu^*\otimes (\tau^{k_j+1})^*$ at $y$. Since this element depends continuously on $y\in \Delta_j$ by construction, we conclude that the line bundle $\nu^*\otimes (\tau^{k_j+1})^*\vert \Delta_j$ is trivial. In other words, $\nu^*\vert \Delta_j$ is isomorphic to $\tau^{\otimes (k_j+1)}\vert \Delta_j$ which shows the lemma. In the proof of Proposition \[firstchern\], we used triviality of the vertical cotangent bundle on the complement of the branched multisection to identify the Poincaré dual of the class $c_1(\nu^*)$. The following observation serves as a substitute. \[trivial2\] The line bundle $\nu\otimes \tau$ is trivial on $E-\Delta$. Let $\alpha\in \nu^*$ be any vector in the vertical cotangent bundle of $E$ at a point $y\in E-\Delta$. We may view $\alpha$ as a $\mathbb{C}$-linear functional on the holomorphic tangent space of the fiber at $y$. 
As the dimension of the complex vector space of $\mathbb{C}$-linear functionals $T_y\Pi^{-1}(\Pi(y))\to \mathbb{C}$ equals one, there is precisely one holomorphic one-form $\Lambda(\alpha)$ in the fiber $\tau_y$ of the bundle $\tau$ at $y$ whose restriction to $T_y\Pi^{-1}(\Pi(y))$ coincides with $\alpha$. Then $\alpha\to \Lambda(\alpha)$ defines an isomorphism between $\nu^*$ and $\tau$ on $E-\Delta$, and hence it defines a nowhere vanishing section of the bundle $(\nu^*)^*\otimes \tau=\nu\otimes \tau$ on $E-\Delta$ which is what we wanted to show. Let $\Delta_1,\dots,\Delta_m$ be the components of $\Delta$. The restriction of the projection $\Pi$ to any of these components is a finite unbranched cover $\Delta_j\to B$. Hence each of these components defines a homology class $[\Delta_j]\in H_{2k}(E,\mathbb{Z})$. For each $j$ let $k_j$ be the order of the zero of the differentials in $\phi(B)$ at the points in $\Delta_j$. Denote as before by $\psi$ the Chern class of the universal bundle over the fibers of ${{\mathcal{P}}}$. The following lemma is an analog of Proposition \[firstchern\] and is the key step towards the proof of Theorem \[strata\]. \[dual\] Assume that either 1. $B$ is a closed surface or 2. $\phi^*\psi=0$; then the Chern class $c_1(\nu^*\otimes \tau^*)$ of the complex line bundle $\nu^*\otimes \tau^*\to E$ is Poincaré dual to $\sum_jk_j[\Delta_j]$. Since $B$ is a Poincaré duality space by assumption, the same holds true for $E$. Thus there exists a cohomology class $\alpha\in H^2(E,\mathbb{Z})$ which is Poincaré dual to $\delta=\sum_jk_j[\Delta_j]$. The class $\alpha\in H^2(E,\mathbb{Z})$ is the first Chern class of a complex line bundle $L$ on $E$. Namely, complex line bundles on $E$ are classified up to topological equivalence by classes in the cohomology group $H^1(E,{{\mathcal{R}}}^*)$ where ${{\mathcal{R}}}^*$ is the sheaf of continuous nowhere vanishing $\mathbb{C}$-valued functions on $E$. Since the sheaf ${{\mathcal{R}}}$ of all continuous $\mathbb{C}$-valued functions on $E$ is fine, associating to a line bundle its Chern class defines an isomorphism between the group of line bundles on $E$ and the cohomology group $H^2(E,\mathbb{Z})$ via the long exact sequence for sheaf cohomology defined by the short exact sequence $0\to {{\mathcal{R}}}\to {{\mathcal{R}}}^*\to \mathbb{Z}\to 0.$ As we will need some more precise information about the line bundle $L$, we also sketch the explicit construction of $L$ as for example described on p. 141 of [@GH78]. Let ${{\mathcal{F}}}$ be the sheaf of continuous complex valued functions on $E$ whose restrictions to a fibre of $E$ are holomorphic and not identically zero. Let ${{\mathcal{F}}}^*$ be the sheaf of functions in ${{\mathcal{F}}}$ which vanish nowhere. A global section of ${{\mathcal{F}}}/{{\mathcal{F}}}^*$ is given by an open cover $\{U_\alpha\}$ of $E$ and a function $f_\alpha\in {{\mathcal{F}}}$ on $U_\alpha$ for each $\alpha$ so that $$\frac{f_\alpha}{f_\beta}\in {{\mathcal{F}}}^*(U_\alpha\cap U_\beta).$$ We first claim that a component $\Delta_j$ of $\Delta$ defines a section of ${{\mathcal{F}}}/{{\mathcal{F}}}^*$. Namely, choose a cover ${{\mathcal{U}}}=\{U_\alpha\}$ of $E$ which consists of open contractible sets with the following additional properties. The intersection of each set $U_\alpha$ with $\Delta_j$ is connected. 
There is a function $f_\alpha\in {{\mathcal{F}}}$ on $U_\alpha$ such that for each point $y\in U_\alpha\cap \Delta_j$, the restriction of $f_\alpha$ to $\Pi^{-1}(\Pi(y))\cap U_\alpha$ has a simple zero at $y$. Moreover, these are the only zeros of $f_\alpha$. By the construction of $\Delta_j$, such functions $f_\alpha$ exists provided that the sets $U_\alpha$ are sufficiently small. Namely, let $z$ be a local holomorphic coordinate on the intersection of $U_\alpha$ with the fibers of $E\to B$ depending continuously on the base. The functions $f_\alpha$ can be chosen as the fiberwise coordinate, normalized in such a way that they vanish precisely at $\Delta_j\cap U_\alpha$. Any two such sections of ${{\mathcal{F}}}/{{\mathcal{F}}}^*$ differ by a section of ${{\mathcal{F}}}^*$ and hence these sections define a class in $H^1({{\mathcal{U}}},{{\mathcal{F}}}^*)$. A standard refinement argument (see [@GH78] for details) then yields a cohomology class $\zeta(\Delta_j)\in H^1(E,{{\mathcal{F}}}^*)$ and hence a line bundle $L_j$ on $E$. The line bundle $L_j$ has the following properties. 1. $L_j\vert E-\Delta_j$ is trivial. 2. The restriction of $L_j$ to each fiber of $E$ is holomorphic. 3. $L_j$ has a continuous fiberwise holomorphic section which vanishes to first order precisely at the points of $\Delta_j$. In particular, the degree of the restriction of $L_j$ to a fiber of $E$ equals the degree of the covering $\Pi\vert \Delta_j:\Delta_j\to B$. Furthermore, these properties characterize the line bundle $L_j$ uniquely. The first property is immediate from the construction. The second property also follows from the construction. Namely, as the functions $f_\alpha$ are fiberwise holomorphic by assumption, the same holds true for $f_\alpha/f_\beta$ and hence for the restriction of the transition functions of $L_j$ to a fiber of the surface bundle. A section of $L_j$ with the properties stated in the third property is given by the defining functions $f_\alpha$ on the sets $U_\alpha$. As they only vanish on $\Delta_j$ and are compatible with the transition functions which define the line bundle $L_j$, they define indeed a global section of $L_j$. This section is moreover fiberwise holomorphic, and it vanishes to first order at the intersection of a fiber with $\Delta_j$. We next show that properties (1)-(3) above imply that $c_1(L_j)$ is Poincaré dual to $[\Delta_j]$ from which uniqueness follows. Namely, let $B$ be a closed surface and let $\phi:B\to E$ be a continuous map which is transverse to $\Delta_j$ (viewed as a map between simplicial complexes). Then $\phi(B)$ intersects $\Delta_j$ in finitely many points, counted with multiplicity. The pull-back by $\phi$ of the global section of $L_j$ is a section of $\phi^*L_j$ which vanishes precisely at the points in $\phi^{-1}(\Delta_j)$, with multiplicity equal to the multiplicity of the intersection points. This shows that $c_1(L_j)$ is Poincaré dual to $[\Delta_j]$. We refer to the proof of Proposition \[firstchern\] for a more detailed discussion in a similar situation. Now assume that $B$ is a closed surface. Assume furthermore without loss of generality that the map $\phi$ is smooth. Then $\Delta$ is a smoothly embedded surface in $E$ transverse to the fibers. The pull-back of the universal bundle over the fibers of ${{\mathcal{P}}}\to {{\mathcal{M}}}_g$ is a line bundle $\xi$ on $B$ which pulls back to the line bundle $\tau$. Line bundles on a surface are uniquely determined by their degree up to topological equivalence. 
Let $d\in \mathbb{Z}$ be the degree of $\xi$. Choose a point $p\in B$ and a trivialization of $\xi$ on $B-\{p\}$ as well as a trivialization of $\xi$ on a disk neighborhood $D$ of $p$. These trivializations determine a global fiberwise holomorphic section of $\nu^*$ on $\Pi^{-1}(B-\{p\})$ and on $\Pi^{-1}(D)$ which vanishes precisely on the components $\Delta_j$ of $\Delta$, to the order $k_j$. Namely, they define a lift of the map $\phi\vert B-\{p\}$ and $\phi\vert D$ to the Hodge bundle over ${{\mathcal{M}}}_g$. The line bundle $\xi$ is obtained from its restrictions to $B-\{p\}$ and $D$ by gluing the fibers over a circle $S^1$ in $D-\{p\}\subset B-\{p\}$ surrounding the point $p$. This fiberwise gluing map is given by a homomorphism $S^1\to S^1\subset \mathbb{C}^*$ whose degree is the degree of the line bundle. Furthermore, gluing the fibers of $\xi$ over the circle $S^1$ extends to a gluing of the holomorphic sections of $\nu^*$ over the circle $S^1$ which are uniquely determined by the points in these fibers (and, of course, the map $\phi$). By naturality of the tensor product of line bundles, a global section of the tensor product $\nu^*\otimes \tau^*$ is given by the same construction, but with the gluing map corresponding to the trivial line bundle on $B$. As a consequence, the bundle $\nu^*\otimes \tau^*$ admits a global fiberwise holomorphic section which vanishes precisely on the components of $\Delta$, and the vanishing degree on the component $\Delta_j$ equals $k_j$. By the beginning of this proof and the fact that the Poincaré dual of $\sum_jk_j[\Delta_j]$ is the first Chern class of the line bundle $\otimes_j L_j^{k_j}$, this means that $c_1(\nu^*\otimes \tau^*)$ is Poincaré dual to $\sum_jk_j[\Delta_j]$, which is what we wanted to show. Similarly, if $B$ is arbitrary and if $\phi^*\psi=0$, then the pull-back by $\phi$ of the universal bundle on ${{\mathcal{P}}}$ is trivial and therefore its pull-back $\tau$ is trivial as well. Therefore the map $\phi$ admits a lift to the Hodge bundle, and as in the proof of Proposition \[firstchern\], we conclude that $\nu^*$ admits a global fiberwise holomorphic section which only vanishes on the components $\Delta_j$ of $\Delta$, to the correct order. This then implies as before that $c_1(\nu^*)$ is Poincaré dual to $\sum_jk_j[\Delta_j]$. The lemma is proven. We use this to show \[degree2\] Let ${{\mathcal{Q}}}$ be a component of a stratum, let $B$ be a closed surface and let $\phi:B\to {{\mathcal{Q}}}$ be a smooth map. Then $\kappa_1(P\phi(B))=\psi(\phi(B))=0$. We continue to use the assumptions and notations from Lemma \[dual\]. Consider in particular the components $\Delta_j$ of the cycle $\Delta$. These are smoothly embedded surfaces in the surface bundle $\Pi:E\to B$ defined by $P\circ \phi$. By Lemma \[dual\], the first Chern class $c_1(\nu^*)-c_1(\tau)$ of the line bundle $\nu^*\otimes \tau^*$ is Poincaré dual to the homology class $\delta=\sum_jk_j[\Delta_j]$ where as before, $k_j\geq 1$ is the order of the zero of the points in $\Delta_j$. We compute the self-intersection number of $\delta$ as follows. As $\Delta_j\subset E$ is a smoothly embedded surface transverse to the fibers of $E\to B$, the vertical tangent bundle $\nu$ is the normal bundle of $\Delta_j$ and hence the self-intersection number of $[\Delta_j]$ equals $c_1(\nu)[\Delta_j]$. 
Since the components $\Delta_j$ of $\Delta$ are pairwise disjoint we then have $$\label{firstway}\delta \cdot \delta=(\sum_jk_j [\Delta_j])(\sum_jk_j[\Delta_j])= \sum_jk_j^2[\Delta_j]\cdot [\Delta_j]=\sum_jk_j^2c_1(\nu)(\Delta_j).$$ On the other hand, by Lemma \[dual\], the class $\delta$ is Poincaré dual to $c_1(\nu^*)-c_1(\tau)$. Moreover, by Lemma \[trivial\], the restriction of $\nu^*\otimes \tau^*$ to $\Delta_j$ is equivalent to the line bundle $\tau^{k_j}$. Thus we also have $$\label{secondway} \delta\cdot \delta=\sum_j k_j (c_1(\nu^*)-c_1(\tau))[\Delta_j]=\sum_jk_j^2c_1(\tau)([\Delta_j])= \sum_jk_j^2d_j b$$ where $b=\psi(\phi(B))$ and where $d_j$ is the degree of the map $\Pi\vert \Delta_j: \Delta_j\to B$. The last equality holds true since $\tau$ is the pull-back of the line bundle on $B$ with Chern class $\phi^*\psi$. Substituting the identity $c_1(\nu)[\Delta_j]=-(k_j+1)c_1(\tau)[\Delta_j]=-(k_j+1)d_jb$ from Lemma \[trivial\] in equation (\[firstway\]) and comparison with equation (\[secondway\]) then yields $$\label{combine} b(\sum_j d_j(k_j^2(k_j+1)+k_j^2))=0.$$ But the numbers $d_j,k_j$ are all positive and hence this is only possible if $b=0$. As a consequence, the line bundle $\tau$ on $E$ is trivial, and $\delta$ is Poincaré dual to $c_1(\nu^*)$. Moreover, equation (\[secondway\]) yields that $\delta \cdot \delta=c_1(\nu^*)\cup c_1(\nu^*)(E)=\kappa_1(P\phi(B))=0$ whence the proposition. Let ${{\mathcal{Q}}}$ be a component of a stratum of projective abelian differentials. Let $V$ be a complex variety of dimension $k$. We have to show that any holomorphic map $\eta:V\to {{\mathcal{Q}}}$ is constant. To this end we invoke the following result of Wolpert [@W86]: There exists a holomorphic line bundle $L$ on ${{\mathcal{M}}}_{g}$ with Chern class $\kappa_1$, and there is a Hermitian metric on $L$ with curvature form $\omega=\frac{1}{2\pi^2}\omega_{WP}$ where $\omega_{WP}$ is the Weil Petersson Kähler form on ${{\mathcal{M}}}_{g}$. In particular, $\omega$ is positive. As a consequence, if $V$ is a complex variety of dimension $k\geq 1$ and if $\zeta:V\to {{\mathcal{M}}}_{g}$ is a holomorphic map which does not factor through a map from a variety of smaller dimension, then $$\label{kappa1} \kappa_1^k(\zeta(V))=\int_V(\zeta^*\omega)^k>0.$$ Now if $\eta:V\to {{\mathcal{Q}}}$ is holomorphic, then the same holds true for $P\circ \eta$. Proposition \[degree2\] shows that the pull-back by $P$ of the cohomology class $\kappa_1$ vanishes on ${{\mathcal{Q}}}$ and hence $P^*\kappa_1^k(\eta(V))=\kappa_1^k(P\circ \eta)(V)=0$. By possibly modifying the variety $V$ we may assume that the holomorphic map $\eta$ does not factor through a map from a variety of smaller dimension. However, by positivity of the curvature form $\omega$, if the dimension of $V$ is positive, then the map $P\circ \eta$ factors through a map of a variety of smaller dimension. As a consequence, the dimension of the generic fiber of $P\circ \eta$ is positive. Let $C\subset V$ be such a generic fiber of positive dimension. Since by assumption the map $\eta$ does not factor through a map from a variety of smaller dimension, the restriction of $\eta$ to $C$ is a nonconstant holomorphic map $C\to {{\mathcal{Q}}}$ whose image is entirely contained in a fiber of the bundle ${{\mathcal{P}}}\to {{\mathcal{M}}}_g$. Moreover, we may assume that $\eta\vert C$ does not factor through a map from a variety of smaller dimension. 
On the other hand, by Proposition \[degree2\], if we denote as before by $\psi$ the Chern class of the universal bundle of the fiber of ${{\mathcal{P}}}\to {{\mathcal{M}}}_g$, then $(\eta\vert C)^*\psi=0$. But the dual of the universal bundle over a complex projective space admits a Hermitian metric with fiberwise positive curvature form (which up to a positive constant is just the Fubini Study metric) and hence using the same argument as in the previous paragraph, we deduce that $\eta\vert C$ factors through a map from a variety of smaller dimension. This is a contradiction which completes the proof of the corollary. Let ${{\mathcal{M}}}_{g,{\rm odd}}$ be the finite orbifold cover of ${{\mathcal{M}}}_g$ which is the moduli space of curves with odd theta characteristic. By definition, this is the quotient of Teichmüller space by the finite index subgroup of ${\rm Mod}(S_g)$ which preserves an *odd spin structure* on $S_g$. Such an odd spin structure is defined as a quadratic form on $H_1(S_g,\mathbb{Z}/2\mathbb{Z})$ with odd Arf invariant (see [@KtZ03] for more information). Each of the curves $X\in {{\mathcal{M}}}_{g,{\rm odd}}$ admits an odd *theta characteristic* which by definition is a holomorphic line bundle $L$ whose square equals the canonical bundle and such that $h^0(X,L)$ is odd. The square of a holomorphic section of $L$ is a holomorphic one-form on $X$ with all zeros of even multiplicity. All bundles over ${{\mathcal{M}}}_g$ will be pulled back to ${{\mathcal{M}}}_{g,{\rm odd}}$ and will be denoted by the same symbols. Let ${{\mathcal{Q}}}$ be the closure in ${{\mathcal{P}}}$ of the stratum $\mathbb{P}{{\mathcal{H}}}(2,\dots,2)^{\rm odd}$ of projective abelian differentials with all zeros of order two and odd spin structure. Then the restriction of the projection $P:{{\mathcal{P}}}\to {{\mathcal{M}}}_{g,{\rm odd}}$ to ${{\mathcal{Q}}}$ is surjective. By a result of Teixidor i Bigas [@TiB87], the locus of pairs $(X,L)\in {{\mathcal{M}}}_{g,{\rm odd}}$ with $h^0(X,L)\geq 3$ has codimension three in ${{\mathcal{M}}}_{g,{\rm odd}}$. This implies that there exists a subset $A\subset {{\mathcal{M}}}_{g, {\rm odd}}$ of codimension 3 such that the restriction of $P$ to ${{\mathcal{Q}}}-P^{-1}(A)$ is a bijective holomorphic morphism. Let ${{\mathcal{D}}}\subset {{\mathcal{M}}}_{g,{\rm odd}}$ be the image of the set of all points in ${{\mathcal{Q}}}$ which are contained in the boundary of $\mathbb{P}{{\mathcal{H}}}(2,\dots,2)^{\rm odd}$. For reasons of dimension, ${{\mathcal{D}}}$ is a divisor in ${{\mathcal{M}}}_{g,{\rm odd}}$. Moreover, if $h^0(X,L)\geq 3$ then there is at least one holomorphic section of $L$ whose square is a differential with at least one zero of order at least four. This yields that $A\subset {{\mathcal{D}}}$. As a consequence, if $V$ is any complex variety and if $\eta: V\to {{\mathcal{M}}}_{g,{\rm odd}}-{{\mathcal{D}}}$ is any holomorphic map, then $\eta$ lifts to a holomorphic map into $\mathbb{P}{{\mathcal{H}}}(2,\dots,2)^{\rm odd}$. By Corollary \[affinecor\], $\eta$ is constant. This shows that indeed, ${{\mathcal{M}}}_{g,{\rm odd}}-{{\mathcal{D}}}$ is affine. Similarly, define ${{\mathcal{D}}}_2\subset {{\mathcal{D}}}$ to be the image under the map $P$ of the union of all strata of projective abelian differentials with all zeros of even order and either at least one zero of order at least 6 or at least two zeros of order at least 4. Then ${{\mathcal{D}}}_2\subset {{\mathcal{D}}}$ is of codimension one. 
Furthermore, as $A\subset {{\mathcal{D}}}$ is of complex codimension 2, if $B$ is a closed surface and if $\phi:B\to {{\mathcal{D}}}-{{\mathcal{D}}}_2$ is any smooth map, then with a small homotopy, we can modify $\phi$ to a map $\hat \phi$ whose image is entirely contained in ${{\mathcal{D}}}-({{\mathcal{D}}}_2\cup A)$. Then $\hat \phi$ lifts to a smooth map $B\to \mathbb{P}{{\mathcal{H}}}(2,\dots,2,4)^{\rm odd}$ and hence by Proposition \[degree2\], we have $\kappa_1(\hat \phi(B))=\kappa_1(\phi(B))=0$. By the discussion in the proof of Corollary \[affinecor\], this implies that $\phi$ cannot be non-constant and holomorphic. This shows that ${{\mathcal{D}}}-{{\mathcal{D}}}_2$ does not contain a complete curve. \[lowgenus\] Using the notations from the proof of Corollary \[nocomplete\], a result of Harris (see Theorem 0.1 of [@TiB87]) shows that for $g=3$ and $g=4$, the locus of all curves $X\in {{\mathcal{M}}}_{g,{\rm odd}}$ with odd theta characteristic $L$ and such that $h^0(X,L)\geq 3$ is empty. As a consequence, the restriction of the projection $P:{{\mathcal{P}}}\to {{\mathcal{M}}}_{g,{\rm odd}}$ to the closure of $\mathbb{P}{{\mathcal{H}}}(2,2)^{\rm odd}$ (for $g=3$) and of $\mathbb{P}{{\mathcal{H}}}(2,2,2)^{\rm odd}$ (for $g=4$) is a biholomorphism. By Corollary \[nocomplete\], the projections of the components of strata $\mathbb{P}{{\mathcal{H}}}(2,2)^{\rm odd}$ and $\mathbb{P}{{\mathcal{H}}}(4)^{\rm odd}$ define a stratification of depth $2=g-1$ of ${{\mathcal{M}}}_{3,{\rm odd}}$ with affine strata. This is however well known and is discussed in [@FL08] and [@Ch19]. Similarly, the components of the strata $\mathbb{P}{{\mathcal{H}}}(2,2,2)^{\rm odd}$, $\mathbb{P}{{\mathcal{H}}}(2,4)^{\rm odd}$ and $\mathbb{P}{{\mathcal{H}}}(4)^{\rm odd}$ project to a stratification of ${{\mathcal{M}}}_{4,{\rm odd}}$ of depth $3=g-1$ with affine strata. Corollary \[hyperelliptic\] from the introduction is also a consequence of Theorem \[strata\]. Namely, the *hyperelliptic locus* in ${{\mathcal{M}}}_g$ is the moduli space of all hyperelliptic complex curves. In the case $g=2$, this is just the entire moduli space. By Corollary \[nocomplete\] and its proof, it suffices to show that the restriction of the first Mumford Morita Miller class to the hyperelliptic locus vanishes. Thus let $B$ be a closed oriented surface and let $\phi:B\to {{\mathcal{M}}}_g$ be a smooth map whose image is contained in the hyperelliptic locus. The pull-back under $\phi$ of the universal curve is a surface bundle $\Pi:E\to B$. Choose a basepoint $x_0\in B$ and a Weierstrass point $z_0\in \Pi^{-1}(x_0)$ in the fibre. Since Weierstrass points are distinct, every loop $\gamma$ in $B$ based at $x_0$ admits a unique lift to $E$ beginning at $z_0$ whose image consists of Weierstrass points. The endpoint is another Weierstrass point in $\Pi^{-1}(x_0)$ which only depends on the homotopy class of the loop. Thus this construction defines a homomorphism of $\pi_1(B)$ into the permutation group of the $2g+2$ Weierstrass points of $\Pi^{-1}(x_0)$. Let $\Gamma<\pi_1(B)$ be the kernel of this homomorphism and let $\theta:B_0\to B$ be the finite cover of $B$ with fundamental group $\Gamma$. Then $\kappa_1(\phi\theta(B_0))=p\,\kappa_1(\phi(B))$ where $p\geq 1$ is the degree of the covering $B_0\to B$. Hence it suffices to show that $\kappa_1(\phi\theta(B_0))=0$. 
The homomorphism of $\pi_1(B_0)$ into the permutation group of $2g+2$ points defined above is trivial by construction and hence the pullback $\Pi:E_0\to B_0$ of the universal curve by $\phi\circ \theta$ admits $2g+2$ pairwise disjoint sections whose images consist of Weierstrass points. Let $\alpha:B_0\to E_0$ be one of these sections. Then for each $x\in B_0$, there is an abelian differential $q(x)$ on the Riemann surface $\phi\circ \theta(x)$, unique up to a multiple by an element in $\mathbb{C}^*$, which has a zero of order $2g-2$ at $\alpha(x)$. This differential is the pull-back under the hyperelliptic involution of a meromorphic *quadratic* differential on $\mathbb{C}P^1$ which has a single zero of order $2g-3$ at the image of the distinguished Weierstrass point, and a simple pole at each of the other $2g+1$ Weierstrass points (see [@KtZ03] for a detailed account on this construction). The projective class of this differential depends smoothly on $x$ and hence the differentials $q(x)$ $(x\in B_0)$ define a section of the bundle ${{\mathcal{P}}}$ whose image is contained in the stratum of projective abelian differentials with a single zero of order $2g-2$. Corollary \[hyperelliptic\] is now an immediate consequence of Proposition \[degree2\]. Let ${{\mathcal{Q}}}\subset {{\mathcal{P}}}$ be a component of a stratum of projective abelian differentials. It suffices to show that for every $k\geq 1$, for every finite $2k$-dimensional simplicial Poincaré duality complex $B$ of homogeneous dimension $2k$ and for every continuous map $\phi:B\to {{\mathcal{Q}}}$ we have $(P\circ \phi)^*\kappa_{k}(B)=0$. To this end consider the surface bundle $\Pi:E\to B$ defined by $P\circ \phi$. The zeros of the differentials in $\phi(B)$ define a multisection $\Delta$ of $E$. Let as before $\Delta_1,\dots,\Delta_m$ be the components of $\Delta$ and for $j\leq m$ let $k_j$ be the multiplicity of the zero of a point in $\Delta_j$. By Lemma \[dual\] and Proposition \[degree2\], the homology class $\delta=\sum_jk_j[\Delta_j]$ is Poincaré dual to $c_1(\nu^*)$. In particular, by the definition of the $k$-th Mumford Morita Miller class [@M87], we have $$\label{kappak} \kappa_k(P\phi(B))=c_1(\nu^*)^{k+1}(E)=c_1(\nu^*)^k(\delta).$$ The restriction of $\Pi$ to each of the sets $\Delta_j\subset \Delta$ is a covering. Moreover, by Lemma \[trivial\], we have $\nu^*\vert \Delta_j=\tau^{\otimes (k_j+1)}\vert \Delta_j$ where $\tau$ is the pull-back by $\phi\circ \Pi$ of the universal bundle over the fibers of ${{\mathcal{P}}}\to {{\mathcal{M}}}_g$. But by Proposition \[degree2\], the Chern class of the line bundle $\tau$ on $E$ vanishes and hence the bundle $\tau$ is trivial. Then the same holds true for $\nu^*\vert \Delta_j$ and therefore $c_1(\nu^*)^k(\Delta_j)=0$. Since this holds true for all $j$, we have $\kappa_k(P\phi(B))=0$ by equation (\[kappak\]) which completes the proof of the theorem. [EKKOS02]{} D. Chen, [*Strata of abelian differentials and the Teichmüller dynamics*]{}, J. Mod. Dyn. 7 (2013), 135–152. D. Chen, [*Affine geometry of strata of differentials*]{}, J. Inst. Math. Jussieu 18 (2019), 1331–1340. L. Chen, N. Salter, [*The Birman exact sequence does not virtually split*]{}, arXiv:1804.11235. B. Farb, D. Margalit, [*A primer on mapping class groups*]{}, Princeton Univ. Press, Princeton 2012. C. Fontanari, E. Looijenga, [*A perfect stratification of ${{\mathcal{M}}}_g$ for $g\leq 5$*]{}, Geom. Dedicata 136 (2008), 133–143. P. Griffiths, J. Harris, [*Principles of algebraic geometry*]{}, Wiley-Interscience 1978. R. Hain, E. 
Looijenga, [*Mapping class groups and moduli spaces of curves*]{}, Proc. Symp. Pure Math., AMS 62, 97–142 (1998). U. Hamenstädt, [*Some topological properties of surface bundles*]{}, preprint 2019. J. Harer, [*The second homology group of the mapping class group of an oriented surface*]{}, Invent. Math. 72 (1983), 221–239. J. Harris, I. Morrison, [*Moduli of curves*]{}, Springer Graduate Text in Math. 187, Springer, New York 1998. A. Hatcher, [*Algebraic topology*]{}, Cambridge Univ. Press, Cambridge 2001. M. Kontsevich, A. Zorich, [*Connected components of the moduli space of Abelian differentials with prescribed singularities*]{}, Invent. Math 153 (2003), 631–678. D. Korotkin, P. Zograf, [*Tau function and moduli of differentials*]{}, Math. Res. Lett. 18 (2011), 447–458. E. Looijenga, [*On the tautological ring of ${{\mathcal{M}}}_g$*]{}, Invent. Math. 121 (1995), 411–419. G. Mondello, [*On the cohomological dimension of the moduli space of Riemann surfaces*]{}, Duke Math. J. 166 (2017), 1463–1515. G. Mondello, E. Looijenga, [*The fine structure of the moduli space of abelian differentials in genus 3*]{}, Geom. Dedicata 169 (2014), 109–128. S. Morita, [*Characteristic classes of surface bundles*]{}, Invent. Math. 90 (1987), 551–577. M. Teixidor i Bigas, [*Half-canonical series on algebraic curves*]{}, Trans. Amer. Math. Soc. 302 (1987), 99–115. S. Wolpert, [*Chern forms and the Riemann tensor for the moduli space of curves*]{}, Invent. Math. 85 (1986), 119–145. MATHEMATISCHES INSTITUT DER UNIVERSITÄT BONN\ ENDENICHER ALLEE 60\ 53115 BONN, GERMANY e-mail: [email protected] [^1]: Partially supported by the Hausdorff Center Bonn\ AMS subject classification:57R22, 57R20, 30F30, 14h10
--- abstract: 'Recent research in the enteric nervous system, sometimes called the second brain, has revealed potential of the digestive system in predicting emotion. Even though people regularly experience changes in their gastrointestinal (GI) tract which influence their mood and behavior multiple times per day, robust measurements and wearable devices are not quite developed for such phenomena. However, other manifestations of the autonomic nervous system such as electrodermal activity, heart rate, and facial muscle movement have been extensively used as measures of emotions or in biofeedback applications, while neglecting the gut. We expose electrogastrography (EGG), i.e., recordings of the myoelectric activity of the GI tract, as a possible measure for inferring human emotions. In this paper, we also wish to bring into light some fundamental questions about emotions, which are often taken for granted in the field of Human Computer Interaction, but are still a great debate in the fields of cognitive neuroscience and psychology.' author: - bibliography: - 'biblio.bib' --- Introduction ============ Recent developments in Human Computer Interaction (HCI), and physiological and affective computing brought to light the necessity for wearable and robust physiological sensors. So far, using physiological sensors a person can: (1) consciously monitor/regulate their bodily functions through biofeedback for well-being [@McKee2008], (2) (un)consciously adapt an environment or task, which can for instance increase immersion in gaming [@van2013], or (3) consciously manipulate an external device with only physiological (neural) activity, as in active Brain-Computer Interfaces, to control wheelchairs or for communication for example[@wolpaw2002brain]. Measures of electrodermal activity (EDA), cardiac function, facial muscles activity, and respiration have been used frequently to assess emotional states [@mayer2000]. Nowadays there are wearable devices developed for measuring EDA and heart rate, such as the Empatica E4 smartwatch. Remarkably however, the gastrointestinal system has often been neglected by affective research. Even though humans regularly experience having a “gut feeling” or “butterflies in the stomach”, they often overlook the importance of such phenomenon as an actual physiological process. However, studies have shown that indeed the gut could have an important role in affective disorders [@bennett1998]. Still, non-invasive, robust physiological measurements or wearable devices for such phenomena are not yet developed. The possibility of assisting users in regulating the internal processes of the gut, and thus regulating the emotions that arise with such physiological processes are not yet taken seriously into consideration. In this paper we briefly explain what the gut signal is, and the usefulness of such modality for inferring and regulating emotions, using a biofeedback. We also tackle some fundamental questions about emotions which are often taken lightly in the HCI community. Gastro-Intestinal tract ======================= The gastro-intestinal (GI) tract comprises of the mouth, esophagus, stomach and intestines. The GI tract has a bidirectional communication with the Central Nervous System (CNS) through the sympathetic and parasympathetic systems [@sudo2004], thus researchers often refer to the gut-brain axis. 
The GI tract is governed by the enteric nervous system, which can act independently from the CNS and contains over 500 million nerves, which is why it is also called the “second brain”. Moreover, there has recently been much interest in the gut microbiota, the microorganisms that inhabit the gut, which have been shown to play a role in stress regulation in mice [@sudo2004]. The electrogastrogram (EGG) is a reliable and noninvasive method of recording gastric myoelectrical activity [@nelsen1968]. The gastric myoelectrical activity paces the contraction of the stomach. The normal frequency of the electrogastric wave is 3 cycles per minute (cpm), and is termed normogastria [@koch2004]. It is worth noting that amplifiers typically used for electroencephalography (assessing brain activity) have been shown to be equally useful for EGG, for example in [@gharibans2018] using an affordable and open-source device, OpenBCI. Recent studies showed that EGG could be a valuable measure of emotion [@vianna2006]. Individuals often report a “nervous stomach” when contractions become too frequent (tachygastria, 4–9 cpm) during stressful experiences [@vujic2018]. Participants reacted with tachygastria during horror movies, but with a reduced frequency of gastric waves during a relaxation session [@yin2004]. It has also been shown that gastric slow waves can be useful for predicting the experience of disgust [@harrison2010embodiment]. Individuals clearly react emotionally with their gut, just as the gut influences their emotions. As such, we advocate that it could be interesting to propose biofeedback specifically aimed at regulating a “nervous stomach”. Biofeedback for gut awareness ============================= Biofeedback is a system that externalizes one’s internal bodily activity, for example in visual, audio or haptic modalities. It assists people in becoming aware of their internal processes or physiological activity, as a technique of interoception, known to be beneficial for well-being [@Farb2015]. Notice that biofeedback is built under the assumption that being aware of one’s physiological processes creates or modulates an emotion. In other words, the perception of physiological changes contributes to the content of conscious experiences of emotion [@tsuchiya2007emotion]. Biofeedback thus externalizes such phenomena and enables people to consciously examine and regulate their internal states and their experience of emotions. As the gut clearly has an important role in human emotion, we believe it could be beneficial to build a wearable EGG device which could record one’s gut contractions and feed them back to the wearer, as depicted in Figure \[fig:teaser\]. Interestingly, the use of biofeedback could also expose the relationship between experiencing bodily activity and experiencing an emotion. In experiments where people were given fake biofeedback to manipulate their emotions toward images of individuals, the perception of external audio stimuli dominated over their autonomic perception [@woll1979effects]. This leads us to ask whether the perceived physiological process is more important than the actual one. Relation between physiology and emotion ======================================= The sympathetic nervous system, governing the fight-or-flight response, influences sweat secretion, increases heart rate, constricts blood vessels in gastrointestinal organs, inhibits contractions in the digestive tract, and much more. These physiological changes are recognized as measures of emotion and are expressed as stress, anxiety, fear, etc. 
This assumption follows James’s theory [@james1884emotion], in which feeling (the experience of emotion) exists due to physiological changes in one’s own body. James argued that seeing a fearful stimulus would first trigger emotional responses (increases in sympathetic activity), and that the perception of these physiological changes would form the basis for our conscious experience of emotion. Today, in affective neuroscience, the James theory has been revised and updated, e.g., by acknowledging the role of emotions in decision-making [@bechara2000emotion] or by distinguishing “the conscious experience of an emotion (feeling), its expression (physiological response), and semantic knowledge about it (recognition)” [@tsuchiya2007emotion]. Taking the role of the GI tract into consideration more often might help to reconcile antagonistic views of emotion. For example, in [@johnsen2009] the authors described the dissociation between the autonomic response and affect through the study of patients with brain lesions. In this experiment, patients without autonomic responses would not sweat but were still able to experience emotions related to music excerpts, while patients with different lesions, incapable of judging music, displayed EDA responses. As such, in the absence of a link between physiology and emotions, the authors “opposed” James’s theory. Nevertheless, since the enteric nervous system can function independently from the autonomic system, we believe the physiology could still have contributed to the emotional perception of music. Conclusion ========== With this paper we hope to foster discussions among HCI practitioners about the study of gut signals. To discover further how the body contributes to the experience of emotion and *vice versa*, it can be useful to include EGG as an additional tool for emotion recognition. Also, affordable and mobile biosignal amplifiers could enable the creation of a new biofeedback mechanism, in which individuals could learn how to regulate emotions related to the gut. Acknowledgment ============== I wish to thank Jérémy Frey and Angela Vujić for insightful discussions and for proofreading this paper.
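As a small methodological complement to the gastric frequency ranges cited above (normogastria around 3 cpm, tachygastria roughly 4–9 cpm), the following is a minimal sketch, not an implementation used in this work, of how the dominant frequency of a raw EGG recording could be estimated and coarsely labelled. It assumes NumPy and SciPy are available; the function names and classification thresholds are illustrative only.

```python
import numpy as np
from scipy.signal import welch

def dominant_gastric_frequency(egg, fs):
    """Estimate the dominant EGG frequency in cycles per minute (cpm).

    egg: 1-D array with the EGG signal; fs: sampling rate in Hz.
    """
    # Welch power spectral density; long segments are needed to resolve ~0.05 Hz slow waves.
    nperseg = min(len(egg), int(fs * 600))     # up to 10-minute windows
    f, pxx = welch(egg, fs=fs, nperseg=nperseg)
    cpm = f * 60.0
    band = (cpm >= 0.5) & (cpm <= 12.0)        # physiologically plausible gastric range
    return cpm[band][np.argmax(pxx[band])]

def classify_rhythm(cpm):
    """Coarse labelling based on the ranges cited in the text (thresholds are indicative only)."""
    if cpm < 2.0:
        return "bradygastria"
    if cpm <= 4.0:
        return "normogastria (around 3 cpm)"
    if cpm <= 9.0:
        return "tachygastria"
    return "above the usual gastric range"

# Synthetic example: a 3 cpm wave plus noise, 10 minutes sampled at 2 Hz.
fs = 2.0
t = np.arange(0, 600, 1.0 / fs)
egg = np.sin(2 * np.pi * (3.0 / 60.0) * t) + 0.3 * np.random.default_rng(1).standard_normal(t.size)
cpm = dominant_gastric_frequency(egg, fs)
print(f"{cpm:.2f} cpm -> {classify_rhythm(cpm)}")
```

In a biofeedback setting, an estimate of this kind could be recomputed over a sliding window and mapped to a visual, audio or haptic cue, although the appropriate window length and feedback modality remain open design questions.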
--- abstract: 'The Nyström method is a popular technique for computing fixed-rank approximations of large kernel matrices using a small number of landmark points. In practice, to ensure high quality approximations, the number of landmark points is chosen to be greater than the target rank. However, the standard Nyström method uses a sub-optimal procedure for rank reduction mainly due to its simplicity. In this paper, we highlight the drawbacks of standard Nyström in terms of poor performance and lack of theoretical guarantees. To address these issues, we present an efficient method for generating improved fixed-rank Nyström approximations. Theoretical analysis and numerical experiments are provided to demonstrate the advantages of the modified method over the standard Nyström method. Overall, the aim of this paper is to convince researchers to use the modified method, as it has nearly identical computational complexity, is easy to code, and has greatly improved accuracy in many cases.' author: - 'Farhad Pourkamali-Anaraki and Stephen Becker[^1]' bibliography: - 'phd\_farhad.bib' title: 'Improved Fixed-Rank Nyström Approximation via QR Decomposition: Practical and Theoretical Aspects' --- Kernel methods, large-scale learning, Nyström method, matrix factorization, kernel approximation. Introduction {#sec:intro} ============ Kernel methods are widely used in various learning problems. Well-known examples include support vector machines (SVM) [@VapnikSVM; @suykens1999least], kernel principal component analysis for feature extraction [@scholkopf1998nonlinear; @alaiz2017convex], kernel clustering [@girolami2002mercer; @chitta2011approximate; @pourkamali2016randomized; @LANGONE2017], and kernel ridge regression [@saunders1998ridge; @alaoui2015fast]. The main idea behind kernel-based learning is to map the input data points into a feature space, where all pairwise inner products of the mapped data points can be computed via a nonlinear kernel function that satisfies Mercer’s condition [@LearningWithKernels]. Thus, kernel methods allow one to use linear algorithms in the feature space which correspond to nonlinear algorithms in the original space. For this reason, kernel machines have received much attention as an effective tool to tackle problems with complex and nonlinear structures. Let ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_n$ be a set of $n$ data points in ${\mathbb{R}}^p$. The inner products in feature space are calculated using a nonlinear kernel function $\kappa(\cdot,\cdot)$: $$\label{eq:kernel} K_{ij}{\stackrel{\text{\tiny def}}{=}}\kappa({\mathbf{x}}_i,{\mathbf{x}}_j)=\langle\Phi({\mathbf{x}}_i),\Phi({\mathbf{x}}_j)\rangle,\;\;\forall i,j\in\{1,\ldots,n\},$$ where $\Phi:{\mathbf{x}}\mapsto\Phi({\mathbf{x}})$ is the kernel-induced feature map. A popular choice is the Gaussian kernel function $\kappa({\mathbf{x}}_i,{\mathbf{x}}_j)=\exp(-\|{\mathbf{x}}_i-{\mathbf{x}}_j\|_2^2/c)$, with the parameter $c>0$. In kernel machines, the pairwise inner products are stored in the symmetric positive semidefinite (SPSD) kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$. However, it takes ${\mathcal{O}}(n^2)$ memory to store the full kernel matrix and subsequent processing of ${\mathbf{K}}$ within the learning process is quite expensive or impractical for large data sets. 
A popular approach to tackle these challenges is to use the best rank-$r$ approximation $\llbracket{\mathbf{K}}\rrbracket_r={\mathbf{U}}_r{\boldsymbol{\Lambda}}_r{\mathbf{U}}_r^T$ obtained via the eigenvalue decomposition of ${\mathbf{K}}$ for $r\leq\operatorname{rank}({\mathbf{K}})$. Here, the columns of ${\mathbf{U}}_r\in{\mathbb{R}}^{n\times r}$ span the top $r$-dimensional eigenspace of ${\mathbf{K}}$, and the diagonal matrix ${\boldsymbol{\Lambda}}_r\in{\mathbb{R}}^{r\times r}$ contains the top $r$ eigenvalues. Since the kernel matrix is SPSD, we have: $${\mathbf{K}}\approx\llbracket{\mathbf{K}}\rrbracket_r={\mathbf{U}}_r{\boldsymbol{\Lambda}}_r{\mathbf{U}}_r^T={\mathbf{L}}{\mathbf{L}}^T,\label{eq:low-rank-kernel}$$ where ${\mathbf{L}}{\stackrel{\text{\tiny def}}{=}}{\mathbf{U}}_r{\boldsymbol{\Lambda}}_r^{1/2}\in{\mathbb{R}}^{n\times r}$. When the target rank $r$ is small and chosen independently of $n$ (e.g., $r$ is chosen according to the degrees of freedom in the learning problem [@BachKernelReview]), the benefits of the rank-$r$ approximation in are twofold. First, it takes ${\mathcal{O}}(nr)$ to store the matrix ${\mathbf{L}}$ which is only linear in the number of samples $n$. Second, the rank-$r$ approximation leads to substantial computational savings within the learning process. For example, approximating ${\mathbf{K}}$ with ${\mathbf{L}}{\mathbf{L}}^T$ means the matrix inversion $\left({\mathbf{K}}+\lambda{\mathbf{I}}_{n\times n}\right)^{-1}$ in kernel ridge regression can be calculated using the Sherman-Morrison-Woodbury formula in ${\mathcal{O}}(nr^2+r^3)$ time compared to ${\mathcal{O}}(n^3)$ if done naively. Other examples are kernel K-means, which is performed on the matrix ${\mathbf{L}}^T$, and so each step of the K-means algorithm runs in time proportional to $r$; and kernel-based spectral clustering, where fixed-rank kernel approximations have been used to reduce computation time [@langone2017large]. Although it has been shown that the fixed-rank approximation of kernel matrices is a promising approach to trade-off accuracy for scalability [@cortes2010impact; @LinearizedSVM; @golts2016linearized; @wang2016towards], the eigenvalue decomposition of ${\mathbf{K}}$ has at least quadratic time complexity and takes ${\mathcal{O}}(n^2)$ space. To address this issue, one line of prior work is centered around efficient techniques for approximating the best rank-$r$ approximation when we have ready access to ${\mathbf{K}}$; see [@Martinson_SVD; @FarhadPreconditioned] for a survey. However, ${\mathbf{K}}$ is typically unknown in kernel methods and the cost to form ${\mathbf{K}}$ using standard kernel functions is ${\mathcal{O}}(pn^2)$, which is extremely expensive for large high-dimensional data sets. For this reason, the Nyström method [@Nystrom2001] has been a popular technique for computing fixed-rank approximations in kernel-based learning, which eliminates the need to access every entry of the full kernel matrix. The Nyström method works by selecting a small set of bases, referred to as landmark points, and computes the kernel similarities between the input data points and landmark points. To be formal, the standard Nyström method generates a rank-$r$ approximation of ${\mathbf{K}}$ using $m$ landmark points ${\mathbf{z}}_1,\ldots,{\mathbf{z}}_m$ in ${\mathbb{R}}^p$. 
In practice, it is common to choose $m$ greater than $r$ for obtaining higher-quality rank-$r$ approximations [@kumar2012sampling; @li2015large], since the accuracy of the Nyström method depends on the number of selected landmark points and the selection procedure. The landmark points can be sampled with respect to a uniform or nonuniform distribution from the set of $n$ input data points [@gittens2016revisiting]. Moreover, some recent techniques utilize out-of-sample landmark points for generating improved Nyström approximations, e.g., centroids found from K-means clustering on the input data points [@zhang2010clusteredNys; @zhang2008improved]. For a fixed set of landmark points, let ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}\in{\mathbb{R}}^{m\times m}$ be two matrices with the $(i,j)$-th entries $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$ and $W_{ij}=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$. Then, the rank-$m$ Nyström approximation has the form ${\mathbf{G}}={\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$, where ${\mathbf{W}}^\dagger$ is the pseudo-inverse of ${\mathbf{W}}$. For the fixed-rank case, the standard Nyström method restricts the rank of the $m\times m$ inner matrix ${\mathbf{W}}$ and computes its best rank-$r$ approximation $\llbracket{\mathbf{W}}\rrbracket_r$ to obtain ${\mathbf{G}}_{(r)}^{nys}={\mathbf{C}}\llbracket{\mathbf{W}}\rrbracket_r^\dagger{\mathbf{C}}^T$, which has rank no greater than $r$. Despite the simplicity of the rank reduction process in the standard Nyström method, the main downside is that the structure of ${\mathbf{C}}$ is completely disregarded. The standard Nyström method generates the rank-$r$ approximation ${\mathbf{G}}_{(r)}^{nys}$ solely based on filtering ${\mathbf{W}}$ because of its smaller size compared to the matrix ${\mathbf{C}}$ of size $n\times m$. As a result, the selection of more landmark points in the standard Nyström method does not guarantee improved rank-$r$ approximations of kernel matrices. For example, our experimental results in Section \[sec:exper\] reveal that increasing the number of landmark points may even produce less accurate rank-$r$ approximations because of this poor rank reduction step, cf. Remarks \[rmk:1\] and \[rmk:2\]. This paper considers the fundamental problem of rank reduction in the Nyström method. In particular, we present an efficient technique for computing a rank-$r$ approximation in the form of ${\mathbf{G}}_{(r)}^{opt}=\llbracket{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T\rrbracket_r$, which runs in time comparable to that of the standard Nyström method. The modified method utilizes the thin QR decomposition of the matrix ${\mathbf{C}}$ for computing a more accurate rank-$r$ approximation of ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ compared to ${\mathbf{G}}_{(r)}^{nys}$. Moreover, unlike for the standard Nyström method, our results show, both theoretically and empirically, that the modified Nyström method produces more accurate rank-$r$ approximations as the number of landmark points $m$ increases. Contributions ------------- In this work, we make the following contributions: 1. In Algorithm \[alg:NysQR\], we present an efficient method for generating improved rank-$r$ Nyström approximations. The modified method computes the best rank-$r$ approximation of ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$, i.e., ${\mathbf{G}}_{(r)}^{opt}=\llbracket{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T\rrbracket_r$, in linear time with respect to the sample size $n$. 
In Theorem \[thm:nys-qr-sta\], it is shown that ${\mathbf{G}}_{(r)}^{opt}$ always produces a more accurate rank-$r$ approximation of ${\mathbf{K}}$ compared to ${\mathbf{G}}_{(r)}^{nys}$ with respect to the trace norm, when $m$ is greater than $r$ and landmark points are selected from the input data set. Remark \[thm:remark-frob\] shows this is not necessarily true in the Frobenius norm.

2. Theorem \[thm:nys-qr-std-more\] proves that the accuracy of the modified rank-$r$ Nyström approximation always improves (with respect to the trace norm) as more landmark points are selected from the input data set.

3. We provide counter-examples in Remarks \[rmk:1\] and \[rmk:2\] showing that an equivalent of Theorem \[thm:nys-qr-std-more\] cannot hold for the standard Nyström method. Example \[example1\] shows a situation where the modified Nyström method is arbitrarily better than the standard method, with respect to the trace and Frobenius norms. Remark \[rmk:same\] gives insight into when we expect the standard and modified methods to differ.

4. Theorem \[thm:out-of-sample\] shows that, under certain conditions, our theoretical results are also applicable to more recent landmark selection techniques based on out-of-sample extensions of the input data set, such as centroids found from K-means clustering.

5. Finally, we provide experimental results to demonstrate the superior performance and advantages of modified Nyström over the standard Nyström method.

To our knowledge, the modified Nyström method was not discussed in the literature until our preliminary preprint [@pourkamali2016rand], though its derivation is straightforward and so we suspect it may have been previously derived in unpublished work. Due to the importance of rank reduction in the Nyström method, there are two recent works [@wang2017scalable; @tropp2017fixed] that independently study the approximation error of $\llbracket{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T\rrbracket_r$, when landmark points are selected from the input data set. However, there are two principal differences between this work and the aforementioned references. First, the main focus of this paper is to directly compare the standard and modified Nyström methods, and to provide both theoretical and experimental evidence on the effectiveness of modified Nyström. Second, we present theoretical and experimental results for the important class of out-of-sample landmark points, which often lead to accurate Nyström approximations [@zhang2010clusteredNys].

Paper Organization
------------------

In Section \[sec:notation\], we present the notation and give a brief review of some matrix decomposition and low-rank approximation techniques. Section \[sec:standard-nys\] reviews the standard Nyström method for computing rank-$r$ approximations of kernel matrices and explains the process of obtaining approximate eigenvalues and eigenvectors. In Section \[sec:improved-nys\], we present an efficient modified method for computing improved rank-$r$ approximations of kernel matrices. The main theoretical results are given in Section \[sec:theory\], and we present experimental results comparing the modified and standard Nyström methods in Section \[sec:exper\]. Section \[sec:conclusion\] provides a brief conclusion.

Notation and Preliminaries {#sec:notation}
==========================

We denote column vectors with lower-case bold letters and matrices with upper-case bold letters.
${\mathbf{I}}_{n\times n}$ is the identity matrix of size $n\times n$; $\mathbf{0}_{n\times m}$ is the $n\times m$ matrix of zeros. For a vector ${\mathbf{x}}\in{\mathbb{R}}^p$, $\|{\mathbf{x}}\|_2$ denotes the Euclidean norm, and $\operatorname{diag}({\mathbf{x}})$ represents a diagonal matrix with the elements of ${\mathbf{x}}$ on the main diagonal. The $(i,j)$-th entry of $\mathbf{A}$ is denoted by $A_{ij}$, $\mathbf{A}^T$ is the transpose of $\mathbf{A}$, and $\operatorname{tr}(\cdot)$ is the trace operator. We write $\text{range}(\mathbf{A})$ to denote the column space of $\mathbf{A}$. Each $n\times m$ matrix $\mathbf{A}$ with $\rho=\operatorname{rank}(\mathbf{A})\leq \min\{n,m\}$ admits a factorization in the form of $\mathbf{A}={\mathbf{U}}{\boldsymbol{\Sigma}}{\mathbf{V}}^T$, where ${\mathbf{U}}\in{\mathbb{R}}^{n\times \rho}$ and ${\mathbf{V}}\in{\mathbb{R}}^{m\times \rho}$ have orthonormal columns, known as the left singular vectors and right singular vectors, respectively. The diagonal matrix ${\boldsymbol{\Sigma}}=\operatorname{diag}([\sigma_1(\mathbf{A}),\ldots,\sigma_\rho(\mathbf{A})])$ contains the singular values of $\mathbf{A}$ in descending order, i.e., $\sigma_1(\mathbf{A})\geq\ldots\geq\sigma_\rho(\mathbf{A})>0$. Throughout the paper, we use several standard matrix norms. The Frobenius norm of $\mathbf{A}$ is defined as $\|\mathbf{A}\|_F^2{\stackrel{\text{\tiny def}}{=}}\sum_{i=1}^{\rho}\sigma_i(\mathbf{A})^2=\operatorname{tr}(\mathbf{A}^T\mathbf{A})$ and $\|\mathbf{A}\|_*{\stackrel{\text{\tiny def}}{=}}\sum_{i=1}^{\rho}\sigma_i(\mathbf{A})=\operatorname{tr}(\sqrt{\mathbf{A}^T\mathbf{A}})$ denotes the trace norm (or nuclear norm) of $\mathbf{A}$. The spectral norm of $\mathbf{A}$ is the largest singular value of $\mathbf{A}$, i.e., $\|\mathbf{A}\|_2{\stackrel{\text{\tiny def}}{=}}\sigma_1(\mathbf{A})$. It is straightforward to show that $\|\mathbf{A}\|_2\leq \|\mathbf{A}\|_F\leq \|\mathbf{A}\|_*$. When ${\mathbf{K}}$ is a *kernel matrix*, meaning its entries are generated from a kernel function as $K_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{x}}_j)$, we assume the kernel function $\kappa$ satisfies Mercer’s condition and therefore ${\mathbf{K}}$ is symmetric positive semidefinite (SPSD) [@aronszajn1950theory; @LearningWithKernels]. Let ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$ be any SPSD matrix with $\rho=\text{rank}({\mathbf{K}})\leq n$. Similar to the singular value decomposition (SVD), the matrix ${\mathbf{K}}$ can be factorized as ${\mathbf{K}}={\mathbf{U}}{\boldsymbol{\Lambda}}{\mathbf{U}}^T$, where ${\mathbf{U}}\in{\mathbb{R}}^{n\times \rho}$ contains the orthonormal eigenvectors, i.e., ${\mathbf{U}}^T{\mathbf{U}}={\mathbf{I}}_{\rho\times \rho}$, and ${\boldsymbol{\Lambda}}=\operatorname{diag}\left([\lambda_1({\mathbf{K}}),\ldots,\lambda_\rho({\mathbf{K}})]\right)$ is a diagonal matrix which contains the nonzero eigenvalues of ${\mathbf{K}}$ in descending order. This factorization is known as the eigenvalue decomposition (EVD).
The matrices ${\mathbf{U}}$ and ${\boldsymbol{\Lambda}}$ can be partitioned for a target rank $r$ ($r\leq\rho$): $$\begin{aligned} {\mathbf{K}}&=\left[\begin{array}{cc} {\mathbf{U}}_r & {\mathbf{U}}_{\rho-r}\end{array}\right]\left[\begin{array}{cc} {\boldsymbol{\Lambda}}_r & \mathbf{0}_{r\times (\rho-r)}\\ \mathbf{0}_{(\rho-r)\times r} & {\boldsymbol{\Lambda}}_{\rho-r} \end{array}\right]\left[\begin{array}{c} {\mathbf{U}}_r^T\\ {\mathbf{U}}_{\rho-r}^T \end{array}\right]\nonumber\\ &= {\mathbf{U}}_r{\boldsymbol{\Lambda}}_r{\mathbf{U}}_r^T+{\mathbf{U}}_{\rho-r}{\boldsymbol{\Lambda}}_{\rho-r}{\mathbf{U}}_{\rho-r}^T,\label{eq:eig-decomp}\end{aligned}$$ where ${\boldsymbol{\Lambda}}_r\in{\mathbb{R}}^{r\times r}$ contains the $r$ leading eigenvalues and the columns of ${\mathbf{U}}_r\in{\mathbb{R}}^{n\times r}$ span the top $r$-dimensional eigenspace, and ${\boldsymbol{\Lambda}}_{\rho-r}\in{\mathbb{R}}^{(\rho-r)\times (\rho-r)}$ and ${\mathbf{U}}_{\rho-r}\in{\mathbb{R}}^{n\times (\rho - r)}$ contain the remaining $(\rho-r)$ eigenvalues and eigenvectors. It is well-known that $\llbracket{\mathbf{K}}\rrbracket_r{\stackrel{\text{\tiny def}}{=}}{\mathbf{U}}_r{\boldsymbol{\Lambda}}_r{\mathbf{U}}_r^T$ is the “best rank-$r$ approximation” to ${\mathbf{K}}$ in the sense that $\llbracket{\mathbf{K}}\rrbracket_r$ minimizes $\|{\mathbf{K}}-{\mathbf{K}}'\|_F$ and $\|{\mathbf{K}}-{\mathbf{K}}'\|_*$ over all matrices ${\mathbf{K}}'\in{\mathbb{R}}^{n\times n}$ of rank at most $r$. If $\lambda_r({\mathbf{K}}) = \lambda_{r+1}({\mathbf{K}})$, then $\llbracket{\mathbf{K}}\rrbracket_r$ is not unique, so we write $\llbracket{\mathbf{K}}\rrbracket_r$ to mean any such minimizer. The Moore-Penrose pseudo-inverse of ${\mathbf{K}}$ can be obtained from the eigenvalue decomposition as ${\mathbf{K}}^\dagger={\mathbf{U}}{\boldsymbol{\Lambda}}^{-1}{\mathbf{U}}^T$. When ${\mathbf{K}}$ is full rank, i.e., $\rho=n$, we have ${\mathbf{K}}^\dagger={\mathbf{K}}^{-1}$. Another matrix factorization technique that we use in this paper is the QR decomposition. An $n\times m$ matrix $\mathbf{A}$, with $n\geq m$, can be decomposed as a product of two matrices $\mathbf{A}={\mathbf{Q}}{\mathbf{R}}$, where ${\mathbf{Q}}\in{\mathbb{R}}^{n\times m}$ has $m$ orthonormal columns, i.e., ${\mathbf{Q}}^T{\mathbf{Q}}={\mathbf{I}}_{m\times m}$, and ${\mathbf{R}}\in{\mathbb{R}}^{m\times m}$ is an upper triangular matrix. Sometimes this is called the *thin* QR decomposition, to distinguish it from a *full* QR decomposition which finds ${\mathbf{Q}}\in{\mathbb{R}}^{n\times n}$ and zero-pads ${\mathbf{R}}$ accordingly. Finally, we state a standard result on the rank-$r$ approximation of a matrix expressed as a product of two matrices. The proof of this result can be found in [@boutsidis2014near Lemma 8]. \[lemma:best-rank-ONB\] Consider the matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$ and let ${\mathbf{Q}}\in{\mathbb{R}}^{n\times m}$ be a matrix that has $m<n$ orthonormal columns. For any positive integer $r\leq m$, we have: $$\llbracket{\mathbf{Q}}^T{\mathbf{K}}\rrbracket_r = \operatorname*{arg\,min}_{\mathbf{T}:\; \operatorname{rank}(\mathbf{T})\leq r}\| {\mathbf{K}}- {\mathbf{Q}}\mathbf{T}\|_F^2.$$

The Standard Nyström Method {#sec:standard-nys}
===========================

The Nyström method generates a fixed-rank approximation of the SPSD kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$ by selecting a small set of bases referred to as “landmark points”.
The simplest and most common selection technique is uniform sampling without replacement [@Nystrom2001; @kumar2012sampling]. In this case, each data point in the data set is sampled with the same probability, i.e., $p_i=\frac{1}{n}$, for $i=1,\ldots,n$. The advantage of this technique is the low computational complexity associated with sampling landmark points. However, uniform sampling does not take into account the nonuniform structure of many data sets and the resulting kernel matrices. Therefore, sampling mechanisms with respect to nonuniform distributions have been proposed to address this problem. This line of work requires the computation of (approximate) statistical leverage scores of ${\mathbf{K}}$, which is more expensive than uniform sampling [@Nystrom_Kernel_Approx; @mahoney2009cur; @FastApproxCohLev]. In addition, leverage score sampling often requires computing the entire kernel matrix ${\mathbf{K}}$, which negates one of the principal benefits of the Nyström method. A comprehensive review and comparison of uniform and nonuniform landmark selection techniques can be found in [@kumar2012sampling; @sun2015review]. More recently, generating landmark points using out-of-sample extensions of input data has been shown to be effective for high quality Nyström approximations. This approach originates from the work of Zhang et al. [@zhang2010clusteredNys; @zhang2008improved], and it is based on the observation that the Nyström approximation error depends on the quantization error of encoding the data set with the landmark points. Hence, the landmark points are selected to be the centroids found from K-means clustering. In machine learning and pattern recognition, K-means clustering is a well-established technique to partition a data set into clusters by trying to minimize the total sum of the squared Euclidean distances of each point to the closest cluster center [@Bishop]. In general, assume that a set of $m\ll n$ landmark points in ${\mathbb{R}}^p$, denoted by ${\mathbf{Z}}=[{\mathbf{z}}_1,\ldots,{\mathbf{z}}_m]\in{\mathbb{R}}^{p\times m}$, is given. Let us consider two matrices ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}\in{\mathbb{R}}^{m\times m}$, where $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$ and $W_{ij}=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$. The Nyström method uses both ${\mathbf{C}}$ and ${\mathbf{W}}$ to construct an approximation of the kernel matrix ${\mathbf{K}}$ in the following form with rank at most $m$: $${\mathbf{K}}\approx{\mathbf{G}}={\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T.\label{eq:Nystrom-low-rank}$$ For the fixed-rank case, the Nyström method generates a rank-$r$ approximation of the kernel matrix, $r\leq m$, by computing the best rank-$r$ approximation of the inner matrix ${\mathbf{W}}$ [@li2015large; @gittens2016revisiting; @lu2016large], which results in ${\mathbf{G}}_{(r)}^{nys}={\mathbf{C}}\llbracket{\mathbf{W}}\rrbracket_r^\dagger{\mathbf{C}}^T$, where $\llbracket{\mathbf{W}}\rrbracket_r^\dagger$ represents the pseudo-inverse of $\llbracket{\mathbf{W}}\rrbracket_r$. Thus, the eigenvalue decomposition of the matrix ${\mathbf{W}}$ should be computed to find the top $r$ eigenvalues and corresponding eigenvectors. Let ${\boldsymbol{\Sigma}}_r\in{\mathbb{R}}^{r\times r}$ and ${\mathbf{V}}_r\in{\mathbb{R}}^{m\times r}$ contain the top $r$ eigenvalues and the corresponding orthonormal eigenvectors of ${\mathbf{W}}$, respectively.
Then, the rank-$r$ approximation of ${\mathbf{K}}$ can be expressed as: $${\mathbf{G}}_{(r)}^{nys}={\mathbf{L}}^{nys}\left({\mathbf{L}}^{nys}\right)^T,\; {\mathbf{L}}^{nys}={\mathbf{C}}{\mathbf{V}}_r\Big({\boldsymbol{\Sigma}}_r^\dagger\Big)^{1/2}.\label{eq:Nys-low-rank-ll}$$ The time complexity of the Nyström method to form ${\mathbf{L}}^{nys}$ is ${\mathcal{O}}(pnm+m^2r+nmr)$, where it takes ${\mathcal{O}}(pnm)$ time to construct the matrices ${\mathbf{C}}$ and ${\mathbf{W}}$. It takes ${\mathcal{O}}(m^2r)$ time to perform the partial eigenvalue decomposition of ${\mathbf{W}}$, and ${\mathcal{O}}(nmr)$ represents the cost of the matrix multiplication ${\mathbf{C}}{\mathbf{V}}_r$. Thus, for $r\leq m\ll n$, the computation cost to form the rank-$r$ approximation of the kernel matrix is only linear in the data set size $n$. The eigenvalues and eigenvectors of ${\mathbf{K}}$ can be estimated by using the rank-$r$ approximation in \[eq:Nys-low-rank-ll\], and in fact this approach provides the exact eigenvalue decomposition of ${\mathbf{G}}_{(r)}^{nys}$. The first step is to find the eigenvalue decomposition of the $r\times r$ matrix: $$\big({\mathbf{L}}^{nys}\big)^T{\mathbf{L}}^{nys}=\widetilde{{\mathbf{V}}}\widetilde{{\boldsymbol{\Sigma}}}\widetilde{{\mathbf{V}}}^T,$$ where $\widetilde{{\mathbf{V}}},\widetilde{{\boldsymbol{\Sigma}}}\in{\mathbb{R}}^{r\times r}$. Then, the estimates of $r$ leading eigenvalues and eigenvectors of ${\mathbf{K}}$ are obtained as follows [@zhang2010clusteredNys]: $$\widehat{{\mathbf{U}}}_{r}^{nys}={\mathbf{L}}^{nys}\widetilde{{\mathbf{V}}}\Big(\widetilde{{\boldsymbol{\Sigma}}}^\dagger\Big)^{1/2},\;\widehat{\boldsymbol{\Lambda}}_r^{nys}=\widetilde{{\boldsymbol{\Sigma}}}.$$ The overall procedure to estimate the $r$ leading eigenvalues/eigenvectors is summarized in Algorithm \[alg:StandardNys\]. The time complexity of the approximate eigenvalue decomposition is ${\mathcal{O}}(nr^2+r^3)$, in addition to the cost of computing ${\mathbf{L}}^{nys}$ mentioned earlier. Thus, the total cost of computing the approximate eigenvalue decomposition of ${\mathbf{K}}$ is linear in $n$.

**Input:** data set ${\mathbf{X}}$, $m$ landmark points ${\mathbf{Z}}$, kernel function $\kappa$, target rank $r$ ($r\leq m$)

**Output:** estimates of $r$ leading eigenvectors and eigenvalues of the kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$: $\widehat{{\mathbf{U}}}_r^{nys}\in{\mathbb{R}}^{n\times r}$, $\widehat{{\boldsymbol{\Lambda}}}_r^{nys}\in{\mathbb{R}}^{r\times r}$

1. Form ${\mathbf{C}}$ and ${\mathbf{W}}$: $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$, $W_{ij}=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$
2. Compute EVD: ${\mathbf{W}}={\mathbf{V}}{\boldsymbol{\Sigma}}{\mathbf{V}}^T$
3. Form the matrix: ${\mathbf{L}}^{nys}={\mathbf{C}}{\mathbf{V}}_r\Big({\boldsymbol{\Sigma}}_r^\dagger\Big)^{1/2}$
4. Compute EVD: $({\mathbf{L}}^{nys})^T{\mathbf{L}}^{nys}=\widetilde{{\mathbf{V}}}\widetilde{{\boldsymbol{\Sigma}}}\widetilde{{\mathbf{V}}}^T$
5. $\widehat{{\mathbf{U}}}_{r}^{nys}={\mathbf{L}}^{nys}\widetilde{{\mathbf{V}}}\Big(\widetilde{{\boldsymbol{\Sigma}}}^\dagger\Big)^{1/2}$ and $\widehat{\boldsymbol{\Lambda}}_r^{nys}=\widetilde{{\boldsymbol{\Sigma}}}$

Improved Nyström Approximation via QR Decomposition {#sec:improved-nys}
====================================================

In the previous section, we explained the Nyström method for computing rank-$r$ approximations of SPSD kernel matrices based on selecting a small set of landmark points.
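As a point of reference before describing the modification, the following is a minimal NumPy sketch of the procedure in Algorithm \[alg:StandardNys\]; it is illustrative only, the data set, landmark points, and kernel function are assumed to be supplied by the caller, and the top-$r$ eigenvalues of ${\mathbf{W}}$ are assumed positive.

```python
import numpy as np

def standard_nystrom(X, Z, kernel, r):
    """Standard fixed-rank Nystrom: approximate top-r eigenpairs of K (sketch)."""
    C = kernel(X, Z)                        # n x m similarities to the landmarks
    W = kernel(Z, Z)                        # m x m inner matrix
    s, V = np.linalg.eigh(W)                # ascending eigenvalues of W
    s, V = s[::-1][:r], V[:, ::-1][:, :r]   # keep the top-r eigenpairs
    L = (C @ V) / np.sqrt(s)                # L^{nys} = C V_r Sigma_r^{-1/2}
    s2, Vt = np.linalg.eigh(L.T @ L)        # exact EVD of the r x r matrix
    s2, Vt = s2[::-1], Vt[:, ::-1]
    U_hat = (L @ Vt) / np.sqrt(s2)          # approximate top-r eigenvectors of K
    return U_hat, s2                        # s2 holds the approximate eigenvalues

# example usage with a Gaussian kernel and in-sample landmarks (bandwidth c assumed):
# gauss = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :])**2).sum(-1) / c)
# U_hat, lam_hat = standard_nystrom(X, X[:m], gauss, r)
```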
Although the final goal is to find an approximation that has rank no greater than $r$, it is often preferred to select $m>r$ landmark points and then restrict the resultant approximation to have rank at most $r$. The main intuition is that selecting $m>r$ landmark points and then restricting the approximation to a lower rank-$r$ space has a regularization effect which can lead to more accurate approximations [@kumar2012sampling; @gittens2016revisiting]. For example, when landmark points are chosen to be centroids from K-means clustering, more landmark points lead to smaller quantization error of the data set, and thus higher quality Nyström approximations. In the standard Nyström method presented in Algorithm \[alg:StandardNys\], the rank of the matrix ${\mathbf{G}}$ is restricted by computing the best rank-$r$ approximation of the inner matrix ${\mathbf{W}}$: ${\mathbf{G}}_{(r)}^{nys}={\mathbf{C}}\llbracket{\mathbf{W}}\rrbracket_r^\dagger{\mathbf{C}}^T$. Since the inner matrix in the representation of ${\mathbf{G}}_{(r)}^{nys}$ has rank no greater than $r$, it follows that ${\mathbf{G}}_{(r)}^{nys}$ has rank at most $r$. The main benefit of this technique is the low computational cost of performing an exact eigenvalue decomposition on a relatively small matrix of size $m\times m$. However, the standard Nyström method totally ignores the structure of the matrix ${\mathbf{C}}$ in the rank reduction process. In fact, since the rank-$r$ approximation ${\mathbf{G}}_{(r)}^{nys}$ does not utilize the full knowledge of ${\mathbf{C}}$, the selection of more landmark points does not guarantee an improved low-rank approximation in the standard Nyström method, cf. Remarks \[rmk:1\] and \[rmk:2\]. To solve this problem, we present an efficient method to compute the best rank-$r$ approximation of the matrix ${\mathbf{G}}={\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$, for given matrices ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}\in{\mathbb{R}}^{m\times m}$. In contrast with the standard Nyström method, the modified approach takes advantage of both matrices ${\mathbf{C}}$ and ${\mathbf{W}}$. To begin, let us consider the best[^2] rank-$r$ approximation of the matrix ${\mathbf{G}}$ in any unitarily invariant norm $\|\cdot\|$, such as the Frobenius norm or trace norm: $$\begin{aligned} {\mathbf{G}}_{(r)}^{opt}&{\stackrel{\text{\tiny def}}{=}}\operatorname*{arg\,min}_{{\mathbf{G}}':\;\text{rank}({\mathbf{G}}')\leq r} \|{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T-{\mathbf{G}}'\| \nonumber\\ &\overset{(i)}{=} \operatorname*{arg\,min}_{{\mathbf{G}}':\;\text{rank}({\mathbf{G}}')\leq r} \|{\mathbf{Q}}\underbrace{{\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T}_{m\times m}{\mathbf{Q}}^T-{\mathbf{G}}'\|\nonumber\\ &\overset{(ii)}{=} \operatorname*{arg\,min}_{{\mathbf{G}}':\;\text{rank}({\mathbf{G}}')\leq r} \|\left({\mathbf{Q}}{\mathbf{V}}'\right){\boldsymbol{\Sigma}}'\left({\mathbf{Q}}{\mathbf{V}}'\right)^T-{\mathbf{G}}'\|\nonumber\\ & = \left({\mathbf{Q}}{\mathbf{V}}'_r\right) {\boldsymbol{\Sigma}}'_r\left({\mathbf{Q}}{\mathbf{V}}'_r\right)^T,\label{eq:optimal-nys}\end{aligned}$$ where (i) follows from the QR decomposition of ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$; ${\mathbf{C}}={\mathbf{Q}}{\mathbf{R}}$, where ${\mathbf{Q}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{R}}\in{\mathbb{R}}^{m\times m}$. 
To get (ii), the eigenvalue decomposition of the $m\times m$ matrix ${\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T$ is computed, ${\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T={\mathbf{V}}'{\boldsymbol{\Sigma}}'{\mathbf{V}}'^T$, where the diagonal matrix ${\boldsymbol{\Sigma}}'\in{\mathbb{R}}^{m\times m}$ contains $m$ eigenvalues in descending order on the main diagonal and the columns of ${\mathbf{V}}'\in{\mathbb{R}}^{m\times m}$ are the corresponding eigenvectors. Moreover, we note that the columns of ${\mathbf{Q}}{\mathbf{V}}'\in{\mathbb{R}}^{n\times m}$ are orthonormal because both ${\mathbf{Q}}$ and ${\mathbf{V}}'$ have orthonormal columns. Thus, the decomposition $({\mathbf{Q}}{\mathbf{V}}'){\boldsymbol{\Sigma}}'({\mathbf{Q}}{\mathbf{V}}')^T$ contains the $m$ eigenvalues and orthonormal eigenvectors of the Nyström approximation ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$. Hence, the best rank-$r$ approximation of ${\mathbf{G}}={\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ is then computed using the $r$ leading eigenvalues ${\boldsymbol{\Sigma}}'_r\in{\mathbb{R}}^{r\times r}$ and corresponding eigenvectors ${\mathbf{Q}}{\mathbf{V}}'_r\in{\mathbb{R}}^{n\times r}$, as given in \[eq:optimal-nys\]. Thus, the estimates of the top $r$ eigenvalues and eigenvectors of the kernel matrix ${\mathbf{K}}$ from the Nyström approximation ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ are obtained as follows: $$\widehat{{\mathbf{U}}}_r^{opt}={\mathbf{Q}}{\mathbf{V}}'_r,\;\;\widehat{{\boldsymbol{\Lambda}}}_r^{opt}={\boldsymbol{\Sigma}}'_r.$$ These estimates can also be used to approximate the kernel matrix as ${\mathbf{K}}\approx{\mathbf{L}}^{opt}\left({\mathbf{L}}^{opt}\right)^T$, where ${\mathbf{L}}^{opt}=\widehat{{\mathbf{U}}}_r^{opt}\big(\widehat{{\boldsymbol{\Lambda}}}_r^{opt}\big)^{1/2}$. The modified method for estimating the $r$ leading eigenvalues/eigenvectors of the kernel matrix ${\mathbf{K}}$ is presented in Algorithm \[alg:NysQR\]. The time complexity of this method is ${\mathcal{O}}(pnm+nm^2+m^3+nmr)$, where ${\mathcal{O}}(pnm)$ represents the cost to form matrices ${\mathbf{C}}$ and ${\mathbf{W}}$. The complexity of the QR decomposition is ${\mathcal{O}}(nm^2)$ and it takes ${\mathcal{O}}(m^3)$ time to compute the eigenvalue decomposition of ${\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T$. Finally, the cost to compute the matrix multiplication ${\mathbf{Q}}{\mathbf{V}}'_r$ is ${\mathcal{O}}(nmr)$.

**Input:** data set ${\mathbf{X}}$, $m$ landmark points ${\mathbf{Z}}$, kernel function $\kappa$, target rank $r$ ($r\leq m$)

**Output:** estimates of $r$ leading eigenvectors and eigenvalues of the kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$: $\widehat{{\mathbf{U}}}_r^{opt}\in{\mathbb{R}}^{n\times r}$, $\widehat{{\boldsymbol{\Lambda}}}_r^{opt}\in{\mathbb{R}}^{r\times r}$

1. Form ${\mathbf{C}}$ and ${\mathbf{W}}$: $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$, $W_{ij}=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$
2. Perform the thin QR decomposition: ${\mathbf{C}}={\mathbf{Q}}{\mathbf{R}}$
3. Compute EVD: ${\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T={\mathbf{V}}'{\boldsymbol{\Sigma}}'{\mathbf{V}}'^T$
4. $\widehat{{\mathbf{U}}}_r^{opt}={\mathbf{Q}}{\mathbf{V}}'_r$ and $\widehat{{\boldsymbol{\Lambda}}}_r^{opt}={\boldsymbol{\Sigma}}'_r$

We can compare the computational complexity of Nyström via QR decomposition (Algorithm \[alg:NysQR\]) with that of the standard Nyström method (Algorithm \[alg:StandardNys\]).
Since our focus in this paper is on large-scale data sets with $n$ large, we only consider terms involving $n$ which lead to dominant computation costs. Based on our previous discussion, it takes $\mathcal{C}_{nys}={\mathcal{O}}(pnm+nmr+nr^2)$ time to compute the eigenvalue decomposition using the standard Nyström method, while the cost of the modified technique is $\mathcal{C}_{opt}={\mathcal{O}}(pnm+nmr+nm^2)$. Thus, for data of even moderate dimension with $p\gtrsim m$, the dominant term in both $\mathcal{C}_{nys}$ and $\mathcal{C}_{opt}$ is ${\mathcal{O}}(pnm)$. This means that the increase in computation cost ($nm^2$ vs. $nr^2$) is only noticeable if $m > r$ and $p \approx m$. Typically $m\ll p$ so the $pnm$ term dominates for both algorithms, hence there is no significant increase in cost, as is the case in our runtime example shown in Fig. \[fig:runtime\]. \[example1\] In the rest of this section, we present a simple example to gain some intuition on the superior performance of the modified technique compared to standard Nyström. We consider a small kernel matrix of size $3\times 3$: $${\mathbf{K}}=\left[\begin{array}{ccc} 1 & 0 & 10\\ 0 & 1.01 & 0\\ 10 & 0 & 100 \end{array}\right].$$ One can find, for example, a data matrix ${\mathbf{X}}$ that generates this kernel matrix as ${\mathbf{K}}={\mathbf{X}}^T{\mathbf{X}}$. Here, the goal is to compute the rank $r=1$ approximation of ${\mathbf{K}}$. Sample $m=2$ columns of ${\mathbf{K}}$, and suppose we choose the first and second columns: $${\mathbf{C}}=\left[\begin{array}{cc} 1 & 0\\ 0 & 1.01\\ 10 & 0 \end{array}\right],\;\;{\mathbf{W}}=\left[\begin{array}{cc} 1 & 0\\ 0 & 1.01 \end{array}\right].$$ In the standard Nyström method, the best rank-$1$ approximation of the inner matrix ${\mathbf{W}}$ is first computed. Then, the rank-$1$ approximation of ${\mathbf{K}}$ using standard Nyström is given by: $${\mathbf{G}}_{(1)}^{nys} = {\mathbf{C}}\llbracket{\mathbf{W}}\rrbracket_1^\dagger{\mathbf{C}}^T= \left[\begin{array}{ccc} 0 & 0 & 0\\ 0 & 1.01 & 0\\ 0 & 0 & 0 \end{array}\right].$$ The normalized approximation error in terms of the Frobenius norm and trace norm is large: $\|{\mathbf{K}}-{\mathbf{G}}_{(1)}^{nys}\|_F/\|{\mathbf{K}}\|_F=0.99$ and $\|{\mathbf{K}}-{\mathbf{G}}_{(1)}^{nys}\|_*/\|{\mathbf{K}}\|_*=0.99$. 
On the other hand, using the same matrices ${\mathbf{C}}$ and ${\mathbf{W}}$, the modified method first computes the QR decomposition of ${\mathbf{C}}={\mathbf{Q}}{\mathbf{R}}$: $${\mathbf{Q}}=\left[\begin{array}{cc} \frac{1}{\sqrt{101}} & 0\\ 0 & 1\\ \frac{10}{\sqrt{101}} & 0 \end{array}\right],\;\;{\mathbf{R}}=\left[\begin{array}{cc} \sqrt{101} & 0\\ 0 & 1.01 \end{array}\right].$$ Then, the product of three matrices ${\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T$ is computed to find its eigenvalue decomposition ${\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T={\mathbf{V}}'{\boldsymbol{\Sigma}}'{\mathbf{V}}'^T$: $$\begin{aligned} {\mathbf{R}}{\mathbf{W}}^\dagger{\mathbf{R}}^T&= \left[\begin{array}{cc} \sqrt{101} & 0\\ 0 & 1.01 \end{array}\right] \left[\begin{array}{cc} 1 & 0\\ 0 & \frac{1}{1.01} \end{array}\right] \left[\begin{array}{cc} \sqrt{101} & 0\\ 0 & 1.01 \end{array}\right] \nonumber \\ &= \left[\begin{array}{cc} 101 & 0\\ 0 & 1.01 \end{array}\right]\nonumber \\ &= \underbrace{\left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right]}_{{\mathbf{V}}'}\underbrace{\left[\begin{array}{cc} 101 & 0\\ 0 & 1.01 \end{array}\right]}_{{\boldsymbol{\Sigma}}'}\underbrace{\left[\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right]}_{{\mathbf{V}}'^T}.\end{aligned}$$ Finally, the rank-$1$ approximation of the kernel matrix in the modified method is obtained via ${\mathbf{G}}_{(1)}^{opt}=({\mathbf{Q}}{\mathbf{V}}'_1){\boldsymbol{\Sigma}}'_1({\mathbf{Q}}{\mathbf{V}}'_1)^T$: $$\begin{aligned} \label{eq:example1_opt} {\mathbf{G}}_{(1)}^{opt}&=\left[\begin{array}{cc} \frac{1}{\sqrt{101}} & 0\\ 0 & 1\\ \frac{10}{\sqrt{101}} & 0 \end{array}\right]\left[\begin{array}{cc} 101 & 0\\ 0 & 0 \end{array}\right] \left[\begin{array}{ccc} \frac{1}{\sqrt{101}} & 0 & \frac{10}{\sqrt{101}}\\ 0 & 1 & 0 \end{array}\right]\nonumber\\ &=\left[\begin{array}{ccc} 1 & 0 & 10\\ 0 & 0 & 0\\ 10 & 0 & 100 \end{array}\right],\end{aligned}$$ where $\|{\mathbf{K}}-{\mathbf{G}}_{(1)}^{opt}\|_F/\|{\mathbf{K}}\|_F=0.01$ and $\|{\mathbf{K}}-{\mathbf{G}}_{(1)}^{opt}\|_*/\|{\mathbf{K}}\|_*=0.01$. In fact, one can show that our approximation is the same as the best rank-$1$ approximation formed using full knowledge of ${\mathbf{K}}$, i.e., ${\mathbf{G}}_{(1)}^{opt}=\llbracket{\mathbf{K}}\rrbracket_1$. Furthermore, by taking $K_{22} \searrow 1$, we can make the improvement of the modified method over the standard method arbitrarily large. Theoretical Results {#sec:theory} =================== We present theoretical results to show that the modified Nyström method provides improved rank-$r$ approximations of SPSD kernel matrices. In particular, given fixed matrices ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}\in{\mathbb{R}}^{m\times m}$ with $m>r$ and generated by in-sample landmark points, the Nyström method via QR decomposition is guaranteed to generate improved rank-$r$ approximation compared to the standard Nyström method in terms of the *trace norm*. In addition, we present a theorem which shows that as the number of landmark points $m$ increases, the modified Nyström method generates more accurate rank-$r$ approximation of the kernel matrix ${\mathbf{K}}$ in terms of the trace norm. This is an important advantage of Nyström via QR decomposition, since the standard Nyström method may generate lower quality rank-$r$ approximations by increasing $m$ due to the sub-optimal rank restriction step, cf. Section \[sec:exper\]. 
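To make the comparison of Example \[example1\] concrete, the following NumPy sketch (illustrative only, not the reference implementation) carries out both rank reductions on the $3\times 3$ matrix of the example and reproduces the trace-norm errors reported above.

```python
import numpy as np

def nystrom_standard_rank_r(C, W, r):
    """G^{nys}_(r) = C [[W]]_r^+ C^T: filter only the inner matrix W."""
    s, V = np.linalg.eigh(W)
    s, V = s[::-1][:r], V[:, ::-1][:, :r]        # top-r eigenpairs of W
    return C @ (V / s) @ V.T @ C.T               # C V_r Sigma_r^{-1} V_r^T C^T

def nystrom_modified_rank_r(C, W, r):
    """G^{opt}_(r) = [[C W^+ C^T]]_r via the thin QR decomposition of C."""
    Q, R = np.linalg.qr(C)                       # thin QR: Q is n x m, R is m x m
    s, V = np.linalg.eigh(R @ np.linalg.pinv(W) @ R.T)
    s, V = s[::-1][:r], V[:, ::-1][:, :r]        # top-r eigenpairs of R W^+ R^T
    U = Q @ V                                    # eigenvectors of C W^+ C^T
    return U @ np.diag(s) @ U.T

K = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.01, 0.0],
              [10.0, 0.0, 100.0]])
C, W = K[:, :2], K[:2, :2]                       # first two columns and their intersection
for name, G in [("standard", nystrom_standard_rank_r(C, W, 1)),
                ("modified", nystrom_modified_rank_r(C, W, 1))]:
    rel = np.linalg.norm(K - G, 'nuc') / np.linalg.norm(K, 'nuc')
    print(name, round(rel, 2))                   # prints 0.99 and 0.01, respectively
```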
Main Results {#sec:thm-main} ------------ In order to compare the accuracy of modified Nyström with the standard Nyström method, we first provide an alternative formulation of these two methods. We assume the landmark points ${\mathbf{Z}}=[{\mathbf{z}}_1,\ldots,{\mathbf{z}}_m]$ are *in-sample*, meaning they are selected in any fashion (deterministic or random) from among the set of input data points, so that the matrix ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ contains $m$ columns of the kernel matrix ${\mathbf{K}}$. This column selection process can be viewed as forming a *sampling* matrix ${\mathbf{P}}\in{\mathbb{R}}^{n\times m}$ that has exactly one nonzero entry in each column, where its location corresponds to the index of the selected landmark point. Then, the matrix product ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}\in{\mathbb{R}}^{n\times m}$ contains $m$ columns sampled from the kernel matrix ${\mathbf{K}}$ and ${\mathbf{W}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}\in{\mathbb{R}}^{m\times m}$ is the intersection of the $m$ columns with the corresponding $m$ rows of ${\mathbf{K}}$. Let us define ${\mathbf{D}}{\stackrel{\text{\tiny def}}{=}}{\mathbf{K}}^{1/2}{\mathbf{P}}\in{\mathbb{R}}^{n\times m}$, which means that ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}={\mathbf{K}}^{1/2}{\mathbf{D}}$ and ${\mathbf{W}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}={\mathbf{D}}^T{\mathbf{D}}$. Moreover, we consider the singular value decomposition of ${\mathbf{D}}={\mathbf{F}}{\mathbf{S}}{\mathbf{N}}^T$, where the columns of ${\mathbf{F}}\in{\mathbb{R}}^{n\times m}$ are the left singular vectors of ${\mathbf{D}}$, and we have ${\mathbf{S}},{\mathbf{N}}\in{\mathbb{R}}^{m\times m}$. Thus, we get the eigenvalue decomposition of the matrix ${\mathbf{W}}={\mathbf{D}}^T{\mathbf{D}}={\mathbf{N}}{\mathbf{S}}^2{\mathbf{N}}^T$. For simplicity of presentation, we assume ${\mathbf{D}}$ and ${\mathbf{W}}$ have full rank, though the results still hold as long as they have rank greater than or equal to $r$. In the standard Nyström method, the rank-$r$ approximation is ${\mathbf{G}}_{(r)}^{nys}={\mathbf{L}}^{nys}({\mathbf{L}}^{nys})^T$, where ${\mathbf{L}}^{nys}$ can be written as: $$\begin{aligned} {\mathbf{L}}^{nys} &= {\mathbf{C}}\left(\llbracket{\mathbf{W}}\rrbracket_r^\dagger\right)^{1/2}= \left({\mathbf{K}}^{1/2}{\mathbf{F}}{\mathbf{S}}{\mathbf{N}}^T\right)\left({\mathbf{N}}_r{\mathbf{S}}_r^{-2}{\mathbf{N}}_r^T\right)^{1/2}\nonumber\\ & = {\mathbf{K}}^{1/2} {\mathbf{F}}_r{\mathbf{N}}_r^T,\end{aligned}$$ where we have used $({\mathbf{N}}_r{\mathbf{S}}_r^{-2}{\mathbf{N}}_r^T)^{1/2}={\mathbf{N}}_r{\mathbf{S}}_r^{-1}{\mathbf{N}}_r^T$, and the following two properties: $${\mathbf{N}}^T{\mathbf{N}}_r=\left[\begin{array}{c} {\mathbf{I}}_{r\times r}\\ \mathbf{0}_{(m-r)\times r} \end{array}\right],\;\;\;{\mathbf{F}}\left[\begin{array}{c} {\mathbf{N}}_r^T\\ \mathbf{0}_{(m-r)\times m} \end{array}\right]={\mathbf{F}}_r{\mathbf{N}}_r^T.$$ Since the columns of ${\mathbf{N}}_r$ are orthonormal, i.e., ${\mathbf{N}}_r^T{\mathbf{N}}_r={\mathbf{I}}_{r\times r}$, the rank-$r$ approximation of the kernel matrix ${\mathbf{K}}$ in the standard Nyström method is given by: $${\mathbf{G}}_{(r)}^{nys}={\mathbf{L}}^{nys}\left({\mathbf{L}}^{nys}\right)^T={\mathbf{K}}^{1/2}{\mathbf{F}}_r{\mathbf{F}}_r^T{\mathbf{K}}^{1/2}.\label{eq:G-nys-alt}$$ Next, we present an alternative formulation of the rank-$r$ approximation ${\mathbf{G}}_{(r)}^{opt}$ in terms of the left singular vectors of ${\mathbf{D}}$. 
The modified Nyström method finds the best rank-$r$ approximation of ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$, and observe that: $${\mathbf{C}}\big({\mathbf{W}}^\dagger\big)^{1/2}=\left({\mathbf{K}}^{1/2}{\mathbf{F}}{\mathbf{S}}{\mathbf{N}}^T\right)\left({\mathbf{N}}{\mathbf{S}}^{-1}{\mathbf{N}}^T\right)={\mathbf{K}}^{1/2}{\mathbf{F}}{\mathbf{N}}^T.$$ Thus, we get ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T={\mathbf{K}}^{1/2}{\mathbf{F}}{\mathbf{F}}^T{\mathbf{K}}^{1/2}$, and the best rank-$r$ approximation has the following form: $${\mathbf{G}}_{(r)}^{opt}=\llbracket{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T\rrbracket_r=\llbracket{\mathbf{K}}^{1/2}{\mathbf{F}}\rrbracket_r{\mathbf{F}}^T{\mathbf{F}}\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r,\label{eq:nys-opt-v1}$$ where we used ${\mathbf{F}}^T{\mathbf{F}}={\mathbf{I}}_{m\times m}$. Based on [@wang2017scalable Lemma 6], let ${\mathbf{H}}\in{\mathbb{R}}^{n\times r}$ be the orthonormal bases of the rank-$r$ matrix ${\mathbf{F}}\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r\in{\mathbb{R}}^{n\times n}$. Then, we have ${\mathbf{F}}\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r={\mathbf{H}}{\mathbf{H}}^T{\mathbf{K}}^{1/2}$, which allows us to simplify : $${\mathbf{G}}_{(r)}^{opt}={\mathbf{K}}^{1/2}{\mathbf{H}}{\mathbf{H}}^T{\mathbf{K}}^{1/2}.\label{eq:alt-G-our}$$ In the following, we present a theorem which shows that the modified Nyström method (Algorithm \[alg:NysQR\]) generates improved rank-$r$ approximation of the kernel matrix ${\mathbf{K}}$ compared to the standard Nyström method (Algorithm \[alg:StandardNys\]). To have a fair comparison, it is assumed that both methods have access to the same fixed matrices ${\mathbf{C}}$ and ${\mathbf{W}}$. \[thm:nys-qr-sta\] Let ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$ be an SPSD kernel matrix, and $r$ be the target rank. Let ${\mathbf{P}}$ be any $n \times m$ matrix, with $m\geq r$, such that ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}\in{\mathbb{R}}^{m\times m}$. Then, we have: $$\|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_*\leq \|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{nys}\|_*,$$ where the Nyström method via QR decomposition generates ${\mathbf{G}}_{(r)}^{opt}=\llbracket{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T\rrbracket_r$, and the standard Nyström method produces ${\mathbf{G}}_{(r)}^{nys}={\mathbf{C}}\llbracket{\mathbf{W}}\rrbracket_r^\dagger{\mathbf{C}}^T$. We start with the alternative formulation of ${\mathbf{G}}_{(r)}^{opt}={\mathbf{K}}^{1/2}{\mathbf{H}}{\mathbf{H}}^T{\mathbf{K}}^{1/2}$, where the columns of ${\mathbf{H}}\in{\mathbb{R}}^{n\times r}$ are orthonormal, cf. . 
Note that ${\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}$ is a SPSD matrix, and thus its trace norm is equal to the trace of this matrix: $$\begin{aligned} \|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_*&=\operatorname{tr}\left({\mathbf{K}}^{1/2}\big({\mathbf{I}}_{n\times n}- {\mathbf{H}}{\mathbf{H}}^T\big){\mathbf{K}}^{1/2}\right)\nonumber\\ & \overset{(i)}{=}\operatorname{tr}\left({\mathbf{K}}^{1/2}\big({\mathbf{I}}_{n\times n}- {\mathbf{H}}{\mathbf{H}}^T\big)^2{\mathbf{K}}^{1/2}\right) \nonumber\\ & =\|\big({\mathbf{I}}_{n\times n}-{\mathbf{H}}{\mathbf{H}}^T\big){\mathbf{K}}^{1/2}\|_F^2\nonumber \\ &\overset{(ii)}{=} \|{\mathbf{K}}^{1/2} - {\mathbf{F}}\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r\|_F^2\nonumber \\ &\overset{(iii)}{=} \min_{\mathbf{T}:\;\operatorname{rank}(\mathbf{T})\leq r} \|{\mathbf{K}}^{1/2} - {\mathbf{F}}\mathbf{T}\|_F^2,\label{eq:thm-1-proof} \end{aligned}$$ where (i) follows from $({\mathbf{I}}_{n\times n}- {\mathbf{H}}{\mathbf{H}}^T\big)^2=({\mathbf{I}}_{n\times n}- {\mathbf{H}}{\mathbf{H}}^T\big)$, (ii) is based on the observation ${\mathbf{H}}{\mathbf{H}}^T{\mathbf{K}}^{1/2}={\mathbf{F}}\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r$, and (iii) is based on Lemma \[lemma:best-rank-ONB\]. Let us define the function $f(\mathbf{T}){\stackrel{\text{\tiny def}}{=}}\|{\mathbf{K}}^{1/2} - {\mathbf{F}}\mathbf{T}\|_F^2$, and consider the matrix $\mathbf{T}'\in{\mathbb{R}}^{m\times n}$ with rank no greater than $r$: $$\mathbf{T}'=\left[\begin{array}{c} {\mathbf{F}}_r^T\\ \mathbf{0}_{(m-r)\times n} \end{array}\right]{\mathbf{K}}^{1/2}.\label{eq:T-pr-sta}$$ Then, we see that: $$\begin{aligned} \|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_* & \leq f(\mathbf{T'}) \nonumber\\ &= \|{\mathbf{K}}^{1/2} - {\mathbf{F}}_r{\mathbf{F}}_r^T{\mathbf{K}}^{1/2}\|_F^2\nonumber\\ &= \operatorname{tr}\left({\mathbf{K}}^{1/2}\big({\mathbf{I}}_{n\times n}- {\mathbf{F}}_r {\mathbf{F}}_r^T\big)^2{\mathbf{K}}^{1/2}\right) \nonumber\\ &= \|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{nys}\|_*, \end{aligned}$$ where we used ${\mathbf{F}}\mathbf{T}'={\mathbf{F}}_r{\mathbf{F}}_r^T{\mathbf{K}}^{1/2}$, $({\mathbf{I}}_{n\times n}- {\mathbf{F}}_r {\mathbf{F}}_r^T\big)^2=({\mathbf{I}}_{n\times n}- {\mathbf{F}}_r {\mathbf{F}}_r^T\big)$, the alternative formulation of ${\mathbf{G}}_{(r)}^{nys}={\mathbf{K}}^{1/2}{\mathbf{F}}_r{\mathbf{F}}_r^T{\mathbf{K}}^{1/2}$, cf. , and that ${\mathbf{K}}-{\mathbf{G}}_{(r)}^{nys}$ is SPSD. This completes the proof. In Example \[example1\], we showed that the modified method outperforms the standard Nyström method. The essential structural feature of the example was the presence of a large-magnitude block of the kernel matrix, denoted ${\mathbf{K}}_{21}$ below. The following remark shows that when this block is zero, the two methods perform the same. \[rmk:same\] Let ${\mathbf{P}}\in{\mathbb{R}}^{n\times m}$ be the sampling matrix, where $m$ columns of the kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$ are sampled according to any distribution. Without loss of generality, the matrices ${\mathbf{C}}$ and ${\mathbf{K}}$ can be permuted as follows: $${\mathbf{K}}=\left[\begin{array}{cc} {\mathbf{W}}& {\mathbf{K}}_{21}^T\\ {\mathbf{K}}_{21} & {\mathbf{K}}_{22} \end{array}\right],\;\;{\mathbf{C}}=\left[\begin{array}{c} {\mathbf{W}}\\ {\mathbf{K}}_{21} \end{array}\right],$$ where ${\mathbf{K}}_{21}\in{\mathbb{R}}^{(n-m)\times m}$ and ${\mathbf{K}}_{22}\in{\mathbb{R}}^{(n-m)\times (n-m)}$. 
If ${\mathbf{K}}_{21}=\mathbf{0}_{(n-m)\times m}$, then Nyström via QR decomposition and the standard Nyström method generate the same rank-$r$ approximation of the kernel matrix ${\mathbf{K}}$, i.e., ${\mathbf{G}}_{(r)}^{opt}={\mathbf{G}}_{(r)}^{nys}$. Under the assumption ${\mathbf{K}}_{21}=\mathbf{0}_{(n-m)\times m}$, we have: $${\mathbf{K}}^{1/2}=\left[\begin{array}{cc} {\mathbf{W}}^{1/2} & \mathbf{0}_{m\times (n-m)}\\ \mathbf{0}_{(n-m)\times m} & {\mathbf{K}}_{22}^{1/2} \end{array}\right],$$ and $${\mathbf{D}}={\mathbf{K}}^{1/2}{\mathbf{P}}=\left[\begin{array}{c} {\mathbf{W}}^{1/2}\\ \mathbf{0}_{(n-m)\times m} \end{array}\right].$$ Let us consider the eigenvalue decomposition of ${\mathbf{W}}={\mathbf{V}}{\boldsymbol{\Sigma}}{\mathbf{V}}^T$, where ${\mathbf{V}},{\boldsymbol{\Sigma}}\in{\mathbb{R}}^{m\times m}$; this is also the SVD since ${\mathbf{K}}$, and hence ${\mathbf{W}}$, is SPSD and so ${\boldsymbol{\Sigma}}\ge 0$. Then, the left singular vectors of ${\mathbf{D}}$ have the following form: $${\mathbf{F}}=\left[\begin{array}{c} {\mathbf{V}}\\ \mathbf{0}_{(n-m)\times m} \end{array}\right]\in{\mathbb{R}}^{n\times m}.$$ Thus, we get ${\mathbf{F}}^T{\mathbf{K}}^{1/2}=[{\mathbf{V}}^T{\mathbf{W}}^{1/2},\mathbf{0}_{m\times (n-m)}]$. Note that ${\mathbf{V}}^T{\mathbf{W}}^{1/2}={\boldsymbol{\Sigma}}^{1/2}{\mathbf{V}}^T$, and the best rank-$r$ approximation of ${\mathbf{F}}^T{\mathbf{K}}^{1/2}$ can be written as: $$\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r=\left[{\mathbf{Y}}{\boldsymbol{\Sigma}}_r^{1/2}{\mathbf{V}}_r^T,\mathbf{0}_{m\times (n-m)}\right],$$ where $${\mathbf{Y}}{\stackrel{\text{\tiny def}}{=}}\left[\begin{array}{c} {\mathbf{I}}_{r\times r}\\ \mathbf{0}_{(m-r)\times r} \end{array}\right]\in{\mathbb{R}}^{m\times r}.$$ Next, we compute the matrix $\mathbf{T}'$ in by first simplifying ${\mathbf{F}}_r^T{\mathbf{K}}^{1/2}=\left[{\mathbf{V}}_r^T{\mathbf{W}}^{1/2},\mathbf{0}_{r\times (n-m)}\right]\in{\mathbb{R}}^{r\times n}$. Since ${\mathbf{V}}_r^T{\mathbf{W}}^{1/2}={\boldsymbol{\Sigma}}_r^{1/2}{\mathbf{V}}_r^T$, we have: $$\mathbf{T}'=\left[\begin{array}{c} {\mathbf{F}}_r^T{\mathbf{K}}^{1/2}\\ \mathbf{0}_{(m-r)\times n} \end{array}\right]=\left[{\mathbf{Y}}{\boldsymbol{\Sigma}}_r^{1/2}{\mathbf{V}}_r^T,\mathbf{0}_{m\times (n-m)}\right].$$ Thus, we get $\mathbf{T}'=\llbracket{\mathbf{F}}^T{\mathbf{K}}^{1/2}\rrbracket_r$ and this completes the proof. \[thm:remark-frob\] In Theorem \[thm:nys-qr-sta\], we showed that Nyström via QR decomposition generates improved rank-$r$ approximation of kernel matrices with respect to the trace norm. However, this property is not always satisfied in terms of the Frobenius norm. For example, consider the following $4\times 4$ SPSD kernel matrix: $${\mathbf{K}}=\left[\begin{array}{cccc} 1.0 & 0.7 & 0.9 & 0.4\\ 0.7 & 1.0 & 0.6 & 0.6\\ 0.9 & 0.6 & 1.0 & 0.6\\ 0.4 & 0.6 & 0.6 & 1.0 \end{array}\right].$$ If we sample the first and second column of ${\mathbf{K}}$ to form ${\mathbf{C}}\in{\mathbb{R}}^{4\times 2}$, i.e., $m=2$, then we get $\|{\mathbf{K}}- {\mathbf{G}}_{(1)}^{nys}\|_*=1.3441 $ and $\|{\mathbf{K}}- {\mathbf{G}}_{(1)}^{opt}\|_*=1.3299$. Thus, we have $\|{\mathbf{K}}- {\mathbf{G}}_{(1)}^{opt}\|_*\leq \|{\mathbf{K}}- {\mathbf{G}}_{(1)}^{nys}\|_*$, as expected by Theorem \[thm:nys-qr-sta\]. If we compare these two error terms based on the Frobenius norm, then we see that $\|{\mathbf{K}}- {\mathbf{G}}_{(1)}^{nys}\|_F=0.9397$ and $\|{\mathbf{K}}- {\mathbf{G}}_{(1)}^{opt}\|_F=0.9409$. 
Thus, in this example, the standard Nyström method has slightly better performance in terms of the Frobenius norm. In Section \[sec:exper\], we present comprehensive experimental results on real-world data sets, which show that Nyström via QR decomposition outperforms the standard Nyström method in terms of both trace norm *and* Frobenius norm. Next, we present an important theoretical result on the quality of rank-$r$ Nyström approximations when the number of landmark points is increased. To be formal, let us first sample $m_1\geq r$ landmark points from the set of input data points to generate the rank-$r$ approximation using the modified Nyström method, namely ${\mathbf{G}}_{(r)}^{opt}$. If we sample $(m_2-m_1)\in\mathbb{N}$ additional landmark points to form the new rank-$r$ approximation $\widetilde{{\mathbf{G}}}_{(r)}^{opt}$ using a total of $m_2$ landmark points, the following result states that $ \|{\mathbf{K}}-\widetilde{{\mathbf{G}}}_{(r)}^{opt}\|_*\leq \|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_*$. Therefore, increasing the number of landmark points in the modified Nyström method leads to an improved rank-$r$ approximation. \[thm:nys-qr-std-more\] Let ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$ be an SPSD kernel matrix, and $r$ be the target rank. Consider a sampling matrix or any fixed matrix ${\mathbf{P}}\in{\mathbb{R}}^{n\times m_1}$, with $m_1\geq r$, such that ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}\in{\mathbb{R}}^{n\times m_1}$ and ${\mathbf{W}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}\in{\mathbb{R}}^{m_1\times m_1}$. Then, the modified Nyström method generates the rank-$r$ approximation of ${\mathbf{K}}$ as ${\mathbf{G}}_{(r)}^{opt}=\llbracket{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T\rrbracket_r$. Increase the number of landmark points by concatenating the matrix ${\mathbf{P}}$ with ${\mathbf{P}}^{new}\in{\mathbb{R}}^{n\times (m_2-m_1)}$ with $m_2>m_1$, i.e., $\widetilde{{\mathbf{P}}}=[{\mathbf{P}},{\mathbf{P}}^{new}]\in{\mathbb{R}}^{n\times m_2}$. The resulting matrix $\widetilde{{\mathbf{P}}}$ can be used to form $\widetilde{{\mathbf{C}}}={\mathbf{K}}\widetilde{{\mathbf{P}}}\in{\mathbb{R}}^{n\times m_2}$ and $\widetilde{{\mathbf{W}}}=\widetilde{{\mathbf{P}}}^T{\mathbf{K}}\widetilde{{\mathbf{P}}}\in{\mathbb{R}}^{m_2\times m_2}$, and the modified Nyström method generates $\widetilde{{\mathbf{G}}}_{(r)}^{opt}=\llbracket\widetilde{{\mathbf{C}}}\widetilde{{\mathbf{W}}}^\dagger\widetilde{{\mathbf{C}}}^T\rrbracket_r$. Then this new approximation is better in the sense that: $$\|{\mathbf{K}}-\widetilde{{\mathbf{G}}}_{(r)}^{opt}\|_*\leq \|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_*.$$ Let ${\mathbf{F}}\in{\mathbb{R}}^{n\times m_1}$ be the left singular vectors of ${\mathbf{K}}^{1/2}{\mathbf{P}}\in{\mathbb{R}}^{n\times m_1}$, and $\widetilde{{\mathbf{F}}}\in{\mathbb{R}}^{n\times m_2}$ be the left singular vectors of ${\mathbf{K}}^{1/2}\widetilde{{\mathbf{P}}}=[{\mathbf{K}}^{1/2}{\mathbf{P}},{\mathbf{K}}^{1/2}{\mathbf{P}}^{new}]\in{\mathbb{R}}^{n\times m_2}$.
Then $$\begin{aligned} \|{\mathbf{K}}- \widetilde{{\mathbf{G}}}_{(r)}^{opt}\|_*& =& \min_{\widetilde{\mathbf{T}}:\;\operatorname{rank}(\widetilde{\mathbf{T}})\leq r} \|{\mathbf{K}}^{1/2} - \widetilde{{\mathbf{F}}} \widetilde{\mathbf{T}}\|_F^2\nonumber \\ & \leq & \min_{\mathbf{T}:\;\operatorname{rank}(\mathbf{T})\leq r} \|{\mathbf{K}}^{1/2} - {\mathbf{F}}\mathbf{T}\|_F^2\nonumber\\ &= & \|{\mathbf{K}}- {\mathbf{G}}_{(r)}^{opt}\|_*, \end{aligned}$$ where both equalities follow from \[eq:thm-1-proof\] and the inequality follows from the fact that $\text{range}({\mathbf{F}})\subset\text{range}(\widetilde{{\mathbf{F}}})$. \[rmk:1\] Theorem \[thm:nys-qr-std-more\] is not true for the standard Nyström method. Consider the kernel matrix from Example \[example1\]. By sampling the first two columns, the standard Nyström method gave relative errors of $0.99$ in both the trace and Frobenius norms. Had we sampled just the first column, the standard Nyström method would have returned the same approximation as ${\mathbf{G}}_{(1)}^{opt}$ in Example \[example1\] and thus have $0.01$ relative error in these norms, meaning that adding additional landmark points leads to a worse approximation. See also Remark \[rmk:2\] for experiments.

Extension to Out-of-Sample Landmark Points {#sec:thm-outofsample}
------------------------------------------

The main underlying component in our theoretical results is the existence of a matrix ${\mathbf{P}}\in{\mathbb{R}}^{n\times m}$, such that ${\mathbf{C}}$ and ${\mathbf{W}}$ in the Nyström method can be written as: ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}$ and ${\mathbf{W}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}$. As mentioned earlier, this assumption holds true if the landmark points ${\mathbf{z}}_1,\ldots,{\mathbf{z}}_m$ are selected (randomly or arbitrarily) from the set of input data points, since then ${\mathbf{P}}$ is a sampling matrix consisting of columns of the identity matrix. However, some recent landmark selection techniques utilize out-of-sample extensions of the input data points to improve the accuracy of the Nyström method, e.g., centroids found from K-means clustering. In this case, the matrix ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$, where $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$, does not necessarily contain the columns of ${\mathbf{K}}$. Thus, we cannot hope for a sampling matrix ${\mathbf{P}}$ that satisfies ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}$. We show that, under certain conditions, our theoretical results (Theorems \[thm:nys-qr-sta\] and \[thm:nys-qr-std-more\]) are applicable to the case of out-of-sample landmark points. To be formal, consider a set of $n$ distinct data points in ${\mathbb{R}}^p$, i.e., ${\mathbf{X}}=[{\mathbf{x}}_1,\ldots,{\mathbf{x}}_n]\in{\mathbb{R}}^{p\times n}$, and the Gaussian kernel of the form $\kappa({\mathbf{x}}_i,{\mathbf{x}}_j)=\exp(-\|{\mathbf{x}}_i-{\mathbf{x}}_j\|_2^2/c)$, $c>0$, which leads to the kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$. Let ${\mathbf{Z}}=[{\mathbf{z}}_1,\ldots,{\mathbf{z}}_m]\in{\mathbb{R}}^{p\times m}$ be $m$ cluster centroids from K-means clustering on ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_n$, and we form ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}\in{\mathbb{R}}^{m\times m}$ with $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$ and $W_{ij}=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$. Based on [@LearningWithKernels Theorem 2.18], the kernel matrix ${\mathbf{K}}$ defined on a set of $n$ distinct data points using the Gaussian kernel function has full rank.
Thus by defining ${\mathbf{P}}={\mathbf{K}}^{-1}{\mathbf{C}}\in{\mathbb{R}}^{n\times m}$, we can write ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}$, but because this ${\mathbf{P}}$ is not a sampling matrix, it does not follow that ${\mathbf{W}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}$, so our aim is to show that ${\mathbf{W}}\approx{\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}$. Let us consider the *empirical* kernel map $\Phi_e$, defined on the set of input data points ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_n$: $$\Phi_{e}({\mathbf{z}}): {\mathbf{z}}\mapsto {\mathbf{K}}^{-1/2}\left[\kappa({\mathbf{x}}_1,{\mathbf{z}}),\ldots,\kappa({\mathbf{x}}_n,{\mathbf{z}})\right]^T\in{\mathbb{R}}^n.\label{eq:emp-kernel-map}$$ This map approximates the kernel-induced map $\Phi$ for out-of-sample data points ${\mathbf{z}}_1,\ldots,{\mathbf{z}}_m$ such that $\langle\Phi_e({\mathbf{z}}_i),\Phi_e({\mathbf{z}}_j)\rangle\approx \langle \Phi({\mathbf{z}}_i),\Phi({\mathbf{z}}_j)\rangle=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$ [@LearningWithKernels]. Since ${\mathbf{P}}^T={\mathbf{C}}^T{\mathbf{K}}^{-1}$ and the $j$-th column of ${\mathbf{C}}$ is $[\kappa({\mathbf{x}}_1,{\mathbf{z}}_j),\ldots,\kappa({\mathbf{x}}_n,{\mathbf{z}}_j)]^T$, we have: $$\begin{aligned} {\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}&=\big({\mathbf{C}}^T{\mathbf{K}}^{-1/2}\big)\big({\mathbf{K}}^{-1/2}{\mathbf{C}}\big)\nonumber\\ & = \left[\Phi_e({\mathbf{z}}_1),\ldots,\Phi_e({\mathbf{z}}_m)\right]^T\left[\Phi_e({\mathbf{z}}_1),\ldots,\Phi_e({\mathbf{z}}_m)\right]\nonumber\\ & {\stackrel{\text{\tiny def}}{=}}{\mathbf{W}}_e\in{\mathbb{R}}^{m\times m}.\end{aligned}$$ Therefore, if we use out-of-sample landmark points with the Gaussian kernel function in the Nyström method, there exists a matrix ${\mathbf{P}}$ that satisfies ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}$ and ${\mathbf{W}}={\mathbf{W}}_e+{\mathbf{E}}={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}+{\mathbf{E}}$, where the matrix ${\mathbf{E}}\in{\mathbb{R}}^{m\times m}$ represents the approximation error. It is known that when the relative amount of error is small (e.g., with respect to the spectral norm), ${\mathbf{W}}$ and ${\mathbf{W}}_e$ are close to one another and their eigenvalues and eigenvectors are perturbed proportional to the relative error [@mathias1998relative; @dopico2000weyl]. However, in this work, our goal is to prove that the Nyström approximations ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ and ${\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$ are close to one another when the relative amount of error is small. To demonstrate the importance of this result, note that for any invertible matrices $\mathbf{M}$ and $\mathbf{M}'$, we have the identity $\mathbf{M}'^{-1}-\mathbf{M}^{-1}=-\mathbf{M}'^{-1}(\mathbf{M}'-\mathbf{M})\mathbf{M}^{-1}$. Thus, the small norm of $\mathbf{M}'-\mathbf{M}$ cannot be directly used to conclude $\mathbf{M}'^{-1}$ and $\mathbf{M}^{-1}$ are close to one another. In the following, we present an error bound for the difference between the Nyström approximations, i.e., ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T-{\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$, in terms of the relative amount of error caused by the empirical kernel map. 
\[thm:out-of-sample\] Consider a set of $n$ distinct data points ${\mathbf{X}}=[{\mathbf{x}}_1,\ldots,{\mathbf{x}}_n]\in{\mathbb{R}}^{p\times n}$, and the Gaussian kernel function of the form $\kappa({\mathbf{x}}_i,{\mathbf{x}}_j)=\exp(-\|{\mathbf{x}}_i-{\mathbf{x}}_j\|_2^2/c)$, $c>0$, which leads to the kernel matrix ${\mathbf{K}}\in{\mathbb{R}}^{n\times n}$. Let ${\mathbf{Z}}=[{\mathbf{z}}_1,\ldots,{\mathbf{z}}_m]\in{\mathbb{R}}^{p\times m}$ be arbitrary (e.g., $m$ distinct cluster centroids from K-means clustering on ${\mathbf{x}}_1,\ldots,{\mathbf{x}}_n$), and we form ${\mathbf{C}}\in{\mathbb{R}}^{n\times m}$ and ${\mathbf{W}}\in{\mathbb{R}}^{m\times m}$ with $C_{ij}=\kappa({\mathbf{x}}_i,{\mathbf{z}}_j)$ and $W_{ij}=\kappa({\mathbf{z}}_i,{\mathbf{z}}_j)$. Then, there exists a matrix ${\mathbf{P}}\in{\mathbb{R}}^{n\times m}$ such that ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}$ and ${\mathbf{W}}={\mathbf{W}}_e+{\mathbf{E}}$, where ${\mathbf{W}}_e={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}$ and ${\mathbf{E}}\in{\mathbb{R}}^{m\times m}$ represents the approximation error of the empirical kernel map defined in . Assuming that $\eta{\stackrel{\text{\tiny def}}{=}}\|{\mathbf{W}}_e^{-1/2}{\mathbf{E}}{\mathbf{W}}_e^{-1/2}\|_2<1$ for the positive definite matrix ${\mathbf{W}}_e$, then: $$\frac{\|{\mathbf{C}}{\mathbf{W}}^{\dagger}{\mathbf{C}}^T - {\mathbf{C}}{\mathbf{W}}_e^\dagger {\mathbf{C}}^T\|_2}{\|{\mathbf{K}}\|_2} \leq \frac{\eta}{1-\eta}.$$ As mentioned earlier, the kernel matrix ${\mathbf{K}}$ using the Gaussian kernel function has full rank. Thus, there exists a matrix ${\mathbf{P}}$ such that ${\mathbf{C}}={\mathbf{K}}{\mathbf{P}}$ and the empirical kernel map $\Phi_e$ can be defined as in . Recall the singular value decomposition of ${\mathbf{D}}={\mathbf{K}}^{1/2}{\mathbf{P}}={\mathbf{F}}{\mathbf{S}}{\mathbf{N}}^T$, and the eigenvalue decomposition of ${\mathbf{W}}_e={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}={\mathbf{N}}{\mathbf{S}}^2{\mathbf{N}}^T$. Let ${\mathbf{W}}=\widetilde{{\mathbf{N}}}\widetilde{{\mathbf{S}}}^2\widetilde{{\mathbf{N}}}^T$ be the eigenvalue decomposition of ${\mathbf{W}}$ (since ${\mathbf{W}}$ is also SPSD). Moreover, let us define: $$\widetilde{{\mathbf{E}}}{\stackrel{\text{\tiny def}}{=}}{\mathbf{S}}{\mathbf{N}}^T \widetilde{{\mathbf{N}}} \widetilde{{\mathbf{S}}}^{-2} \widetilde{{\mathbf{N}}}^T{\mathbf{N}}{\mathbf{S}}- {\mathbf{I}}_{m\times m}\in{\mathbb{R}}^{m\times m}.\label{eq:error-perturb}$$ If we have ${\mathbf{E}}=\mathbf{0}_{m\times m}$, i.e., the approximate kernel map $\Phi_e$ is equal to the kernel-induced map, then $\widetilde{{\mathbf{N}}}={\mathbf{N}}$, $\widetilde{{\mathbf{S}}}={\mathbf{S}}$, and $\widetilde{{\mathbf{E}}}=\mathbf{0}_{m\times m}$. Next, we find an upper bound for $\|\widetilde{{\mathbf{E}}}\|_2$ in terms of the relative error $\eta$. Consider the eigenvalue decomposition of ${\mathbf{W}}={\mathbf{W}}_e+{\mathbf{E}}$: $$\widetilde{{\mathbf{N}}}\widetilde{{\mathbf{S}}}^2\widetilde{{\mathbf{N}}}^T = {\mathbf{N}}{\mathbf{S}}^2{\mathbf{N}}^T+{\mathbf{E}}= {\mathbf{N}}{\mathbf{S}}\left({\mathbf{I}}_{m\times m}+\mathbf{O}\right){\mathbf{S}}{\mathbf{N}}^T, \label{eq:perturb1}$$ where ${\mathbf{O}}{\stackrel{\text{\tiny def}}{=}}{\mathbf{S}}^{-1}{\mathbf{N}}^T{\mathbf{E}}{\mathbf{N}}{\mathbf{S}}^{-1}\in{\mathbb{R}}^{m\times m}$ is a symmetric matrix. Note that $\|\mathbf{O}\|_2=\|{\mathbf{N}}{\mathbf{O}}{\mathbf{N}}^T\|_2=\eta$, because of the unitary invariance of the spectral norm. 
If we multiply on the left by ${\mathbf{N}}^T$ and on the right by $\widetilde{{\mathbf{N}}}$, we get: $${\mathbf{N}}^T\widetilde{{\mathbf{N}}}\widetilde{{\mathbf{S}}}^2 = {\mathbf{S}}\left({\mathbf{I}}_{m\times m}+\mathbf{O}\right){\mathbf{S}}{\mathbf{N}}^T\widetilde{{\mathbf{N}}}. \label{eq:perturb2}$$ Next, we multiply on the left by $\widetilde{{\mathbf{N}}}^T{\mathbf{N}}$, and we see that: $$\widetilde{{\mathbf{S}}}^2= \widetilde{{\mathbf{N}}}^T{\mathbf{N}}{\mathbf{S}}^2 {\mathbf{N}}^T\widetilde{{\mathbf{N}}}+\widetilde{{\mathbf{N}}}^T{\mathbf{N}}{\mathbf{S}}\mathbf{O} {\mathbf{S}}{\mathbf{N}}^T\widetilde{{\mathbf{N}}}.\label{eq:perturb3}$$ Finally, we multiply on the left and right by $\widetilde{{\mathbf{S}}}^{-1}$: $${\mathbf{I}}_{m\times m}=\big(\widetilde{{\mathbf{S}}}^{-1}\widetilde{{\mathbf{N}}}^T{\mathbf{N}}{\mathbf{S}}\big)\big({\mathbf{I}}_{m\times m} + \mathbf{O}\big)\big({\mathbf{S}}{\mathbf{N}}^T\widetilde{{\mathbf{N}}}\widetilde{{\mathbf{S}}}^{-1}\big).$$ Thus, we observe that ${\mathbf{S}}{\mathbf{N}}^T\widetilde{{\mathbf{N}}}\widetilde{{\mathbf{S}}}^{-1}=({\mathbf{I}}_{m\times m}+\mathbf{O})^{-1/2}\mathbf{T}$, where $\mathbf{T}\in{\mathbb{R}}^{m\times m}$ is an orthogonal matrix. Thus, we have: $$\widetilde{{\mathbf{E}}}=\big({\mathbf{I}}_{m\times m} + \mathbf{O}\big)^{-1} - {\mathbf{I}}_{m\times m}.\label{eq:perturb4}$$ To find an upper bound for the spectral norm of $\widetilde{{\mathbf{E}}}$, we simplify by using the Neumann series $({\mathbf{I}}_{m\times m} + \mathbf{O})^{-1} = \sum_{n=0}^{\infty}(-1)^n\mathbf{O}^{n}$, where $\|\mathbf{O}\|_2=\eta<1$ by assumption. Hence, we get the following upper bound for the spectral norm of $\widetilde{{\mathbf{E}}}=\sum_{n=1}^{\infty}(-1)^n\mathbf{O}^n$: $$\|\widetilde{{\mathbf{E}}}\|_2\leq\sum_{n=1}^{\infty} \|\mathbf{O}^n\|_2\leq\sum_{n=1}^{\infty} \|\mathbf{O}\|_2^n\leq\sum_{n=1}^{\infty} \eta^n=\frac{\eta}{1-\eta},$$ where we have used the convergence of the Neumann series and the continuity of norms in the first inequality and the submultiplicativity property of the spectral norm in the second. To finish, we relate the difference between the two Nyström approximations ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ and ${\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$ to the norm of $\widetilde{{\mathbf{E}}}$: $${\mathbf{C}}\big({\mathbf{W}}^\dagger\big)^{1/2}={\mathbf{K}}^{1/2}{\mathbf{D}}\Big(\widetilde{{\mathbf{N}}}\widetilde{{\mathbf{S}}}^{-1}\widetilde{{\mathbf{N}}}^T\Big)={\mathbf{K}}^{1/2} {\mathbf{F}}{\mathbf{S}}{\mathbf{N}}^T \widetilde{{\mathbf{N}}} \widetilde{{\mathbf{S}}}^{-1} \widetilde{{\mathbf{N}}}^T.$$ Then, given the definition of $\widetilde{{\mathbf{E}}}$ in , we observe that: $${\mathbf{C}}{\mathbf{W}}^{\dagger}{\mathbf{C}}^T - {\mathbf{C}}{\mathbf{W}}_e^\dagger {\mathbf{C}}^T = {\mathbf{K}}^{1/2}{\mathbf{F}}\widetilde{{\mathbf{E}}} {\mathbf{F}}^T{\mathbf{K}}^{1/2}.$$ Thus, using the submultiplicativity property of the spectral norm, we have: $$\|{\mathbf{C}}{\mathbf{W}}^{\dagger}{\mathbf{C}}^T - {\mathbf{C}}{\mathbf{W}}_e^\dagger {\mathbf{C}}^T\|_2\leq \|{\mathbf{K}}^{1/2}\|_2^2\|{\mathbf{F}}\|_2^2\|\widetilde{{\mathbf{E}}}\|_2.$$ Note that $\|{\mathbf{K}}^{1/2}\|_2^2=\|{\mathbf{K}}\|_2$, and $\|{\mathbf{F}}\|_2=1$ since ${\mathbf{F}}$ has orthonormal columns. This completes the proof. To gain some intuition for Theorem \[thm:out-of-sample\], we present a numerical experiment on the [`pendigits`]{} data set ($p=16$ and $n=10,\!992$) used in Section \[sec:exper\]. 
Here, the Gaussian kernel function is employed with the parameter $c$ chosen as the averaged squared distances between all the data points and sample mean. The standard K-means clustering algorithm is performed on the input data points to select the landmark points ${\mathbf{z}}_1,\ldots,{\mathbf{z}}_m$ for various values of $m=2,\ldots,10$. For each value of $m$, we form two matrices ${\mathbf{C}}$ and ${\mathbf{W}}$. Also, we compute ${\mathbf{W}}_e={\mathbf{P}}^T{\mathbf{K}}{\mathbf{P}}$ and $\eta=\|{\mathbf{W}}_e^{-1/2}{\mathbf{E}}{\mathbf{W}}_e^{-1/2}\|_2$, where ${\mathbf{P}}={\mathbf{K}}^{-1}{\mathbf{C}}$ and ${\mathbf{E}}={\mathbf{W}}-{\mathbf{W}}_e$; calculating ${\mathbf{W}}_e$ is impractical for larger data sets and we do so only to support our theorem. Fig. \[fig:bound\] reports the mean of $\|{\mathbf{C}}{\mathbf{W}}^{\dagger}{\mathbf{C}}^T - {\mathbf{C}}{\mathbf{W}}_e^\dagger {\mathbf{C}}^T\|_2/\|{\mathbf{K}}\|_2$ over $50$ trials for varying number of landmark points. The figure also plots the mean of our theoretical bound in Theorem \[thm:out-of-sample\], i.e., $\eta/(1-\eta)$. We observe that ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ and ${\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$ provide very similar Nyström approximations of the kernel matrix ${\mathbf{K}}$, such that the relative error with respect to the spectral norm is less than $2\times 10^{-3}$ for all values of $m$. Furthermore, it is clear that our theoretical bound in Theorem \[thm:out-of-sample\] is accurate and it provides a meaningful upper bound for the relative error of the Nyström approximations. ![Mean of $\|{\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T-{\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T\|_2/\|{\mathbf{K}}\|_2$ and the theoretical error bound $\eta/(1-\eta)$ over $50$ trials for varying number of landmark points $m$, obtained via K-means clustering.[]{data-label="fig:bound"}](err_bound_pendigits.pdf){width="50.00000%"} Based on Theorem \[thm:out-of-sample\], the closeness of the Nyström approximations ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ and ${\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$ with respect to the spectral norm is a function of the quantity $\eta$. Note that $\eta$ measures the relative amount of perturbation of the eigenvalues and eigenvectors of ${\mathbf{W}}_e$, and we have $\eta=\|{\mathbf{W}}_e^{-1/2}{\mathbf{E}}{\mathbf{W}}_e^{-1/2}\|_2\leq \|{\mathbf{W}}_e^{-1}{\mathbf{E}}\|_2\leq \|{\mathbf{W}}_e^{-1}\|_2\|{\mathbf{E}}\|_2$ [@dopico2000weyl Lemma 2.2]. Therefore, when $\|{\mathbf{E}}\|_2$ is small, ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$ and ${\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$ lead to similar low-rank approximations of the kernel matrix. In particular, as $\|{\mathbf{E}}\|_2$ goes to zero, ${\mathbf{C}}{\mathbf{W}}_e^\dagger{\mathbf{C}}^T$ converges to ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$. Hence we expect our theoretical results on the rank-$r$ Nyström approximations (Theorem \[thm:nys-qr-sta\] and \[thm:nys-qr-std-more\]) to be valid for the case out-of-sample landmark points for small values of $\eta$. 
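For readers who wish to reproduce a small-scale version of this check, the following sketch carries out the same computation on synthetic data (a stand-in for the [`pendigits`]{} set); it assumes NumPy/SciPy and scikit-learn's `KMeans`, and the sample size, random seed, and kernel-width rule are illustrative choices rather than the exact experimental setup. It forms ${\mathbf{C}}$, ${\mathbf{W}}$, ${\mathbf{P}}={\mathbf{K}}^{-1}{\mathbf{C}}$, ${\mathbf{W}}_e$, and $\eta$, and compares the relative spectral-norm difference of the two Nyström approximations with the bound $\eta/(1-\eta)$ of Theorem \[thm:out-of-sample\].

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 16))                       # synthetic stand-in for a small data set
c = np.mean(np.sum((X - X.mean(axis=0)) ** 2, axis=1))   # average squared distance to the sample mean

K = np.exp(-cdist(X, X, "sqeuclidean") / c)              # full kernel matrix (feasible at this size)

for m in range(2, 11):
    Z = KMeans(n_clusters=m, n_init=10, random_state=0).fit(X).cluster_centers_
    C = np.exp(-cdist(X, Z, "sqeuclidean") / c)          # n x m cross kernel matrix
    W = np.exp(-cdist(Z, Z, "sqeuclidean") / c)          # m x m landmark kernel matrix

    P = np.linalg.solve(K, C)                            # P = K^{-1} C (K is positive definite here)
    W_e = P.T @ K @ P                                    # Gram matrix of the empirical kernel map
    E = W - W_e                                          # perturbation studied in the theorem

    # eta = || W_e^{-1/2} E W_e^{-1/2} ||_2 via the eigendecomposition of W_e
    s, N = np.linalg.eigh(W_e)
    W_e_inv_sqrt = N @ np.diag(1.0 / np.sqrt(s)) @ N.T
    eta = np.linalg.norm(W_e_inv_sqrt @ E @ W_e_inv_sqrt, 2)

    rel_err = np.linalg.norm(C @ np.linalg.pinv(W) @ C.T
                             - C @ np.linalg.pinv(W_e) @ C.T, 2) / np.linalg.norm(K, 2)
    print(f"m={m:2d}   relative error = {rel_err:.2e}   bound eta/(1-eta) = {eta / (1 - eta):.2e}")
```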
Experimental Results {#sec:exper} ==================== We present experimental results on the fixed-rank approximation of kernel matrices using the standard and modified Nyström methods, and show evidence to (1) corroborate our theory of Section \[sec:thm-main\] and \[sec:thm-outofsample\], (2) suggest that our theory still holds even if the assumptions are not exactly satisfied (e.g., out-of-sample landmark points), and (3) highlight the benefits of the modified method. In order to illustrate the effectiveness of modified Nyström (Algorithm \[alg:NysQR\]), we compare its accuracy to that of the standard Nyström method (Algorithm \[alg:StandardNys\]) for the target rank $r=2$ and varying number of landmark points $m=r,\ldots,5r$. To provide a baseline for the comparison, we report the accuracy of the best rank-$2$ approximation obtained via the eigenvalue decomposition (EVD), which requires the computation and storage of full kernel matrices and hence is impractical for very large data sets. Experiments are conducted on four data sets from the LIBSVM archive [@CC01a], listed in Table \[table:data\]. In all experiments, the Gaussian kernel $\kappa\left({\mathbf{x}}_i,{\mathbf{x}}_j\right)=\exp\left(-\|{\mathbf{x}}_i-{\mathbf{x}}_j\|_2^2/c\right)$ is used with the parameter $c$ chosen as the averaged squared distances between all the data points and sample mean. We consider two landmark selection techniques: (1) uniform sampling, where $m$ landmark points are selected uniformly at random without replacement from $n$ data points; and (2) out-of-sample landmark points obtained via K-means clustering on the original data set, as in [@zhang2010clusteredNys]. To perform the K-means clustering algorithm, we use MATLAB’s built-in function `kmeans` and the maximum number of iterations is set to $10$. A MATLAB implementation of modified and standard Nyström is available online[^3]. [SSS]{} & [$p$ (dimension)]{} & [$n$ (size)]{}\ [`pendigits`]{} & [$16$]{} & [$10,\!992$]{}\ [`satimage`]{} & [$36$]{} & [$6,\!435$]{}\ [`w6a`]{} & [$300$]{} & [$13,\!267$]{}\ [`E2006-tfidf`]{} & [$150,\!360$]{} & [$3,\!000$]{}\ \[table:data\] We measure the quality of fixed-rank Nyström approximations using the relative kernel approximation error with respect to the trace norm, i.e., $\|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{nys}\|_*/\|{\mathbf{K}}\|_*$ vs. $\|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_*/\|{\mathbf{K}}\|_*$. In either metric, no method can ever out-perform the EVD baseline. In Fig. \[fig:err-trace\], the mean and standard deviation of relative error over $50$ trials are reported for all four data sets and varying number of landmark points $m=r,\ldots,5r$. Modified Nyström and the standard Nyström method have identical performance for $m=r$ because no rank restriction step is involved in this case. As the number of landmark points, $m$, increases beyond the rank parameter $r=2$, Nyström via QR decomposition always generates better rank-$2$ approximations of the kernel matrix than the standard method does. This observation is consistent with Theorem \[thm:nys-qr-sta\], which states that for the same landmark points, $\|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{opt}\|_*\leq\|{\mathbf{K}}-{\mathbf{G}}_{(r)}^{nys}\|_*$ (one can divide both sides by constant $\|{\mathbf{K}}\|_*$ for the relative error). Even with out-of-sample landmark points, the modified method can be drastically better. In the right plot of Fig. 
\[fig:err\_trace\_2\], the modified method achieves a mean error of $0.47$ (compared to the $0.45$ EVD baseline) from just $m=2r$ landmark points, while the standard method has a higher mean error ($0.50$) even using $m=5r$ landmark points. This exemplifies the importance and effectiveness of the precise rank restriction step in the Nyström via QR decomposition method. \ Fig. \[fig:err-trace\] also shows that the accuracy of either Nyström method depends crucially on the landmark selection procedure. For all four data sets, the landmark points obtained via K-means clustering lead to more accurate approximations compared to the uniform sampling technique; note that the vertical axis scale varies from plot to plot. In fact, using K-means centroids as the landmark points and employing the modified Nyström method yield very accurate rank-$2$ approximations that are close to the best rank-$2$ approximation. Furthermore, the improvement due to the better sampling is more pronounced in the modified Nyström method than in the standard method. Fig. \[fig:err-frob\] is generated the same as Fig. \[fig:err-trace\] but the error is reported in the Frobenius norm. The pattern of behavior is very similar to that in Fig. \[fig:err-trace\] even though we lack theoretical guarantees. In fact, neither method dominates the other for all kernel matrices (cf. the adversarial example in Remark \[thm:remark-frob\]), but in these practical data sets, the modified method always performs better. \[rmk:2\] Remark \[rmk:1\] showed that in both the trace and Frobenius norms, the standard Nyström method can perform worse when we sample additional landmark points. Figs. \[fig:err-trace\] and \[fig:err-frob\] show that a similar effect happens with the standard Nyström method when we use out-of-sample landmark points selected via K-means (in this case, as we increase $m$, we do not necessarily include the landmark points selected for smaller $m$). For example, according to Fig. \[fig:err\_trace\_2\] (right), the mean relative error of standard Nyström is increased from $0.56$ to $0.61$ when we increase from $m=2$ to $m=4$ landmark points selected via K-means centroids; see Fig. \[fig:err\_frob\_2\] (right) for a similar effect with respect to the Frobenius norm. This counter-intuitive effect of decreased accuracy even with more landmark points (Remarks \[rmk:1\] and \[rmk:2\]) is due to the sub-optimal restriction procedure of standard Nyström. Theorem \[thm:nys-qr-std-more\] proves that the *modified* Nyström method does not suffer from the same effect in terms of the trace norm and in-sample landmark points, and Figs. \[fig:err-trace\] and \[fig:err-frob\] do not show any evidence of this effect even if we switch to Frobenius norm or consider out-of-sample landmark points. \ Finally, we demonstrate the efficiency of Nyström via QR decomposition by plotting the averaged running time over $50$ trials for [`E2006-tfidf`]{}. The running time results are omitted for the remaining data sets because the average running time was less than one second. Fig. \[fig:runtime\] shows that the modified Nyström method runs in time comparable with the standard Nyström method. Thus, as we discussed in Section \[sec:improved-nys\], in most practical cases the increase in the computation cost of Nyström via QR decomposition is negligible, while it provides improved fixed-rank kernel approximations. 
If we seek a given accuracy of approximation, the modified Nyström method needs fewer landmark points, so in this case one could in fact argue that it is more efficient than the standard Nyström method. ![Running time results for the standard Nyström method and the modified technique.[]{data-label="fig:runtime"}](runtime_E2006tfidf.pdf){width="50.00000%"} Conclusion {#sec:conclusion} ========== In this paper, we have presented a modified technique for the important process of rank reduction in the Nyström method. Theoretical analysis shows that: (1) the modified method provides improved fixed-rank approximations compared to standard Nyström with respect to the trace norm; and (2) the quality of fixed-rank approximations generated via the modified method improves as the number of landmark points increases. Our theoretical results are accompanied by illustrative numerical experiments comparing the modified method with standard Nyström. We also showed that the modified method has almost the same computational complexity as standard Nyström, which makes it suitable for large-scale kernel machines. [^1]: The authors are with the Department of Applied Mathematics, University of Colorado, Boulder, CO 80309 USA (e-mail: [email protected]; [email protected]). This material is based upon work supported by the Bloomberg Data Science Research Grant Program. [^2]: To be precise, the best approximation is unique if and only if there is a nonzero gap between the $r$-th and $(r+1)$-th eigenvalues of ${\mathbf{C}}{\mathbf{W}}^\dagger{\mathbf{C}}^T$; we assume it is unique for simplicity of presentation, but if it is not unique, then our method returns one particular optimal solution. [^3]: <https://github.com/FarhadPourkamali/RandomizedClusteredNystrom>
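As a small, self-contained complement to the MATLAB implementation linked in the footnote above, the sketch below illustrates in Python the two fixed-rank constructions compared in Section \[sec:exper\] on synthetic data. It assumes the common formulation of the standard fixed-rank method, in which ${\mathbf{W}}$ is replaced by its best rank-$r$ approximation, and implements the modified method as the best rank-$r$ approximation of ${\mathbf{C}}{\mathbf{W}}^{\dagger}{\mathbf{C}}^T$ obtained through a thin QR factorization of ${\mathbf{C}}$; the data, kernel-width rule, and helper names are illustrative, and the reference implementation in the repository above should be consulted for the exact algorithms.

```python
import numpy as np
from scipy.spatial.distance import cdist

def nystrom_factors(X, Z, c):
    """Gaussian-kernel Nystrom factors: C (n x m) and W (m x m)."""
    return (np.exp(-cdist(X, Z, "sqeuclidean") / c),
            np.exp(-cdist(Z, Z, "sqeuclidean") / c))

def standard_fixed_rank(C, W, r):
    """Common standard variant: replace W by its best rank-r approximation, then form C W_r^+ C^T."""
    s, V = np.linalg.eigh(W)                   # eigenvalues in ascending order
    s_r, V_r = s[-r:], V[:, -r:]
    B = C @ V_r
    return B @ np.diag(1.0 / s_r) @ B.T

def modified_fixed_rank(C, W, r):
    """Best rank-r approximation of C W^+ C^T, computed through a thin QR factorization of C."""
    Q, R = np.linalg.qr(C)                     # C = Q R with orthonormal Q, so C W^+ C^T = Q (R W^+ R^T) Q^T
    s, V = np.linalg.eigh(R @ np.linalg.pinv(W) @ R.T)
    U = Q @ V[:, -r:]
    return U @ np.diag(s[-r:]) @ U.T

def trace_norm(A):
    return np.abs(np.linalg.eigvalsh(A)).sum() # A is symmetric here

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 10))
c = np.mean(np.sum((X - X.mean(axis=0)) ** 2, axis=1))
K = np.exp(-cdist(X, X, "sqeuclidean") / c)
r = 2
for m in (2, 4, 6, 8, 10):
    Z = X[rng.choice(len(X), size=m, replace=False)]     # uniform sampling of in-sample landmarks
    C, W = nystrom_factors(X, Z, c)
    for name, G in (("standard", standard_fixed_rank(C, W, r)),
                    ("modified", modified_fixed_rank(C, W, r))):
        print(f"m={m:2d}  {name}:  ||K - G||_* / ||K||_* = {trace_norm(K - G) / trace_norm(K):.4f}")
```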
--- abstract: 'In this paper our aim is to show some mean value inequalities for the modified Bessel functions of the first and second kinds. Our proofs are based on some bounds for the logarithmic derivatives of these functions, which are in fact equivalent to the corresponding Turán type inequalities for these functions. As an application of the results concerning the modified Bessel function of the second kind we prove that the cumulative distribution function of the gamma-gamma distribution is log-concave. At the end of this paper several open problems are posed, which may be of interest for further research.' address: - 'Department of Economics, Babeş-Bolyai University, Cluj-Napoca 400591, Romania' - 'Department of Mathematics, Indian Institute of Technology Madras, Chennai 600036, India' - 'Department of Mathematics, University of Turku, Turku 20014, Finland' author: - Árpád Baricz - Saminathan Ponnusamy - Matti Vuorinen title: Functional inequalities for modified Bessel functions --- **Introduction** ================ Let us consider the probability density function $\varphi:\mathbb{R}\rightarrow(0,\infty)$ and the reliability (or survival) function $\overline{\Phi}:\mathbb{R}\rightarrow(0,1)$ of the standard normal distribution, defined by $$\varphi(u)=\frac{1}{\sqrt{2\pi}}e^{-u^2/2} \ \ \ \ \ \mbox{and} \ \ \ \ \ \overline{\Phi}(u)=\frac{1}{\sqrt{2\pi}}\int_{u}^{\infty}e^{-t^2/2}{\operatorname{d\!}}t.$$ The function $r:\mathbb{R}\rightarrow(0,\infty),$ defined by $$r(u)=\frac{\overline{\Phi}(u)}{\varphi(u)}=e^{u^2/2}\int_u^{\infty}e^{-t^2/2}{\operatorname{d\!}}t,$$ is known in literature as Mills’ ratio [@mitri sect. 2.26] of the standard normal distribution, while its reciprocal $1/r,$ defined by $1/r(u)=\varphi(u)/\overline{\Phi}(u),$ is the so-called failure (hazard) rate, which arises frequently in economics and engineering sciences. Recently, among other things, Baricz [@bariczmills Corollary 2.6] by using the Pinelis’ version of the monotone form of l’Hospital’s rule (see [@pinelis; @andersonH; @avv1] for further details) proved the following result concerning the Mills ratio of the standard normal distribution: \[theoA\] If $u_1,u_2>u_0,$ where $u_0\approx1.161527889\dots$ is the unique positive root of the transcendent equation $u(u^2+2)\overline{\Phi}(u)=(u^2+1)\varphi(u),$ then the following chain of inequalities holds $$\label{funci}\frac{2r(u_1)r(u_2)}{r(u_1)+r(u_2)}\leq r\left(\frac{u_1+u_2}{2}\right)\leq\sqrt{r(u_1)r(u_2)}\leq r(\sqrt{u_1u_2}) \leq\frac{r(u_1)+r(u_2)}{2}\leq r\left(\frac{2u_1u_2}{u_1+u_2}\right).$$ Moreover, the first, second, third and fifth inequalities hold for all $u_1,u_2$ positive real numbers, while the fourth inequality is reversed if $u_1,u_2\in(0,u_0).$ In each of the above inequalities equality holds if and only if $u_1=u_2.$ We note here that, since Mills’ ratio $r$ is continuous, the second and third inequalities in mean actually that under the aforementioned assumptions Mills’ ratio is log-convex and geometrically concave on the corresponding interval. More precisely, by definition a function $f:[a,b]\subseteq\mathbb{R}\to(0,\infty)$ is log-convex if $\ln f$ is convex, i.e. if for all $u_1,u_2\in[a,b]$ and $\lambda\in[0,1]$ we have $$f(\lambda u_1+(1-\lambda)u_2)\leq \left[f(u_1)\right]^{\lambda}\left[f(u_2)\right]^{1-\lambda}.$$ Similarly, a function $g:[a,b]\subseteq(0,\infty)\to(0,\infty)$ is said to be geometrically (or multiplicatively) convex if $g$ is convex with respect to the geometric mean, i.e. 
if for all $u_1,u_2\in[a,b]$ and $\lambda\in[0,1]$ we have $$g\left(u_1^\lambda u_2^{1-\lambda}\right)\leq \left[g(u_1)\right]^{\lambda}\left[g(u_2)\right]^{1-\lambda}.$$ We note that if $f$ and $g$ are differentiable then $f$ is log-convex if and only if $u\mapsto f'(u)/f(u)$ is increasing on $[a,b]$, while $g$ is geometrically convex if and only if $u\mapsto ug'(u)/g(u)$ is increasing on $[a,b].$ A similar definition and characterization of differentiable log-concave and geometrically concave functions also holds. Mean value inequalities similar to those presented above appear also in the recent literature explicitly or implicitly for other special functions, like the Euler gamma function and its logarithmic derivative (see for example the paper [@alzer] and the references therein), the Gaussian and Kummer hypergeometric functions, generalized Bessel functions of the first kind, general power series (see the papers [@anderson; @bariczj1; @bariczj2], and the references therein), Bessel and modified Bessel functions of the first kind (see [@bariczexpo; @neuman; @eneuman]). In this paper, motivated by the above results, we are mainly interested in mean value functional inequalities concerning modified Bessel functions of the first and second kinds. The detailed content is as follows: in section 2 we present some preliminary results concerning some tight lower and upper bounds for the logarithmic derivative of the modified Bessel functions of the first and second kinds. These results will be applied in the sequel to obtain some interesting chain of inequalities for modified Bessel functions of the first and second kinds analogous to . To achieve our goal in section 2 we present some monotonicity properties of some functions which involve the modified Bessel functions of the first and second kinds. Section 3 is devoted to the study of the convexity with respect to Hölder (or power) means of modified Bessel functions of the first and second kinds. The results stated here complete and extend the results from section 2. As an application of our results stated in section 2, in section 4 we show that the cumulative distribution function of the three parameter gamma-gamma distribution is log-concave for arbitrary shape parameters. This result may be useful in problems of information theory and communications. Finally, in section 5 we present some interesting open problems, which may be of interest for further research. **Monotonicity properties of some functions involving modified Bessel functions** ================================================================================= As usual, in what follows let us denote by $I_{\nu}$ and $K_{\nu}$ the modified Bessel functions of the first and second kinds of real order $\nu$ (see [@watson]), which are in fact the linearly independent particular solutions of the second order modified Bessel homogeneous linear differential equation [@watson p. 77] $$\label{diffeq} u^2v''(u)+uv'(u)-(u^2+\nu^2)v(u)=0.$$ Recall that the modified Bessel function $I_{\nu}$ of the first kind has the series representation [@watson p. 77] $$I_{\nu}(u)=\sum_{n\geq0}\frac{(u/2)^{2n+\nu}}{n!\Gamma(n+\nu+1)},$$ where $\nu\neq-1,-2,\dots$ and $u\in\mathbb{R},$ while the modified Bessel function of the second kind $K_{\nu}$ (called sometimes as the MacDonald or Hankel function), is usually defined also as [@watson p. 
78] $$K_{\nu}(u)=\frac{\pi}{2}\frac{I_{-\nu}(u)-I_{\nu}(u)}{\sin\nu\pi},$$ where the right-hand side of this equation is replaced by its limiting value if $\nu$ is an integer or zero. We note that for all $\nu$ natural and $u\in\mathbb{R}$ we have $I_{\nu}(u)=I_{-\nu}(u),$ and from the above series representation $I_{\nu}(u)>0$ for all $\nu>-1$ and $u>0.$ Similarly, by using the familiar integral representation [@watson p. 181] $$\label{integr}K_{\nu}(u)=\int_0^{\infty}e^{-u\cosh t}\cosh(\nu t){\operatorname{d\!}}t,$$ which holds for each $u>0$ and $\nu\in\mathbb{R},$ one can see easily that $K_{\nu}(u)>0$ for all $u>0$ and $\nu\in\mathbb{R}.$ The following results provide some tight lower and upper bounds for the logarithmic derivatives of the modified Bessel functions of the first and second kinds $I_{\nu}$ and $K_{\nu}$ and will be used frequently in the sequel. \[lem1\] For all $u>0$ and $\nu>0$ the following inequalities hold $$\label{eq4} \sqrt{\frac{\nu}{\nu+1}u^2+\nu^2}<\frac{uI_{\nu}'(u)}{I_{\nu}(u)}<\sqrt{u^2+\nu^2}.$$ Moreover, the right-hand side of holds true for all $\nu>-1.$ \[lem2\] For all $u>0$ and $\nu>1$ the following inequalities hold $$\label{eq1} -\sqrt{\frac{\nu}{\nu-1}u^2+\nu^2}<\frac{uK_{\nu}'(u)}{K_{\nu}(u)}<-\sqrt{u^2+\nu^2}.$$ Moreover, the right-hand side of holds true for all $\nu\in\mathbb{R}.$ The left-hand side of was proved for $u>0$ and positive integer $\nu$ by Phillips and Malin [@phillips], and later by Baricz [@baricz1] for $u>0$ and $\nu>0$ real. The right-hand side of appeared first in Gronwall’s paper [@gronwall] for $u>0$ and $\nu>0$ (motivated by a problem in wave mechanics), it was proved also by Phillips and Malin [@phillips] for $u>0$ and $\nu\geq 1$ integer, and recently by Baricz [@baricz1] for $u>0$ and $\nu\geq -1/2$ real (motivated by a problem in biophysics; see [@penfold]). For this inequality the case $u>0$ and $\nu>-1$ real has been proved recently in [@baricz2]. The left-hand side of was proved first by Phillips and Malin [@phillips] for $u>0$ and $\nu>1$ positive integer, and was extended to the case $u>0$ and $\nu>1$ real recently by Baricz [@baricz2]. Finally, the right-hand side of was proved first by Phillips and Malin [@phillips] for $u>0$ and $\nu\geq 1$ integer, and later extended to the case of $u>0$ and $\nu$ real arbitrary by Baricz [@baricz1]. It is worth mentioning that the inequalities and , which have been proved recently also by Segura [@segura], are in fact equivalent to the Turán type inequalities for the modified Bessel functions of the first and second kinds. For further details the interested reader is referred to [@baricz1; @baricz2; @bapo; @lana; @segura] and to the references therein. Our first main result reads as follows. \[th1\] The following assertions are true: 1. $u\mapsto uI_{\nu}'(u)/I_{\nu}^2(u)$ is strictly decreasing on $(0,\infty)$ for all $\nu\geq 1;$ 2. $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ is strictly increasing on $(0,\infty)$ for all $\nu>-1;$ 3. $u\mapsto \sqrt{u}I_{\nu}(u)$ is strictly log-concave on $(0,\infty)$ for all $\nu\geq 1/2;$ 4. 
$u\mapsto u^2I_{\nu}'(u)/I_{\nu}^2(u)$ is strictly decreasing on $(0,\infty)$ for all $\nu\geq \nu_0,$ where $\nu_0\approx1.373318506\dots$ is the positive root of the cubic equation $8\nu^3-9\nu^2-2\nu-1=0.$ In particular, for all $u_1,u_2>0$ and $\nu\geq \nu_0$ the following chain of inequalities holds $$\label{chain1}\frac{2I_{\nu}(u_1)I_{\nu}(u_2)}{I_{\nu}(u_1)+I_{\nu}(u_2)}\leq I_{\nu}\left(\frac{2u_1u_2}{u_1+u_2}\right)\leq I_{\nu}\left(\sqrt{u_1u_2}\right)\leq \sqrt{I_{\nu}(u_1)I_{\nu}(u_2)} \leq \sqrt{\frac{u_1+u_2}{2\sqrt{u_1u_2}}}\cdot I_{\nu}\left(\frac{u_1+u_2}{2}\right).$$ Moreover, the second and third inequalities hold true for all $\nu>-1,$ and the fourth inequality holds true for all $\nu\geq 1/2.$ In each of the above inequalities equality hold if and only if $u_1=u_2.$ We recall that part [**(b)**]{} of Theorem \[th1\] was proved for $\nu>0$ by Gronwall [@gronwall]. Notice also that recently Baricz [@baricz2] in order to prove the right-hand side of proved implicitly part [**(b)**]{} of Theorem \[th1\]. For reader’s convenience we recall below that proof. Moreover, we give a somewhat different proof of this part, and two other completely different proofs. We note that part [**(c)**]{} of Theorem \[th1\] improves the result of Sun and Baricz [@sun], who proved that the function $u\mapsto uI_{\nu}(u)$ is log-concave on $(0,\infty)$ for all $\nu\geq 1/2.$ Recently, Baricz and Neuman [@neuman] conjectured that the modified Bessel function $I_{\nu}$ of the first kind is strictly log-concave on $(0,\infty)$ for all $\nu>0.$ As far as we know, this conjecture is still open and the much sharper result of this kind is of part [**(c)**]{} of Theorem \[th1\]. First we prove the monotonicity and log-concavity properties stated above. [**(a)**]{} Recall that the modified Bessel function of the first kind $I_{\nu}$ is a particular solution of the second-order differential equation and thus $$\label{eq3}I_{\nu}''(u)=(1+\nu^2/u^2)I_{\nu}(u)-(1/u)I_{\nu}'(u).$$ Using and the left-hand side of , we obtain that for all $u>0$ and $\nu\geq 1$ $$\begin{aligned} \frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{uI_{\nu}'(u)}{I_{\nu}^2(u)}\right]=\left[\frac{1}{uI_{\nu}(u)}\right] \left[u^2+\nu^2-2\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]^2\right]<\left[\frac{1}{uI_{\nu}(u)}\right]\left[{-\nu^2+\frac{1-\nu}{1+\nu}u^2}\right]\leq0.\end{aligned}$$ [**(b)**]{} Consider the Turánian $$\Delta_{\nu}(u)=I_{\nu}^2(u)-I_{\nu-1}(u)I_{\nu+1}(u),$$ which in view of the recurrence relations $$I_{\nu-1}(u)=(\nu/u)I_{\nu}(u)+I_{\nu}'(u)$$ and $$I_{\nu+1}(u)=-(\nu/u)I_{\nu}(u)+I_{\nu}'(u),$$ can be rewritten as follows $$\Delta_{\nu}(u)=(1+\nu^2/u^2)I_{\nu}^2(u)-[I_{\nu}'(u)]^2.$$ Using we get $$\Delta_{\nu}(u)=\frac{1}{u}I_{\nu}^2(u)\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]'.$$ It is known (see [@tiru; @baricz2]) that the Turán-type inequality $\Delta_{\nu}(u)>0$ holds for all $u>0$ and $\nu>-1,$ and hence the required result follows. We may note incidentally that the result of this part actually follows also from the right-hand side of . More precisely, it is easy to see that the function $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ satisfies the differential equation $uv'(u)=u^2+\nu^2-v^2(u),$ and using the right-hand side of it is clearly strictly increasing on $(0,\infty)$ for all $\nu>-1.$ It is important to add here that in fact the right-hand side of and the Turán-type inequality $\Delta_{\nu}(u)>0$ are equivalent (see [@baricz1; @baricz2]). 
A third proof of this part can be obtained as follows. By using the infinite series representation of the modified Bessel function of the first kind we just need to show that the function $$u\mapsto \frac{uI_{\nu}'(u)}{I_{\nu}(u)}=\left.\sum_{n\geq0}\frac{(2n+\nu)(u/2)^{2n}}{n!\Gamma(\nu+n+1)}\right/\sum_{n\geq0}\frac{(u/2)^{2n}}{n!\Gamma(\nu+n+1)}$$ is strictly increasing on $(0,\infty)$ for all $\nu>-1.$ To do this let us recall the following well-known result (see [@biernacki; @ponnusamy]): [*Let us consider the power series $f(u)=a_0+a_1u+{\dots}+a_nu^n+{\dots}$ and $g(u)=b_0+b_1u+{\dots}+b_nu^n+{\dots},$ where for all $n\geq 0$ integer $a_n\in\mathbb{R}$ and $b_n>0,$ and suppose that both converge on $(0,\infty).$ If the sequence $\{a_n/b_n\}_{n\geq 0}$ is strictly increasing, then the function $u\mapsto f(u)/g(u)$ is strictly increasing too on $(0,\infty).$*]{} We note that we can see easily that the above result remains true in the case of even functions. Thus, to prove that $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ is indeed strictly increasing it is enough to show that the sequence $\{\alpha_n\}_{n\geq0},$ defined by $\alpha_n=2n+\nu$ for all $n\geq0,$ is strictly increasing, which is certainly true. Finally, a fourth proof is as follows. By using the Weierstrassian factorization $$I_{\nu}(u)=\frac{u^{\nu}}{2^{\nu}\Gamma(\nu+1)}\prod_{n\geq 1}\left(1+\frac{u^2}{j_{\nu,n}^2}\right),$$ where $\nu>-1$ and $j_{\nu,n}$ is the $n$th positive zero of the Bessel function $J_{\nu}$ of the first kind, we obtain that $$\frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]=\frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\nu+2\sum_{n\geq 1}\frac{u^2}{u^2+j_{\nu,n}^2}\right]= 4\sum_{n\geq 1}\frac{uj_{\nu,n}^2}{(u^2+j_{\nu,n}^2)^2}>0$$ for all $u>0$ and $\nu>-1.$ We note that this proof reveals that the function $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ is in fact strictly decreasing on $(-\infty,0)$ for all $\nu>-1.$ This is in the agreement with the fact that the function $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ is even, as we can see in the above series representations. [**(c)**]{} Owing to Duff [@duff] it is known that the function $u\mapsto \sqrt{u}K_{\nu}(u)$ is strictly completely monotonic, and consequently (see [@widder p. 167]) strictly log-convex on $(0,\infty)$ for each $|\nu|\geq1/2.$ On the other hand, due to Hartman [@hartman] the function $u\mapsto uI_{\nu}(u)K_{\nu}(u)$ is concave, and consequently log-concave on $(0,\infty)$ for all $\nu>1/2.$ Since $u\mapsto 2uI_{1/2}(u)K_{1/2}(u)=1-e^{-2u}$ is concave on $(0,\infty),$ we conclude that in fact the function $u\mapsto uI_{\nu}(u)K_{\nu}(u)$ is concave, and hence log-concave on $(0,\infty)$ for all $\nu\geq 1/2.$ Now, combining these results, in view of the fact that the product of log-concave functions is log-concave, the required result follows. 
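The inequalities established so far can also be illustrated numerically. The following sketch (assuming SciPy's routine `iv` for $I_{\nu}$; the order $\nu=1.5>\nu_0$ and the sampled points are arbitrary illustrative choices) evaluates the five members of the chain of inequalities above and checks that they are ordered as claimed; this is, of course, only a sanity check and not part of the proof.

```python
import numpy as np
from scipy.special import iv

nu = 1.5                         # any nu >= nu_0 ~ 1.3733 makes the whole chain valid
rng = np.random.default_rng(1)

for _ in range(5):
    u1, u2 = rng.uniform(0.1, 10.0, size=2)
    I1, I2 = iv(nu, u1), iv(nu, u2)
    vals = [
        2 * I1 * I2 / (I1 + I2),                          # harmonic mean of I(u1) and I(u2)
        iv(nu, 2 * u1 * u2 / (u1 + u2)),                  # I at the harmonic mean of u1 and u2
        iv(nu, np.sqrt(u1 * u2)),                         # I at the geometric mean of u1 and u2
        np.sqrt(I1 * I2),                                 # geometric mean of I(u1) and I(u2)
        np.sqrt((u1 + u2) / (2 * np.sqrt(u1 * u2))) * iv(nu, (u1 + u2) / 2),
    ]
    assert all(a <= b * (1 + 1e-10) for a, b in zip(vals, vals[1:]))
    print(np.round(vals, 6))
```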
[**(d)**]{} Using and we obtain that $$\begin{aligned} \frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{u^2I_{\nu}'(u)}{I_{\nu}^2(u)}\right]&=\frac{1}{I_{\nu}(u)} \left[u^2+\nu^2+\frac{uI_{\nu}'(u)}{I_{\nu}(u)}-2\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]^2\right]\\ &<\left[u^2+\nu^2+\sqrt{u^2+\nu^2}-2\left(u^2\frac{\nu}{\nu+1}+\nu^2\right)\right]\end{aligned}$$ for all $u>0$ and $\nu>0.$ Observe that the last expression is nonpositive if and only if we have $$\left(\frac{\nu-1}{\nu+1}\right)^2u^4+\left(2\nu^2\frac{\nu-1}{\nu+1}-1\right)u^2+\nu^2(\nu^2-1)\geq0.$$ A computation shows that this is satisfied if $$\left(2\nu^2\frac{\nu-1}{\nu+1}-1\right)^2-4\left(\frac{\nu-1}{\nu+1}\right)^2\nu^2(\nu^2-1)=-\frac{8\nu^3-9\nu^2-2\nu-1}{(\nu+1)^2}\leq0.$$ Now, since $\nu\geq \nu_0$ we have $8\nu^3-9\nu^2-2\nu-1\geq0$ and thus the proof of part [**(d)**]{} is complete. It should be mentioned here that part [**(a)**]{} of this theorem for $\nu\geq \nu_0$ actually is an immediate consequence of this part. More precisely, the proof of part [**(a)**]{} of this theorem can be simplified significantly as follows: in view of part [**(d)**]{} of this theorem, the function $$u\mapsto \frac{uI_{\nu}'(u)}{I_{\nu}^2(u)}=\frac{1}{u}\cdot \frac{u^2I_{\nu}'(u)}{I_{\nu}^2(u)}$$ is strictly decreasing as a product of two positive and strictly decreasing functions. Now, let us focus on the chain of inequalities . To prove this we use Corollary 2.5 from [@anderson]. More precisely, the first inequality in follows from part [**(d)**]{} of this theorem, while the second inequality in is an immediate consequence of the fact that $I_{\nu}$ is a strictly increasing function on $(0,\infty)$ for all $\nu>-1.$ The third inequality in means actually the strict geometrical convexity of $I_{\nu}$ and is equivalent to part [**(b)**]{} of this theorem; the fourth inequality is equivalent to part [**(c)**]{} of this theorem. Finally, observe that part [**(a)**]{} of this theorem is equivalent to the inequality $$\frac{2I_{\nu}(u_1)I_{\nu}(u_2)}{I_{\nu}(u_1)+I_{\nu}(u_2)}\leq I_{\nu}\left(\sqrt{u_1u_2}\right),$$ which holds for all $u_1,u_2>0$ and $\nu\geq 1. $ Moreover, in this inequality equality holds if and only if $u_1=u_2.$ The following result is a companion of Theorem \[th1\] for modified Bessel functions of the second kind. We note that part [**(b)**]{} of the following theorem is well-known (see for example [@giordano; @sun; @temme]), and part [**(c)**]{} was proved by Baricz [@baricz2]. For part [**(b)**]{} we give here a different proof, while for part [**(c)**]{} we recall the proof from [@baricz2] and we present a simple alternative proof. \[th2\] The following assertions are true: 1. $u\mapsto K_{\nu}'(u)/K_{\nu}^2(u)$ is strictly decreasing on $(0,\infty)$ for all $|\nu|\geq 1;$ 2. $u\mapsto K_{\nu}'(u)/K_{\nu}(u)$ is strictly increasing on $(0,\infty)$ for all $\nu\in\mathbb{R};$ 3. $u\mapsto uK_{\nu}'(u)/K_{\nu}(u)$ is strictly decreasing on $(0,\infty)$ for all $\nu\in\mathbb{R};$ 4. $u\mapsto uK_{\nu}'(u)$ is strictly increasing on $(0,\infty)$ for all $\nu\in\mathbb{R};$ 5. $u\mapsto u^2K_{\nu}'(u)$ is strictly increasing on $(0,\infty)$ for all $|\nu|\geq 5/4;$ 6. 
$u\mapsto u^2K_{\nu}'(u)$ is strictly increasing on $(2,\infty)$ for all $\nu\in\mathbb{R}.$ In particular, for all $u_1,u_2>0$ and $|\nu|\geq 1$ the following chain of inequalities holds $$\label{chain2}\frac{2K_{\nu}(u_1)K_{\nu}(u_2)}{K_{\nu}(u_1)+K_{\nu}(u_2)}\leq K_{\nu}\left(\frac{u_1+u_2}{2}\right)\leq \sqrt{K_{\nu}(u_1)K_{\nu}(u_2)} \leq K_{\nu}\left(\sqrt{u_1u_2}\right)\leq \frac{K_{\nu}(u_1)+K_{\nu}(u_2)}{2}.$$ Moreover, the second, third and fourth inequalities hold true for all $\nu\in\mathbb{R}.$ In addition, for $|\nu|\geq 5/4$ and $u_1,u_2>0$ the fourth inequality can be improved as $$\label{chain20}K_{\nu}\left(\frac{2u_1u_2}{u_1+u_2}\right)\leq\frac{K_{\nu}(u_1)+K_{\nu}(u_2)}{2}.$$ This inequality holds true for all $u_1,u_2>2$ and $\nu\in\mathbb{R}.$ In each of the above inequalities equality hold if and only if $u_1=u_2.$ First we prove the monotonicity properties for modified Bessel functions of the second kind. [**(a)**]{} Recall that the modified Bessel function of the second kind $K_{\nu}$ is a particular solution of the second-order differential equation , and this in turn implies that $$\label{eq2}K_{\nu}''(u)=(1+\nu^2/u^2)K_{\nu}(u)-(1/u)K_{\nu}'(u).$$ Consequently, by using two times the right-hand side of , for all $u>0$ and $\nu\geq 1$ we have $$\begin{aligned} \frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{K_{\nu}'(u)}{K_{\nu}^2(u)}\right]&= \left[\frac{1}{u^2K_{\nu}(u)}\right]\left[u^2+\nu^2-\frac{uK_{\nu}'(u)}{K_{\nu}(u)}-2\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]^2\right]\\ &<-\left[\frac{1}{u^2K_{\nu}(u)}\right]\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}+1\right]\leq0.\end{aligned}$$ On the other hand the function $\nu\mapsto K_{\nu}(u)$ is even, and thus from the above result we obtain that indeed the function $u\mapsto K_{\nu}'(u)/K_{\nu}^2(u)$ is strictly decreasing on $(0,\infty)$ for all $|\nu|\geq 1.$ [**(b)**]{} The fact that $u\mapsto K_{\nu}(u)$ is log-convex can be verified (see for example [@giordano; @sun]) by using the Hölder-Rogers inequality and the familiar integral representation , which holds for each $u>0$ and $\nu\in\mathbb{R}.$ However, in view of , for all $n\in\{0,1,2,\dots\},$ $u>0$ and $\nu\in\mathbb{R},$ we easily have $$(-1)^nK_{\nu}^{(n)}(u)=\int_0^{\infty}(\cosh t)^ne^{-u\cosh t}\cosh(\nu t){\operatorname{d\!}}t>0,$$ i.e. the function $u\mapsto K_{\nu}(u)$ is strictly completely monotonic. Now, since each strictly completely monotonic function is strictly log-convex, we obtain that $u\mapsto K_{\nu}'(u)/K_{\nu}(u)$ is strictly increasing on $(0,\infty)$ for all $\nu\in\mathbb{R}.$ [**(c)**]{} Consider the Turánian $$\Delta_{\nu}(u)=K_{\nu}^2(u)-K_{\nu-1}(u)K_{\nu+1}(u).$$ Using the recurrence relations $$K_{\nu-1}(u)=-(\nu/u)K_{\nu}(u)-K_{\nu}'(u)$$ and $$K_{\nu+1}(u)=(\nu/u)K_{\nu}(u)-K_{\nu}'(u)$$ we have $$\Delta_{\nu}(u)=(1+\nu^2/u^2)K_{\nu}^2(u)-\left[K_{\nu}'(u)\right]^2.$$ Combining this with , we obtain [@baricz2] $$\Delta_{\nu}(u)=\frac{1}{u}K_{\nu}^2(u)\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]'.$$ But, the function $\nu\mapsto K_{\nu}(u)$ is strictly log-convex on $\mathbb{R}$ for each fixed $u>0$ (see [@bariczstudia]), which implies that for all $\nu\in\mathbb{R}$ and $u>0$ the Turán-type inequality $\Delta_{\nu}(u)<0$ holds. This shows that the function $u\mapsto uK_{\nu}'(u)/K_{\nu}(u)$ is strictly decreasing on $(0,\infty)$ for all $\nu\in\mathbb{R}.$ Another proof for this part can be obtained as follows. 
First observe that the function $u\mapsto uK_{\nu}'(u)/K_{\nu}(u)$ satisfies the differential equation $uv'(u)=u^2+\nu^2-v^2(u).$ On the other hand, it is well-known that $K_{\nu}$ is strictly decreasing on $(0,\infty)$ for all $\nu\in\mathbb{R}.$ Thus, by using the right-hand side of we conclude that $u\mapsto uK_{\nu}'(u)/K_{\nu}(u)$ is strictly decreasing too on $(0,\infty)$ for all $\nu\in\mathbb{R}.$ It is important to add here that in fact the right-hand side of and the Turán-type inequality $\Delta_{\nu}(u)<0$ are equivalent (see [@baricz1; @baricz2]). [**(d)**]{} By using again the fact that $K_{\nu}$ is a particular solution of the modified Bessel differential equation, i.e. the relation , we easily have for all $u>0$ and $\nu\in\mathbb{R}$ $$\left[uK_{\nu}'(u)\right]'=K_{\nu}'(u)+uK_{\nu}''(u)=u(1+\nu^2/u^2)K_{\nu}(u)>0.$$ [**(e)**]{} Using and the left-hand side of , we obtain $$\begin{aligned} \frac{\left[u^2K_{\nu}'(u)\right]'}{K_{\nu}(u)}&=2\frac{uK_{\nu}'(u)}{K_{\nu}(u)}+\frac{u^2K_{\nu}''(u)}{K_{\nu}(u)}= \left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}+u^2+\nu^2\right]>u^2+\nu^2-\sqrt{u^2\nu/(\nu-1)+\nu^2}\end{aligned}$$ for all $u>0$ and $\nu>1.$ The right-hand side of the above inequality is positive if and only if the expression $$Q_\nu(u)=u^4+[2\nu^2-\nu/(\nu-1)]u^2+\nu^2(\nu^2-1)$$ is positive. It is easy to see that the discriminant of the equation $Q_{\nu}(\sqrt{u})=0$ is $(5-4\nu)\nu^2/(\nu-1)^2$ and this is nonpositive if and only if $\nu\geq5/4.$ Finally, since the function $\nu\mapsto K_{\nu}(u)$ is even, the proof is complete. [**(f)**]{} In view of we obtain that $$u^2K_{\nu}'(u)=-u^2\int_0^{\infty}e^{-u\cosh t}(\cosh t)(\cosh(\nu t)){\operatorname{d\!}}t$$ and thus $$\left[u^2K_{\nu}'(u)\right]'=u\int_0^{\infty}(u\cosh t-2)e^{-u\cosh t}(\cosh t)(\cosh(\nu t)){\operatorname{d\!}}t>0$$ for all $u>2$ and $\nu\in\mathbb{R}.$ Now, let us focus on the inequalities and . As in the proof of the chain of inequalities , we use Corollary 2.5 from [@anderson]. The first inequality in follows from part [**(a)**]{}, the second inequality is just the strict log-convexity of $K_{\nu}$ proved in part [**(b)**]{}, while the third inequality is equivalent to the geometrical concavity of $K_{\nu}$ proved in part [**(c)**]{}. The fourth inequality is equivalent to part [**(d)**]{} of this theorem, while the inequality is equivalent to part [**(e)**]{}. **Convexity of modified Bessel functions with respect to power means** ====================================================================== In this section we are going to complement and extend the results of the above section. To this aim we study the convexity of modified Bessel functions of the first and second kinds with respect to Hölder means. For reader’s convenience we recall here first some basics. Let $\varphi:[a,b]\subseteq\mathbb{R}\rightarrow\mathbb{R}$ be a strictly monotonic continuous function.
The function $M_{\varphi}:[a,b]^2\rightarrow [a,b],$ defined by $$M_{\varphi}(u_1,u_2)=\varphi^{-1}\left(\frac{\varphi(u_1)+\varphi(u_2)}{2}\right)$$ is called the quasi-arithmetic mean (or Kolmogorov mean) associated to $\varphi,$ while the function $\varphi$ is called a generating function (or a Kolmogorov-Nagumo function) of the quasi-arithmetic mean $M_{\varphi}.$ A function $f:[a,b]\subseteq\mathbb{R}\rightarrow \mathbb{R}$ is said to be convex with respect to the mean $M_{\varphi}$ (or $M_{\varphi}-$convex) if for all $u_1,u_2\in [a,b]$ and all $\lambda\in[0,1]$ the inequality $$f(M_{\varphi}^{(\lambda)}(u_1,u_2))\leq M_{\varphi}^{(\lambda)}(f(u_1),f(u_2))$$ holds, where $M_{\varphi}^{(\lambda)}(u_1,u_2)=\varphi^{-1}(\lambda\varphi(u_1)+(1-\lambda)\varphi(u_2))$ is the weighted version of $M_{\varphi}.$ It can be proved easily (see for example [@borwein]) that $f$ is convex with respect to $M_{\varphi}$ if and only if $\varphi\circ f\circ\varphi^{-1}$ is convex in the usual sense on $\varphi([a,b]).$ Now, for any two quasi-arithmetic means $M_{\varphi}$ and $M_{\psi}$ (with Kolmogorov-Nagumo functions $\varphi$ and $\psi$ defined on intervals $[a,b]$ and $[c,d]$), a function $f:[a,b]\to[c,d]$ is called $(M_{\varphi},M_{\psi})-$convex if it satisfies $$f(M_{\varphi}^{(\lambda)}(u_1,u_2))\leq M_{\psi}^{(\lambda)}(f(u_1),f(u_2))$$ for all $u_1,u_2\in[a,b]$ and $\lambda\in[0,1],$ where $M_{\psi}^{(\lambda)}(u_1,u_2)=\psi^{-1}(\lambda\psi(u_1)+(1-\lambda)\psi(u_2)).$ If the above inequality is reversed, then we say that $f$ is $(M_{\varphi},M_{\psi})-$concave. Due to Aczél [@acel] it has been known for a long time that if $\psi$ is increasing then the function $f$ is $(M_{\varphi},M_{\psi})-$convex if and only if the function $\psi\circ f\circ \varphi^{-1}$ is convex in the usual sense on $\varphi([a,b]).$ This is because, if $\psi$ is increasing and we denote by $s$ and $t$ the values $\varphi(u_1)$ and $\varphi(u_2)$, then by definition $f$ is $(M_{\varphi},M_{\psi})-$convex if and only if $$\psi\left(f\left(\varphi^{-1}(\lambda s+(1-\lambda)t)\right)\right)\leq \lambda \psi\left(f\left(\varphi^{-1}(s)\right)\right)+(1-\lambda) \psi\left(f\left(\varphi^{-1}(t)\right)\right)$$ holds for all $s,t\in\varphi([a,b])$ and $\lambda\in[0,1].$ See also [@borwein] for more details. Now, if $\psi$ is decreasing, then clearly the above inequality is reversed, and this in turn implies that the function $f$ is $(M_{\varphi},M_{\psi})-$convex if and only if the function $\psi\circ f\circ \varphi^{-1}$ is concave in the usual sense on $\varphi([a,b]).$ Moreover, a similar characterization of $(M_{\varphi},M_{\psi})-$concave functions is also valid, depending on the monotonicity of the function $\psi.$ Among the quasi-arithmetic means the Hölder means (or power means) are of special interest. They are associated to the generating function $\varphi_p:(0,\infty)\rightarrow\mathbb{R},$ defined by $$\varphi_p(u)=\left\{\begin{array}{ll}u^p,& \mbox{if}\ p\neq 0\\ \ln u,& \mbox{if}\ p=0,\end{array}\right.$$ and have the following form $$M_{\varphi_p}^{(\lambda)}(u_1,u_2)=\left\{\begin{array}{ll} {[\lambda u_1^p+(1-\lambda)u_2^p]^{1/p}},& \mbox{if}\ p\neq 0\\ u_1^{\lambda}u_2^{1-\lambda},& \mbox{if}\ p=0.\end{array}\right.$$ Now, let $p$ and $q$ be two arbitrary real numbers.
Using the above definitions of generalized convexities we say that a function $f:[a,b]\subseteq(0,\infty)\to(0,\infty)$ is $(M_{\varphi_p},M_{\varphi_q})-$convex, or simply $(p,q)-$convex, if the inequality $$f(M_{\varphi_p}^{(\lambda)}(u_1,u_2))\leq M_{\varphi_q}^{(\lambda)}(f(u_1),f(u_2))$$ is valid for all $p,q\in\mathbb{R},$ $u_1,u_2\in[a,b]$ and $\lambda\in[0,1].$ If the above inequality is reversed, then we say that the function $f$ is $(M_{\varphi_p},M_{\varphi_q})-$concave, or simply $(p,q)-$concave. Observe that the $(1,1)-$convexity is the usual convexity, the $(1,0)-$convexity is exactly the log-convexity, while the $(0,0)-$convexity corresponds to the case of the geometrical convexity. We note that motivated by the works [@anderson; @bariczj1] and [@bariczj2], recently Baricz [@bariczjipam] considered the $(p,p)-$convexity of the zero-balanced Gaussian hypergeometric functions and general power series. The $(p,q)-$convexity of zero-balanced Gaussian hypergeometric functions was considered recently by Zhang et al. [@zhang]. The following result gives a characterization of differentiable $(p,q)-$convex functions and will be applied in the sequel in the study of the convexity of modified Bessel functions of the first and second kinds with respect to power means. For a proof see [@bariczgeo]. \[lem3\] Let $p,q\in\mathbb{R}$ and let $f:[a,b]\subseteq(0,\infty)\to(0,\infty)$ be a differentiable function. The function $f$ is (strictly) $(p,q)-$convex ($(p,q)-$concave) if and only if $u\mapsto u^{1-p}f'(u)[f(u)]^{q-1}$ is (strictly) increasing (decreasing) on $[a,b].$ The next result completes and extends parts [**(a)**]{}, [**(b)**]{} and [**(d)**]{} of Theorem \[th1\]. Notice that if we choose in part [**(b)**]{} of Theorem \[th1new\] the values $p=0$ and $q=-1,$ then we reobtain part [**(a)**]{} of Theorem \[th1\]. Similarly, choosing $p=q=0$ in part [**(a)**]{} of Theorem \[th1new\] we obtain the strict geometrical convexity stated in part [**(b)**]{} of Theorem \[th1\]. Finally, by taking $p=q=-1$ in part [**(b)**]{} of Theorem \[th1new\] we obtain the monotonicity result stated in part [**(d)**]{} of Theorem \[th1\]. \[th1new\] Let $p,q\in\mathbb{R}$ and let $\nu>-1.$ Then the following assertions are true: 1. if $p\leq0$ and $q\geq 0,$ then $I_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty);$ 2. if $p\leq0$ and $q<0,$ then $I_{\nu}$ is strictly $(p,q)-$concave on $(0,\infty)$ provided if $\nu\geq -1/q$ and $$4q(q-1)\nu^3-(p^2-4(q-1))\nu^2-2p^2\nu-p^2\geq0;$$ 3. if $p\geq0$ and $q\leq-1,$ then $I_{\nu}$ is strictly $(p,q)-$concave on $(0,\infty)$ provided if $\nu\geq 1;$ 4. if $p\geq 0$ and $q>0,$ then $I_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty)$ provided if $\nu\geq p/q;$ 5. 
if $p\leq1$ and $q\geq 1,$ then $I_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty).$ For convenience first we introduce the following notation $$\begin{aligned} \lambda_{p,q,\nu}(u)=\frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{u^{1-p}I_{\nu}'(u)}{I_{\nu}^{1-q}(u)}\right]= \frac{I_{\nu}^q(u)}{u^{p+1}}\left[u^2+\nu^2-p\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]-(1-q)\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]^2\right].\end{aligned}$$ We note that in view of Lemma \[lem3\] the $(p,q)-$convexity ($(p,q)-$concavity) of $I_{\nu}$ depends only on the sign of the expression $\lambda_{p,q,\nu}(u).$ [**(a)**]{} This follows easily from the fact that if $\nu>-1,$ $p\leq0$ and $q\geq 0,$ then $\lambda_{p,q,\nu}(u)>0$ for all $u>0.$ More precisely, from the right-hand side of we have $$\lambda_{p,q,\nu}(u)>\frac{I_{\nu}^q(u)}{u^{p+1}} \left[-p\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]+q\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]^2\right]\geq0$$ for all $\nu>-1,$ $p\leq0,$ $q\geq 0$ and $u>0.$ It should be mentioned here that this part follows actually from part [**(b)**]{} of Theorem \[th1\]. Namely, the function $u\mapsto u^{1-p}I_{\nu}'(u)\left[I_{\nu}(u)\right]^{q-1}$ is strictly increasing on $(0,\infty)$ for all $p\leq0,$ $q\geq0$ and $\nu>-1$ as a product of the strictly increasing functions $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ and $u\mapsto u^{-p}I_{\nu}^q(u).$ Now, since for $p=q=0$ this part reduces to part [**(b)**]{} of Theorem \[th1\], the above remark reveals that in fact part [**(b)**]{} of Theorem \[th1\] and part [**(a)**]{} of Theorem \[th1new\] are equivalent. [**(b)**]{} First assume that $p<0$ and $q<0.$ Then by using we obtain that $$\lambda_{p,q,\nu}(u)<\frac{I_{\nu}^q(u)}{u^{p+1}}\left[u^2+\nu^2-p\sqrt{u^2+\nu^2}-(1-q)\left(\frac{\nu}{\nu+1}u^2+\nu^2\right)\right]$$ and this is nonpositive if $$p^2(u^2+\nu^2)\leq\left(q\nu^2+\frac{q\nu+1}{\nu+1}u^2\right)^2,\ \ \ \mbox{i.e.}\ \ 0\leq Q_{\nu}(u^2),$$ where $Q_{\nu}(u)=au^2+bu+c$ with $\nu\geq -1/q,$ $$a=\left(\frac{q\nu+1}{\nu+1}\right)^2,\ b=2q\nu^2\frac{q\nu+1}{\nu+1}-p^2,\ c=\nu^2(q^2\nu^2-p^2).$$ This gives a necessary condition to be $b^2-4ac\leq0.$ A computation shows that the condition $b^2-4ac\leq0$ is equivalent to the inequality $$4q(q-1)\nu^3-(p^2-4(q-1))\nu^2-2p^2\nu-p^2\geq0.$$ Now, assume that $p=0$ and $q<0.$ Then from the left-hand side of we have $$\lambda_{0,q,\nu}(u)=\frac{I_{\nu}^q(u)}{u}\left[u^2+\nu^2-(1-q)\left[\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right]^2\right]< \frac{I_{\nu}^q(u)}{u}\left[\left(\frac{q\nu+1}{\nu+1}\right)u^2+q\nu^2\right]<0$$ for all $\nu\geq -1/q,$ $q<0$ and $u>0,$ as we requested. [**(c)**]{} This follows directly from part [**(a)**]{} of Theorem \[th1\]. More precisely, it is easy to see that the function $u\mapsto u^{1-p}I_{\nu}'(u)I_{\nu}^{q-1}(u)$ is strictly decreasing on $(0,\infty)$ for all $\nu\geq 1$ as a product of the strictly decreasing function $u\mapsto uI_{\nu}'(u)/I_{\nu}^2(u)$ and the decreasing function $u\mapsto u^{-p}I_{\nu}^{q+1}(u).$ Since part [**(c)**]{} of Theorem \[th1new\] reduces to part [**(a)**]{} of Theorem \[th1\] when $p=0$ and $q=-1$, the above proof reveals that in fact part [**(c)**]{} of Theorem \[th1new\] is equivalent to part [**(a)**]{} of Theorem \[th1\]. [**(d)**]{} Recall that part [**(b)**]{} of Theorem \[th1\] states that $I_{\nu}$ is strictly geometrically convex on $(0,\infty)$ for all $\nu>-1,$ i.e. 
the function $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ is strictly increasing on $(0,\infty)$ for all $\nu>-1.$ To prove that $I_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty)$ for all $p\geq 0,$ $q>0$ and $\nu\geq p/q$ in what follows we show that the function $u\mapsto u^{1-p}I_{\nu}'(u)I_{\nu}^{q-1}(u)$ is strictly increasing as a product of the strictly increasing functions $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ and $u\mapsto u^{-p}I_{\nu}^q(u).$ On the other hand, observe that since $u\mapsto uI_{\nu}'(u)/I_{\nu}(u)$ is strictly increasing on $(0,\infty)$, we obtain that $$uI_{\nu}'(u)/I_{\nu}(u)>\nu$$ for all $\nu>-1$ and $u>0$ (actually for $\nu>0$ this inequality follows directly from the left-hand side of ). Here we used that if $u$ tends to zero then $uI_{\nu}'(u)/I_{\nu}(u)$ tends to $\nu,$ which can be verified from or from $$\frac{uI_{\nu}'(u)}{I_{\nu}(u)}=\nu+2\sum_{n\geq 1}\frac{u^2}{u^2+j_{\nu,n}^2}.$$ The above inequality implies that $$\frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{I_{\nu}^q(u)}{u^p}\right]=\frac{I_{\nu}^q(u)}{u^{p+1}}\left[-p+q\frac{uI_{\nu}'(u)}{I_{\nu}(u)}\right] >\frac{I_{\nu}^q(u)}{u^{p+1}}(-p+q\nu)\geq0,$$ and with this the proof of this part is complete. [**(e)**]{} This follows from the fact that $I_{\nu}$ is strictly increasing and convex on $(0,\infty)$ for all $\nu>-1.$ Namely, the function $u\mapsto u^{1-p}I_{\nu}'(u)I_{\nu}^{q-1}(u)$ is strictly increasing as a product of the strictly increasing function $u\mapsto I_{\nu}'(u)$ and the increasing functions $u\mapsto u^{1-p}$ and $u\mapsto I_{\nu}^{q-1}(u).$ Now, we are going to present the analogous result of Theorem \[th1new\] for modified Bessel functions of the second kind. We note that part [**(c)**]{} of Theorem \[th2new\] (when $p=1$ and $q=-1$) reduces to part [**(a)**]{} of Theorem \[th2\], part [**(e)**]{} of Theorem \[th2new\] (when $p=1$ and $q=0$) becomes part [**(b)**]{} of Theorem \[th2\], part [**(b)**]{} of Theorem \[th2new\] (when $p=q=0$) reduces to part [**(c)**]{} of Theorem \[th2\], and part [**(d)**]{} of Theorem \[th2new\] (when $p=0$ and $q=1$) becomes part [**(d)**]{} of Theorem \[th2\]. Finally, observe that if we choose $p=-1$ and $q=1$ in part [**(a)**]{} of Theorem \[th2new\], then we obtain part [**(e)**]{} of Theorem \[th2\]. \[th2new\] Let $p,q\in\mathbb{R}$ and let $\nu\in\mathbb{R}.$ Then the following assertions are true: 1. if $p\leq0$ and $q\geq1,$ then $K_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty)$ provided if $\nu>1$ and $$4(1-q)p^2\nu^2+4(q-2)p^2\nu+p^2(p^2+4)\leq0;$$ 2. if $p\leq0$ and $q\leq 0,$ then $K_{\nu}$ is strictly $(p,q)-$concave on $(0,\infty);$ 3. if $p\geq0$ and $q<0,$ then $K_{\nu}$ is strictly $(p,q)-$concave on $(0,\infty)$ provided if $|\nu|\geq-p/q;$ 4. if $p\geq0$ and $q\geq 1,$ then $K_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty);$ 5. 
if $p\geq1$ and $q\geq 0,$ then $K_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty).$ For convenience first we introduce the following notation $$\begin{aligned} \mu_{p,q,\nu}(u)=\frac{{\operatorname{d\!}}}{{\operatorname{d\!}}u}\left[\frac{u^{1-p}K_{\nu}'(u)}{K_{\nu}^{1-q}(u)}\right]= \frac{K_{\nu}^q(u)}{u^{p+1}}\left[u^2+\nu^2-p\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]-(1-q)\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]^2\right].\end{aligned}$$ Observe that in view of Lemma \[lem3\] the $(p,q)-$convexity ($(p,q)-$concavity) of $K_{\nu}$ depends only on the sign of the expression $\mu_{p,q,\nu}(u).$ [**(a)**]{} Notice that for all $\nu\in\mathbb{R}$ fixed when $u$ tends to zero $uK_{\nu}'(u)/K_{\nu}(u)$ tends to $-\nu.$ This can be verified for example from the integral representation . On the other hand, in view of part [**(c)**]{} of Theorem \[th2\] the function $u\mapsto uK_{\nu}'(u)/K_{\nu}(u)$ is strictly decreasing on $(0,\infty)$ for all $\nu\in\mathbb{R},$ and this in turn implies that for all $\nu\in\mathbb{R}$ and $u>0$ the inequality $$\label{eq9}uK_{\nu}'(u)/K_{\nu}(u)<-\nu$$ holds. We note that actually this follows also from the right-hand side of . Now, by using and the left-hand side of we obtain that $$\mu_{p,q,\nu}(u)>\frac{K_{\nu}^q(u)}{u^{p+1}}\left[u^2+\nu^2+p\sqrt{\frac{\nu}{\nu-1}u^2+\nu^2}+(q-1)\nu^2\right]$$ and the right hand side of the last inequality is nonnegative if and only if $$Q_{\nu}(u)=u^4+\left(2q\nu^2-\frac{\nu}{\nu-1}p^2\right)u^2+\nu^2(q^2\nu^2-p^2)\geq0.$$ Now, under assumptions the discriminant of the quadratic equation $Q_{\nu}(\sqrt{u})=0,$ i.e. $$\frac{\nu^2}{(\nu-1)^2}\left[4(1-q)p^2\nu^2+4(q-2)p^2\nu+p^2(p^2+4)\right]$$ is negative and with this the proof of this part is complete. [**(b)**]{} This follows from the fact that if $\nu\in\mathbb{R}$ and $p,q\leq 0,$ then $\mu_{p,q,\nu}(u)<0$ for all $u>0.$ Namely, from the right-hand side of we have $$\mu_{p,q,\nu}(u)<\frac{K_{\nu}^q(u)}{u^{p+1}} \left[-p\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]+q\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]^2\right]\leq0$$ for all $\nu\in\mathbb{R},$ $p,q\leq 0$ and $u>0.$ Here we used that $K_{\nu}$ is strictly decreasing on $(0,\infty)$ for all $\nu\in\mathbb{R}.$ We note here that this part follows actually from part [**(c)**]{} of Theorem \[th2\]. Namely, the function $u\mapsto u^{1-p}K_{\nu}'(u)\left[K_{\nu}(u)\right]^{q-1}$ is strictly decreasing on $(0,\infty)$ for all $p,q\leq0$ and $\nu\in\mathbb{R}$ as a product of the strictly decreasing and negative function $u\mapsto uK_{\nu}'(u)/K_{\nu}(u)$ and the strictly increasing and positive function $u\mapsto u^{-p}K_{\nu}^q(u).$ Now, since for $p=q=0$ this part reduces to part [**(c)**]{} of Theorem \[th2\], the above remark shows that in fact part [**(c)**]{} of Theorem \[th2\] is equivalent to part [**(b)**]{} of Theorem \[th2new\]. 
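The sufficient conditions above can also be explored numerically through the criterion of Lemma \[lem3\]: for $p,q\leq0$, part [**(b)**]{} predicts that $u\mapsto u^{1-p}K_{\nu}'(u)K_{\nu}^{q-1}(u)$ is strictly decreasing. The sketch below (assuming SciPy's `kv` and `kvp` for $K_{\nu}$ and $K_{\nu}'$; the particular values of $\nu$, $p$, $q$ and the grid are arbitrary illustrative choices) evaluates this quantity on a grid; it is only a sanity check and plays no role in the proof.

```python
import numpy as np
from scipy.special import kv, kvp

def criterion(u, nu, p, q):
    """u^{1-p} K_nu'(u) K_nu(u)^{q-1}, the quantity appearing in Lemma lem3 for f = K_nu."""
    return u ** (1 - p) * kvp(nu, u) * kv(nu, u) ** (q - 1)

u = np.linspace(0.05, 8.0, 400)
nu, p, q = 0.7, -1.0, -0.5        # p, q <= 0, so part (b) predicts (p,q)-concavity, i.e. a decreasing criterion
vals = criterion(u, nu, p, q)
print("criterion decreasing on the grid:", bool(np.all(np.diff(vals) < 0)))
```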
[**(c)**]{} By using and the right-hand side of we have for all $u>0,$ $p\geq0,$ $q<0$ and $\nu\geq -p/q$ $$\begin{aligned} \mu_{p,q,\nu}(u)&<\frac{K_{\nu}^q(u)}{u^{p+1}} \left[-p\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]+q\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]^2\right]\\ &=-\frac{K_{\nu}^q(u)}{u^{p+1}}\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]\left[p-q\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right]\right]\\ &\leq-(p+q\nu)\frac{K_{\nu}^q(u)}{u^{p+1}}\left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}\right] \leq0.\end{aligned}$$ [**(d)**]{} Since $p\geq0$ and $q\geq 1,$ the function $u\mapsto u^{-p}K_{\nu}^{q-1}(u)$ is decreasing on $(0,\infty)$ for all $\nu\in\mathbb{R}.$ Now, by using part [**(d)**]{} of Theorem \[th2\] we conclude that $u\mapsto u^{1-p}K_{\nu}'(u)\left[K_{\nu}(u)\right]^{q-1}$ is strictly increasing as a product of the strictly increasing and negative function $u\mapsto uK_{\nu}'(u)$ and the decreasing and positive function $u\mapsto u^{-p}K_{\nu}^{q-1}(u).$ Observe that since for $p=0$ and $q=1$ this part reduces to part [**(d)**]{} of Theorem \[th2\], in fact they are equivalent. Finally, we note that the proof of this part can be obtained also simply from the fact that under assumptions $\mu_{p,q,\nu}(u)>0.$ [**(e)**]{} The proof of this part is very similar to the proof of part [**(d)**]{} above. Under assumptions the function $u\mapsto u^{1-p}K_{\nu}^q(u)$ is decreasing. Consequently, by using part [**(b)**]{} of Theorem \[th2\], the function $u\mapsto u^{1-p}K_{\nu}'(u)\left[K_{\nu}(u)\right]^{q-1}$ is strictly increasing as a product of the strictly increasing and negative function $u\mapsto K_{\nu}'(u)/K_{\nu}(u)$ and the decreasing and positive function $u\mapsto u^{1-p}K_{\nu}^{q}(u).$ Observe that since for $p=1$ and $q=0$ this part reduces to part [**(b)**]{} of Theorem \[th2\], in fact they are equivalent. **Application to the log-concavity of the gamma-gamma distribution** ==================================================================== The probability density function $f_{a,b,\alpha}:(0,\infty)\to(0,\infty)$ of the three parameter gamma-gamma random variable is defined by (see [@karag]) $$f_{a,b,\alpha}(u)=\frac{2(ab)^{\frac{a+b}{2}}u^{\frac{a+b}{2}-1}}{\Gamma(a)\Gamma(b)\alpha^{{\frac{a+b}{2}}}} K_{a-b}\left(2\sqrt{\frac{ab}{\alpha}u}\right),$$ where $a,b>0$ are the distribution shaping parameters, $K_{\nu}$ stands for the modified Bessel function of the second kind, and $\alpha>0$ is the mean of the gamma-gamma random variable. The gamma-gamma distribution is produced from the product of two independent gamma random variables and has been widely used in a variety of applications, for example in modeling various types of land and sea radar clutters, in modeling the effects of the combined fading and shadowing phenomena, encountered in the mobile communications channels. Of particular interest is the application of the gamma-gamma distribution in optical wireless systems, where transmission of optical signals through the atmosphere is involved. For more details see [@karag; @karag2]. 
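For concreteness, the density $f_{a,b,\alpha}$ is easy to evaluate numerically. The sketch below (assuming SciPy's `kv` for $K_{a-b}$; the parameter values are arbitrary illustrative choices) checks that the density integrates to one and performs a crude grid check of the log-concavity of the cumulative distribution function established in Theorem \[th5\] below, via the monotonicity of the ratio of the density to its running integral.

```python
import numpy as np
from scipy.special import kv, gammaln
from scipy.integrate import quad, cumulative_trapezoid

def gamma_gamma_pdf(u, a, b, alpha):
    """Density f_{a,b,alpha}(u) of the three-parameter gamma-gamma distribution."""
    logc = np.log(2.0) + 0.5 * (a + b) * np.log(a * b / alpha) - gammaln(a) - gammaln(b)
    return np.exp(logc + (0.5 * (a + b) - 1.0) * np.log(u)) * kv(a - b, 2.0 * np.sqrt(a * b * u / alpha))

a, b, alpha = 2.5, 1.2, 3.0                      # arbitrary shape parameters and mean
mass, _ = quad(gamma_gamma_pdf, 0.0, np.inf, args=(a, b, alpha))
print("total probability mass:", round(mass, 6))  # should be close to 1

# crude grid check: F'(u)/F(u) = f(u)/F(u) should be strictly decreasing (log-concavity of the cdf)
u = np.linspace(1e-3, 25.0, 4000)
f = gamma_gamma_pdf(u, a, b, alpha)
F = cumulative_trapezoid(f, u, initial=0.0) + quad(gamma_gamma_pdf, 0.0, u[0], args=(a, b, alpha))[0]
print("f/F decreasing on the grid:", bool(np.all(np.diff(f / F) < 0)))
```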
Now, consider the functions $\widetilde{f}_{a,b,\alpha}:(0,\infty)\to(0,\infty)$ and $F_{a,b,\alpha}:(0,\infty)\to(0,1)$ defined by $$\widetilde{f}_{a,b,\alpha}(u)=f_{a,b,\alpha}\left(\frac{\alpha u^2}{4ab}\right)= \frac{2^{3-(a+b)}(ab)u^{a+b-2}}{\alpha\Gamma(a)\Gamma(b)}K_{a-b}(u)$$ and $$F_{a,b,\alpha}(u)=\int_0^uf_{a,b,\alpha}(t){\operatorname{d\!}}t=\frac{1}{\Gamma(a)\Gamma(b)}\cdot G_{1,3}^{2,1} \left[\left.\frac{ab}{\alpha}u\right|\begin{array}{c}1\\ a,b,0\end{array}\right],$$ where $G_{1,3}^{1,2}$ is a Meijer $G-$function [@gradshteyn eq. 9.301]. Here $\widetilde{f}_{a,b,\alpha}$ is just a transformation of the probability density function ${f}_{a,b,\alpha},$ while $F_{a,b,\alpha}$ is the cumulative distribution function of the gamma-gamma distribution. In probability theory usually the cumulative distribution functions does not have closed-form, and thus sometimes it is quite difficult to study their properties directly. In statistics, economics and industrial engineering frequently appears some problems which are related to the study of log-concavity (log-convexity) of some univariate distributions. An interesting unified exposition of related results on the log-concavity and log-convexity of many distributions, including applications in economics, were communicated by Bagnoli and Bergstrom [@bagnoli]. Some of their main results were reconsidered by András and Baricz [@andras] by using the monotone form of l’Hospital’s rule. Moreover, by using the idea from [@andras], recently, Baricz [@bariczgeo] showed, among others, that if a probability density function is geometrically concave then the corresponding cumulative distribution function will be also geometrically concave. In this section we use this result to prove that the cumulative distribution function $F_{a,b,\alpha}$ is strictly log-concave on $(0,\infty)$ for all $a,b,\alpha>0.$ This result may be useful in problems of information theory and communications. \[th5\] Let $a,b,\alpha>0.$ Then the following assertions are true: 1. $u\mapsto u\widetilde{f}'_{a,b,\alpha}(u)/\widetilde{f}_{a,b,\alpha}(u)$ is strictly decreasing on $(0,\infty);$ 2. $u\mapsto u{f}'_{a,b,\alpha}(u)/{f}_{a,b,\alpha}(u)$ is strictly decreasing on $(0,\infty);$ 3. $u\mapsto u{F}'_{a,b,\alpha}(u)/{F}_{a,b,\alpha}(u)$ is strictly decreasing on $(0,\infty);$ 4. $u\mapsto {F}'_{a,b,\alpha}(u)/{F}_{a,b,\alpha}(u)$ is strictly decreasing on $(0,\infty).$ [**(a)**]{} From part [**(c)**]{} of Theorem \[th2\] we have that the function $$u\mapsto \frac{u\widetilde{f}'_{a,b,\alpha}(u)}{\widetilde{f}_{a,b,\alpha}(u)}=a+b-2+\frac{uK_{a-b}'(u)}{K_{a-b}(u)}$$ is strictly decreasing on $(0,\infty)$ for all $a,b,\alpha>0.$ [**(b)**]{} Observe that part [**(a)**]{} of this theorem actually means that the function $\widetilde{f}_{a,b,\alpha}$ is strictly geometrically concave, i.e. 
for all $a,b,\alpha>0,$ $\lambda\in(0,1)$ and $u_1,u_2>0,$ $u_1\neq u_2$ we have $$\widetilde{f}_{a,b,\alpha}\left(u_1^{\lambda}u_2^{1-\lambda}\right)> \left[\widetilde{f}_{a,b,\alpha}(u_1)\right]^{\lambda}\left[\widetilde{f}_{a,b,\alpha}(u_2)\right]^{1-\lambda}.$$ Now, changing in the above inequality $u_i$ with $2\sqrt{abu_i/\alpha},$ where $i\in\{1,2\},$ we obtain $${f}_{a,b,\alpha}\left(u_1^{\lambda}u_2^{1-\lambda}\right)> \left[{f}_{a,b,\alpha}(u_1)\right]^{\lambda}\left[{f}_{a,b,\alpha}(u_2)\right]^{1-\lambda}$$ for all $a,b,\alpha>0,$ $\lambda\in(0,1)$ and $u_1,u_2>0,$ $u_1\neq u_2.$ This means that the function $f_{a,b,\alpha}$ is strictly geometrically concave and hence the function $u\mapsto u{f}'_{a,b,\alpha}(u)/{f}_{a,b,\alpha}(u)$ is strictly decreasing on $(0,\infty).$ [**(c)**]{} This follows from part [**(b)**]{} of this theorem. Namely, it is known (see [@bariczgeo]) that if the probability density function is strictly geometrically concave, then the corresponding cumulative distribution function is also strictly geometrically concave. [**(d)**]{} Part [**(c)**]{} of this theorem states that the cumulative distribution function $F_{a,b,\alpha}$ is strictly geometrically concave. Now, by using the fact that $F_{a,b,\alpha},$ as a distribution function, is increasing, for all $a,b,\alpha>0,$ $\lambda\in(0,1)$ and $u_1,u_2>0,$ $u_1\neq u_2$ we have $${F}_{a,b,\alpha}\left(\lambda u_1+(1-\lambda)u_2\right)>{F}_{a,b,\alpha}\left(u_1^{\lambda}u_2^{1-\lambda}\right)> \left[{F}_{a,b,\alpha}(u_1)\right]^{\lambda}\left[{F}_{a,b,\alpha}(u_2)\right]^{1-\lambda},$$ that is, $F_{a,b,\alpha}$ is strictly log-concave on $(0,\infty).$ Open Problems ============= In this section our aim is to complement the results from the previous sections and to present certain open problems, which may be of interest for further research. Recall that Neuman [@eneuman] proved that the modified Bessel function $I_{\nu}$ is strictly log-convex on $(0,\infty)$ for all $\nu\in(-1/2,0].$ Since $I_{-1/2}(u)=\sqrt{\pi/(2u)}\cosh u,$ we conclude that in fact $I_{\nu}$ is strictly log-convex on $(0,\infty)$ for all $\nu\in[-1/2,0].$ Thus, for all $\nu\in[-1/2,0]$ and $u_1,u_2>0$ the third inequality in can be improved as follows $$I_{\nu}\left(\sqrt{u_1u_2}\right)\leq I_{\nu}\left(\frac{u_1+u_2}{2}\right)\leq \sqrt{I_{\nu}(u_1)I_{\nu}(u_2)}.$$ Moreover, this implies that the function $I_{\nu}$ is strictly $(p,q)-$convex on $(0,\infty)$ for all $\nu\in[-1/2,0],$ $p\leq1$ and $q\geq0.$ This can be verified by writing the function $u\mapsto u^{1-p}I_{\nu}'(u)I_{\nu}^{q-1}(u)$ as a product of the functions $u\mapsto I_{\nu}'(u)/I_{\nu}(u)$ and $u\mapsto u^{1-p}I_{\nu}^q(u).$ Concerning Theorem \[th1\] we have the following open problem. What can we say about the monotonicity of the functions $u\mapsto uI_{\nu}'(u)/I_{\nu}^2(u)$ and $u\mapsto u^2I_{\nu}'(u)/I_{\nu}^2(u)$ for $|\nu|<1$ and $\nu\in(-1,\nu_0),$ respectively? Is it true that $u\mapsto \sqrt{u}I_{\nu}(u)$ is strictly log-concave on $(0,\infty)$ for all $\nu\geq 0$? Now, concerning Theorem \[th2\], \[th1new\] and \[th2new\] we may ask the following. What can we say about the monotonicity of $u\mapsto K_{\nu}'(u)/K_{\nu}^2(u)$ when $|\nu|<1$? What can we say about the $(p,q)-$convexity (concavity) of $I_{\nu}$ when $p\geq0,$ $q\in(-1,0)$? Moreover, the conditions for $\nu$ in parts [**(b)**]{}, [**(c)**]{} and [**(d)**]{} of Theorem \[th1new\] can be relaxed? What can we say about the $(p,q)-$convexity (concavity) of $K_{\nu}$ when $p\leq1,$ $q\in(0,1)$? 
Moreover, the conditions for $\nu$ in parts [**(a)**]{} and [**(c)**]{} of Theorem \[th2new\] can be relaxed? It is well-known that the function $\nu\mapsto K_{\nu}(u)$ is strictly log-convex on $\mathbb{R}$ for all $u>0$ fixed (see [@bariczstudia]). On the other hand $\nu\mapsto K_{\nu}(u)$ is strictly increasing on $(0,\infty)$ for all $u>0$ fixed. Clearly these imply that the function $\nu\mapsto K_{\nu}(u)$ is strictly $(p,q)-$convex on $(0,\infty)$ for all $p\leq 1$ and $q\geq 0,$ and all fixed $u>0.$ This suggest the following. What can we say about the $(p,q)-$convexity (concavity) of the function $\nu\mapsto K_{\nu}(u)$ on $(0,\infty)$ when $p$ and $q$ are arbitrary real numbers? Similarly, the function $\nu\mapsto I_{\nu}(u)$ is strictly log-concave on $(-1,\infty)$ for all $u>0$ fixed (see [@bariczstudia]). On the other hand $\nu\mapsto I_{\nu}(u)$ is strictly decreasing on $(-1,\infty)$ for all $u>0$ fixed. Clearly these imply that the function $\nu\mapsto I_{\nu}(u)$ is strictly $(p,q)-$concave on $(0,\infty)$ for all $p\geq 1$ and $q\geq 0,$ and all fixed $u>0.$ Thus, it is natural to ask the following. What can we say about the $(p,q)-$convexity (concavity) of the function $\nu\mapsto I_{\nu}(u)$ on $(0,\infty)$ when $p$ and $q$ are arbitrary real numbers? And what about the $(p,q)-$convexity (concavity) of $\nu\mapsto I_{\nu}(u)$ on $(-1,\infty)$? Due to Laforgia [@laforgia] it is known that $K_{\nu}'(u)/K_{\nu}(u)\leq-\nu/u-1$ for all $u>0$ and $\nu\in(0,1/2).$ First observe that the above inequality is valid for all $\nu\in[0,1/2].$ Since $K_0'(u)=-K_1(u)$ for $\nu=0$ the above inequality is equivalent to $K_1(u)>K_0(u),$ which is clearly true, since the function $\nu\mapsto K_{\nu}(u)$ is strictly increasing on $(0,\infty)$ for all $u>0$ fixed. Now, since $K_{1/2}(u)=\sqrt{\pi/(2u)}e^{-u}$ we obtain that in Laforgia’s inequality for $\nu=1/2$ we have equality and since $\nu\mapsto K_{\nu}(u)$ is even, we deduce that $K_{\nu}'(u)/K_{\nu}(u)\leq-\nu/u-1$ holds true for all $u>0$ and $|\nu|\leq 1/2,$ with equality for $\nu=1/2.$ By using this result we obtain that $$\begin{aligned} \frac{\left[u^2K_{\nu}'(u)\right]'}{K_{\nu}(u)}&=2\frac{uK_{\nu}'(u)}{K_{\nu}(u)}+\frac{u^2K_{\nu}''(u)}{K_{\nu}(u)}= \left[\frac{uK_{\nu}'(u)}{K_{\nu}(u)}+u^2+\nu^2\right]\leq u^2-u+\nu^2-\nu<0\end{aligned}$$ for all $u\in(0,1)$ and $|\nu|\leq1/2.$ This implies that the function $u\mapsto u^2K_{\nu}'(u)$ is strictly decreasing on $(0,1)$ for all $|\nu|\leq1/2,$ i.e. the modified Bessel function of the second kind $K_{\nu}$ is strictly $(-1,1)-$concave on $(0,1)$ for all $|\nu|\leq1/2.$ This completes parts [**(e)**]{} and [**(f)**]{} of Theorem \[th2\]. Taking into account the above discussion we may ask the following. Is it true that $u\mapsto u^2K_{\nu}'(u)$ is strictly decreasing on $(0,2)$ for all $|\nu|\leq1/2$? In reliability analysis it has been found very useful to classify life distributions (i.e. distributions of which cumulative distribution function satisfies $F(u)=0$ for $u\leq0$) according to the monotonicity properties of the failure rate. 
By definition a life distribution (with probability density function $f$ and survival or reliability function $\overline{F}$) has the increasing failure rate (IFR) property if the function $u\mapsto f(u)/\overline{F}(u)$ is increasing on $(0,\infty).$ Since by definition $\overline{F}(u)=1-F(u)$ for all $u>0,$ clearly we have $\overline{F}'(u)=-f(u)$ for all $u>0.$ Thus, a life distribution is IFR if and only if $u\mapsto -\overline{F}'(u)/\overline{F}(u)$ is increasing on $(0,\infty),$ i.e. the reliability function $\overline{F}$ is log-concave. It is well-known that if a probability density function is log-concave then this implies that the corresponding cumulative distribution function and the complementary cumulative distribution function (or survival function) have the same property (for more details see [@andras; @bagnoli; @bariczgeo]). Another class of life distributions is the NBU, which has been shown to be fundamental in the study of replacement policies. By definition a life distribution satisfies the new-is-better-than-used (NBU) property if $u\mapsto \log\overline{F}(u)$ is sub-additive, i.e. $$\overline{F}(u_1+u_2)\leq \overline{F}(u_1)\overline{F}(u_2)$$ for all $u_1,u_2>0.$ The corresponding concept of a new-is-worse-than-used (NWU) distribution is defined by reversing the above inequality. The NBU property may be interpreted as stating that the chance $\overline{F}(u_1)$ that a new unit will survive to age $u_1$ is greater than the chance $\overline{F}(u_1+u_2)/\overline{F}(u_2)$ that an unfailed unit of age $u_2$ will survive an additional time $u_1.$ It can be shown easily that if a life distribution is IFR then it is NBU (see for example [@bariczgamma]), but the inverse implication in general does not hold. Since the most important life distribution satisfies the NBU property it is natural to ask the following. Is it true that the gamma-gamma distribution satisfies the NBU property? To answer this question it would be enough to prove that the probability density function $f_{a,b,\alpha}$ is log-concave, and for this in view of part [**(b)**]{} of Theorem \[th5\] it is quite enough to show that $f_{a,b,\alpha}$ is increasing. Similarly, observe that for the log-concavity of $f_{a,b,\alpha}$ we just need to show that $\widetilde{f}_{a,b,\alpha}$ is increasing and log-concave. However, by part [**(a)**]{} of Theorem \[th5\] if $\widetilde{f}_{a,b,\alpha}$ is increasing, then it is log-concave. Thus, to prove that the gamma-gamma distribution is NBU we need to show that either ${f}_{a,b,\alpha}$ or $\widetilde{f}_{a,b,\alpha}$ is increasing. Acknowledgments {#acknowledgments .unnumbered} --------------- The research of Árpád Baricz was supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences and by the Romanian National Authority for Scientific Research CNCSIS-UEFISCSU, project number PN-II-RU-PD388/2011. [1]{} ACZÉL, J., 1947, The notion of mean values. [*Norske Vid. Selsk. Forh. Trondhjem*]{} [**19**]{}, 83–-86. ALZER, H., 2008, Inequalities for Euler’s gamma function. [*Forum Math.*]{}, [**20**]{}, 955–1004. ANDERSON, G.D, VAMANAMURTHY, M.K., VUORINEN, M., 1993, Inequalities for quasiconformal mappings in space. [*Pacific J. Math.*]{}, [**160**]{}, 1–18. ANDERSON, G.D, VAMANAMURTHY, M.K., VUORINEN, M., 2006, Monotonicity rules in calculus. [*Amer. Math. Monthly*]{}, [**113**]{}, 805–816. ANDERSON, G.D, VAMANAMURTHY, M.K., VUORINEN, M., 2007, Generalized convexity and inequalities. [*J. Math. Anal. Appl.*]{}, [**335**]{}, 1294–1308. 
ANDRÁS, S., BARICZ, Á., 2008, Properties of the probability density function of the non-central chi-squared distribution. [*J. Math. Anal. Appl.*]{}, [**346**]{}, 395–402. BAGNOLI, M., BERGSTROM, T., 2005, Log-concave probability and its applications. [*Econom. Theory*]{}, [**26**]{}, 445-–469. BARICZ, Á., 2006, Functional inequalities involving special functions. [*J. Math. Anal. Appl.*]{}, [**319**]{}, 450–459. BARICZ, Á., 2007, Functional inequalities involving special functions. II. [*J. Math. Anal. Appl.*]{}, [**327**]{}, 1202–1213. BARICZ, Á., 2007, Convexity of the zero-balanced Gaussian hypergeometric functions with respect to Hölder means. [*J. Inequal. Pure Appl. Math.,*]{} [**8**]{}, art. 40, 9 pp. BARICZ, Á., 2008, A functional inequality for the survival function of the gamma distribution. [*J. Inequal. Pure Appl. Math.,*]{} [**9**]{}, art. 13, 5 pp. BARICZ, Á., 2008, Mills’ ratio: Monotonicity patterns and functional inequalities. [*J. Math. Anal. Appl.*]{}, [**340**]{}, 1362–1370. BARICZ, Á., 2008, Functional inequalities involving Bessel and modified Bessel functions of the first kind. [*Expo. Math.*]{}, [**26**]{}, 279–293. BARICZ, Á., 2009, On a product of modified Bessel functions. [*Proc. Amer. Math. Soc.,*]{} [**137**]{}, 189–193. BARICZ, Á., 2010, Geometrically concave univariate distributions. [*J. Math. Anal. Appl.,*]{} [**363**]{}, 182–196. BARICZ, Á., 2010, Turán type inequalities for some probability density functions. [*Studia Sci. Math. Hungar.*]{}, [**47**]{}, 175–189. BARICZ, Á., 2010, Turán type inequalities for modified Bessel functions. [*Bull. Aust. Math. Soc.*]{}, [**82**]{}, 254–264. BARICZ, Á., NEUMAN, E., 2007, Inequalities involving modified Bessel functions of the first kind. II. [*J. Math. Anal. Appl.*]{}, [**332**]{}, 265–271. BARICZ, Á., PONNUSAMY, S., On Turán type inequalities for modified Bessel functions. Available online at `http://arxiv.org/abs/1010.3346`. BIERNACKI, M., KRZYŻ, J., 1955, On the monotonity of certain functionals in the theory of analytic functions. [*Ann. Univ. Mariae Curie-Skłodowska. Sect. A.*]{}, [**9**]{}, 135–147. BITHAS, P.S., SAGIAS, N.C., MATHIOPOULOS, P.T., KARAGIANNIDIS, G.K., RONTOGIANNIS, A.A., 2006, On the performance analysis of digital communications over generalized-K fading channels. [*IEEE Commun. Letters*]{}, [**10**]{}, 353–355. BORWEIN, D., BORWEIN, J., FEE, G., GIRGENSOHN, R., 2001, Refined convexity and special cases of the Blascke-Santalo inequality. [*Math. Inequal. Appl.*]{}, [**4**]{}, 631–638. CHATZIDIAMANTIS, N.D., KARAGIANNIDIS, G.K., On the distribution of the sum of gamma-gamma variates and applications in RF and optical wireless communications. [*IEEE Trans. on Comm.,*]{} (submitted). Available online at `http://arxiv.org/abs/0905.1305v1`. DUFF, G.F.D., 1969, Positive elementary solutions and completely monotonic functions. [*J. Math. Anal. Appl.*]{}, [**27**]{}, 469–494. GIORDANO, C., LAFORGIA, A., PEČARIĆ, J., 1996, Supplements to known inequalities for some special functions. [*J. Math. Anal. Appl.*]{}, [**200**]{}, 34–41. GRADSHTEYN, I.S., RYZHIK, I.M., 2000, [*Table of Integrals, Series, and Products*]{}, 6th ed., New York: Academic. GRONWALL, T.H., 1932, An inequality for the Bessel functions of the first kind with imaginary argument. [*Ann. of Math.,*]{} [**33**]{}(2), 275–278. HARTMAN, P., 1977, On the products of solutions of second order disconjugate differential equations and the Whittaker differential equation. [*SIAM J. Math. Anal.*]{}, [**8**]{}, 558–571. 
LAFORGIA, A., 1991, Bounds for modified Bessel functions. [*J. Comput. Appl. Math.*]{}, [**34**]{}, 263–267. LAFORGIA, A., NATALINI, P., 2010, Some inequalities for modified Bessel functions. [*J. Inequal. Appl.*]{}, Art. 253035. MITRINOVIĆ, D.S., 1970, [*Analytic Inequalities,*]{} Berlin: Springer-Verlag. NEUMAN, E., 1992, Inequalities involving modified Bessel functions of the first kind. [*J. Math. Anal. Appl.*]{}, [**171**]{}, 532–536. PENFOLD, R., VANDEN-BROECK, J.M., GRANDISON, S., 2007, Monotonicity of some modified Bessel function products. [*Integral Transforms Spec. Funct.*]{}, [**18**]{}, 139–144. PHILLIPS, R.S., MALIN, H., 1950, Bessel function approximations. [*Amer. J. Math.,*]{} [**72**]{}, 407–418. PINELIS, I., 2002, L’Hospital’s rules for monotonicity, with applications. [*J. Inequal. Pure Appl. Math.*]{} [**3**]{}, art. 5, 5 pp. PONNUSAMY, S., VUORINEN, M., 1997, Asymptotic expansions and inequalities for hypergeometric functions. [*Mathematika*]{}, [**44**]{}, 43–64. SEGURA, J., 2011, Bounds for ratios of modified Bessel functions and associated Turán-type inequalities. [*J. Math. Anal. Appl.*]{}, [**374**]{}, 516–528. SUN, Y., BARICZ, Á., 2008, Inequalities for the generalized Marcum $Q$-function. [*Appl. Math. Comput.*]{}, [**203**]{}, 134–141. TEMME, N.M., 1981, On the expansion of confluent hypergeometric functions in terms of Bessel functions. [*J. Comput. Appl. Math.*]{}, [**7**]{}, 27–32. THIRUVENKATACHAR, V.R., NANJUNDIAH, T.S., 1951, Inequalities concerning Bessel functions and orthogonal polynomials. [*Proc. Indian Acad. Sci., Sect. A.,*]{} [**33**]{}, 373–384. WATSON, G.N., 1944, [*A Treatise on the Theory of Bessel Functions*]{}, Cambridge: Cambridge University Press. WIDDER, D.V., 1941, [*The Laplace Transform*]{}, Princeton: Princeton University Press. ZHANG, X., WANG, G., CHU, Y., 2009, Convexity with respect to Hölder mean involving zero-balanced hypergeometric functions. [*J. Math. Anal. Appl.*]{}, [**353**]{}, 256–259.
--- abstract: 'We present accretion rates for selected samples of nova-like variables having IUE archival spectra and distances uniformly determined using an infrared method by Knigge (2006). A comparison with accretion rates derived independently with a multi-parametric optimization modeling approach by Puebla et al.(2007) is carried out. The accretion rates of SW Sextantis nova-like systems are compared with the accretion rates of non-SW Sextantis systems in the Puebla et al. sample and in our sample, which was selected in the orbital period range of three to four and a half hours, with all systems having distances using the method of Knigge (2006). Based upon the two independent modeling approaches, we find no significant difference between the accretion rates of SW Sextantis systems and non-SW Sextantis nova-like systems insofar as optically thick disk models are appropriate. We find little evidence to suggest that the SW Sex stars have higher accretion rates than other nova-like CVs above the period gap within the same range of orbital periods.' author: - 'Ronald-Louis Ballouz' - 'Edward M. Sion' title: 'On The Accretion Rates of SW Sextantis Nova-Like Variables' --- Subject Headings: Stars: cataclysmic variables, white dwarfs, Physical Processes: accretion, accretion disks Introduction ============ Cataclysmic variables (CVs) are short-period binaries in which a late-type, Roche-lobe-filling main- sequence dwarf transfers gas through an accretion disk onto a rotating, accretion-heated white dwarf (WD). The nova-like variables are a non-magnetic subclass of CVs in which the mass-transfer rate tends to be high and the light of the system is typically dominated by a very bright accretion disk (Warner 1995). The spectra of nova-like variables resemble those of classical novae (CNe) that have settled back to quiescence. However, nova-like variables have never had a recorded CN outburst or any outburst. Hence their evolutionary status remains unknown. They could be close to having their next CN explosion, or they may have had an unrecorded explosion, in the recent past. Their distribution of orbital periods reveals a large concentration of systems in the range between three and four hours, the former period being the upper boundary of the CV period gap where very few CVs are found. Some nova-likes (classified as the VY Sculptoris systems) show the behavior of being in a high optical brightness state for most of the time, but then, for no apparent reason, plummeting into a deep low optical brightness state with little or no ongoing accretion. Then, just as unpredictably, their optical brightness returns to the high state(cf. Honeycutt & Kafka 2004 and references therein). These precipitous drops in brightness are possibly related to the cessation of mass transfer from the K-M dwarf secondary star either by starspots that drift into position under the inner Lagrangian point, L1 (Livio & Pringle 1998) or irradiation feedback in which an inflated outer disk can modulate the mass transfer from the secondary by blocking its irradiation by the hot inner accretion disk region (Wu et al. 1995). Other nova-like systems, the UX UMa subclass, do not appear to exhibit low states but remain in a state of high accretion, sometimes referred to as dwarf novae stuck in permanent outburst. 
It is widely assumed that the absence of dwarf novae outbursts in nova-likes is explained by their mass transfer rates being above a critical threshold where the accretion rates are so high that the accretion disk is largely ionized thus suppressing the viscous-thermal instability (the disk instability mechanism or DIM) which drives dwarf nova limit cycles (Shafter, Cannizzo and Wheeler 1986). Until recently, the accretion rates of nova-likes, including SW Sex stars, have been reported for only a few individual systems from a variety of model analyses of their optical, FUV spectra or X-ray spectra. Optical determinations of the accretion rates in nova-likes are based upon estimates of their disk luminosity using distance estimates or clues (Patterson 1984). The absolute magnitudes of the accretion disks in nova-likes reveal that their accretion rates are similar to those derived for dwarf novae during their outbursts (Warner 1995). Unfortunately, the distances of nova-like variables remain uncertain due to the scarcity of trigonometric parallaxes and the absence of a reliable usable relation for nova-like variables between their absolute magnitude at maximum light versus orbital period similar to what exists for dwarf novae. A more systematic study of a larger number of systems is clearly needed in order to compare accretion rates among different subgroups of CVs. One recent statistical study (Puebla, Diaz & Hubeny 2007), utilizing a multi-parametric optimization model fitting method, explored how well current optically thick accretion disk models fit the FUV spectra of nova-likes and old novae in a sample of 33 nova-like and old novae. They found the average value of M for nova-like systems was $\sim 9.3\times 10 ^{-9}$ M$_{\sun}$ yr$^{-1}$. Among the nova-like variables is a subclass, the SW Sextantis stars, which display a multitude of observational characteristics: orbital periods between 3 and 4 hours, up to one-half of the known SW Sex systems are non-eclipsing and roughly one-half show deep eclipses of the WD by the secondary, thus requiring high inclination angles, single-peaked emission lines despite the high inclination, and high excitation spectral features including He II (4686) emission and strong Balmer emission on a blue continuum, high velocity emission S-waves with maximum blueshift near phase $\sim$ 0.5, delay of emission line radial velocities relative to the motion of the WD, and central absorption dips in the emission lines around phase $\sim$ 0.4 - 0.7 (Rodriguez-Gil, Schmidtobreick & Gaensicke 2007; Hoard et al. 2003). The SW Sex stars appear to be intrinsically luminous as indicated by the apparent brightnesses of systems like DW UMa despite their being viewed at high orbital inclinations of 80 degrees and higher (Rodriguez-Gil et al.2007). A picture of very high secular mass transfer rates is supported by the presence of very hot white dwarfs in the rare SW Sex systems observed in a low state (e.g. DW UMa’s white dwarf). The white dwarfs in many, if not all, of these systems are suspected of being magnetic (Rodriguez-Gil et al. 2007). However, the case for magnetic white dwarfs in the SW Sex stars remains highly speculative and does not consistently account for the spectroscopic and photometric characteristics from system to system (see for example Hoard et al. 2003). It has also been asserted that the SW Sextantis subclass of nova-like variables have higher accretion rates than other nova-like systems (Rodriguez-Gil et al. 2007). 
Indeed, twenty-seven out of thirty-five SW Sex stars listed by Rodriguez et al. (2007) have orbital periods concentrated in the range of 3 hours to 4.5 hours where nova-like systems tend to accumulate. As asserted in Rodriguez et al., either the SW Sex stars have “an average mass transfer rate well above that of their CV cousins” or another source of luminosity exists. Since these objects are found near the upper boundary of the period gap, their study is of critical importance to understanding CV evolution as they enter the period gap (Rodriguez-Gil et al. 2007). In the SW Sex systems (except those observed during low states), the accretion disk flux completely dominates the FUV wavelength range. The white dwarf contribution is expected to be minimal in these systems because their disks are thick and luminous and because at high inclination the inner disk, boundary layer and white dwarf should be significantly obscured by vertical structure in the disk. Therefore, it is entirely reasonable that the analysis of a nova-like system in a high state be carried out with optically thick steady state accretion disk models in which the accretion rates are determined from fitting the continuum slopes and Lyman Alpha profiles, with the fits constrained by the system distance, if known, and by parameters like the inclination angle and white dwarf mass if reliably known. This is the same model fitting strategy that we employed to determine the accretion rates of 46 dwarf novae in outburst from disk modeling of their IUE archival FUV spectra (Hamilton et al. 2007). An important question is whether or not the SW Sex stars really have higher than average accretion rates compared with other nova-like systems, as asserted by Rodriguez et al. (2007). If they do not have higher than average mass transfer rates, then what is the source of their higher luminosities? In order to test this assertion, we have determined the accretion rates of SW Sex nova-likes and compared them with those of non-SW Sex nova-likes within approximately the same orbital period range. Our primary goal in this work is to examine the accretion rates of the SW Sextantis subclass of nova-like variables. System Parameters and Distances of Nova-Like Variables ====================================================== In order to constrain the synthetic spectral fitting and reduce the number of free parameters, we searched the published literature for the most accurately known system parameters. This included the compilations in Ritter & Kolb (2003) and the Goettingen CVCat website as well as publications documented in the SAO/NASA Astrophysics Data Service (ADS). The most critical parameter for the model fitting, the distance, is the least well known. We conducted an exhaustive search of the literature for previously published distance estimates. There is only one nova-like, RW Tri, with a reliable trigonometric parallax measurement. Unlike dwarf novae, where there exists a correlation between their absolute magnitude at maximum and their orbital period, there is no such relation for the nova-likes. However, a new method (Knigge 2006) utilizing 2MASS JHK photometry and the observed properties of CV donor stars has proven useful for constraining nova-like distances. At present, this is the only reliable handle one has on nova-like distances (Warner 2008). For each system, we obtained the J, H, K apparent magnitudes from 2MASS. For a given orbital period, Knigge (2006) provides absolute J, H and K magnitudes based upon his semi-empirical donor sequence for CVs.
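As an illustration of how these infrared distance limits are obtained from the 100% and 33% donor light fractions described immediately below, here is a minimal sketch; the function name and the example magnitudes are ours and are not taken from the paper's tables.

```python
import numpy as np

def knigge_distance_limits(m_app, M_abs, f_upper=1.0 / 3.0):
    """Distance limits (pc) from a single 2MASS band.

    m_app   : apparent J, H or K magnitude of the system
    M_abs   : Knigge (2006) absolute donor magnitude in the same band
              at the system's orbital period
    The lower limit assumes the donor supplies all of the infrared light;
    the upper limit assumes it supplies only the fraction f_upper (the
    remainder being accretion light), which stretches the distance by
    f_upper**(-0.5), i.e. about 1.7-1.75 for f_upper = 1/3.
    """
    d_lower = 10.0 ** (0.2 * (m_app - M_abs) + 1.0)   # distance modulus
    d_upper = d_lower * 10.0 ** (-0.5 * np.log10(f_upper))
    return d_lower, d_upper

# illustrative numbers only (not from the paper's tables)
print(knigge_distance_limits(m_app=14.0, M_abs=7.0))
```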
If it is assumed that the donor provides 100% of the light in J, H and K, then the distance is a strict lower limit. If the donor emits 33% of the light (the remainder being accretion light), then an approximate upper limit is obtained. The latter limit is a factor of 1.75 times the lower limit distance. For moderately bright CVs, interstellar reddening is expected to have a negligible effect on the IR photometry used to estimate the distances. The adopted distance ranges which are used as constraints in the synthetic spectral fitting procedure are given below (see Table 1). For our comparative study, we selected all SW Sex and non-SW Sex nova-like variables within the orbital period range of 3 hours to 4.5 hours for which usable IUE archival spectra exist. This period range is where 77% of the known SW Sex stars are found. The SW Sex status was confirmed by a comparison of the census in Table 6 of Rodriguez et al. (2007) with the latest census of membership in the SW Sex class given in Don Hoard’s Big List of SW Sex Stars. Within the 3 to 4.5 hour period range, the following objects are assigned “Definite” SW Sex membership status: V442 Oph, SW Sex, AH Men, WX Ari, BP Lyn and UU Aqr. The systems HL Aqr and LN UMa are listed Hoard’s Big List as “Probable” members while being listed in Rodriguez et al. (2007) as bona fide SW Sex stars. We have retained these two objects as SW Sex members. For the non-SW Sex nova-like systems within the same orbital period for which there are usable IUE archival spectra obtained during their high brightness state, we selected the following systems: LQ Peg, MV Lyr, TT Ari, VZ Scl, BZ Cam, and CM Del. We note that TT Ari and VZ Scl are listed in Hoard’s Big List as “Possible” SW Sex membership status. Therefore, we compute the average accretion rates below, with and without their inclusion. In Table 1, where we list the adopted parameters for the orbital period (hours), the apparent V-magnitude, the inclination $\it{i}$ ($\degr$), the white dwarf mass (M$_{\sun}$), the interstellar reddening, E($\bv$), and the distance in parsecs. In our model fitting procedure, these published parameters are used as initial guesses but if the resulting fits are unsatisfactory, then we allow the relevant parameters to vary in the model fitting. [lcccccccc]{} V442 Oph & VY,SW &2.98&12.6& - & - & 0.22 & 153-268 &130\ SW Sex &UX,SW& 3.24&14.3& 0.5 & $> 75$: & 0.0 &243-426&450\ AH Men & SW & 3.05& 13.2 & - &- & 0.12 & 91-160 & 120\ HL Aqr & SW & 3.25 & 13.3 & - & - & 0.05 & 174-304 & 213\ WX Ari & SW & 3.34 & 15.3 & - & 72:& - & 258.7-453 & 468\ LN UMa & SW & 3.46 & 14.6 & - & - & - & 349-610 & 405\ BP Lyn & SW & 3.67 & 14.5 & - & - & - & 251-440 & 344\ UU Aqr & SW & 3.93 & 13.3 & $0.67\pm0.14$ & $78\pm2$ & - & 174-304 & 208\ LQ Peg & NSW & 2.99 & 14.4 & -& -& -& 270-472 & 350\ MV Lyr & NSW & 3.18 & 11.8 & $0.73\pm0.10$ & 12 & - & 431-754 & 442\ TT Ari & NSW & 3.30 & 9.5 & - & - & -&63-109 & 65\ VZ Scl & NSW & 3.47 & 15.6 & 1: & 90: & - & 490-858 & 566\ BZ Cam & NSW & 3.69 & 12.0 & - & - & - & 235-412 & 258\ CM Del & NSW & 3.89 & 13.4 & $0.48\pm0.15$ & $73\pm47$ & - & 229-401 & 241\ VY Scl & NSW & 5.57: & 12.1 & $1.22\pm0.22$ & $30\pm10$ & - & 196-343 & 337\ Far Ultraviolet Spectroscopic Observations ========================================== All the spectral data were obtained from the Multimission Archive at Space Telescope (MAST) IUE archive are in a high activity state, very near or at outburst. 
We restricted our selection to those systems with SWP spectra, with resolution of 5Å and a spectral range of 1170Å to 2000Å. All spectra were taken through the large aperture at low dispersion. When more than one spectrum with adequate signal-to-noise ratio was available, the spectra were co-added or the two best spectra were analyzed. In Table 2, an observing log of the IUE archival spectra is presented in which by column: (1) lists the SWP spectrum number, (2) the aperture diameter, (3) the exposure time in seconds, (4) the date and time of the observation, (5) the continuum to background counts, and (6) the brightness state of the system. Transition refers to an intermediate state between the highest optical brightness state and the deepest low state. [lccccccc]{} V442 Oph& SW & 14731 & 6600& LOW& Lg&1981-12-8 & Intermed.?\ SW Sex& SW & 21534+21535+21536 & 2400/2400/3240 & LOW& Lg &1983-11-13 & High\ AH Men & SW & 43037 & 2880 & LOW & Lg &1991-11-8 & High\ HL Aqr & SW & 23325 & 3600 & LOW & Lg & 1984-6-24 & High\ WX Ari & SW & 55953 & 14100 & LOW & Lg &1995-9-17 & High\ LN UMa & SW & 40948 & 7200& LOW& Lg & 1991-2-28 & High\ BP Lyn & SW & 32940 & 7200& LOW & Lg& 1988-2-18 & High\ UU Aqr & SW & 51249 & 3600 & LOW& Lg & 1994-6-29 & High\ LQ Peg & NSW & 17367& 2400 & LOW& Lg& 1982-7-6 & High\ MV Lyr & NSW & 07296 & 5400 & LOW & Lg & 1979-12-2 & High\ TT Ari & NSW & 42491 & 420 & LOW & Lg& 1991-9-17 & High\ VZ Scl & NSW & 23021 & 25500 & LOW & Lg& 1984-5-15 & High\ BZ Cam & NSW& 21251& 1800& LOW& Lg& 1983-10-07 & High\ CM Del & NSW& 14707 & 3600 & LOW & Lg & 1981-8-10 & High\ VY Scl & NSW& 32594 & 2100 & LOW & Lg & 1987-12-23 & High\ In the case of those systems not covered by the AAVSO, their activity state was assessed based upon either mean photometric magnitudes taken from the Ritter & Kolb(2003) catalogue or from IUE Fine Error Sensor (FES) measurements at the time of the IUE observation. In addition, the presence of P-Cygni profiles, absorption lines, and a comparison with spectral data and flux levels of other systems during different activity states was used to ascertain the state of the system. The reddening of the systems was determined based upon all estimates listed in the literature. The three principal sources of reddening were the compilations of Verbunt (1987), laDous (1991) and Bruch & Engel (1994). The spectra were de-reddened with the IUERDAF IDL routine UNRED. Synthetic Spectral Fitting Models ================================= We adopted model accretion disks from the optically thick disk model grid of Wade & Hubeny (1998). In these accretion disk models, the innermost disk radius, R$_{in}$, is fixed at a fractional white dwarf radius of $x = R_{in}/R_{wd} = 1.05$. The outermost disk radius, R$_{out}$, was chosen so that T$_{eff}(R_{out})$ is near 10,000K since disk annuli beyond this point, which are cooler zones with larger radii, would provide only a very small contribution to the mid and far UV disk flux, particularly the SWP FUV bandpass. The mass transfer rate is assumed to be the same for all radii. Thus, the run of disk temperature with radius is taken to be: $$T_{eff}(r)= T_{s}x^{-3/4} (1 - x^{-1/2})^{1/4}$$ where $x = r/R_{wd}$ and $\sigma T_{s}^{4} = 3 G M_{wd}\dot{M}/8\pi R_{wd}^{3}$ Limb darkening of the disk is fully taken into account in the manner described by Diaz et al. (1996) involving the Eddington-Barbier relation, the increase of kinetic temperature with depth in the disk, and the wavelength and temperature dependence of the Planck function. 
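For orientation, the temperature law above can be evaluated directly. The following sketch (our addition, in cgs units, with an assumed white dwarf radius where the paper would instead use the Wood 1995 mass-radius relation) computes $T_{eff}(r)$ for a representative white dwarf mass and accretion rate.

```python
import numpy as np

G = 6.674e-8          # gravitational constant, cgs
sigma_sb = 5.670e-5   # Stefan-Boltzmann constant, cgs
M_sun = 1.989e33      # g
R_sun = 6.957e10      # cm

def t_eff_profile(r, m_wd, r_wd, mdot):
    """Steady-state disk T_eff(r): T_s x^(-3/4) (1 - x^(-1/2))^(1/4),
    with sigma T_s^4 = 3 G M_wd Mdot / (8 pi R_wd^3); r, r_wd in cm,
    m_wd in g, mdot in g/s."""
    x = r / r_wd
    t_s = (3.0 * G * m_wd * mdot / (8.0 * np.pi * sigma_sb * r_wd**3)) ** 0.25
    return t_s * x ** (-0.75) * (1.0 - x ** (-0.5)) ** 0.25

# illustrative values only: 0.8 M_sun WD, R_wd ~ 0.01 R_sun, 1e-9 M_sun/yr
m_wd, r_wd = 0.8 * M_sun, 0.01 * R_sun
mdot = 1.0e-9 * M_sun / 3.156e7          # convert M_sun/yr to g/s
r = np.linspace(1.05, 30.0, 200) * r_wd  # from the inner edge x = 1.05 outward
print(t_eff_profile(r, m_wd, r_wd, mdot).max())
```

The maximum of this profile sits near $x \simeq 1.36$, which is why annuli beyond the radius where $T_{eff}$ falls to about 10,000 K add little to the SWP bandpass flux.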
The boundary layer contribution to the model flux is not included. However, the boundary layer is expected to contribute primarily in the extreme ultraviolet below the Lyman limit. The disk is divided into a set of ring annuli. The vertical structure of each ring is computed with TLUSDISK (Hubeny 1990), which is a derivative of the stellar atmosphere program TLUSTY (Hubeny 1988). The spectrum synthesis program SYNSPEC described by Hubeny & Lanz (1995) is used to solve the radiative transfer equation to compute the local, rest frame spectrum for each ring of the disk. In addition to detailed profiles of the H and He lines, the spectrum synthesis includes metal lines up to nickel (Z = 28). The accretion disks are computed in LTE and the chemical composition of the accretion disk is kept fixed at solar values in our study. Theoretical, high gravity, photospheric spectra were computed by first using the code TLUSTY version 200 (Hubeny 1988) to calculate the atmospheric structure and then SYNSPEC version 48 (Hubeny and Lanz 1995) to construct synthetic spectra. We compiled a library of photospheric spectra covering the temperature range from 15,000K to 70,000K in increments of 1000 K, and a surface gravity range, log $g = 7.0 - 9.0$, in increments of 0.2 in log $g$. After masking emission lines in the spectra, our normal procedure is to determine separately for each spectrum the best-fitting white dwarf-only model, the best-fitting accretion disk-only model, the best-fitting combination of a white dwarf plus an accretion disk, and the best-fitting two-temperature white dwarf (to include an accretion belt or ring). Using two $\chi^{2}$ minimization routines, either IUEFIT for disks-alone and photospheres-alone or DISKFIT for combining disks and photospheres or two-temperature white dwarfs, $\chi^{2}$ values and a scale factor were computed for each model or combination of models. The scale factor, $S$, normalized to a kiloparsec and solar radius, can be related to the white dwarf radius $R$ through $F_{\lambda(obs)} = S H_{\lambda(model)}$, where $S=4\pi R^2 d^{-2}$, and $d$ is the distance to the source. For the white dwarf radii, we use the mass-radius relation from the evolutionary model grid of Wood (1995) for C-O cores. The best-fitting model or combination of models was chosen based not only upon the minimum $\chi^{2}$ value achieved, but also upon the goodness of fit of the continuum slope, the goodness of fit to the observed Lyman Alpha region, and the consistency of the scale factor-derived distance with the adopted Knigge (2006) distance for each system. For a non-magnetic nova-like variable during its high state, it is reasonable to expect that a steady state optically thick accretion disk might provide a successful fit. Therefore, our modeling procedure is the same as the procedure carried out by Hamilton et al. (2007) for the entire IUE archive of 46 dwarf novae in outburst. For the nova-like variables during their high brightness states, we first try accretion disk models which satisfy both the continuum slope and Lyman Alpha line width in a single model. We use published parameters like the WD mass and inclination only as an initial guess in searching for the best-fitting accretion disk models. If the parameters are published but not considered reliable, or if they are entirely absent, then for each system's spectrum we carry out fits for every combination of $\dot{M}$, inclination and white dwarf mass in the Wade and Hubeny (1998) library.
The values of [*[i]{}*]{} are 18, 41, 60, 75 and $81^{\deg}$. The range of accretion rates covers $-10.5 < \log \dot{M} < -8.0$ in steps of 0.5 in the log and five different values of the white dwarf mass, namely, 0.35, 0.55, 0.80, 1.03, and 1.2 M$_{\sun}$. The process is streamlined by a routine that compares each observed spectrum with the full 900 models using every combination of $i$, M and M$_{wd}$, and provides the model-computed distance and $\chi^{2}$ value of each model fit. A good sense of the accuracy of our derived accretion rates for IUE FUV spectra of comparable quality is provided by a formal error analysis with contours discussed in Winter and Sion (2003). In general, we estimate that our accretion rates from these spectra are accurate to within a factor of 2 to 3. Accretion rates of SW Sex Nova-likes and Non-SW Sex Nova-Likes ============================================================== Accretion Rates from Multiparametric Optimization ------------------------------------------------- The Puebla et al.(2007) sample contains ten SW Sex systems for which they derived accretion rates using a method different from ours, known as multiparametric optimization. Of the ten, three systems, RW Tri, RR Pic and LX Ser are listed as “Possible” SW Sex members on D.Hoard’s Big List while V347 Pup is listed as a “Probable” member. Since these systems are listed as members by Rodriguez-Gil et al. (2007), then we have retained them in the following rough comparison with non-SW Sex nova-likes. For the latter systems, we have V592 Cas, CM Del, KR Aur, IX Vel, UX UMa, V794 Aql, V3885 Sgr, VY Scl, RW Sex, RZ Gru and QU Car. if we take the comparison to be within approximately the orbital period range of 3 to 4.5 hours, then the average accretion rate of the SW Sex systems is $3.6 \times 10^{-9}$ and non-SW Sex members have M$ = 3.0\times10^{-9}$ M$_{\sun}$ yr$^{-1}$. If we average the SW SE and non-SW Sex systems without regard to orbital period, then the 13 SW Sex systems have M$ = 5.0\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ compared with 11 non-SW Sex nova-likes with M$ = 1.0\times10^{-8}$ M$_{\sun}$ yr$^{-1}$ One must caution that this comparison includes non-SW Sex systems having a mixed bag of distance methods, white dwarf masses and fitting methods with derived accretion rates from different groups. Therefore, to facilitate a more uniform comparison between SW Sex and non-SW Sex nova-likes, we chose to (1) enlarge the sample size of SW Sex and non-SW Sex nova-likes from the IUE and HST archives; (2) restrict the comparison to the period range of the SW Sex stars and; (3) adopt distances determined uniformly with the same method. If we restrict our attention to the range of orbital periods between 3 and 4.5 hours where most nova-like variables appear to be concentrated, we can directly compare the accretion rates of non-SW Sex systems in that period range to the SW Sex systems in this same range of orbital period. For this experiment, the model fitting was carried out uniformly for all the systems, SW Sex and non-SW Sex, using the distances from the method of Knigge (2006) where, ideally, the P$_{orb}$ versus M relation, would be expected to vary commensurately within this restricted period range. 
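The grid comparison described above amounts, for each model in the library, to solving for the scale factor $S$ analytically, recording $\chi^{2}$, and converting $S$ into a distance via $S = 4\pi R^{2} d^{-2}$. The sketch below is only a schematic of that loop (it is not IUEFIT or DISKFIT); the data structures, function name and plain cgs unit handling are our assumptions, and the observed and model fluxes are taken to sit on a common wavelength grid.

```python
import numpy as np

def fit_disk_grid(flux, err, models, r_wd_cm):
    """Pick the best-scaled disk model from a pre-computed grid.

    flux, err : observed flux and its uncertainty on a fixed wavelength grid
    models    : list of dicts with keys 'name' and 'H' (model Eddington flux
                on the same wavelength grid)
    For each model, the scale factor S in F_obs ~ S * H_model is the
    weighted least-squares solution; the distance then follows from
    S = 4 * pi * R_wd**2 / d**2.
    """
    best = None
    w = 1.0 / err**2
    for m in models:
        H = m["H"]
        S = np.sum(w * flux * H) / np.sum(w * H**2)   # analytic scale factor
        chi2 = np.sum(w * (flux - S * H) ** 2)
        d_pc = np.sqrt(4.0 * np.pi * r_wd_cm**2 / S) / 3.086e18
        if best is None or chi2 < best["chi2"]:
            best = {"name": m["name"], "chi2": chi2, "d_pc": d_pc}
    return best
```

In practice the model-derived distance is then compared against the Knigge (2006) range, so that a formally lower $\chi^{2}$ fit is rejected if it implies an implausible distance.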
N-L Sample with $3 < P_{orb} < 4.5$, Knigge Distances ----------------------------------------------------- In Table 3, we list the best-fitting parameters of our selected sample of SW Sex and non-SW Sex nova-likes (from Tables 1 and 2) where the entries by column are (1) the system name, (2) nova-like subclass, (3) white dwarf mass, (4) inclination angle, (5) best-fitting model distance in pc, (6) $\dot{M}_{\sun}$ yr$^{-1}$), and (7) $\chi^{2}$ value. [lcccccc]{} V442 Oph & SW & 0.4 & 75 & 183 & 1$\times10^{-8}$ & 7.01\ SW Sex & SW & 0.4 & 75 & 357 & 3$\times10^{-9}$ & 9.86\ AH Men & SW & 0.55 & 75 & 116 & 1$\times10^{-9}$ & 2.44\ HL Aqr & SW & 0.35& 18 & 213 & 1$\times10^{-9}$ & 4.99\ WX Ari & SW & 0.35 & 60 & 468 & 1$\times10^{-9}$ & 4.74\ LN UMa & SW & 0.55 & 41 & 405 & 3$\times10^{-10}$ & 5.67\ BP Lyn & SW & 0.35 & 75 & 344 & 1$\times10^{-8}$ & 2.809\ UU Aqr & SW & 0.35 & 60 & 208 & 1$\times10^{-9}$ & 6.74\ LQ Peg & NSW & 0.55 & 60 & 350 & 1$\times10^{-9}$& 6.38\ MV Lyr & NSW & 0.55 & 75 & 442 & 1$\times10^{-8}$ & 5.86\ TT Ari & NSW & 0.55 & 41 & 65 & 1$\times10^{-9}$ & 3.32\ VZ Scl & NSW & 0.55 & 60 & 566 & 3$\times10^{-10}$ & 7.81\ BZ Cam & NSW & 0.35 & 60 & 258 & 3$\times10^{-9}$ & 7.09\ CM Del & NSW & 0.35 & 75 & 241 & 1$\times10^{-8}$ & 3.61\ VY Scl & NSW & 1.03 & 18 & 337 & 1$\times10^{-9}$ & 4.4\ The best fitting accretion disk model for each system is shown in the accompanying multi-part figures where the systems are displayed in the same order as they are listed in Tables 1,2, and 3. We display in Fig. 1(a) V442 Oph and (b) SW Sex. While the accretion disk is the overwhelmingly dominant contributor in the FUV in nova-like variables during their high states, these two systems were among seven flagged by Puebla et al.(2007) as possibly having a significant white dwarf flux contribution. This finding led us to combine accretion disk models with white dwarf models in the fitting of V442 Oph and SW Sex. The thick solid line represents their combination, the dashed curve represents the accretion disk alone, and the dotted curve represents the white dwarf. We display in Fig. 2 (a) AH Men), (b) HL Aqr, (c) WX Ari, (d) LN UMa, (e) BP Lyn (f) UU Aqr and in Fig. 3 (a) LQ Peg, (b) MV Lyr, (c) TT Ari, (d) VZ Scl, (e) BZ Cam, (f) CM Del. The solid line is the best fitting accretion disk. ### Comments on Individual Systems - [V442 Oph]{} - The observation of V442 Oph was made when its V-magnitude was 13.7 as indicated by the FES magnitude measured with IUE. This is 1.1 magnitudes fainter than its visual magnitude in the high state and 0.3 magnitudes brighter than its typical low state visual magnitude of 14.0 although it has been observed as faint as 15.5. Given this brighter low state, it may not surprising Puebla et al.(2007) predicted a WD flux contribution. With the system inclination and white dwarf mass constrained by the range of values published in the literature (see Table 1) and the distance of 183 pc, the best-fit (see Fig. 1a) is a combination of an accretion disk model with $\it{i} = 75\degr$, M$_{wd}$ = 0.4 M$_{\sun}$ with an accretion rate 1$\times10^{-8}$ M$_{\sun}$ yr$^{-1}$ and a white dwarf with log $g = 7$, T$_{eff}$ = 23,000K but only a modest improvement in the fit over a disk alone. Puebla et al.(2007) derived $3\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ for 130 pc. 
However, Shafter and Szkody (1983) derived $1.0\times10^{-8}$ M$_{\sun}$ yr$^{-1}$ from three SWP spectra which had essentially the same flux level as SWP14731 and the same IUE FES magnitudes at the time their spectra were taken. - [SW Sex]{} - We co-added a closely spaced time series of three IUE spectra to improve the signal. Puebla et al.(2007) predicted a significant WD contribution. The co-added spectrum was best-fit by a combination of an accretion disk with M$_{wd}$ = 0.4 M$_{\sun}$, $\it{i} = 75\degr$, an accretion rate of $3\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ and a white dwarf with log $g = 7$, T$_{eff}$ = 21,000K for a distance of 357 pc and $\chi^{2} = 9.86$. This combined fit (see Fig. 1a) agrees with the accretion rate value derived by Puebla et al (2007). - [AH Men]{} - AH Men is normally at V$= 13.2$ but has been observed as faint as V = 14. There are several high state spectra and two low state spectra. (Mouchet et al.1996) estimated the reddening to be $E(B-V) = 0.12$ and derived an accretion rate of $3\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ using standard black body accretion disk models. We modeled one of the highest flux level IUE spectra (SWP41399), displayed in Fig. 2(a). - [HL Aqr]{} - This low inclination object, PHL227, is virtually a twin of V3885 Sgr in both the optical and the FUV (Hunger, Heber and Koester 1985). The H$\alpha$ line of HL Aqr shows significant blueshifted absorption modulated at the orbital period. While the high inclination SW Sex stars are dominated by emission, HL Aqr and other low inclination SW Sex stars show dominant absorption. Our modeling with the Knigge distance yields an accretion rate of 1$\times10^{-9}$ M$_{\sun}$ yr$^{-1}$. - [WX Ari]{} - This is the first time that the IUE spectrum and a derived accretion rate has appeared for WX Ari, a definite SW Sex star in D.W. Hoard’s Big List. The inclination is uncertain. Our best-fit favors 60 degrees. - [LN UMa]{}- Listed as a “probable” SW Sex member. This is the first time the IUE spectrum and a derived accretion has been published for LN UMa. LN UMa was discovered, like other nova-likes, as a thick disk CV in the Palomar-Green Survey. - [BP Lyn]{} - This is first publication of the IUE spectrum. The continuum is very steeply rising toward shorter wavelengths. Many strong absorption features are present. - [MV Lyr]{} - The IUE archival spectrum was unfortunately not obtained during a high state of MV Lyra but instead an intermediate brightness state. Of ten IUE spectra, two were taken during intermediate states and eight during low states. Therefore, we have removed it from the non-SW Sex sample compared in the same orbital period range as the SW Sex stars. However, Linnell et al. (2005) modeled an HST spectrum of MV Lyra taken during a high state. They obtained an accretion rate M$ = 3.0\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ for a distance of 505 pc, well within the Knigge et al. distance range. We have included the accretion rate derived from the HST STIS spectrum of MV Lyra in its high state in our comparison of SW Sex and non-SW Sex accretion rates. - [TT Ari]{} - There are numerous high state IUE archival spectra. TT Ari is listed as a “Possible” SW Sex member on D.W.Hoard’s Big List. It has exhibited positive superhumps during high states and negative superhumps during low states. The white dwarf temperature (39,000K) is well-determined from HST spectral data taken during a low state. 
- [VZ Scl]{} - Eclipsing system with optical spectra out of eclipse revealing strong emission lines of the Balmer series, He II and He I. - [BZ Cam]{} - This nova-like object has highly variable wind outflow, a bow shock nebula, strong, highly variable wind absorption with pronounced P Cygni profiles in C IV, Si IV and very short timescale line profile variations. The origin of the bow shoock nebula remains unclear. - [CM Del]{} - Puebla et al. obtained M$ = 4.0\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ but the spectrum they analyzed, SWP15280, was in a low to intermediate state with a flux level at 1350A of $5\times 10^{-14}$. They missed the higher brightness state spectrum, SWP14707, which has a flux level at 1350Å  of $1.2 \times 10^{-13}$ ergs/cm$^2$/s/Å. This is the spectrum we have modeled. - [VY Scl]{} - The orbital period remains uncertain. Puebla et al (2007) derived an accretion rate The resulting accretion rates of the two groups of nova-likes are compared in Table 4 where the first three columns on the left hand side of the table are the SW Sex system name, second column the orbital period in hours, the third column the accretion rate in M$_{\sun}$ yr$^{-1}$. On the right hand side of the table, the three columns are (1) the non-SW Sex system name (2) the orbital period in hours; (3) the accretion rate in M$_{\sun}$ yr$^{-1}$. [lccccc]{} System &P$_{orb}(hrs)$ &$\dot{M}$ (M$_{\sun}$yr$^{-1})$ &System &$P_{orb}(hrs)$ &$\dot{M}$ (M$_{\sun}$ yr$^{-1}$)\ V442 Oph & 2.88 & $1.0\times10^{-08}$ & LQ Peg& 2.9 &$1.0\times10^{-09}$\ AH Men & 3.05 & $1.0\times10^{-09}$ & MV Lyr$^6$ & 3.19 &$3.0\times10^{-09}$\ SW Sex & 3.12 & $3.2\times10^{-09}$ & TT Ari & 3.3 & $1.0\times10^{-08}$\ HL Aqr & 3.25 & $1.0\times10^{-09}$ & V751 Cyg$^3$ & 3.47 & $1.0\times10^{-09}$\ WX Ari & 3.34 & $ 1.0\times10^{-09}$ & VZ Scl & 3.47 & $5.00\times10^{-09}$\ DW UMa$^1$ & 3.36 & $1.4\times10^{-8}$ & V794 Aql & 3.6 & $3.2\times10^{-10}$\ LN UMa & 3.47 & $3.2\times10^{-10}$ & BZ Cam & 3.68 &$3.2\times10^{-09}$\ BP Lyn & 3.67 & $1.0\times10^{-08}$ & CM Del& 3.88 & 1.00$\times10^{-08}$\ V380 Oph$^2$ & 3.7 & $1.0\times10^{-09}$ & KR Aur$^4$ & 3.90 & $6.9\times10^{-9}$\ UU Aqr & 3.93 & $1.0\times10^{-9}$ & VY Scl& 3.99 & $1.0\times10^{-09}$\ & & & IX Vel$^5$ & 4.65 & $5\times10^{-9}$\ First, we computed the average accretion rates of the 16 SW Sex systems and non-SW Sex systems in this paper within approximately the same orbital period range of 3 to 4.5 hours, the period range where 27 out 35 SW Sex stars reside (Rodriguez-Gil et al. 2007). The average accretion for the SW Sex systems is 2.7$\times10^{-09}$ (M$_{\sun}$ yr$^{-1})$ and for the non-SW Sex systems 4.2$\times10^{-09}$ (M$_{\sun}$ yr$^{-1})$. Within the uncertainty of our accretion rates, there is no difference in the accretion rates of the two groups. Second, we added five additional nova-likes, two SW Sex systems and three non-SW Sex systems discussed elsewhere but whose accretion rates were determined with the same model grid as used in this paper and with Knigge (2006) distances. This amounted to 10 SW Sex systems and eleven non-SW Sex systems. Once again there is virtually no difference in the average accretion rates. In view of these results, we do not believe that SW Sex have higher secular accretion rates that other CVs, specifically the nova-like systems that do not exhibit the SW Sex spectroscopic characteristics and behavior. 
Conclusions =========== \(1) We have examined the accretion rates of nova-like systems of the SW Sex and non-SW Sex subclasses that were derived by the multi-parametric optimization method of Puebla et al. (2007). If the two subclasses are compared in the same orbital period range of 3 to 4.5 hours, then the average accretion rates of the two subclasses are essentially the same, 3.6$\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ for the SW Sex systems and 3.0$\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ for the non-SW Sex systems. If the average accretion rates of the two groups are computed with no restriction on the orbital periods of the two groups, then the SW Sex systems have $\dot{M} = 5\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ and the non-SW Sex systems have $\dot{M} = 1.0\times10^{-8}$ M$_{\sun}$ yr$^{-1}$. Using a different approach, we enlarged the sample of SW Sex and non-SW Sex stars, restricted attention to the orbital period range of 3 to 4.5 hours, used distances from the method of Knigge (2006) and applied a different methodology for determining accretion rates. We find that the non-SW Sex systems have an average accretion rate $\dot{M}$ = 3.4$\times10^{-9}$ M$_{\sun}$ yr$^{-1}$ and that the SW Sex systems in the sample also have $\dot{M}$ = 3.4$\times10^{-9}$ M$_{\sun}$ yr$^{-1}$. Therefore, based upon two independent methods of deriving accretion rates, that of Puebla et al. (2007) and the approach in this paper with Knigge (2006) distances, we find little evidence to support the suggestion that the SW Sex systems have higher than average accretion rates among the nova-like systems, a possibility raised by Rodriguez-Gil et al. (2007). It is therefore likely that the SW Sextantis phenomenon, in particular the high optical luminosities of these systems, must be attributed to some factor or characteristic of the systems other than higher than average accretion rates. Among these possibilities are magnetic accretion and nuclear burning (Honeycutt 2001; Honeycutt and Kafka 2004). However, this conclusion applies only to the average accretion rates of the two groups. It does not rule out the possibility that the SW Sex phenomenon is exhibited when the accretion flow in a given system has undergone a temporary large increase at the time an observation is obtained, thus leading to the object being observed as an SW Sex star (Groot et al. 2004). We see little to suggest that SW Sex stars have higher accretion rates than other nova-likes in the same range of orbital period. \(2) Given the high average value of $\dot{M}$ in nova-likes and the fact that their high states are generally longer in duration than their low states, a higher rate of accretion onto the underlying white dwarf, and hence a higher degree of compressional heating, would be expected. Thus, the surface temperatures of white dwarfs in nova-likes should be higher than in dwarf novae at the same orbital period. This should be true if one accepts a correlation between CV orbital period and $\dot{M}$ such as that shown by Patterson (1984). There is some preliminary evidence that this is the case when one compares the surface temperatures of white dwarfs in nova-like variables to those of the white dwarfs in dwarf novae (Hamilton and Sion 2007; Godon et al. 2008). However, these studies have relied upon the relatively rare situation when the nova-like drops into a deep low state and the white dwarf is exposed to FUV spectroscopic observation. We point out that a number of SW Sex systems have lower orbital inclinations.
In these systems, the upper hemisphere of the underlying white dwarf and the inner boundary layer would not be obscured. It would be interesting to find evidence of cooler white dwarfs in SW Sex systems and non-SW Sex systems, unlike the typically high temperatures found for DW UMa and MV Lyr, since such cooler temperatures would be unexpectedly low for the derived high rates of time-averaged mass transfer. Unfortunately, we are unable to reliably characterize the white dwarf temperatures in the lower inclination SW Sex systems due to: (1) the poor quality FUV spectra; (2) the overwhelming luminosity of the bright (high state) accretion disk and (3) the lack of FUV spectra down to the Lyman limit (e.g. FUSE) where the flux contribution of a bright accretion disk and hot white dwarf photosphere can be more easily disentangled. Finally, given the small sample size of exposed white dwarfs in nova-like systems, it is particularly important to catch more nova-like systems in their low states for both ground-based optical and space observations. This research utilized the Big List of SW Sextantis Stars, which is maintained by D. W. Hoard. This work was supported by NSF grant AST0807892 to Villanova University. Some or all of the data presented in this paper were obtained from the Multi mission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NAG5-7584 and by other grants and contracts. Ballouz, R.-L., Sion, E.M., Gaensicke, B., Long, K. 2009, ApJ, in preparation Bruch, A., & Engel, A. 1994, A&A Suppl.Ser., 104, 79 Diaz et al. 1996, ApJ, 459, 236 Frank, J., King, A.R., & Raine, D.J. 1992, Accretion Power in Astrophysics (Cambridge: Cambridge Univ. Press) Godon et al. 2007, ApJ, 656, 1103 Godon et al. 2008, ApJ, 679, 1447 Godon, P., Sion, E., Barrett, P., Szkody, P., & Schlegel, E.2008, ApJ, 687, 532 Groot, P., Rutten, R.G.M., & van Paradijs, J.2004, A&A, 417, 283 Hamilton, Ryan T., Sion, E. M., 2008, PASP, 120, 165 Hamilton, Ryan T., Urban, Joel A.; Sion, Edward M.; Riedel, Adric R.; Voyer, Elysse N.; Marcy, John T.; Lakatos, Sarah L.2007, ApJ, 667, 1139 Hoard et al.2003, AJ, 126, 2473 Honeycutt, K.2001, PASP, 113, 473 Honeycutt, R. K.,& Kafka, S.,2004, AJ, 128, 1279 Hubeny, I. 1988, Comput. Phys. Comun., 52, 103 Hubeny, I., and Lanz, T. 1995, ApJ, 439, 875 Hunger, K., Heber, U., & Koester, D.1985, A&A, 149, 4 Knigge, C.2006, MNRAS, 373, 484 LaDous, C. 1991, A&A, 252, 100 Livio, M., & Pringle, J.E. 1998, ApJ, 505, 339 Linnell, A. et al.2005, ApJ, 624, 923 Linnell, A. 2007, ApJ, 662, 1204 Mizusawa, T., et al.2009, PASP, in preparatioon Mouchet, M. et al. 1996, A& A, 306, 212 Patterson, J.1984, ApJS, 54, 443 Puebla, R.E., Diaz, M.P., & Hubeny, I.2007, AJ, 134, 1923 Ritter, H., & Kolb, U. 2003, A&A, 404, 301(update RKcat7.10) Rodriguez-Gil, P., Schmidtobreick, L., Gaensicke, B.T. 2007, MNRAS 374, 1359 P. Rodriguez-Gil, B. T. G¨ansicke, H.-J. Hagen, S. Araujo-Betancor, A. Aungwerojwit, C. Allende Prieto, D. Boyd, J. Casares, D. Engels, O. Giannakis, E. T. Harlaftis, J. Kube, H. Lehto, I. G. Martinez-Pais, R. Schwarz, W. Skidmore, A. Staude, and M. A. P. Torres, 2007, MNRAS, 377, 1747 Shafter,A. W., Wheeler,J. C., & Cannizzo, J. K., 1986, ApJ, 305, 261 Verbunt, F.1987, A&A, 71, 339 Wade, R.A. & Hubeny, I. 1998, ApJ, 509, 350. Warner, B. 
1995, Cataclysmic Variable Stars (Cambridge: Cambridge Univ. Press) Warner, B. 2008, private communication Winter, L., & Sion, E.M. 2003, ApJ, 582, 352 Wood, M. 1995, in White Dwarfs, Proceedings of the 9th European Workshop on White Dwarfs held at Kiel, Germany, 29 August - 1 September 1994, Lecture Notes in Physics, Vol. 443, ed. D. Koester & K. Werner (Berlin: Springer-Verlag), p. 41 Wu, K., Warner, B., & Wickramasinghe, D.T. 1995, Proc. Astron. Soc. Australia, 12, 60 Zellem, R., Hollon, N., Ballouz, R.-L., Sion, E.M., & Gaensicke, B. 2009, submitted to PASP
--- abstract: 'We present a short and elegant proof of the inequality ${\left\lVertp\right\rVert}_{L_s(\Omega)} \leq c(\Omega) \left({\left\lVertv\right\rVert}^2_{L_{2s}(\Omega)} + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right)$ for bounded domains $\Omega$ under the slip and Navier boundary conditions. We also show an application of this result for conditional regularity of weak solutions to the Navier-Stokes equations.' address: - | Bernard Nowakowski\ Institute of Mathematics\ Polish Academy of Sciences\ Śniadeckich 8\ 00-956 Warsaw\ Poland - | Wojciech M. Zajączkowski\ Institute of Mathematics\ Polish Academy of Sciences\ Śniadeckich 8\ 00-956 Warsaw\ Poland\ and\ Institute of Mathematics and Cryptology\ Military University of Technology\ Kaliskiego 2\ 00-908 Warsaw\ Poland - | Adam Kubica\ Faculty of Mathematics and Information Science\ Warsaw University of Technology\ Koszykowa 75\ Warsaw 00-662\ Poland\ and\ Institute of Mathematics and Cryptology\ Military University of Technology\ Kaliskiego 2\ 00-908 Warsaw\ Poland author: - Adam Kubica - Bernard Nowakowski - 'Wojciech M. Zajączkowski' bibliography: - 'bibliography.bib' title: Regularity criteria of weak solutions to NSE in some bounded domains involving the pressure --- [^1] Introduction ============ We consider the initial-boundary value problem for the Navier-Stokes equations $$\label{p1} \begin{aligned} &v_{,t} + (v\cdot \nabla) v - \nu \triangle v + \nabla p = f & &\text{in $\Omega\times(0,T) =: \Omega^T$},\\ &\operatorname{div}v = 0 & &\text{in $\Omega^T$}, \\ &v\vert_{t = 0} = v(0) & &\text{in $\Omega$}, \end{aligned}$$ either with boundary slip conditions $$\label{p2} \begin{aligned} \begin{aligned} &n \cdot \mathbb{D}(v) \cdot \tau^{\alpha} = 0, \\ &n \cdot v = 0 \end{aligned}& &\text{on $\partial \Omega$} \end{aligned}$$ or with the Navier boundary conditions $$\label{p3} \begin{aligned} \begin{aligned} &\operatorname{rot}v \times n = 0, \\ &n \cdot v = 0 \end{aligned}& &\text{on $\partial \Omega$,} \end{aligned}$$ where $\Omega \subset \mathbb{R}^3$ is a bounded domain. In case of the boundary slip conditions it is more convenient to write $_1$ in the form $$\label{eq52} v_{,t} + (v\cdot \nabla)v - \operatorname{div}\mathbb{T}(v,p) = f.$$ To make the above conditions clear let us recall that $n$ and $\tau^{\alpha}$, $\alpha \in \{1,2\}$ are the unit outward normal vector and the unit tangent vectors. By $\mathbb{T}(v,p)$ we mean the stress tensor $$\mathbb{T}(v,p) = \nu\mathbb{D}(v) - p\mathbb{I},$$ where $\mathbb{D}(v)$ is the dilatation tensor, which equals $\frac{1}{2}\left(\nabla v + \nabla^{\perp} v\right)$, $\mathbb{I}$ is the unit matrix and $\nu > 0$ represents the viscosity coefficient. Note that is sometimes referred also as a boundary slip condition whereas as the Navier boundary condition. In some cases these conditions coincide (i.e. $\Omega$ is half-space) but in general they differ. In certain cases this difference can be measured in term of the curvature of $\partial \Omega$ (for $\Omega$ of cylindrical type see e.g. [@Nowakowski2012 Lemma 6.5]) but this issue is beyond the scope of our work. It is well known that for $v(0) \in H^1(\Omega)$ there exists at least one weak solution (see e.g. [@Hopf:1950fk], [@Galdi:2000uq]), but the problem of uniqueness and regularity of weak solution in three dimensions remains open. Our primary interest in is an extension of the regularity criterion for the weak solutions onto bounded domains under boundary slip type conditions. 
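As an illustrative aside (not used in the proofs below), the relation between the two sets of boundary conditions on a flat wall can be checked directly. The sketch below, written for an arbitrary sample field on the half-space $x_3>0$ that is divergence free and tangential on the wall, evaluates both $n\cdot\mathbb{D}(v)\cdot\tau^{\alpha}$ and the tangential part of $\operatorname{rot}v\times n$ there, confirming that they agree up to a factor of two, so on a half-space the slip and the Navier conditions indeed single out the same fields, as remarked above.

```python
# Illustrative sketch (not from the paper): compare the slip condition n.D(v).tau = 0 with the
# Navier condition rot(v) x n = 0 on the flat wall x3 = 0 of the half-space {x3 > 0}, for a
# sample divergence-free field that is tangential on the wall.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = (x1, x2, x3)

# sample solenoidal field with v.n = 0 on the wall x3 = 0
v = sp.Matrix([sp.cos(x2)*x3**2, sp.sin(x1)*x3, 0])
assert sum(sp.diff(v[i], X[i]) for i in range(3)) == 0          # div v = 0

grad = sp.Matrix(3, 3, lambda i, j: sp.diff(v[j], X[i]))        # (grad v)_{ij} = d_i v_j
D = (grad + grad.T) / 2                                         # dilatation tensor D(v)
n = sp.Matrix([0, 0, -1])                                       # outward normal on the wall
tau = [sp.Matrix([1, 0, 0]), sp.Matrix([0, 1, 0])]              # tangent vectors on the wall

rot = sp.Matrix([sp.diff(v[2], x2) - sp.diff(v[1], x3),
                 sp.diff(v[0], x3) - sp.diff(v[2], x1),
                 sp.diff(v[1], x1) - sp.diff(v[0], x2)])
rot_cross_n = rot.cross(n)

on_wall = lambda e: sp.simplify(e.subs(x3, 0))
slip = [on_wall((n.T * D * t)[0]) for t in tau]                 # n.D(v).tau^alpha on the wall
navier = [on_wall(rot_cross_n[i]) for i in range(2)]            # tangential part of rot v x n

print(slip)                                                     # [0, -sin(x1)/2]
print(navier)                                                   # [0, -sin(x1)]
print([sp.simplify(navier[i] - 2*slip[i]) for i in range(2)])   # [0, 0]: the two conditions coincide
```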
One of the basic ideas used in the proof would rely on testing with $v{\left\lvertv\right\rvert}^{\theta - 2}$, $\theta \geq 2$. This approach leads to a difficulty related to the estimate for the pressure term. In the whole space or in periodic setting it can be resolved by the application of the Calderón-Zygmund theorem (see e.g. [@Struwe:2007vn]) to the equation $$\label{eq210} -\triangle p = \sum_{i,j=1}^3\frac{\partial^2}{\partial x_i\partial x_j} \left(v_iv_j\right),$$ thereby yielding the following estimate $$\label{eq340} {\left\lVertp\right\rVert}_{L_s(\Omega)} \leq {\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2.$$ Clearly, in bounded domains must be supplemented with some boundary condition, which at large are difficult or even impossible to establish due to lack of information on $p$ or $\frac{\partial p}{\partial n}$ on $\partial \Omega$ in terms of $v$. One of effective, but restrictive to particular cases remedies that may be exhausted lies in e.g. choosing axially symmetric cylinders with boundary slip conditions (see e.g. [@Zajaczkowski:2004fk Ch. 3, Lemma 1.1]). One can put another restrictions on the geometry of the domain or on the boundary conditions but at the end we lose certain generality. Therefore, it is reasonable to look for any estimates to for the pressure without the necessity of analyzing . In this paper we give an alternative proof of the estimate of the form of , which indeed does not rely on . In principle, it is based on an auxiliary Poisson equation with the Neumann boundary conditions. The result reads: \[thm1\] Suppose that $f \in L_s(\Omega)$ and $v \in L_{2s}(\Omega)$ satisfy (\[p1\]). Let hold and $\Omega$ is bounded and sufficiently regular. Then $p \in L_s(\Omega)$ and $${\left\lVertp\right\rVert}_{L_s(\Omega)} \leq c(\Omega)\left({\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2 + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right).$$ Let now hold. If in addition $\nabla v \in L_s(\Omega)$, then $${\left\lVertp\right\rVert}_{L_s(\Omega)} \leq c(\Omega)\left({\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2 + {\left\lVert\nabla v\right\rVert}_{L_s(\Omega)} + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right).$$ The regularity requirement concerning the domain $\Omega$ is related to the Neumann problem for the Poisson equation. For a given $s$ the set $\Omega$ is *sufficiently regular* if for each $g \in L_{s'}(\Omega)$ such that $\int_{\Omega}g\, {\mathrm{d}}x = 0$ the following problem $$\label{np4} \begin{aligned} &-\triangle \psi = g\, & &\text{in $\Omega$}, \\ &n\cdot \nabla \psi = 0 & &\text{on $S$}, \\ &\int_{\Omega} \psi=0, \end{aligned}$$ has the unique solution $u \in W^2_{s'}(\Omega)$, where $\frac{1}{s}+ \frac{1}{s'}=1$. It holds, for example, if: - $\partial \Omega \in C^{1,1}$, $s>1$ (see [@Grisvard:1985vn Lemma 2.4.2.1]) - $\Omega$ is convex, $s\in [2, \infty)$ (see [@Adolfsson:1994uq]), - $\Omega\subset \mathbb{R}^{3}$ is a bounded, convex polyhedron and $$1< \frac{s}{s-1} < \min\left\{3,\frac{2\alpha_{E}}{(2\alpha_{E} - \pi)_{+}} \right\},$$ where $\alpha_{E}$ is opening of the dihedral angle with edge $E$ (see [@Mazya:2009fk]), - $\Omega=[0,a]\times[0,b]\times[0,c]$, $s >1$ (the reflection argument), - $\Omega = [0,a] \times \Omega'$, where $\Omega' \subset \mathbb{R}^2$ is a bounded set with a smooth boundary, $s > 1$ (the reflection argument). Clearly, apart the already mentioned idea, there are different techniques that could be utilized to analyze problem with or without relying on . 
It was a great surprise that they seem to work only in case of the Dirichlet boundary conditions (see [@Choe:1998kx], [@Berselli:2002ys Sec. 3], [@Zhou:2004fk], [@Kang:2006uq], [@Farwig:2009zr], [@Kim:2010qf]), whereas the boundary slip type conditions were only considered in the half space (see [@Bae:2008fk] and [@Bae:2008uq]) or in the case of axially symmetric solutions (see [@Zajaczkowski:2010lr]). In our work we achieve a little progress. Although the domain we work with is bounded but we assume that it is of cubical type. This kind of restriction, tightly related to the boundary integrals, is removable in many cases (see Remark \[rem6\]). Our major motivation for investigating the simplest domain follows from intention of keeping the calculations clear and simple. The result reads: \[thm2\] Let $f\equiv 0$, $T > 0$ and $\Omega := [0,a]\times[0,b]\times[0,c]$ for finite, positive real constants $a$, $b$ and $c$. Suppose that a weak solution $v$ to supplemented with either or satisfies $${\left\lVertv\right\rVert}_{L_q(0,T;L_p(\Omega))} < +\infty \qquad \text{where} \qquad \frac{3}{p} + \frac{2}{q} = 1$$ for $q < +\infty$. Then, $v$ is unique and smooth. The assumption $f\equiv 0$ is artificial and can be omitted. It does not change the proof but makes it a little longer. For the definition and the proof of existence of weak solutions to supplemented with or see e.g. Introduction in [@Zajaczkowski:2005zr]. If we drop the assumption on the cubical shape of the domain, the claim of Theorem \[thm2\] in case $q < + \infty$ is still true. The proof is different, easier but does not base on Theorem \[thm1\]. We will present its sketch at the end of this work. \[rem3\] In cubical domains under both boundary conditions and the assertion of Theorem \[thm1\] reads $${\left\lVertp\right\rVert}_{L_s(\Omega)} \leq c(\Omega)\left({\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2 + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right).$$ Before we move to the next section, let us note that the extension of Serrin condition is mostly studied for the Cauchy problem (see e.g. [@Kozono:2004nx], [@Kukavica:2006cr], [@Zhou:2006kx], [@Cao:2008ly], [@Bjorland:2011ve], [@Penel:2011vn]) or for the local-interior regularity (see e.g. [@Gustafson:2006dq]), thereby excluding the boundary issues. We do not intend to compare or discuss these improvements. This has been nicely done in several papers. The interested reader we would refer e.g. to [@Berselli:2009bh]. Auxiliary results ================= Throughout this article we use the following Young inequality: \[lem4\] For any positive $a$ and $b$ the inequality $$ab \leq \kappa a^{\lambda_1} + (\kappa \lambda_1)^{-\frac{\lambda_2}{\lambda_1}}\lambda_2^{-1}b^{\lambda_2}$$ holds, where $\kappa > 0$ and $$\frac{1}{\lambda_1} + \frac{1}{\lambda_2} = 1, \qquad 1 < \lambda_1, \lambda_2 < + \infty.$$ Another useful tool is the imbedding lemma for the space $V^k_2(\Omega^t)$, which is defined as the closure of $\mathcal{C}^{\infty}(\Omega\times(t_0,t_1))$ in the norm $${\left\lVertu\right\rVert}_{V^k_2(\Omega^t)}^2 = \underset{t\in (t_0,t_1)}{\operatorname{ess\, sup}}{\left\lVertu(t)\right\rVert}_{H^k(\Omega)}^2 \\ +\left(\int_{t_0}^{t_1}{\left\lVert\nabla u(t)\right\rVert}^2_{H^{k}(\Omega)}\, {\mathrm{d}}t\right)^{1/2}.$$ The imbedding lemma reads: \[lem2\] Suppose that $u \in V_2^0(\Omega^t)$, where $\Omega^t := \Omega\times (t_0,t)$, $t_0 < t \leq t_1$. 
Then $u \in L_q(t_0,t;L_p(\Omega))$ and $${\left\lVertu\right\rVert}_{L_q(t_0,t;L_p(\Omega))} \leq c(p,q,\Omega) {\left\lVertu\right\rVert}_{V^0_2(\Omega^t)}$$ holds under the condition $\frac{3}{p} + \frac{2}{q} = \frac{3}{2}$, $2 \leq p \leq 6$. Let us emphasize that the constant that appears on the right-hand side does not depend on time. \[lem10\] Let $\Omega$ satisfy the cone condition and let $q \geq p$. Set $$\kappa = 2 - 2r - s - 5\left(\frac{1}{p} - \frac{1}{q}\right) \geq 0.$$ Then for any function $u \in W^{2,1}_{p}(\Omega^t)$ the inequality $${\left\lVert\partial^r_t {\mathrm{D}}_x^s u\right\rVert}_{L_q(\Omega^t)} \leq c_1(p,q,r,s,\Omega) \epsilon^{\kappa}{\left\lVertu\right\rVert}_{W^{2,1}_p(\Omega^t)} + c_2(p,q,r,s,\Omega) \epsilon^{-\kappa + 2s - 2}{\left\lVertu\right\rVert}_{L_p(\Omega^t)}$$ holds, where the constants $c_1$ and $c_2$ do not depend on $t$. For the proof of the lemma we refer the reader to [@lad Ch.2, §3, Lemma 3.3]. Proof of Theorem \[thm1\] ========================= Let $\psi \in W^{2}_{s'}(\Omega)$ be a unique solution to the following elliptic problem: $$\label{p4} \begin{aligned} &-\triangle \psi = p{\left\lvertp\right\rvert}^{s - 2} - \frac{1}{{\left\lvert\Omega\right\rvert}}\int_{\Omega}p{\left\lvertp\right\rvert}^{s - 2}\, {\mathrm{d}}x & &\text{in $\Omega$}, \\ &n\cdot \nabla \psi = 0 & &\text{on $S$}, \\ &\int_{\Omega}\psi\, {\mathrm{d}}x = 0 . \end{aligned}$$ Then the estimate $$\label{eq2} {\left\lVert\psi\right\rVert}_{W^2_{s'}(\Omega)} \leq c(s,\Omega) {\left\lVertp\right\rVert}^{s - 1}_{L_{s}(\Omega)}$$ holds. Multiplying $_1$ by $\nabla \psi$ and integrating over $\Omega$ yields $$\label{eq1} \int_{\Omega} \big(v_{,t} - \nu \triangle v + v\cdot \nabla v + \nabla p\big)\cdot \nabla \psi\, {\mathrm{d}}x = \int_{\Omega} f \cdot \nabla \psi\, {\mathrm{d}}x.$$ We have four integral on the left-hand side which need to be estimated. First we see $$\label{eq50} \int_{\Omega} v_{,t} \cdot \nabla \psi \,{\mathrm{d}}x = \int_{\Omega}\big(\nabla \cdot (v_{,t}\psi) - \operatorname{div}v_{,t}\psi\big)\, {\mathrm{d}}x = \int_{S} \psi(v_{,t} \cdot n)\,{\mathrm{d}}S = 0.$$ The estimate for the second integral varies in dependence on the boundary conditions. Let us assume first. Condition will be discussed at the end of the proof. Thus, $$\int_{\Omega} \triangle v\cdot \nabla \psi\, {\mathrm{d}}x = -\int_{\Omega} \operatorname{rot}\operatorname{rot}v\cdot \nabla \psi\, {\mathrm{d}}x = -\int_{S} \operatorname{rot}v \times n \cdot \nabla \psi\, {\mathrm{d}}S = 0.$$ For the third integral we have $$\int_{\Omega} v_iv_{j,x_i}\psi_{,x_j}\, {\mathrm{d}}x = -\int_{\Omega} v_iv_j\psi_{,x_jx_i}\, {\mathrm{d}}x + \int_S v_iv_j\psi_{,x_j} n_i\, {\mathrm{d}}S \leq {\left\lVertv\right\rVert}^2_{L_{2s}(\Omega)}{\left\lVert\nabla^2\psi\right\rVert}_{L_{s'}(\Omega)},$$ where we integrated by parts and utilized equality (\[p3\])${}_{2}$. The last term on the left-hand side in is equal to $$\label{eq54} - \int_{\Omega} p \cdot \triangle \psi\, {\mathrm{d}}x + \int_S p \left(n\cdot \nabla \psi\right) \, {\mathrm{d}}S = -\int_{\Omega} {\left\lvertp\right\rvert}^{s } {\mathrm{d}}x + \frac{1}{{\left\lvert\Omega\right\rvert}} \left(\int_{\Omega} p{\left\lvertp\right\rvert}^{s - 2}\, {\mathrm{d}}x\right)\, \int_{\Omega} p\, {\mathrm{d}}x = -{\left\lVertp\right\rVert}_{L_s(\Omega)}^s ,$$ because the boundary integral is equal to zero due to $_2$ and $p$ is a distribution determined up to a constant. 
Finally, by the Hölder inequality $$\int_{\Omega} f\cdot \nabla \psi\, {\mathrm{d}}x \leq {\left\lVertf\right\rVert}_{L_s(\Omega)}{\left\lVert\nabla \psi\right\rVert}_{L_{s'}(\Omega)}.$$ Summing up the above estimates and in view of we obtain $${\left\lVertp\right\rVert}_{L_s(\Omega)}^s \leq {\left\lVertv\right\rVert}^2_{L_{2s}(\Omega)}{\left\lVert\nabla^2\psi\right\rVert}_{L_{s'}(\Omega)} + {\left\lVertf\right\rVert}_{L_s(\Omega)}{\left\lVert\nabla \psi\right\rVert}_{L_{s'}(\Omega)} \leq c(\Omega) {\left\lVertp\right\rVert}^{s - 1}_{L_{s}(\Omega)}\left({\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2 + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right).$$ Hence $${\left\lVertp\right\rVert}_{L_s(\Omega)} \leq c(\Omega)\left({\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2 + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right),$$ which concludes the proof of the first assertion. Let now hold. Then, instead of we have in light of the identity $$\label{eq60} \int_{\Omega} \big(v_{,t} - \operatorname{div}\mathbb{T}(v,p) + v\cdot \nabla v \big)\cdot \nabla \psi\, {\mathrm{d}}x = \int_{\Omega} f \cdot \nabla \psi\, {\mathrm{d}}x.$$ We need to examine the term involving the Cauchy stress tensor. We see that $$\begin{gathered} \label{eq58} \int_{\Omega} - \operatorname{div}\mathbb{T}(v,p) \cdot \nabla \psi\, {\mathrm{d}}x \\ = - \int_S n \cdot \nu\mathbb{D}(v) \cdot \nabla \psi\, {\mathrm{d}}S + \int_S p \left(\nabla \psi \cdot n\right)\, {\mathrm{d}}S + \int_{\Omega} \nu \mathbb{D}(v) \nabla^2 \psi\, {\mathrm{d}}x + \int_{\Omega} p \triangle \psi\, {\mathrm{d}}x. \end{gathered}$$ Expressing $\mathbb{D}(v)$ in the basis $n,\tau^{\alpha}$, $\alpha = 1,2$ yields $$\int_S n \cdot \nu\mathbb{D}(v) \cdot \nabla \psi\, {\mathrm{d}}S = \nu \int_S \left(n \cdot \mathbb{D}(v) \cdot n\right)n \cdot \nabla \psi\, {\mathrm{d}}S + \nu \int_S \left(n \cdot \mathbb{D}(v) \cdot \tau^{\alpha}\right)\tau^{\alpha} \cdot \nabla \psi\, {\mathrm{d}}S.$$ The first integral vanishes due to $_2$, whereas the second due to . Combining , and we infer from that $$\begin{gathered} {\left\lVertp\right\rVert}_{L_s(\Omega)}^s \leq {\left\lVertv\right\rVert}^2_{L_{2s}(\Omega)}{\left\lVert\nabla^2\psi\right\rVert}_{L_{s'}(\Omega)} + {\left\lVert\nabla v\right\rVert}_{L_{s}(\Omega)}{\left\lVert\nabla^2\psi\right\rVert}_{L_{s'}(\Omega)} + {\left\lVertf\right\rVert}_{L_s(\Omega)}{\left\lVert\nabla \psi\right\rVert}_{L_{s'}(\Omega)} \\ \leq c(\Omega) {\left\lVertp\right\rVert}^{s - 1}_{L_{s}(\Omega)}\left({\left\lVertv\right\rVert}_{L_{2s}(\Omega)}^2 + {\left\lVert\nabla v\right\rVert}_{L_s(\Omega)} + {\left\lVertf\right\rVert}_{L_s(\Omega)}\right), \end{gathered}$$ which is our second assertion. The proof is complete. 
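The duality mechanism of the proof can also be seen in a toy one-dimensional computation. The sketch below is hypothetical (a finite-difference Neumann solver on $[0,1]$ with a manufactured zero-mean "pressure", none of which appears in the paper); it only illustrates how the auxiliary Neumann problem turns the test against $\nabla\psi$ into the norm ${\left\lVert p\right\rVert}_{L_s}^s$ once the compatibility constant is subtracted.

```python
# Toy 1-D illustration (not from the paper) of the duality step in the proof of Theorem 1:
# solve the auxiliary Neumann problem  -psi'' = p|p|^{s-2} - mean,  psi'(0) = psi'(1) = 0,
# and check that testing p against -psi'' reproduces ||p||_{L_s}^s.  The "pressure" p is taken
# with zero mean, which fixes the additive constant it is defined up to.
import numpy as np

s, n = 3.0, 801
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
w = np.full(n, h); w[0] = w[-1] = h / 2                 # trapezoidal quadrature weights

p = np.cos(2*np.pi*x) + 0.4*np.sin(6*np.pi*x)           # sample zero-mean "pressure"
g = np.abs(p)**(s - 2) * p
g -= (w @ g) / w.sum()                                  # Neumann compatibility: weighted mean zero

# second-order finite differences with a ghost-point closure of the Neumann condition
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i-1:i+2] = [-1.0, 2.0, -1.0]
A[0, 0], A[0, 1] = 2.0, -2.0
A[-1, -1], A[-1, -2] = 2.0, -2.0
A /= h**2

# append the zero-mean constraint to remove the constant null space, then solve
psi = np.linalg.lstsq(np.vstack([A, w]), np.append(g, 0.0), rcond=None)[0]

lhs = w @ (p * (A @ psi))        # discrete version of  int p (-psi'') dx
rhs = w @ np.abs(p)**s           # ||p||_{L_s}^s
print(lhs, rhs)                  # the two numbers agree up to discretisation error
```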
Proof of Theorem \[thm2\] ========================= We start with multiplying by $v{\left\lvertv\right\rvert}^{\theta - 2}$ and integrating over $\Omega$ $$\label{eq3} \frac{1}{\theta}\operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \nu\int_\Omega \nabla v \cdot \nabla \left(v{\left\lvertv\right\rvert}^{\theta - 2}\right)\, {\mathrm{d}}x = -\int_\Omega \nabla p\cdot v{\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x + \int_S \sum_{i,j = 1}^3v_{j,x_i} \cdot v_j{\left\lvertv\right\rvert}^{\theta - 2} \cdot n_i\, {\mathrm{d}}S,$$ where $\theta > 3$ and the non-linear term vanishes due to $$\begin{gathered} \int_{\Omega} (v\cdot \nabla)v\cdot v{\left\lvertv\right\rvert}^{\theta -2 }\, {\mathrm{d}}x = \frac{1}{2}\int_{\Omega} v_i \left(v\cdot v\right)_{,x_i} {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x = \frac{1}{\theta}\int_{\Omega} v_i \left({\left\lvertv\right\rvert}^2\right)^{\frac{\theta}{2}}_{,x_i}\, {\mathrm{d}}x \\ = -\frac{1}{\theta}\int_{\Omega} \operatorname{div}v {\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \frac{1}{\theta}\int_S {\left\lvertv\right\rvert}^{\theta} \left(v\cdot n\right)\, {\mathrm{d}}S = 0. \end{gathered}$$ Consider first the boundary integral on the right-hand side. On the walls $x_3 = 0$ and $x_3 = c$ the normal vector $n$ equals $(0,0,\mp1)$ and conditions , imply $v_{1,x_3} = v_{2,x_3} = v_3 = 0$ (see [@Zajaczkowski:2005zr Lemma 3.1 and its proof] and [@Nowakowski2012 Lemma 6.6], respectively). Therefore $$\int_{S \cap x_3 \in \{0,c\}} \sum_{i,j = 1}^3v_{j,x_i} \cdot v_j{\left\lvertv\right\rvert}^{\theta - 2} \cdot n_i\, {\mathrm{d}}S = 0.$$ Following nearly identical reasoning for $x_2 \in \{0,b\}$ and $x_1 \in \{0,a\}$ we conclude that $$\int_{S} \sum_{i,j = 1}^3v_{j,x_i} \cdot v_j{\left\lvertv\right\rvert}^{\theta - 2} \cdot n_i\, {\mathrm{d}}S = 0.$$ For the second term on the left-hand side in we have $$\nu\int_\Omega \nabla v \cdot \nabla \left(v{\left\lvertv\right\rvert}^{\theta - 2}\right)\, {\mathrm{d}}x = \nu\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x + \frac{4\nu(\theta - 2)}{\theta^2}\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x.$$ To estimate the term with the pressure we integrate by parts and use (\[p1\])${}_{2}$ and boundary conditions $$-\int_\Omega \nabla p\cdot v{\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x = \left(\frac{\theta}{2} - 1\right)\int_{\Omega} p |v|^{\theta-4}v \cdot \nabla {\left\lvertv\right\rvert}^{2}\, {\mathrm{d}}x \leq (\theta - 2)\int_{\Omega} {\left\lvertp\right\rvert}{\left\lvertv\right\rvert}^{\frac{\theta}{2} - 1}{\left\lvert\nabla v\right\rvert}{\left\lvertv\right\rvert}^{\frac{\theta}{2} - 1}\, {\mathrm{d}}x.$$ From the Cauchy inequality we immediately get $$(\theta - 2)\int_{\Omega} {\left\lvertp\right\rvert}{\left\lvertv\right\rvert}^{\frac{\theta}{2} - 1}{\left\lvert\nabla v\right\rvert}{\left\lvertv\right\rvert}^{\frac{\theta}{2} - 1}\, {\mathrm{d}}x \leq (\theta - 2) \left(\int_{\Omega} {\left\lvertp\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\right)^{\frac{1}{2}} \left(\int_{\Omega} {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\,{\mathrm{d}}x\right)^{\frac{1}{2}}.$$ So far we have obtained $$\begin{gathered} \label{eq18} \frac{1}{\theta}\operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, 
{\mathrm{d}}x + \frac{4\nu(\theta - 2)}{\theta^2}\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x + \nu\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x \\ \leq (\theta - 2) \left(\int_{\Omega} {\left\lvertp\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\right)^{\frac{1}{2}} \left(\int_{\Omega} {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\,{\mathrm{d}}x\right)^{\frac{1}{2}}. \end{gathered}$$ To estimate the right-hand side we use the Hölder inequality $$\begin{gathered} \left(\int_{\Omega} {\left\lvertp\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\right)^{\frac{1}{2}} \left(\int_{\Omega} {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\,{\mathrm{d}}x\right)^{\frac{1}{2}} \\ \leq \left(\left(\int_{\Omega} {\left\lvertp\right\rvert}^{2\lambda_1}\, {\mathrm{d}}x\right)^{\frac{1}{\lambda_1}}\left(\int_{\Omega} {\left\lvertv\right\rvert}^{(\theta - 2)\lambda_2}\, {\mathrm{d}}x\right)^{\frac{1}{\lambda_2}}\right)^{\frac{1}{2}} \left(\int_{\Omega} {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\,{\mathrm{d}}x\right)^{\frac{1}{2}}. \end{gathered}$$ By Remark \[rem3\] $$\left(\int_{\Omega} {\left\lvertp\right\rvert}^{2\lambda_1}\, {\mathrm{d}}x\right)^{\frac{1}{2\lambda_1}} \leq c(\Omega) \left(\int_{\Omega}{\left\lvertv\right\rvert}^{4\lambda_1}\, {\mathrm{d}}x\right)^{\frac{2}{4\lambda_1}} = c(\Omega){\left\lVertv\right\rVert}_{L_{4\lambda_1}(\Omega)}^2,$$ which combined with yields $$\begin{gathered} \label{eq20} \frac{1}{\theta}\operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \frac{4\nu(\theta - 2)}{\theta^2}\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x + \nu\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\\ \leq c(\Omega)(\theta - 2) {\left\lVertv\right\rVert}_{L_{4\lambda_1}(\Omega)}^2 {\left\lVertv\right\rVert}_{L_{(\theta -2 )\lambda_2}(\Omega)}^{\frac{\theta - 2}{2}}\left(\int_{\Omega} {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\,{\mathrm{d}}x\right)^{\frac{1}{2}}. 
\end{gathered}$$ Due to the imbedding $H^1(\Omega) \hookrightarrow L_6(\Omega)$ and the Poincaré inequality (every component of $v$ vanishes on different part of the boundary) we see that $$\label{eq30} \int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2\, {\mathrm{d}}x \geq c(\Omega) \left(\int_{\Omega} {\left\lvertv\right\rvert}^{\frac{\theta}{2} \cdot 6}\, {\mathrm{d}}x\right)^{\frac{1}{6}\cdot 2} = c(\Omega) {\left\lVertv\right\rVert}_{L_{3\theta}(\Omega)}^{\theta}.$$ Therefore we interpolate $L_{4\lambda_1}(\Omega)$ and $L_{(\theta - 2)\lambda_2}(\Omega)$ between $L_{\theta}(\Omega)$ and $L_{3\theta}(\Omega)$: $$\begin{aligned} \frac{1}{4\lambda_1} &= \frac{\alpha}{\theta} + \frac{1 - \alpha}{3\theta} = \frac{2\alpha + 1}{3\theta} & &\Leftrightarrow & &\alpha = \frac{1}{2}\left( \frac{3\theta}{4\lambda_1} - 1\right) = \frac{3\theta - 4\lambda_1}{8\lambda_1}, \\ \frac{1}{(\theta - 2)\lambda_2} &= \frac{\beta}{\theta} + \frac{1 - \beta}{3\theta} = \frac{2\beta + 1}{3\theta} & &\Leftrightarrow & &\beta = \frac{1}{2}\left(\frac{3\theta}{(\theta - 2)\lambda_2} - 1\right) = \frac{3\theta - (\theta - 2)\lambda_2}{2(\theta - 2)\lambda_2} \end{aligned}$$ and $$\begin{aligned} 1 - \alpha &= 1 - \frac{3\theta - 4\lambda_1}{8\lambda_1} = \frac{12\lambda_1 - 3\theta}{8\lambda_1}, \\ 1 - \beta &= 1 - \frac{3\theta - (\theta - 2)\lambda_2}{2(\theta - 2)\lambda_2} = \frac{3(\theta - 2)\lambda_2 - 3\theta}{2(\theta - 2)\lambda_2}. \end{aligned}$$ Finally $$\label{eq22} {\left\lVertv\right\rVert}_{L_{4\lambda_1}(\Omega)}^2 {\left\lVertv\right\rVert}_{L_{(\theta -2 )\lambda_2}(\Omega)}^{\frac{\theta - 2}{2}} \leq {\left\lVertv\right\rVert}_{L_{\theta}(\Omega)}^{w_1}{\left\lVertv\right\rVert}_{L_{3\theta}(\Omega)}^{w_2},$$ where $$\begin{gathered} \label{eq24} w_1 = 2 \cdot \frac{3\theta - 4\lambda_1}{8\lambda_1} + \frac{\theta - 2}{2} \cdot \frac{3\theta - (\theta - 2)\lambda_2}{2(\theta - 2)\lambda_2} = \frac{3\theta}{4\lambda_1} - 1 + \frac{3\theta - (\theta - 2)\lambda_2}{4\lambda_2} = \frac{3\theta}{4} - 1 - \frac{\theta}{4} + \frac{1}{2}\\ = \frac{\theta}{2} - \frac{1}{2} \end{gathered}$$ and $$\begin{gathered} \label{eq26} w_2 = 2 \cdot \frac{12\lambda_1 - 3\theta}{8\lambda_1} + \frac{\theta - 2}{2} \cdot \frac{3(\theta - 2)\lambda_2 - 3\theta}{2(\theta - 2)\lambda_2} = 3 - \frac{3\theta}{4\lambda_1} + \frac{3(\theta - 2)\lambda_2 - 3\theta}{4\lambda_2} \\ = 3 - \frac{3\theta}{4} + \frac{3\theta}{4} - \frac{3}{2} = \frac{3}{2}. \end{gathered}$$ Thus, from , , and it follows $$\begin{gathered} \frac{1}{\theta}\operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \frac{4\nu(\theta - 2)}{\theta^2}\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x + \nu\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x \\ \leq c(\Omega)(\theta - 2){\left\lVertv\right\rVert}_{L_{\theta}(\Omega)}^{\frac{\theta}{2} - \frac{1}{2}} {\left\lVertv\right\rVert}_{L_{3\theta}(\Omega)}^{\frac{3}{2}}\left(\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x \right)^{\frac{1}{2}}. 
\end{gathered}$$ Multiplying by $\theta$ and utilizing in the above inequality gives $$\begin{gathered} \operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \frac{4\nu(\theta - 2)}{\theta}\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x + \nu\theta \int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x \\ \leq c(\Omega)(\theta - 2)\theta{\left\lVertv\right\rVert}_{L_{\theta}(\Omega)}^{\frac{\theta}{2} - \frac{1}{2}} \left(\int_{\Omega} {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2\,{\mathrm{d}}x\right)^{\frac{3}{2\theta}}\left(\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\right)^{\frac{1}{2}} \\ \leq c(\Omega)(\theta - 2)\theta^{2} 2^{-\frac{3}{\theta}} {\left\lVertv\right\rVert}_{L_{\theta}(\Omega)}^{\frac{\theta}{2} - \frac{1}{2}} \left(\int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\right)^{\frac{3}{2\theta} + \frac{1}{2}}. \end{gathered}$$ Utilizing the Young inequality (see Lemma \[lem4\]) we obtain $$\begin{gathered} \label{eq28} \operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \frac{4\nu(\theta - 2)}{\theta}\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x + \nu\theta \int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x \\ \leq \kappa \left( \int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x \right)^{\left(\frac{3}{2\theta} + \frac{1}{2}\right)\gamma_1} + \left(\frac{1}{\kappa\gamma_1}\right)^{\frac{\gamma_2}{\gamma_1}} \frac{1}{\gamma_2} \left( c(\Omega)(\theta - 2)\theta^{2} 2^{-\frac{3}{\theta}})\right)^{\gamma_2}{\left\lVertv\right\rVert}_{L_{\theta}(\Omega)}^{\left(\frac{\theta}{2} - \frac{1}{2}\right)\gamma_2}. 
\end{gathered}$$ Now we chose $\gamma_1$ so it satisfies $$\left(\frac{3}{2\theta} + \frac{1}{2}\right)\gamma_1 = 1 \qquad \Leftrightarrow \qquad \gamma_1 = \frac{2}{\frac{3}{\theta} + 1} = \frac{2\theta}{3 + \theta}.$$ Thus $$\gamma_2 = \frac{\gamma_1}{\gamma_1 - 1} = \frac{\frac{2\theta}{3 + \theta}}{\frac{2\theta}{3 + \theta} - 1} = \frac{2\theta}{3 + \theta}\cdot \frac{3 + \theta}{2\theta - 3 - \theta} = \frac{2\theta}{\theta - 3}.$$ Hence $$\left(\frac{\theta}{2} - \frac{1}{2}\right)\gamma_2 = \left(\frac{\theta}{2} - \frac{1}{2}\right)\cdot \frac{2\theta}{3 + \theta} = \frac{\theta(\theta - 1)}{\theta - 3}.$$ and $$\label{eq44} \operatorname{\frac{{\mathrm{d}}}{{\mathrm{d}}t}}\int_{\Omega}{\left\lvertv\right\rvert}^{\theta}\, {\mathrm{d}}x + \nu\int_\Omega {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{\theta}{2}}\right\rvert}^2 \, {\mathrm{d}}x + \nu \int_\Omega {\left\lvert\nabla v\right\rvert}^2 {\left\lvertv\right\rvert}^{\theta - 2}\, {\mathrm{d}}x\leq c(\nu,\theta,\Omega){\left\lVertv\right\rVert}_{L_{\theta}(\Omega)}^{\frac{\theta(\theta - 1)}{\theta - 3}}.$$ Since $$\frac{\theta(\theta - 1)}{\theta - 3} = \theta \left(1 + \frac{2}{\theta - 3}\right)$$ we put $\theta=p$ and using the assumption on $v$ (then $q$ is equal to $\frac{2p}{p - 3}$) we may apply the Gronwall inequality $$\sup_{0 \leq t \leq T} {\left\lVertv(t)\right\rVert}_{L_{p}(\Omega)}^{p} \leq \exp\left(c(\nu,p,\Omega){\left\lVertv\right\rVert}_{L_{q}(0,T;L_{p}(\Omega))}^{q}\right) {\left\lVertv(0)\right\rVert}^{p}_{L_{p}(\Omega)}.$$ Integrating with respect to $t$ gives $$\begin{gathered} \sup_{0\leq t \leq T} {\left\lVertv(t)\right\rVert}^{p}_{L_{p(\Omega)}} + \nu\int_{\Omega^T} {\left\lvert\nabla {\left\lvertv\right\rvert}^{\frac{p}{2}}\right\rvert}^2 \, {\mathrm{d}}x\, {\mathrm{d}}t\leq c(\nu,p,\Omega)\int_0^T{\left\lVertv(t)\right\rVert}_{L_{p}(\Omega)}^{p+q}\, {\mathrm{d}}t + {\left\lVertv(0)\right\rVert}^{p}_{L_{p}(\Omega)} \\ \leq c(\nu,p,\Omega) \exp\left(c(\nu,p,\Omega){\left\lVertv\right\rVert}_{L_{q}(0,T;L_{p}(\Omega))}^{q}\right){\left\lVertv\right\rVert}_{L_{q}(0,T;L_{p}(\Omega))}^{q} {\left\lVertv(0)\right\rVert}_{L_{p}(\Omega)}^p + {\left\lVertv(0)\right\rVert}^{p}_{L_{p}(\Omega)}. \end{gathered}$$ By Lemma \[lem2\] we get $$\begin{gathered} \label{eq130} {\left\lVertv\right\rVert}_{L_{\frac{5}{3}p}(\Omega^t)} \leq \left[c(\nu,p,\Omega) \exp\left(c(\nu,p,\Omega){\left\lVertv\right\rVert}_{L_{q}(0,T;L_{p}(\Omega))}^{q}\right) {\left\lVertv\right\rVert}_{L_{q}(0,T;L_{p}(\Omega))}^{q} + 1\right]{\left\lVertv(0)\right\rVert}_{L_{p}(\Omega)} \\ =: c(\text{data}). \end{gathered}$$ In view of the classical theory (see e.g. [@Solonnikov:1964uq], [@Solonnikov:1976kx], [@Solonnikov:1977vn], [@Solonnikov:1990fk] and recently [@wz80]) we infer (see Remark \[rem4\]) $${\left\lVertv\right\rVert}_{W^{2,1}_s(\Omega^t)} + {\left\lVert\nabla p\right\rVert}_{L_s(\Omega^t)} \leq c(s,\nu,\Omega){\left\lVert(v \cdot \nabla)v\right\rVert}_{L_s(\Omega^t)} + {\left\lVertv(0)\right\rVert}_{W^{2 - \frac{2}{s}}_s(\Omega)}.$$ for $t \in (0,T)$. 
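The exponent bookkeeping above, that is, the interpolation weights, the Young conjugates and the resulting Gronwall exponent, can be verified mechanically. The following symbolic sketch is only a convenience check and is not part of the argument:

```python
# Symbolic check (an aside, not part of the proof) of the exponents used above: the interpolation
# weights w1, w2, the Young conjugate pair gamma_1, gamma_2, and the resulting Serrin relation.
import sympy as sp

theta, lam1 = sp.symbols('theta lambda_1', positive=True)
lam2 = lam1 / (lam1 - 1)                      # Hölder conjugate: 1/lam1 + 1/lam2 = 1

alpha = (3*theta/(4*lam1) - 1) / 2            # interpolation of L_{4 lam1} between L_theta and L_{3 theta}
beta = (3*theta/((theta - 2)*lam2) - 1) / 2   # the same for L_{(theta-2) lam2}
w1 = sp.simplify(2*alpha + (theta - 2)/2*beta)
w2 = sp.simplify(2*(1 - alpha) + (theta - 2)/2*(1 - beta))
print(w1, w2)                                 # theta/2 - 1/2 and 3/2, independently of lam1

gamma1 = 2*theta/(3 + theta)                  # fixes the first Young exponent
gamma2 = sp.simplify(gamma1/(gamma1 - 1))
print(gamma2)                                 # 2*theta/(theta - 3)
print(sp.simplify((theta/2 - sp.Rational(1, 2))*gamma2))   # theta*(theta - 1)/(theta - 3)

p = theta
q = 2*p/(p - 3)                               # the exponent pair entering the Gronwall step
print(sp.simplify(3/p + 2/q))                 # 1, i.e. the Serrin condition
```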
By the Hölder inequality $${\left\lVert(v \cdot \nabla)v\right\rVert}_{L_s(\Omega^t)} \leq {\left\lVertv\right\rVert}_{L_{\frac{5}{3}p}(\Omega^t)} {\left\lVert\nabla v\right\rVert}_{L_r(\Omega^t)},$$ where $$\frac{1}{\frac{5}{3}p} + \frac{1}{r} = \frac{1}{s}.$$ Lemma \[lem10\] yields $${\left\lVert\nabla v\right\rVert}_{L_r(\Omega^t)} \leq \epsilon^{\kappa} {\left\lVertv\right\rVert}_{W^{2,1}_s(\Omega^t)} + \epsilon^{-\kappa} {\left\lVertv\right\rVert}_{L_s(\Omega^t)},$$ where $$\kappa = 2 - 1 - 5\left(\frac{1}{s} - \frac{1}{r}\right) = 1 - \frac{3}{p} > 0 \qquad \Leftrightarrow \qquad p > 3,$$ thereby $${\left\lVertv\right\rVert}_{W^{2,1}_s(\Omega^t)} + {\left\lVert\nabla p\right\rVert}_{L_s(\Omega^t)} \leq c(\text{data}) {\left\lVertv\right\rVert}_{L_s(\Omega^t)} + {\left\lVertv(0)\right\rVert}_{W^{2 - \frac{2}{s}}_s(\Omega)}.$$ For $s=p$ the right hand side is finite, thus $v$ and $p$ are smooth provided $v(0)$ is smooth. This completes the proof. ![image](reflection) \[fig:2\] \[rem4\] At the end of the proof of Theorem \[thm2\] we used some references to classical theory concerning the regularity of the Stokes system under boundary slip conditions. One of the assumptions in these results is certain smoothness of the boundary (roughly speaking: the higher regularity the higher boundary smoothness). In our case we deal with domains of cubical type, which have corners. Nevertheless, the classical theory holds because we can localize the problem near corners and due to either or reflect it outside the cube. For example, let us consider the corner at $O = (0,0,0)$ (see Figure \[fig:2\]). As we saw in the beginning of the proof of Theorem \[thm2\] we have on the wall $x_3 = 0$ the equality $v_{1,x_3} = v_{2,x_3} = v_3 = 0$, which suggests the reflection $$\check v (x) = \begin{cases} \bar{v}(x) & x_3 \in \overline{\operatorname{supp}\zeta\cap \Omega}, \\ ( \bar{v}'(\bar x),-\bar{v}_3(\bar x)) & x_3 \leq 0, \end{cases}$$ where $\bar x = (x',-x_3)$ (see Figure \[fig:3\]). By $\operatorname{supp}\zeta$ we denote the support of the cut-off function $\zeta$, and $\bar v$ denotes $v$ localized to $\operatorname{supp}\zeta$, i.e. $\bar v = v \zeta$. Similarly, since $f = 0$ we immediately get that $\frac{\partial p}{\partial n} = 0$ on each part of the boundary. This implies that the reflection with respect to $x_3$ preserves the Stokes system. Now, to get the problem in the half-space we need one more reflection (see Figure \[fig:4\]). Observe that on $x_1 = 0$ we have $v_1 = v_{2,x_1} = v_{3,x_1} = 0$ and $\frac{\partial p}{\partial n} = 0$, so we introduce $$\check{\check{ v}} (x) = \begin{cases} \check v(x) & x_1 \in \overline{\operatorname{supp}\check v}, \\ (- \check v_1(\bar{\bar x}),\check v_2(\bar{\bar x}),\check v_3(\bar{\bar x})) & x_1 \leq 0, \end{cases}$$ where $\bar{\bar x} = (-x_1,x_2,x_3)$. Now we see that $\check{\check{v}}$ is defined in the half-space $x_2 \geq 0$ and the Stokes system is preserved. \[rem6\] We have already mentioned that the assumption on the cubical shape of the domain can be relaxed. This motivation follows from , where the appearing boundary integral can be written in the form $$\label{eq310} \int_S \sum_{i,j = 1}^3v_{j,x_i} \cdot v_j{\left\lvertv\right\rvert}^{\theta - 2} \cdot n_i\, {\mathrm{d}}S = \int_S {\left\lvertv\right\rvert}^{\theta - 2} \left(\operatorname{rot}v \times n\right)\cdot v\, {\mathrm{d}}S - \int_S \sum_{i,j = 1}^3 v_{i,x_j} n_iv_j\, {\mathrm{d}}S.$$ We see that under the first integral on the right-hand side vanishes. 
![image](cylinder) \[fig:1\] To eliminate the second integral we impose that $\Omega$ is of cylindrical type, parallel to the $x_3$ axis with convex cross section (see Figure \[fig:1\]). Denoting the side boundary by $S_1$, the bottom and the top of the cylinder (perpendicular to $x_3$) by $S_2$, the normal unit vector and the tangent unit vectors by $n$, $\tau^{\alpha}$, $\alpha = 1,2$, respectively, we easily establish (see e.g. Introduction in [@Zajaczkowski:2005fk]) that $$\label{p44} \begin{aligned} &n\vert_{S_1} = \frac{1}{{\left\lvert\nabla \varphi\right\rvert}}(\varphi_{,x_1},\varphi_{,x_2},0) & &\tau^1\vert_{S_1} = \frac{1}{{\left\lvert\nabla \varphi\right\rvert}}(-\varphi_{,x_2},\varphi_{,x_1},0) & & \tau^2\vert_{S_1} = (0,0,1)\\ &n\vert_{S_2} = \left(0,0,\frac{a}{{\left\lverta\right\rvert}}\right) & &\tau^1\vert_{S_2} = (1,0,0) & &\tau^2\vert_{S_2} = (0,1,0), \end{aligned}$$ where $\varphi(x_1,x_2) = c_0$ is a sufficiently smooth, convex, closed curve in the plane $x_3 = \operatorname{const}$. Since $n$ does not depend on $x_3$ on $S_1$ we get in view of that $$\int_{S_1} \sum_{i,j = 1}^3 v_{i,x_j} n_iv_j\, {\mathrm{d}}S = \frac{\left(v\cdot \tau_1\right)^2}{{\left\lvert\nabla \varphi\right\rvert}^3} \left(\tau^1_1n_{1,x_1}\tau^1_1 + \tau^1_1n_{1,x_2}\tau^1_2 + \tau^1_2n_{2,x_1}\tau^1_1 + \tau^1_2n_{2,x_2}\tau^1_2\right) = (v \cdot \tau_1)^2 \cdot \kappa,$$ where $\kappa$ is the curvature of $\varphi$. On $S_2$ we immediately see that $v_3 = v_{3,x_1} = v_{3,x_2} = 0$. Thus, is negative and can be safely removed from . For further geometrical considerations of the last term on the right-hand side in we would refer the reader to e.g. [@Watanabe:2003fk Section 2]. If we do not use the estimate for the pressure from Theorem \[thm1\], then we proceed as follows. First, we multiply $_{1,2}$, and by $\eta_k(t)$, $k \in \mathbb{N}$, where $$\eta_k(t) = \begin{cases} 1 & \text{for } t \in \big(kT,(k + 1)T\big), \\ 0 & \text{for } t \leq (k - 1)T \end{cases}$$ with the properties $\eta_k \in \mathcal{C}^{\infty}_c(0,\infty)$ and $\frac{{\mathrm{d}}}{{\mathrm{d}}t} \eta_k(t) \leq \frac{1}{T}$. Denoting $\bar v = v \eta_k$ (we omit $k$ for clarity) we see that becomes $$\label{p11} \begin{aligned} &\bar v_{,t} + (v\cdot \nabla) \bar v - \nu \triangle \bar v + \nabla \bar p = \bar f - v \eta_{,t} =: \bar F& &\text{in $\Omega\times\big((k - 1)T,(k + 1)T\big) =: \Omega^{kT}$},\\ &\operatorname{div}\bar v = 0 & &\text{in $\Omega^{kT}$}, \\ &v\vert_{t = (k - 1)T} = 0 & &\text{in $\Omega$}. \end{aligned}$$ and for and we have $$\label{p22} \begin{aligned} \begin{aligned} &n \cdot \mathbb{D}(\bar v) \cdot \tau_{\alpha} = 0, \\ &n \cdot \bar v = 0 \end{aligned}& &\text{on $\partial \Omega$} \end{aligned}$$ and $$\label{p33} \begin{aligned} \begin{aligned} &\operatorname{rot}\bar v \times n = 0, \\ &n \cdot \bar v = 0 \end{aligned}& &\text{on $\partial \Omega$.} \end{aligned}$$ By similar reasoning as in [@Solonnikov:2002fk] we get $$\begin{gathered} {\left\lVert\bar v\right\rVert}_{W^{2,1}_{p,q}(\Omega^{kT})} + {\left\lVert\nabla p\right\rVert}_{L_q((k-1)T,(k + 1)T;L_p(\Omega))} \\ \leq c(\nu,p,q,T,\Omega)\left({\left\lVert(v\cdot \nabla)\bar v\right\rVert}_{L_q((k-1)T,(k + 1)T;L_p(\Omega))} + {\left\lVert\bar F\right\rVert}_{L_q((k-1)T,(k + 1)T;L_p(\Omega))}\right). 
\end{gathered}$$ The Hölder inequality implies that $${\left\lVert(v\cdot \nabla)\bar v\right\rVert}_{L_q((k-1)T,(k + 1)T;L_p(\Omega))} \leq {\left\lVertv\right\rVert}_{L_s((k-1)T,(k + 1)T;L_r(\Omega))}{\left\lVert\nabla \bar v\right\rVert}_{L_{\beta}((k-1)T,(k + 1)T;L_{\alpha}(\Omega))},$$ where $$\label{eq190} \frac{1}{r} + \frac{1}{\alpha} = \frac{1}{p} \qquad \text{and} \qquad \frac{1}{s} + \frac{1}{\beta} = \frac{1}{q}.$$ The imbedding $W^{2,1}_{p,q}(\Omega^{kT}) \hookrightarrow {L_q((k-1)T,(k + 1)T;L_p(\Omega))}$ holds (see e.g. [@bes Ch.3, §10.2]) provided $$\left(\frac{1}{p} - \frac{1}{\alpha}\right)\frac{3}{2} + \frac{1}{2} + \left(\frac{1}{q} - \frac{1}{\beta}\right) = 1,$$ which in view of is equivalent to $$\frac{3}{r} + \frac{2}{s} = 1.$$ Thus, $$\begin{gathered} \label{eq200} {\left\lVert\bar v\right\rVert}_{W^{2,1}_{p,q}(\Omega^{kT})} + {\left\lVert\nabla p\right\rVert}_{L_q((k-1)T,(k + 1)T;L_p(\Omega))} \\ \leq c(\nu,p,q,T,\Omega) \left({\left\lVertv\right\rVert}_{L_s((k-1)T,(k + 1)T;L_r(\Omega))} +\frac{c(initial\ data)}{T} + {\left\lVert\bar f\right\rVert}_{L_q((k-1)T,(k + 1)T;L_p(\Omega))}\right) \end{gathered}$$ for $T$ small enough. By the same interpolation argument we deduce that the solution to satisfies $$\begin{gathered} {\left\lVertv\right\rVert}_{W^{2,1}_{p,q}(\Omega^{T})} + {\left\lVert\nabla p\right\rVert}_{L_q(0,T;L_p(\Omega))} \\ \leq c(\nu,p,q,T,\Omega) \left({\left\lVertv\right\rVert}_{L_s(0,T;L_r(\Omega))} + {\left\lVertf\right\rVert}_{L_q(0,T;L_p(\Omega))}\right) + {\left\lVertv(0)\right\rVert}_{W^{2 - \frac{2}{p}}(\Omega)} \end{gathered}$$ for $T$ small enough. Thus, combining the above inequality with and summing over $k$ yields $${\left\lVertv\right\rVert}_{W^{2,1}_{p,q}(\Omega^{T})} + {\left\lVert\nabla p\right\rVert}_{L_q(0,T;L_p(\Omega))} \leq c(initial\ and\ external\ data)$$ for arbitrary large $T < +\infty$. Now, the classical theory yields smoothness of $v$ and $p$. [^1]: All authors were financially supported by the National Science Centre, under project number NN 201 396937
--- abstract: 'We study low-energy nucleon Compton scattering in the framework of baryon chiral perturbation theory (B$\chi$PT) with pion, nucleon, and $\Delta$(1232) degrees of freedom, up to and including the next-to-next-to-leading order (NNLO). We include the effects of order $p^2$, $p^3$ and $p^4/\varDelta$, with $\varDelta\approx 300$ MeV the $\Delta$-resonance excitation energy. These are all “predictive" powers in the sense that no unknown low-energy constants enter until at least one order higher (i.e, $p^4$). Estimating the theoretical uncertainty on the basis of natural size for $p^4$ effects, we find that uncertainty of such a NNLO result is comparable to the uncertainty of the present experimental data for low-energy Compton scattering. We find an excellent agreement with the experimental cross section data up to at least the pion-production threshold. Nevertheless, for the proton’s magnetic polarizability we obtain a value of $(4.0\pm 0.7)\times 10^{-4}$ fm$^3$, in significant disagreement with the current PDG value. Unlike the previous $\chi$PT studies of Compton scattering, we perform the calculations in a manifestly Lorentz-covariant fashion, refraining from the heavy-baryon (HB) expansion. The difference between the lowest order HB$\chi$PT and B$\chi$PT results for polarizabilities is found to be appreciable. We discuss the chiral behavior of proton polarizabilities in both HB$\chi$PT and B$\chi$PT with the hope to confront it with lattice QCD calculations in a near future. In studying some of the polarized observables, we identify the regime where their naive low-energy expansion begins to break down, thus addressing the forthcoming precision measurements at the HIGS facility.' author: - Vadim Lensky - Vladimir Pascalutsa title: Predictive powers of chiral perturbation theory in Compton scattering off protons --- Introduction ============ Compton scattering off nucleons has a long and exciting history, see Refs [@Drechsel:2002ar; @Schumacher:2005an; @Phillips:2009af] for recent reviews. The 90’s witnessed a breakthrough in experimental techniques which led to a series of precision measurements of Compton scattering [@Schmiedmayer:1991zz; @Federspiel:1991yd; @Zieger:1992jq; @Hal93; @MacG95; @MAMI01] with the aim to determine the nucleon [*polarizabilities*]{} [@Baldin; @Holstein:1992xr]. Many theoretical approaches have been tried in the description of nucleon polarizabilities and low-energy Compton scattering. The more prominent examples include dispersion relations [@Hearn:1962zz; @Pfeil:1974ib; @Guiasu:1978ak; @Lvov:1980wp; @L'vov:1996xd; @Drechsel:1999rf; @Pasquini:2007hf], effective-Lagrangian models [@Pascalutsa:1995vx; @Scholten:1996mw; @Feuster:1998cj; @Kondratyuk:2000kq], constituent quark model [@Capstick:1992tx], and chiral-soliton type of models [@Chemtob:1987ut; @Scoccola:1989px; @Scherer:1992jb; @Broniowski:1992vj; @Scoccola:1995tf]. There has been as well a significant recent progress in approaching the subject from first principles—lattice QCD (lQCD). The present lQCD studies are based on the external electromagnetic field method [@Lee:2005dq; @Detmold:2006vu], and even though the actual results for the nucleon have been obtained only in quenched approximation, the pion and kaon polarizabilities have been calculated with dynamical quarks [@Detmold:2009dx]. The full-lQCD calculations for the nucleon will hopefully be done in a near future. 
In this work we exploit another theoretical approach rooted in QCD, namely, chiral perturbation theory ($\chi$PT) [@Pagels:1974se; @Weinberg:1978kz; @Gasser:1983yg; @GSS89]. The very first $\chi$PT calculation of nucleon polarizabilities, published in 1991 by Bernard, Kaiser and Mei[ß]{}ner [@Bernard:1991rq], quotes the result shown in the $\cO(p^3)$ column of Table . In the same column, the numbers in brackets show the result of the so-called heavy-baryon (HB) expansion [@JeM91a]. Here it means that one additionally expands the full result [@Bernard:1991rq] in powers of $m_\pi/M_N$, the ratio of the pion and nucleon masses, and drops all but the leading terms (cf. Appendix A). The $\cO(p^3)$ HB$\chi$PT result thus corresponds to the static nucleon approximation. The relativistic effects are systematically included in HB$\chi$PT at higher orders, but nonetheless even leading order HB$\chi$PT result is widely considered to be more consistent than the B$\chi$PT (i.e., fully relativistic) one. The reason for that is that the full relativistic evaluation of the chiral loops may yield contributions which are of lower order than is given by the power-counting argument. This “pathology”, however, does not arise in the case of polarizabilities at $\cO(p^3)$, so as far as power counting is concerned, the B$\chi$PT result is as good here as the one of HB$\chi$PT. But even more generally, chiral symmetry ensures that the power-counting violating terms to be always accompanied by low-energy constants (LECs), hence they can simply be removed in the course of renormalization of those LECs [@Gegelia:1999gf; @Fuchs:2003qc]. In simpler terms, there is no problem with power counting in B$\chi$PT. The present state-of-the-art $\chi$PT studies based on pion and nucleon degrees of freedom [@McGovern:2001dd; @Beane:2004ra] utilize the HB expansion. They find, however, that despite the very reasonable values for polarizabilities, the $\cO(p^3)$ and even $\cO(p^4)$ results for the Compton-scattering cross sections show significant discrepancy with experimental data starting from energies of about 120 MeV, especially at backward kinematics. The inclusion of the $\Delta$(1232)-resonance as an explicit degree of freedom helps to remedy this discrepancy in the cross sections [@Pascalutsa:2003zk; @Hildebrandt:2003fm]. However, it comes at an expense of a large contribution to the polarizabilities [@Hemmert:1996rw]. This $\Delta$-contribution is highly unwanted in HB$\chi$PT, since polarizabilties come out nearly perfect already in the theory without the $\Delta$ (cf. the numbers in brackets in Table ). There is no natural solution to this problem. One is bound to either omit some of the $\Delta$ contributions by “demoting" them to higher orders [@Pascalutsa:2003zk], or cancel them by “promoting" some of the low-energy constants (LECs) to lower orders [@Hildebrandt:2003fm]. Such an apparent failure of $\chi$PT is sometimes attributed to certain “$\sigma$-meson” contributions [@Schumacher:2007xr], which $\chi$PT misses. Of course, while the $\si$-meson of the linear sigma model is included in $\chi$PT, the contribution from the $f_0$(600) is not, but it is doubtful that the $f_0$ can explain it; its two-photon coupling is too small. Alternatively, studies based on dispersion relations suggest that some essentially relativistic effects, discarded in HB$\chi$PT as being higher order, are in fact important because of the proximity of cuts in both pion mass and energy [@Lvov:1993ex; @Holstein:2005db; @Pascalutsa:2004wm]. 
In our present study we verify the latter scenario and perform the calculations in a manifestly Lorentz-covariant fashion, refraining from the use of the heavy-baryon formalism. The HB$\chi$PT results can then be recovered by simply expanding in powers of pion mass over the baryon mass, $m_\pi/M_B$. We thus are coming back to the original (relativistic quantum field theory) ways [@Bernard:1991rq]. The difference with the original work [@Bernard:1991rq] is that we compute the Compton scattering observables, not only the scalar polarizabilities, and that we include the $\De(1232)$ in addition to the pion and nucleon degrees of freedom.

  ---------------- ---------------- ------------------------------ ------------------ ------------------
                    $ \cO(p^3)$      $ \cO( p^3)+ \cO(p^4/\vDe) $   $\cO(p^4) $ est.   PDG [@PDG2006]
   $\alpha^{(p)}$   6.8 (12.2)       10.8 (20.8)                    $\pm 0.7$          $12.0 \pm 0.6$
   $\beta^{(p)}$    $-1.8$ (1.2)     4.0 (14.7)                     $\pm 0.7$          $1.9\pm 0.5 $
  ---------------- ---------------- ------------------------------ ------------------ ------------------

  : Predictions of baryon $\chi$PT for electric ($\al$) and magnetic ($\be$) polarizabilities of the proton in units of $10^{-4}\,$fm$^3$, compared with the Particle Data Group summary of experimental values.

Table  shows the results of both manifest-covariant and HB calculations at all the “predictive” orders, i.e., below $\cO(p^4)$, the order at which the unknown LECs start to enter. A natural estimate of the $\cO(p^4)$ contribution, given in the corresponding column, can serve as an error bar on the $\chi$PT prediction. A detailed discussion of these results can be found in Sect. . It can be noted, however, how significant the differences are between the exact and the HB results. This is of course not the first and only example where B$\chi$PT and HB$\chi$PT are in dissent, see e.g., the case of $\ga N\to \Delta$ transition [@Gail:2005gz; @Pascalutsa:2005ts], or the baryon magnetic moments in SU(3) [@Geng:2008mf; @Camalich:2009uf]. These differences can often be significantly diminished by slight improvements of the HB calculations, such as readjusting the position of the thresholds to have them in the exactly correct place [@McGovern:2001dd]. It is not yet clear, however, how to systematically derive such improvements from the HB formalism itself. As to why the orders considered here are [*predictive*]{}, any chiral power-counting scheme will tell us that the expansion of the Compton amplitude begins at order $p^2$, and that $p^4$ is the order where the first unknown LECs should enter. In between there are $p^3$ and the $\De$-excitation effects. The counting for the latter is itself a subject of controversy related to the issue of how to count the $\De$-nucleon mass difference: $\vDe= M_\Delta-M_N\approx 300$ MeV. In the hierarchy of chiral symmetry breaking scales, $\vDe$ is neither as light as the scale of explicit symmetry-breaking, $m_\pi\sim 150$ MeV, nor as heavy as the scale of spontaneous symmetry-breaking, $4\pi f_\pi\sim 1$ GeV. We treat $\vDe$ as an independent light scale with the power-counting rules defined in Sect. . In any case, the leading $\Delta$ effects come before $p^4$. To recapitulate, in this work we compute the contributions to Compton amplitude up to, but not including, $\cO(p^4)$ in B$\chi$PT with $\Delta$’s. This is a complete next-to-next-to-leading order (NNLO) calculation which is entirely expressed in terms of only known LECs. The details of these calculations are given in Sect. . 
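For orientation, the bracketed HB entries of the table above correspond to the well-known closed-form $\cO(p^3)$ HB$\chi$PT expressions of Ref. [@Bernard:1991rq], $\alpha = 5e^2 g_A^2/(384\pi^2 f_\pi^2 m_\pi)$ and $\beta = \alpha/10$. A short numerical evaluation with the parameter values collected in the table of the following section reproduces them at the few-percent level (the residual spread reflects the precise choice of $g_A$, $f_\pi$ and $m_\pi$); this aside is given for illustration only and is not part of the present calculation.

```python
# Illustrative aside (not part of the present calculation): evaluate the classic O(p^3) HBchiPT
# formulas alpha = 5 e^2 g_A^2 / (384 pi^2 f_pi^2 m_pi) and beta = alpha/10 with the parameter
# values quoted in the parameter table; the results are close to the bracketed HB entries above.
import math

alpha_em = 1 / 137.0                       # e^2 / (4 pi)
gA, f_pi, m_pi = 1.267, 92.4, 139.0        # MeV for the dimensionful quantities
hbarc = 197.0                              # MeV fm

e2 = 4 * math.pi * alpha_em
alpha_E = 5 * e2 * gA**2 / (384 * math.pi**2 * f_pi**2 * m_pi)   # MeV^-3
beta_M = alpha_E / 10

to_fm3 = hbarc**3                          # MeV^3 fm^3
print(f"alpha = {alpha_E * to_fm3 * 1e4:.1f} x 10^-4 fm^3")      # about 12.5
print(f"beta  = {beta_M  * to_fm3 * 1e4:.2f} x 10^-4 fm^3")      # about 1.25
```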
Polarizabilities and their chiral behaviors are discussed in Sect.  while the results for observables are shown in Sect. . Some of these results have recently been reported in a letter [@Lensky:2008re]. The present paper is more comprehensive and self-contained. Chiral Lagrangians and power counting ===================================== The method of constructing the chiral SU(2) Lagrangians with pion and nucleon fields is well known [@Gasser:1983yg; @GSS89; @Weinberg:1995mt], and the inclusion of the $\Delta$-isobar fields in a Lorentz-covariant fashion has recently been reviewed [@Pascalutsa:2006up]. We shall list here only the terms relevant to the present work. The strong-interaction piece is given by \^[(2)]{}\_&=& ( \^U \_U\^+ 2B\_0(U M\^+ M U\^) ),\ \^[(1)]{}\_N &=& N( i -[M]{}\_[N]{} + [/v]{}+ g\_A \_5 ) N,\ \^[(1)]{}\_ &=& \_(i\^\_- M\_\^) \_+ , where $U$ is the $SU(2)$ pion field in the exponential parameterization: $U=\exp(i\pi^a\tau^a/f)$, $f$ is the pion decay constant in the chiral limit, $M$ is the mass matrix of light quarks, and $B_0$ is a proportionality factor that can be related with the value of light quark condensate [@Gasser:1983yg]. In turn, $N$ denotes the isodoublet Dirac field of the nucleon, $M_N$ is the nucleon mass, and $g_A$ is the axial-coupling constant, both taken at their chiral-limit value, and the vector and axial-vector chiral fields above are defined in terms of the pion field, $\pi^a(x)$, as v\_& & \^a v\_\^a(x) = (u \_u\^+u\^\_u ),\ a\_& & \^a a\^[a]{}\_(x) = (u\^ \_u- u \_u\^), where $u=\exp(i\pi^a \tau^a/2f )=U^{1/2}$. Finally, $\Delta_\nu$ is the Delta isobar Rarita–Schwinger field with mass $M_\Delta$, and $h_A$ is the $\pi N\Delta$ coupling constant whose value is fixed to the $\Delta\to\pi N$ decay width of $115$ MeV. The antisymmetrized products of Dirac matrices in the above equations are defined as: $\gamma^{\mu\nu}=\frac{1}{2}(\gamma^\mu\gamma^\nu-\gamma^\nu\gamma^\mu)$ and $\gamma^{\mu\nu\lambda}=\frac{1}{2}(\gamma^{\mu\nu}\gamma^\lambda+\gamma^\lambda\gamma^{\mu\nu})$. The isospin $1/2\to 3/2$ transition matrix $T$ is normalized such that $T^aT^{b\,\dagger}=\frac{1}{3}(2\delta^{ab}-i\epsilon^{abc}\tau^c)$. The electromagnetic interaction is added as usual through the minimal substitution: \_N && \_N - i e A\_(1+\_3) N ,\ \_\^a && \_\^a - eA\_\^[ab3]{}\^b, where $A_\mu$ is the photon field. The minimal coupling of the photon to the Delta field gives contributions to Compton scattering which are of higher orders than the ones considered in this work. There is as well a number of nonminimal terms: \^[(2)]{}\_N &=& N(1\_3) \^ N F\_ ,\ \^[(2)]{}\_&=& N T\_3 (i g\_M F\^ - g\_E \_5 F\^)\_\_+ ,\ \^[(4)]{}\_&=& - F\_ F\^\_3 . Here, $F^{\mu\nu}$ and $\tilde F^{\mu\nu}$ are the photon field strength tensor and its dual tensor defined as $F^{\mu\nu}=\pa^\mu A^\nu-\pa^\nu A^\mu$, $\tilde F^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\lambda}F_{\rho\lambda}$; $\kappa_p$ ($\kappa_n$) stands for the proton’s (neutron’s) anomalous magnetic moment; $g_E$ and $g_M$ are $\gamma N\Delta$ electric and magnetic couplings, respectively, which are well known from the analysis of pion-photoproduction $P_{33}$ multipoles [@Pascalutsa:2005ts]. Here we differ from the strategy adopted in Ref. [@Pascalutsa:2003zk], where, in the absence of any $\chi$PT analysis of pion-photoproduction at the time, the values of $g_E$ and $g_M$ were fitted to Compton scattering data, with a rather unsatisfactory result. 
The precise values of all the parameters used in the present work are given in Table .

  ----------------------- --------------------------------------------------------------------------------------------
   $ \cO(p^2)$             $\frac{e^2}{4\pi} = \frac{1}{137} $, $M_N = 938.3$ MeV, $\hbar c = 197$ MeV$\cdot\,$fm
   $ \cO(p^3)$             $g_A=1.267$, $f_\pi = 92.4$ MeV, $m_\pi = 139$ MeV, $m_{\pi^0} = 136$ MeV, $\kappa_p=1.79$
   $ \cO(p^4/\vDe) \, $    $M_\De= 1232$ MeV, $h_A=2.85$, $g_M = 2.97$, $g_E=-1.0$
   $\cO(p^4) $             $\al_0, \be_0 = \pm \frac{e^2}{4\pi M_N^3}$
  ----------------------- --------------------------------------------------------------------------------------------

  : Parameters (fundamental and low-energy constants) at the order they first appear.

Inclusion of the $\Delta$-isobar fields in a Lorentz-covariant fashion raises the consistency problems of higher-spin field theory [@Johnson:1961vt; @Velo:1969bt; @Piccinini:1984dd]. The $\Delta$-isobar couplings used here possess the property of invariance under a gauge transformation: $\Delta_\mu\to \Delta_\mu + \partial_\mu \epsilon$, where $\epsilon$ is an arbitrary spinor field. This ensures the decoupling of unphysical spin-$1/2$ degrees of freedom and eliminates the consistency problems [@Pascalutsa:1998pw; @Pascalutsa:1999zz]. It is far less straightforward to reconcile this extra gauge symmetry with other symmetries of the chiral Lagrangian. For recent progress see Refs. [@Pascalutsa:2006up; @Pascalutsa:2000kd; @Krebs:2008zb]. Coming to the power counting, for the pion and nucleon contributions we shall use the usual scheme [@GSS89], i.e., a graph with $V_k$ vertices from $\lag^{(k)}$, $L$ loops, $N_\pi$ pion and $N_N$ nucleon lines is of order $p^n$ with $$n = \sum_k k\, V_k + 4L - 2N_\pi - N_N .$$ In the case of Compton scattering, by $p$ we understand the photon energy and/or the pion mass, as compared with $4\pi f_\pi \sim 1$ GeV. The graphs with $\Delta$s are more tricky because for small $p$ they go as $$S_\Delta \sim \frac{1}{p-\vDe}$$ rather than simply $1/p$ as the nucleon propagators. The new scale $$\vDe = M_\Delta - M_N \approx 293~\mathrm{MeV}$$ is neither as light as $m_\pi$ nor as heavy as $4\pi f_\pi$, hence it can and will be treated independently. For energies comparable to the pion mass we choose to additionally expand in $p/\vDe$, and hence the $\Delta$ propagator counts as $1/\vDe$, while a graph with $N_\De$ internal lines contributes to order $$p^n \left(\frac{p}{\vDe}\right)^{N_\De}.$$ For definiteness, when needed, we count $\vDe^2$ to be of $\cO(p)$, i.e., the “$\de$ counting” scheme [@Pascalutsa:2003zk]. For the power counting in the region where the energies are of order of $\vDe$ (the resonance region) see [@Pascalutsa:2003zk; @Pascalutsa:2005ts]. Hereby we limit ourselves to the low-energy region, $$p \sim m_\pi \ll 4\pi f_\pi .$$

Compton amplitude at NNLO
=========================

Graphs and the nucleon field redefinition
-----------------------------------------

The chiral expansion for the Compton amplitude begins with graph $(1)$ in and its crossed counterpart. Nominally they both are of $\cO(p)$, however, together they simply give the Thomson amplitude, which is of $\cO(p^2)$. We thus refer to $\cO(p^2)$ as the leading order (LO). The other graphs in contribute to $\cO(p^3)$, the next-to-leading order (NLO). At NLO we also have the one-loop contributions shown in , but before evaluating them we make a redefinition of the nucleon field, $N\to \xi N$, where $$\xi = \exp\left(\frac{i g_A}{2f}\,\gamma_5\,\pi^a\tau^a\right).$$
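The algebra behind this rotation rests only on $\gamma_5^2=1$ and $(\tau\cdot\pi)^2=\pi^2\,\mathbb{1}$. As a quick cross-check (a hypothetical numerical aside with arbitrary small field values, assuming the exponent in the form written above), one can compare the exponential with its second-order truncation:

```python
# Numerical cross-check (illustrative only) that xi = exp[i gA/(2f) gamma5 (tau.pi)] agrees with
# its truncation 1 + i gA/(2f) gamma5 (tau.pi) - gA^2/(8 f^2) pi^2 up to third-order terms,
# using (tau.pi)^2 = pi.pi and gamma5^2 = 1.  The pion field values below are arbitrary.
import numpy as np
from scipy.linalg import expm

gA, f = 1.267, 92.4                               # f in MeV
pi = np.array([30.0, -20.0, 10.0])                # a small, arbitrary isotriplet configuration (MeV)

tau = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]], dtype=complex)
gamma5 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)   # any representation with gamma5^2 = 1 will do

tau_pi = np.einsum('a,aij->ij', pi, tau)
K = np.kron(gamma5, tau_pi)                       # gamma5 (tau.pi) on the Dirac x isospin space
lam = gA / (2 * f)

xi_exact = expm(1j * lam * K)
xi_trunc = np.eye(8) + 1j * lam * K - (gA**2 / (8 * f**2)) * (pi @ pi) * np.eye(8)

print(np.linalg.norm(xi_exact - xi_trunc))        # small: the remainder is O((lam*|pi|)^3)
print((lam * np.linalg.norm(pi))**3)              # same order of magnitude as the line above
```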
The first-order chiral Lagrangian then becomes:[^1] \^[(1)]{}& = & N( i -[M]{}\_[N]{} + g\_A \_5 ) N\ &=& N( i -[M]{}\_[N]{} ) N + M\_N N (1-\^2) N\ && + N ( i - [v /]{} + g\_A \_5) N . The two Lagrangians are equivalent, in the sense of equivalence theorem, however, may have drastically different forms when expanded in the pion field. For the one-loop contributions to Compton scattering it is sufficient to expand up to the second order in the pion field: v\_&=& \^a\^[abc]{} \^b\_\^c+ (\^3),\ a\_&=& \^a\_\^a+ (\^3),\ &=& 1+ \^a \^a \_5 - \^2 + (\^3). The original and the redefined Lagrangians take, respectively, the following form: \^[(1)]{}\_N & = & N( i -[M]{}\_[N]{} + \^a \^a\_5 .\ && - . \^a \^[abc]{} \^b\^c ) N + (\^3) ,\ [’]{}\_N\^[(1)]{} & = & N( i -[M]{}\_[N]{} - i M\_N \^a\^a\_5 + M\_N \^2 .\ && . - \^a\^[abc]{} \^b\^c ) N + (\^3). The major difference between the two forms is that the pseudovector $\pi NN$ coupling is transformed into a pseudoscalar one, while the Weinberg–Tomozawa $\pi\pi NN$ term, which resembles a $\rho$-meson exchange, gets replaced by an isoscalar term akin to the remains of an integrated-out $\sigma$-meson in the linear $\sigma$ model. The isovector $\pi\pi NN$ term, which is now proportional to $(g_A-1)^2$, does not give any contribution to Compton amplitude at one-loop level. Also, in the NLO loops, the photon couples only minimally, i.e., to the electric charge of the pion and nucleon. Now that the pion couples to the nucleon via pseudoscalar coupling, there is no Kroll–Ruderman ($\gamma \pi NN$) term arising, and hence the number of one-loop graphs is reduced. The resulting expressions for amplitudes become simpler. As a result, the loop graphs shown in with couplings from the Lagrangian transform to the graphs shown in with the couplings from . We have also checked explicitly that the two sets of one-loop diagrams give identical expressions for the Compton amplitude. Although the main purpose of the above field redefinition is to simplify the calculation, it does give more insight about the chiral dynamics. First of all, it explains how Metz and Drechsel [@Metz:1996fn], calculating polarizabilities in the linear $\sigma$ model with a heavy $\sigma$-meson, obtain to one loop exactly the same result as B$\chi$PT at $\cO(p^3)$ [@Bernard:1991rq]. Secondly, observing that graphs (12) and (13) vanish in the forward kinematics, we can see that chiral symmetry plays less of a role in the forward Compton scattering at order $p^3$. Going to $\cO(p^4/\vDe)$ we encounter the graphs with one $\De$-isobar propagator shown in . The nucleon-field redefinition does not affect these contributions at this order. Note that the graphs where photons couple minimally to $\Delta$ contain more than one $\Delta$ propagator and therefore should be suppressed by extra powers of $p/\vDe$. However, their lower-order contributions are important for electromagnetic gauge invariance and therefore for the renormalization program. In particular, the lower-order contributions of chiral loops should not affect the result of the low-energy theorem (LET) [@Low:1954kd], and this condition is automatically satisfied for a subclass of graphs which obeys gauge invariance. The loop graphs in form such a subclass for the case of neutral $\Delta$. In reality the $\De$ comes in four charge states (isospin $3/2$), and hence a gauge invariant set will in addition have the higher-order graphs where photon couples minimally to the $\De$. 
To make the subclass of loop graphs in gauge invariant without the higher-order graphs, we used the following procedure: - The one-particle-irreducible (1PI) graphs, (15–18) are computed with the correct isospin factors, i.e., summing over all charge states of the $\De$. The isospin factors for the one-particle reducible (1PR) graphs (19–22) are chosen such that their ratio to the isospin factors of 1PI graphs is the same as in the neutral $\De$ case. This procedure automatically ensures exact gauge invariance and thus effectively includes the lower-order contributions of the one-loop graphs with minimal coupling of photons to the $\De$. In case when the latter graphs are included explicitly, the isospin factors of 1PR graphs can be restored to actual values. This, however, will not affect the result at the order considered here. The graphs in and in were eventually computed by us with the help of the symbolic manipulation tool FORM [@FORM] and the LoopTools library [@Hahn:2006qw] using dimensional regularization. Renormalization --------------- In accordance with the LET [@Low:1954kd], the loop contributions shown in and in may contribute to the renormalization of nucleon mass, field, charge, and anomalous magnetic moment. We have adopted the on-mass-shell renormalization scheme, not the extended on-mass-shell renormalization (EOMS) [@Fuchs:2003qc]. The difference is that in EOMS the above-listed quantities are taken to be at their chiral limit value while here we simply take the values at the actual pion mass, see, e.g., Table . Let us discuss first the contributions to nucleon self-energy and $\gamma NN$ vertex corresponding to the nucleon loops (5–7) in . The corresponding amputated diagrams give contributions to nucleon self-energy (5) and to $\gamma NN$ vertex (6), (7). More specifically, the contributions to the self-energy and the $\gamma NN$ vertex can be written in the following form: $$\begin{aligned} i\Sigma(\cancel{p})&=&i\Sigma(M_N)+i(\cancel{p}-M_N)\Sigma^\prime(M_N)\nonumber\\ & &+S(\cancel{p}-M_N),\\ i\Gamma^\mu (p,p_s)&=&i\gamma^\mu F_1(p_s^2)-i\gamma^{\mu\nu}q_\nu F_2(p_s^2)+i\cancel{q}p^\mu F_3(p_s^2)\nonumber\\ & &+i(\cancel{p}_s-M_N)\gamma^\mu F_4(p_s^2),\end{aligned}$$ where $p$ is the initial nucleon momentum, $q$ is the initial photon momentum, and $p_s=p+q$. The function $S$ is finite and its expansion in powers of $\cancel{p}-M_N$ starts from a quadratic term. Of all the functions $F_1\dots F_4$ that contribute to the $\gamma NN$ vertex only $F_1$ is divergent. It contributes to the renormalization of charge. Function $F_2$ contributes to a renormalization of the nucleon’s anomalous magnetic moment. In this case the renormalization is finite [@Holstein:2005db]. After the renormalization, the self-energy and the $\gamma NN$ vertex can be written in the following form: $$\begin{aligned} i\Sigma_R(\cancel{p})&=&i\Sigma(\cancel{p})-i\Sigma(M_N)-i(\cancel{p}-M_N)\Sigma^\prime(M_N)\nonumber\\ &=&S(\cancel{p}-M_N)\\ i\Gamma_R^\mu(p,p_s)&=&i\gamma^\mu \overline{F}_1(p_s^2)-i\gamma^{\mu\nu}q_\nu \overline{F}_2(p_s^2) +i\cancel{q}p^\mu F_3(p_s^2)\nonumber\\ & &+i(\cancel{p}_s-M_N)\gamma^\mu F_4(p_s^2),\end{aligned}$$ where $\overline{F}_{1,2}(p_s^2)=F_{1,2}(p_s^2)-F_{1,2}(M_N^2)$ are subtracted functions. 
Note that functions $F_3(p_s^2)$ and $F_4(p_s^2)$ do not get subtracted; indeed, the Lorentz structures that correspond to these functions are purely off-shell — they give zero when both nucleons are on-shell (i.e., when both $p$ and $p_s$ are on-shell momenta), so they do not contribute to the renormalization of charge or magnetic moment. However, both these functions play an important role in making the complete Compton scattering amplitude gauge invariant. Note also the fact that after the renormalization of nucleon mass, wave function, charge, and anomalous magnetic moment is performed, the remaining expressions for the nucleon loops (5–7) become finite. Now we come to the loops with $\Delta$, . The corresponding amputated loops also give contributions to nucleon self-energy and to $\gamma NN$ vertex, and the corresponding expressions can be written in a full analogy to the case of nucleon loops. The loop (19) in also gives a contribution to $\gamma NN$, however, this contribution is fully off-shell and momentum-independent: $$\begin{aligned} i\Gamma^\mu (p,p_s)&=&i\cancel{q}p^\mu A+i(\cancel{p}_s-M_N)\gamma^\mu B,\end{aligned}$$ where $A$ and $B$ are constants. Nevertheless, it is important to take this contribution into account in order to preserve the electromagnetic gauge invariance. The renormalization of nucleon self-energy and $\gamma NN$ vertex proceeds for these loops in complete analogy to the purely nucleon loops. In the case of $\Delta$ loops, however, we obtain in addition some higher-order divergences, i.e., ultraviolet divergences of ${\cO}(p^4)$. They are to be renormalized by a corresponding ${\cO}(p^4)$ contact term. At this stage it is customary to use the $\overline{MS}$ subtraction of the higher-order divergences, see e.g., Ref. [@Kubis:2000zd]. We have implemented the $\overline{MS}$ scheme for the higher-order divergences by putting the dimreg factor equal to zero (see Appendix A for more detail). Consistency with forward-scattering sum rules --------------------------------------------- The dispersion relations enjoy a special role in nucleon Compton scattering, see Ref. [@Drechsel:2002ar] for a review. First of all, practically all up-to-date empirical values of nucleon polarizabilities are extracted from data with the use of a model based on dispersion relations [@Lvov:1980wp; @L'vov:1996xd]. Secondly, in the forward kinematics, the Compton amplitude can be related to an integral over energy of the photoabsorption cross section, which in combination with the low-energy expansion yields a number of model-independent sum rules. A famous example is the Baldin sum rule: +=\^\_0 d, where the sum of polarizabilities is related to an integral of the total photoabsorption cross section $\sigma_{\mathrm{tot}}$ over the photon lab-frame energy $\nu$. In general, the forward Compton-scattering amplitude can be decomposed into two scalar functions of a single variable in the following way: T\_[fi]{}()=\^f() +i(\^)g(), where $\vec\epsilon^{\,\prime},\ \vec\epsilon$ are the polarization vectors of the initial and final photons, respectively, and $\vec\sigma$ are the Pauli spin matrices. The functions $f$ and $g$ are even functions of $\nu$. 
Using analyticity and the optical theorem, one can write down the following sum rules: f() &=& f(0)+\^\_0 d\^,\ g() &=& \^\_0 d\^\^ , where $f(0) = -e^2/M_N$ is the Thomson amplitude and $\sigma$ is the doubly polarized photoabsorption cross section, with the index indicating the helicity of the initial photon–nucleon state; $\sigma_{\mathrm{tot}} = \half ( \sigma_{1/2}+\sigma_{3/2})$. These sum rules should also hold for the individual contributions of the loop graphs in . In this case the photoabsorption process is given by the Born graphs of single-pion photoproduction, for which analytic expressions exist [@Holstein:2005db; @Pascalutsa:2004ga]: \_[1/2]{}\^[(\^0 p)]{}+\_[3/2]{}\^[(\^0 p)]{} &=& { \[\^2-\^2 x\_N s\] + 2 },\ \_[1/2]{}\^[(\^+ n)]{} + \_[3/2]{}\^[(\^+ n)]{}&=& { -x\_s \^2 +2 (x\_N\^2 +s \^2)},\ \^[(\^0 p)]{}\_[1/2]{} - \^[(\^0 p)]{}\_[3/2]{} &=& { - (2x\_N +1- ) .\ &&. + 2 },\ \^[(\^+ n)]{}\_[1/2]{}- \^[(\^+ n)]{}\_[3/2]{} &=& { \^2 - 2( x\_- x\_N)} , where $C= [e g_A M_N/(4\pi f_\pi) ]^2$, $\mu=m_\pi/M_N$, $s=M_N^2 +2M_N\nu$, and && x\_N=(s+M\_N\^2-m\_\^2)/2s,\ && x\_=(s-M\_N\^2+m\_\^2)/2s ,\ && = (1/2s) ,are the fractions of nucleon and pion energy ($x$) and momentum ($\la$) in the center-of-mass frame. We have verified indeed that the (renormalized) $p^3$ loop contributions in fulfill the sum rules in exactly for any positive $\nu$. It is interesting to note that the leading-order pion photoproduction amplitude, which enters on the right-hand side of , is independent of whether one uses pseudovector or pseudoscalar $\pi NN$ coupling [@Pascalutsa:2004ga]. It essentially means that chiral symmetry of the effective Lagrangian plays no role at this order. The latter statement can, by means of the sum rule, be extended to the forward Compton amplitude at $\cO(p^3)$. On the other hand, the graphs $(12)$ and $(13)$ in , being the only ones beyond the pseudoscalar theory, take the sole role of chiral symmetry. In the forward kinematics these graphs indeed vanish but play an important role in the backward angles. Without them the values of $\al$ and $\be$ would be entirely different. The value of $\al+\be$ would of course be the same, but $\al -\be$ would (approximately) flip sign. Furthermore, in the chiral limit, the value of $\al -\be$ would diverge as $1/m_\pi^{2}$ (instead of $1/m_\pi$ as it should). We thus arrive at the conclusion that chiral symmetry of the effective Lagrangian plays a more prominent role in backward Compton scattering. Error due to $\cO(p^4)$ effects ------------------------------- In the previous publication [@Lensky:2008re] we simply adopted the error estimate from Ref. [@Pascalutsa:2003zk]. However, in this work we compute to one order higher than in Ref. [@Pascalutsa:2003zk] and hence the error analysis needs to be revised accordingly. An error of an effective-field theory calculation is an estimate of higher-order effects assuming their natural size. The higher-order effects not included in our calculation begin at $\cO(p^4)$, i.e., the order at which the polarizability LECs, $\de\al$ and $\de\be$, arise. The naturalness assumption requires these constants to be of order of unity in the units of the chiral symmetry breaking scale of a GeV. To be more specific we assume the absolute value of these constants (in the $\ol{MS}$ scheme) is limited by \_[(err.)]{}=\_[(err.)]{}=(e\^2/4)/M\_N\^3 0.7 10\^[-4]{}\^3, This number gives a natural estimate of the error on polarizability values we have obtained at NNLO. 
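As a practical illustration of the kinematic building blocks entering these integrands, the fractions $x_N$, $x_\pi$ and $\la$ can be evaluated directly from the photon lab energy. The short Python sketch below is illustrative only: it uses the parameter values of Table 1 and assumes the standard Källén form for the momentum fraction, $\la=\sqrt{[s-(M_N+m_\pi)^2][s-(M_N-m_\pi)^2]}/(2s)$; it also prints the pion-photoproduction threshold $\nu_0=m_\pi+m_\pi^2/(2M_N)\simeq 149$ MeV, below which the photoabsorption cross sections vanish.

```python
import math

M_N, m_pi = 938.3, 139.0   # MeV, values of Table 1

def cm_fractions(nu):
    """Nucleon/pion energy fractions and momentum fraction in the c.m. frame
    for single-pion photoproduction at photon lab energy nu (in MeV)."""
    s = M_N**2 + 2.0 * M_N * nu
    x_N  = (s + M_N**2 - m_pi**2) / (2.0 * s)
    x_pi = (s - M_N**2 + m_pi**2) / (2.0 * s)
    # standard Kallen (triangle-function) form assumed for the momentum fraction
    lam  = math.sqrt((s - (M_N + m_pi)**2) * (s - (M_N - m_pi)**2)) / (2.0 * s)
    return x_N, x_pi, lam

nu_0 = m_pi + m_pi**2 / (2.0 * M_N)      # photoproduction threshold, ~149 MeV
for nu in (nu_0, 200.0, 300.0):
    x_N, x_pi, lam = cm_fractions(nu)
    print(f"nu = {nu:6.1f} MeV: x_N = {x_N:.4f}, x_pi = {x_pi:.4f}, "
          f"x_N + x_pi = {x_N + x_pi:.3f}, lambda = {lam:.4f}")
```

As expected, $x_N+x_\pi=1$ at any energy and $\la$ vanishes at threshold.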
It is not difficult to find how this error propagates to observables, once the effect of the LECs on those observables is known. For example, for the unpolarized differential cross section, the error is given by \[cf. \]: = 8 (e\^2/4) ’ \^[1/2]{} where $z=\cos\th_{\mathrm{lab}}$, $ \nu$ ($\nu'$) the laboratory energy of the incident (scattered) photon, and = { [lc]{} & ,\ ( )\^2 & . . Let us emphasize that we do not include the errors due to the uncertainty in the values of parameters in Table  or due to the $\cO(p^5/\vDe^2)$ effects which stem from graphs with two $\De$ propagators. Our errors are thus underestimated, however, they can directly serve as an indicator of sensitivity to the polarizability LECs at $\cO(p^4)$. Proton polarizabilities ======================= The chiral-loop contribution to scalar polarizabilities of the proton which arise from the NNLO calculation of the Compton amplitude is given in the Appendix A. In addition, we have the tree-level $\De$(1232) contribution from graphs (14) in and its crossed, given by [@Pascalutsa:2003zk]: $$\begin{aligned} \alpha \,(\mbox{$\De$-excit.})&=& -\frac{2e^2g_E^2}{4\pi(M_N+M_\Delta)^3}\simeq - 0.1\,,\\ \beta \,(\mbox{$\De$-excit.}) &=& \frac{2e^2g_M^2}{4\pi(M_N+M_\Delta)^2\vDe}\simeq 7.1\,.\end{aligned}$$ Here and in what follows the numerical values are given in the units of $10^{-4}\,$fm$^3$. The numerical composition of the full result thus looks as follows: $$\begin{aligned} \alpha&=& \underbrace{6.8}_{\cO(p^3)} + \underbrace{(-0.1) + 4.1}_{\cO(p^4/\vDe)} = 10.8\,,\\ \beta&=&\underbrace{-1.8}_{\cO(p^3)} +\underbrace{ 7.1-1.3}_{\cO(p^4/\vDe)} =4.0 \,.\end{aligned}$$ As explained earlier, a natural estimate of $\cO(p^4)$ contributions yields an uncertainty of at least $\pm 0.7$ on these values. In this result, shown by the red blob, is compared with the empirical information, and with the $\De$-less $\cO(p^4)$ HB$\chi$PT result of Beane et al. [@Beane:2004ra]. We can clearly see a few-sigma discrepancy of our result with the TAPS-MAMI determination of polarizabilities [@MAMI01]. On the other hand, as shown in the next section, our result agrees with TAPS data for the Compton differential cross sections. Of course we compare with the data at the lower energy end (below the pion threshold) where polarizabilities play the prominent role. The extraction of the polarizabilities in Ref. [@MAMI01] has also been influenced by data above the $\De$-resonance region to which we cannot compare. Clearly an extraction of scalar polarizabilities based on the data of 400 MeV and higher could be affected by uncontrolled model dependencies and needs to be avoided. Excluding the higher-energy data from the TAPS analysis could help to resolve the apparent discrepancy between theory and experiment in . In we show the pion mass dependence of proton polarizabilities in both B$\chi$PT and HB$\chi$PT. The difference between the two for the magnetic polarizability (lower panel) at $\cO(p^3)$ is stunning (compare the blue dashed and violet dotted curves). The region of applicability of the HB expansion is apparently limited here to essentially the chiral limit, $m_\pi \to 0$. For any finite pion mass, the B$\chi$PT and HB$\chi$PT results come out to be of a similar magnitude but of the opposite sign. A similar picture is observed for the $\pi \De$ loops arising at $\cO(p^4/\vDe)$. 
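The numerical composition given above is easy to reproduce. The following minimal Python check (illustrative only) evaluates the tree-level $\De$-excitation formulas with the parameters of Table 1 and adds the $\cO(p^3)$ and $\pi\De$-loop values quoted in the text:

```python
# parameters of Table 1 (MeV and dimensionless)
alpha_em = 1.0 / 137.0          # e^2/(4 pi)
hbarc    = 197.0                # MeV fm
M_N, M_D = 938.3, 1232.0
g_E, g_M = -1.0, 2.97
Delta    = M_D - M_N            # ~294 MeV

# tree-level Delta(1232)-excitation contributions, converted to units of 1e-4 fm^3
to_units = hbarc**3 * 1e4
alpha_D = -2.0 * alpha_em * g_E**2 / (M_N + M_D)**3 * to_units
beta_D  =  2.0 * alpha_em * g_M**2 / ((M_N + M_D)**2 * Delta) * to_units
print(f"alpha(Delta-excit.) = {alpha_D:+.2f},  beta(Delta-excit.) = {beta_D:+.2f}")
# expected: about -0.1 and +7.1

# assembling the NNLO composition with the loop values quoted in the text
alpha_total = 6.8  + alpha_D + 4.1    # O(p^3) loops + Delta excitation + pi-Delta loops
beta_total  = -1.8 + beta_D  - 1.3
print(f"alpha = {alpha_total:.1f}, beta = {beta_total:.1f}  (units of 1e-4 fm^3)")
```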
In fact, we have checked that in the limit of vanishing $\De$-nucleon mass splitting ($\vDe\to 0$), the considered $\pi N$ and $\pi \De$ loops give (up to the spin–isospin factors) the same result. The total effect of the Delta here is the difference between the $\cO(p^3)$ and the $\cO(p^4/\vDe)$ curves in B$\chi$PT and the difference between the $\cO(p^3)$ and the $\cO(\eps^3)$ curves in HB$\chi$PT. Note that here the $\cO(\eps^3)$ contribution in HB$\chi$PT with $\De$(1232) precisely corresponds to $\cO(p^4/\vDe)$ of B$\chi$PT, thus we do not include the $\cO(p^4)$ LECs in either of the calculations. The actual $\cO(\eps^3)$ calculations [@Hildebrandt:2003fm] are supplemented with the $\cO(p^4)$ LECs, whose main role is then to cancel the large contribution of the $\De$-isobar. Results for cross sections ========================== In this section we present the results for the differential cross sections of proton Compton scattering. Unpolarized ----------- In , we consider the unpolarized differential cross section of the $\gamma p\to\gamma p$ process as a function of the scattering angle in the center-of-mass system, at fixed incident photon energy. In , we study the same cross section, but as a function of the energy for a fixed scattering angle in the lab frame. Our complete NNLO result is shown by the red solid curves, with the band indicating the theory error estimate given in . The agreement between theory and experiment is quite remarkable here, especially given the fact that the theory result is a prediction in the sense that it has no free parameters. Despite this good agreement, as already noted above, there is a few-sigma discrepancy in the polarizability values between this theoretical prediction and the most precise empirical extraction [@MAMI01]. This is apparently because the data at higher energies, used additionally in the empirical extraction, play an important role in the determination of $\beta$. It is always interesting to study the convergence of the chiral expansion. In these figures the leading-order, $\cO(p^2)$, result is shown by the dotted curve, which is nothing else than the Klein–Nishina cross section (i.e., Compton scattering off a classical pointlike particle with the charge and mass of the proton). The NLO, $\cO(p^3)$, result is given by the blue dash-dotted curve. One can see that the size of the effects varies strongly with the scattering angle. At energies below the pion-production threshold, the NLO effects are tiny at backward angles but play a crucial role at forward angles. The situation is quite the opposite for the NNLO $\De$-isobar contributions. Nevertheless, the convergence of this expansion seems to be satisfactory and in any case is much better than it would be in analogous HB$\chi$PT calculations. For completeness, the result for the Born contribution, given by the Powell cross section together with the WZW anomaly contribution (graphs in ), is shown here by the green dashed curves. Any deviation from these curves at low energies is attributed to polarizability effects. More specifically, - \^[()]{} = - 8 (e\^2/4) ’ + (\^3), where $\Phi$ is defined in and $z$ is the cosine of the lab-frame scattering angle. The difference between the dashed (Born) curves and the dotted (Klein–Nishina) curves arises mainly due to the proton’s anomalous magnetic moment. The difference is substantial but at backward angles can be seen to cancel almost entirely against the chiral loop contribution at $\cO(p^3)$, to obtain the blue curves. 
Thus, at $\cO(p^3)$, there is an intricate cancellation between the anomalous magnetic moment and the chiral loop effects. It would be interesting to see if this cancellation persists at higher orders. Polarized --------- In we show the results for Compton-scattering differential cross sections obtained with linearly polarized beam. The subscript $x$ ($y$) indicates that the beam polarization is parallel (perpendicular) to the scattering plane, $\th$ is the scattering angle in the lab frame and $d\Omega = -2\pi \sin\th \,d\th $. The two particular combinations of polarized cross sections, seen in the upper and the lower panels, are chosen such that each of them is sensitive to only one of the polarizabilities. They are therefore selected for the forthcoming measurement of proton polarizabilities at the HIGS facility [@Weller:2009zz]. The HIGS measurements are planned to be taken at 110 MeV photon lab energy, where a low-energy expansion (LEX) is assumed to hold. Indeed a study of the unpolarized differential cross section indicates that LEX can be trusted to energies of about 100 MeV [@MacG95]. Here we make a similar study for the polarized cross sections. The LEX states that at the second order the deviation of the polarized cross sections from the corresponding Born result (dashed lines in the figure) is simply given in terms of the polarizabilities: - \^[()]{} & = & - 8 (e\^2/4) ’ (+ )+ (\^3)\ - \^[()]{} & = & - 8 (e\^2/4) ’ (+ ) + (\^3), where $\Phi$ is defined in . A derivation of these expressions is given in Appendix B. From one can indeed see that, at this order in LEX, the difference of the polarized cross sections, $d\si_{y}/d\Omega - d\si_{x}/d\Omega$, is proportional to $\alpha$, while the combination $\cos^2\th\, d\si_{y}/d\Omega - d\si_{x}/d\Omega $ is proportional to $\beta$. One should realize, though, that this is only an approximate result which breaks down at sufficiently high energies. The attempts to address this issue in a quantitative way by comparing the second-order LEX (long-dashed curves) with the result of NNLO B$\chi$PT (red solid curves with the error band). The LEX and B$\chi$PT results have exactly the same values for the polarizabilities $\al$ and $\be$, but the validity of B$\chi$PT extends over the whole considered energy range. We conclude that a determination at 110 MeV based on a second-order LEX can be reliable for $\alpha$, see the upper panels. The situation is not as fortunate for the observable aimed at the determination of $\beta$, see the lower panels. The LEX result begins to fail here at lower energies, at least in the backward angles where Compton experiments are usually simpler. Conclusion ========== We have completed a next-to-next-to-leading order (NNLO) calculation of low-energy Compton scattering on the proton within the $\chi$PT framework. More specifically, we have computed all the effects of order $p^2$, $p^3$, and $p^4/\vDe$, with $\vDe$ being the excitation energy of the $\De(1232)$ resonance. These are all [*predictive*]{} powers in the sense that no unknown low-energy constants (LECs) enter until at least one order higher \[i.e., $\cO(p^4)$\]. This fact together with the availability of precise data for Compton scattering has given us a unique opportunity to put $\chi$PT to a test. 
We have found that, assuming a natural size of the $\cO(p^4)$ LECs, the theoretical uncertainty of the NNLO calculation is comparable with the uncertainty of present empirical information about the cross sections of proton Compton scattering and the corresponding values for isoscalar polarizabilities of the proton. Within these uncertainties the NNLO result agrees with the cross sections data below the pion threshold but shows a three-sigma discrepancy in the value for the magnetic polarizability. We note that the state-of-the-art empirical value for the polarizabilities was extracted by using not just the low-energy data but also data above the $\De$-resonance region. The planned experiments at HIGS could be very helpful in sorting out this issue, since they plan to use precision low-energy data only. In this case, however, the reliance on the strict second-order low-energy expansion might be a problem, as our calculation has shown. The $\chi$PT framework itself could provide a more reliable energy interpolation needed for the extraction of polarizabilities. In this work we have insisted on the fact that [*chiral power counting*]{} should be done for [*graphs*]{}, not [*contributions*]{}. It does not put any constraint on how many powers of pion mass or energy may appear in the result. It puts the constraint on the leading power only. The heavy-baryon expansion is therefore not mandatory for correct power-counting. What is important is that no powers lower than given by power-counting are present in the result. The manifestly covariant baryon $\chi$PT (B$\chi$PT) conforms to this requirement, because even if the lower-order terms appear in calculation of a given graph, they are shown to contribute only to a renormalization of the LECs. In our example, the low-energy theorem and chiral symmetry ensured that all such troublesome terms contributed only to the renormalization of nucleon mass, charge, and the anomalous magnetic moment. Acknowledgments {#acknowledgments .unnumbered} =============== We would like to thank Daniel Phillips and Marc Vanderhaeghen for a number of illuminating discussions, and Martin Schumacher for a helpful communication. V. L. is grateful to the Institut f[ü]{}r Kernphysik at Johannes Gutenberg Universit[ä]{}t Mainz for kind hospitality. Chiral loop contributions to polarizabilities ============================================= Hereby we give the expressions for the loop contributions of $\cO(p^3)$ and $\cO(p^4/\vDe)$ to isoscalar proton polarizabilities $\alpha$ and $\beta$, as well as the corresponding heavy-baryon results. Nucleon Loops ------------- Our results for the loop contributions at $\cO(p^3)$ agree with [@Bernard:1991rq]: $$\begin{aligned} \alpha&=&\frac{e^2g_A^2}{192\pi^3M_Nf^2}\Bigg\{ -1 + \int\limits^1_0\! \frac{dx}{[D_N(x)]^3} \bigg[ 2x^4(-3x^3\!+\!8x^2\!-\!9x\!+\!5)\\ &&+\, x^2(9x^4\!-\!26x^3\!+\!29x^2\!-\!18x+7)\mu^2-(9x^5\!-\!33x^4\!+\!45x^3\!-\!27x^2\!+\!7x\!-\!1)\mu^4 \bigg] \Bigg\},\nn \\ \beta &=&\frac{e^2g_A^2}{192\pi^3 M_Nf^2} \\ & \times & \Bigg\{ 1 - \int\limits^1_0\! \frac{dx}{[D_N(x)]^2} \bigg[ 2x^2(6x^3\!-\!13x^2\!+\!9x\!-\!1) +(9x^4\!-\!24x^3\!+\!21x^2\!-\!6x\!+\!1)\mu^2 \bigg] \Bigg\}, \nn\end{aligned}$$ where $\mu=m_\pi/M_N$, and $D_N(x)=\mu^2(1-x)+x^2$. The corresponding heavy-baryon result is obtained from these expressions by expanding in $\mu$ and keeping the leading term only: \^[(HB)]{}&=& ,\ \^[(HB)]{}&=& . 
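These parametric integrals are well suited for direct numerical evaluation. A minimal Python sketch is given below (illustrative only; it uses the physical constants of Table 1 in place of the chiral-limit ones, which is legitimate at this order). It should reproduce the $\cO(p^3)$ values $\al\simeq 6.8$ and $\be\simeq -1.8$ (in units of $10^{-4}\,$fm$^3$) quoted in the main text, and compares them with the familiar leading heavy-baryon expressions $\al^{(HB)}=5e^2g_A^2/(384\pi^2 f_\pi^2 m_\pi)$ and $\be^{(HB)}=e^2g_A^2/(768\pi^2 f_\pi^2 m_\pi)$.

```python
import numpy as np
from scipy.integrate import quad

# physical parameters of Table 1, in MeV
alpha_em, hbarc = 1.0/137.0, 197.0
M_N, m_pi, f_pi, g_A = 938.3, 139.0, 92.4, 1.267
mu   = m_pi / M_N
e2   = 4.0 * np.pi * alpha_em
pref = e2 * g_A**2 / (192.0 * np.pi**3 * M_N * f_pi**2)   # common prefactor, MeV^-3
to_units = hbarc**3 * 1e4                                  # conversion to 1e-4 fm^3

D = lambda x: mu**2 * (1.0 - x) + x**2

def alpha_integrand(x):
    return (2*x**4*(-3*x**3 + 8*x**2 - 9*x + 5)
            + x**2*(9*x**4 - 26*x**3 + 29*x**2 - 18*x + 7)*mu**2
            - (9*x**5 - 33*x**4 + 45*x**3 - 27*x**2 + 7*x - 1)*mu**4) / D(x)**3

def beta_integrand(x):
    return (2*x**2*(6*x**3 - 13*x**2 + 9*x - 1)
            + (9*x**4 - 24*x**3 + 21*x**2 - 6*x + 1)*mu**2) / D(x)**2

alpha_p3 = pref * (-1.0 + quad(alpha_integrand, 0.0, 1.0)[0]) * to_units
beta_p3  = pref * ( 1.0 - quad(beta_integrand,  0.0, 1.0)[0]) * to_units
print(f"O(p^3) loops:  alpha = {alpha_p3:+.2f},  beta = {beta_p3:+.2f}  (1e-4 fm^3)")

# leading heavy-baryon limits, for comparison
alpha_HB = 5.0*e2*g_A**2 / (384.0*np.pi**2*f_pi**2*m_pi) * to_units
beta_HB  =     e2*g_A**2 / (768.0*np.pi**2*f_pi**2*m_pi) * to_units
print(f"HB limits:     alpha = {alpha_HB:+.2f},  beta = {beta_HB:+.2f}")
```

The comparison makes explicit the point stressed in the main text: for the physical pion mass the covariant and heavy-baryon results for $\be$ even come out with opposite signs.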
Delta Loops ----------- The $\cO(p^4/\vDe)$ loops of give the following contribution to polarizabilities: $$\begin{aligned} \alpha&=&\frac{e^2h_A^2M_N}{3456\pi^3 M_\Delta^2f^2} \Bigg\{ \frac{25}{2} + 8\de - 3 \int\limits^1_0 \frac{dx\, {x}^2 }{[D_\Delta(x)]^2}\big[ (1-x) \big(35\!-\!104 x +17 {x^2}\!+\!112 {x^3}\!-\!60 {x^4}\nonumber\\ & & +\, \big(105\!-\!273 x\!+\!72 {x^2}\!+\!92 {x^3}\big) \delta +\big(105\!-\!269 x\!+\!88 {x^2}\!+\!72 {x^3}\big){{\delta }^2} +\big(35\!-\!100 x\!+\!64 {x^2}\big) {{\delta }^3}\big) \nonumber \\ && - \, x \big(35\!-\!69 x\!-\!40 {x^2}\!+\!72 {x^3}\!+\!\big(35\!-\!100 x\!+\!64 {x^2}\big) \delta \big) {{\mu }^2}\big] \nonumber\\ & &-\, 6 \int\limits^1_0 dx\, x \big(12+9 x-34 {x^2}+3 (4-5 x) \delta \big) \big[\Xi -\log D_\Delta(x)\big] \Bigg\},\\ \beta &=&\frac{e^2h_A^2M_N}{3456\pi^3 M_\Delta^2f^2} \Bigg\{ \frac{65}{6} - 8\de + \int\limits^1_0 \frac{dx\, {x}^2 }{D_\Delta(x)} (9-32 x+24 {x^2}) (1+x+\delta )\nonumber\\ & &+\,6 \int\limits^1_0 dx\, x \big(12+7 x+10 {x^2}+3 (-4+5 x) \delta \big) \big[\Xi -\log D_\Delta(x)\big] \Bigg\},\end{aligned}$$ where $\mu=m_\pi/M_N$, $\delta=\vDe/M_N$, and $D_\Delta(x)=(1-x)[(1+\de)^2-x]+x\mu^2$. Furthermore, in these expressions we have $\Xi=2/(4-d)-\gamma_E+ \log (4\pi\La/M_N)$ the divergence in $d$ dimensions, with $\La$ the dimreg scale. Thus, the $\De$ loops contain an ultraviolet divergence which is to be renormalized by $\cO(p^4)$ LECs. We choose to define the values for these LECs in the $\ol{MS}$ scheme, and hence put $\Xi=0$. Expanding these results in small $\mu$ and $\delta$ to leading order, we reproduce the heavy-baryon result for the $\pi\De$-loop contributions [@Hemmert:1996rw]: $$\begin{aligned} \alpha^{(HB)}&=&\frac{e^2h_A^2}{864\pi^3f^2 \vDe}\left(9+\log\frac{2\vDe}{m_\pi}\right),\\ \beta^{(HB)}&=&\frac{e^2h_A^2}{864\pi^3f^2 \vDe} \log\frac{2\vDe}{m_\pi}\,.\end{aligned}$$ Low-energy expansion for cross sections ======================================= The differential cross section is given in terms of the Compton amplitude by = \_[’’]{} | T\_[’’, ]{}|\^2, where $\la$ and $\si$ are the target and the photon’s helicities. The sum is over the final helicities, the initial ones are fixed. To find the low-energy expansion (LEX) of this quantity at the second order in energy, we can ignore the [*spin-dependent*]{} contribution and write the Compton amplitude as follows: T\_[’’, ]{} = ( - A\_1(s,t) \_[’]{} ’\_+ A\_2(s,t) q\_[’]{}’ q’\_) 2M\_N\_[’]{}, where $A_i$ are scalar amplitudes dependent on the Mandelstam variables only, $q$ ($q'$) is the initial (final) photon 4-momentum, and \^& = & \^- q\^,\ [’]{}\^ & = & [’]{}\^ - \^ with $\veps$ the vectors of photon polarization, and $P=p+p'$ the sum of the nucleon external momenta. Since \_\^\_\_\^ = -g\^ , \_\_\^=-1, we have \_\^\_\_\^ = -g\^ + - with $ P\cdot q = \half(s-M_N^2 - u+M_N^2)=M_N (\nu+\nu') $. We therefore obtain \_[’’]{} | T\_[’’, ]{}|\^2 & = & (2M\_N)\^2 (-A\_1 \_ + A\_2 q\_q’\_) (-A\_1 \_\^+ A\_2 q\_q’\_\^)\ & & ( -g\^ + - ). We next take the Born contribution out of $A_1$: \_1 = A\_1 + e\^2/M\_N and use the LEX: \_1 &=& 4(+ z) ’ + (\^3),\ A\_2 &=& -4 + (), where $z$ is the cosine of the lab-frame scattering angle, $\al$ and $\be$ are respectively the electric and the magnetic polarizability. At the second order in $\nu$ for the non-Born (NB) contribution we thus have \_[’’]{} | T\^[(NB)]{}\_[’’, ]{}|\^2 & =& -8 M\_N (4e\^2) ’\ & =& -8 M\_N (4e\^2) ’ +(\^3), where $\hat{q}\,' = (1,\sqrt{1-z^2}, 0 , z)$. 
Substituting in and selecting the appropriate photon polarization we arrive at . [99]{} D. Drechsel, B. Pasquini and M. Vanderhaeghen, Phys. Rept.  [**378**]{}, 99 (2003). M. Schumacher, Prog. Part. Nucl. Phys.  [**55**]{}, 567 (2005). D. R. Phillips, arXiv:0903.4439 \[nucl-th\]. J. Schmiedmayer, P. Riehs, J. A. Harvey and N. W. Hill, Phys. Rev. Lett.  [**66**]{}, 1015 (1991). F. J. Federspiel [*et al.*]{}, Phys. Rev. Lett.  [**67**]{}, 1511 (1991). A. Zieger, R. Van de Vyver, D. Christmann, A. De Graeve, C. Van den Abeele and B. Ziegler, Phys. Lett.  B [**278**]{}, 34 (1992). E. L. Hallin [*et al.*]{}, Phys. Rev. C [**48**]{}, 1497 (1993). B. E. MacGibbon, G. Garino, M. A. Lucas, A.M. Nathan, G. Feldman and B. Dolbilkin, Phys. Rev. C [**52**]{}, 2097 (1995). V. Olmos de Leon [*et al.*]{}, Eur. Phys. J. A [**10**]{}, 207 (2001). A. M. Baldin, Nucl. Phys.  [**18**]{}, 310 (1960). B. R. Holstein, Comments Nucl. Part. Phys.  [**20**]{}, 301 (1992). A. C. Hearn and E. Leader, Phys. Rev.  [**126**]{}, 789 (1962). W. Pfeil, H. Rollnik and S. Stankowski, Nucl. Phys.  B [**73**]{}, 166 (1974). I. Guiasu, C. Pomponiu and E. E. Radescu, Annals Phys.  [**114**]{}, 296 (1978). A. I. L’vov, Sov. J. Nucl. Phys.  [**34**]{}, 597 (1981) \[Yad. Fiz.  [**34**]{}, 1075 (1981)\]. A. I. L’vov, V. A. Petrun’kin and M. Schumacher, Phys. Rev.  C [**55**]{}, 359 (1997). D. Drechsel, M. Gorchtein, B. Pasquini and M. Vanderhaeghen, Phys. Rev.  C [**61**]{}, 015204 (1999). B. Pasquini, D. Drechsel and M. Vanderhaeghen, Phys. Rev.  C [**76**]{}, 015203 (2007). V. Pascalutsa and O. Scholten, Nucl. Phys.  A [**591**]{}, 658 (1995). O. Scholten, A. Y. Korchin, V. Pascalutsa and D. Van Neck, Phys. Lett.  B [**384**]{}, 13 (1996). T. Feuster and U. Mosel, Phys. Rev.  C [**59**]{}, 460 (1999). S. Kondratyuk and O. Scholten, Nucl. Phys.  A [**677**]{}, 396 (2000); Phys. Rev.  C [**64**]{}, 024005 (2001). S. Capstick and B. D. Keister, Phys. Rev.  D [**46**]{}, 84 (1992) \[Erratum-ibid.  D [**46**]{}, 4104 (1992)\]. M. Chemtob, Nucl. Phys.  A [**473**]{}, 613 (1987). N. N. Scoccola and W. Weise, Phys. Lett.  B [**232**]{}, 287 (1989). S. Scherer and P. J. Mulders, Nucl. Phys.  A [**549**]{}, 521 (1992). W. Broniowski and T. D. Cohen, Phys. Rev.  D [**47**]{}, 299 (1993). N. N. Scoccola and T. D. Cohen, Nucl. Phys.  A [**596**]{}, 599 (1996). F. X. Lee, L. Zhou, W. Wilcox and J. C. Christensen, Phys. Rev.  D [**73**]{}, 034503 (2006); A. Alexandru and F. X. Lee, in Proc. of Science (Lattice 2008), arXiv:0810.2833 \[hep-lat\]. W. Detmold, B. C. Tiburzi and A. Walker-Loud, Phys. Rev.  D [**73**]{}, 114505 (2006); in Proc. of Science (Lattice 2008), arXiv:0809.0721 \[hep-lat\]. W. Detmold, B. C. Tiburzi and A. Walker-Loud, arXiv:0904.1586 \[hep-lat\]. H. Pagels, Phys. Rept.  [**16**]{}, 219 (1975). S. Weinberg, Physica A [**96**]{}, 327 (1979). J. Gasser and H. Leutwyler, Annals Phys.  [**158**]{} (1984) 142. J. Gasser, M. E. Sainio and A. Svarc, Nucl. Phys. B [**307**]{}, 779 (1988). V. Bernard, N. Kaiser and U.-G. Mei[ß]{}ner, Phys. Rev. Lett.  [**67**]{}, 1515 (1991); Nucl. Phys.  B [**373**]{}, 346 (1992). W. M. Yao [*et al.*]{} \[Particle Data Group\], J. Phys. G [**33**]{}, 1 (2006). E. Jenkins and A. V. Manohar, Phys. Lett. B [**255**]{}, 558 (1991). J. Gegelia and G. Japaridze, Phys. Rev. D [**60**]{}, 114038 (1999); J. Gegelia, G. Japaridze and X. Q. Wang, J. Phys. G [**29**]{}, 2303 (2003), \[arXiv:hep-ph/9910260\]. T. Fuchs, J. Gegelia, G. Japaridze and S. Scherer, Phys. Rev.  D [**68**]{}, 056005 (2003). J. A. 
McGovern, Phys. Rev. C [**63**]{}, 064608 (2001) \[Erratum-ibid. C [**66**]{}, 039902 (2002)\]. S. R. Beane, M. Malheiro, J. A. McGovern, D. R. Phillips and U. van Kolck, Phys. Lett.  B [**567**]{}, 200 (2003) \[Erratum-ibid.  B [**607**]{}, 320 (2005)\]; Nucl. Phys. A [**747**]{}, 311 (2005). V. Pascalutsa and D. R. Phillips, Phys. Rev.  C [**67**]{}, 055202 (2003). R. P. Hildebrandt, H. W. Griesshammer, T. R. Hemmert and B. Pasquini, Eur. Phys. J.  A [**20**]{}, 293 (2004). T. R. Hemmert, B. R. Holstein and J. Kambor, Phys. Rev.  D [**55**]{}, 5598 (1997). M. Schumacher, Eur. Phys. J.  A [**31**]{}, 327 (2007). A. I. L’vov, Phys. Lett. B [**304**]{}, 29 (1993). B. R. Holstein, V. Pascalutsa and M. Vanderhaeghen, Phys. Rev. D [**72**]{}, 094014 (2005). V. Pascalutsa, Prog. Part. Nucl. Phys.  [**55**]{}, 23 (2005). T. A. Gail and T. R. Hemmert, Eur. Phys. J.  A [**28**]{}, 91 (2006). V. Pascalutsa and M. Vanderhaeghen, Phys. Rev. Lett.  [**95**]{}, 232001 (2005); Phys. Rev. D [**73**]{}, 034003 (2006). L. S. Geng, J. Martin Camalich, L. Alvarez-Ruso and M. J. V. Vacas, Phys. Rev. Lett.  [**101**]{}, 222002 (2008). J. M. Camalich, L. Alvarez-Ruso, L. S. Geng and M. J. V. Vacas, arXiv:0904.4894 \[hep-ph\]. V. Lensky and V. Pascalutsa, JETP Lett.  [**89**]{}, 108 (2009), \[arXiv:0803.4115 \[nucl-th\]\]. S. Weinberg, “The Quantum theory of fields. Vol. 2” (Cambridge U. P., 1996). V. Pascalutsa, M. Vanderhaeghen and S. N. Yang, Phys. Rept.  [**437**]{}, 125 (2007). K. Johnson and E. C. Sudarshan, Annals Phys.  [**13**]{}, 126 (1961). G. Velo and D. Zwanziger, Phys. Rev.  [**186**]{}, 1337 (1969). F. Piccinini, G. Venturi and R. Zucchini, Lett. Nuovo Cim.  [**41**]{}, 536 (1984). V. Pascalutsa, Phys. Rev.  D [**58**]{}, 096002 (1998). V. Pascalutsa and R. G. E. Timmermans, Phys. Rev.  C [**60**]{}, 042201 (1999). V. Pascalutsa, Phys. Lett.  B [**503**]{}, 85 (2001). H. Krebs, E. Epelbaum and U. G. Mei[ß]{}ner, arXiv:0812.0132 \[hep-th\]; arXiv:0905.2744 \[hep-th\]. A. Metz and D. Drechsel, Z. Phys.  A [**356**]{}, 351 (1996). J. Vermaseren, “Symbolic Manipulation with FORM,” (Computer Algebra Nederland, Amsterdam, 1991). T. Hahn and M. Rauch, Nucl. Phys. Proc. Suppl.  [**157**]{}, 236 (2006). F. E. Low, Phys. Rev.  [**96**]{}, 1428 (1954); M. Gell-Mann and M. L. Goldberger, Phys. Rev.  [**96**]{}, 1433 (1954). B. Kubis and U. G. Meissner, Nucl. Phys.  A [**679**]{}, 698 (2001). V. Pascalutsa, B. R. Holstein and M. Vanderhaeghen, Phys. Lett. B [**600**]{}, 239 (2004). D. Babusci, G. Giordano and G. Matone, Phys. Rev. C [**57**]{}, 291 (1998). H. R. Weller [*et al.*]{}, Prog. Part. Nucl. Phys.  [**62**]{}, 257 (2009); M. W. Ahmed, talk at INT Workshop “Soft Photons and Light Nuclei,” Seattle, June 16 - 20, 2008. [^1]: In our conventions $\ga_5^\dagger= \gamma_5$, hence $\xi^\dagger=\exp(-ig\pi^a \tau^a\ga_5/2f_\pi)$, $\xi \xi^\dagger =1$. Note also that $\ol N\to \ol N \xi$, and $\xi\ga^\mu\xi =\ga^\mu$.
--- abstract: 'We formulate a generalized scattering field theory *à la* Büttiker describing particle transport in magnetic/superconducting heterostructures. The proposed formalism, characterized by a four-component spinorial wavefunction of the Bogoliubov-de Gennes theory, allows one to describe the spin flipping phenomena induced by noncollinear magnetizations in the scattering region. As a specific application of the theory, we analyze the conductance, the magnetoresistance and the generation of spin-torque produced by an applied voltage in a spin-valve system. Quantum size effects and quantum beating patterns, both in the conductance and in the spin-torque, are carefully described.' author: - 'F. Romeo$^{1,2}$ and R. Citro$^{1,2}$' title: 'Scattering theory of magnetic/superconducting junctions with spin active interfaces' --- Introduction ============ Nanoscale structures involving Normal (N), Ferromagnet (F) and Superconductor (S) junctions, the so-called heterostructures, involve the interplay of superconducting and ferromagnetic order parameters, providing a novel opportunity to study the influence of the spin degree of freedom on the transport and thermodynamic properties of such systems. A paradigmatic example is represented by the normal-metal (N)/superconductor (S) bilayer. In this system the sub-gap transmission of an electron propagating from the N-side towards the S-side is forbidden due to the absence of available electronic states within the superconducting gap. Thus, in order to conserve the charge current, a propagating Cooper pair is transmitted into the superconductor, while a hole is reflected back into the normal metal. This anomalous reflection, first described in the pioneering work of Andreev[@andreev_original], was subsequently recast by Blonder *et al.*[@btk82] in the language of scattering theory (the so-called BTK theory). Differently from methods based on the transfer Hamiltonian formalism (THF), the BTK scattering approach does not assume a weak-coupling approximation and allows one to study transport in NS heterostructures from the metallic regime (high transparency of the interface) up to the tunneling limit (low transparency). In the tunneling limit, the agreement between the BTK theory and the Green’s function approach shows that, in the absence of many-body correlations, a scattering theory is extremely suitable for studying transport in heterostructures while avoiding time-consuming methods. Almost ten years after the BTK formulation, an extension to F/S interfaces, based on a BTK-like theory able to describe the Andreev-reflection physics within the scattering approach, appeared[@deJong_Beenakker95]. A further improvement of the original BTK formalism was subsequently brought about by Anantram *et al.*[@anantram96], who reformulated the BTK theory in terms of the scattering field theory[@buttiker92] (BSFT) originally conceived by Büttiker[@buttiker92] for mesoscopic (normal) systems. The formalism provides a clear correspondence between physical observables and the scattering matrix of the system within a second-quantization formalism, avoiding the by-hand construction characterizing some parts of the BTK theory[@lambert91]. The Anantram and Datta work[@anantram96] is based on a two-component spinorial Bogoliubov-de Gennes[@degennes_book] theory (as in the original BTK theory), which is a convenient representation in the analysis of spin-conserving processes. 
However, recently, the need to explore the interplay between superconductivity and magnetism stimulated the realization of magnetic superconducting heterostructures. In the simplest case of a ferromagnetic F/S bilayer[@cuoco], the bulk magnetization of the F-side can differ from the one at the interface and spin flipping phenomena can take place at the F/S junction. In order to fully describe the physics of the magnetic superconducting heterostructures one needs to generalize the BdG formalism to a four-component spinorial representation. To the best of our knowledge, a scattering field theory *a la* Büttiker for magnetic superconducting heterostructures which properly treat spin active phenomena at the interface is not yet available and our work aims to fill such vacancy. In particular, the second quantized form of the BTK theory offers important advantages in obtaining non-local quantum properties (i.e. correlation functions) which are not provided by the BTK theory. Following the work of Anantram *et al.*[@anantram96], the construction of a spinful theory is important to correctly describe interplay phenomena between superconductivity and magnetism. For the presentation we follow the formalism of Refs.\[,\] and focus our attention on quasi-one-dimensional systems, while the generalization to the 2-dimensional case is left for a forthcoming work. Within our generalized scattering formalism we derive the expression for the charge and spin currents and the respective linear response to an external bias, i.e. the conductance and the spin-torque. The second part of the paper is devoted to the study of the spin polarized transport in a spin-valve system by the developed formalism. In particular, we focus on the evidence of the Andreev reflection in the subgap transport at varying the magnetic interaction and on the quantum size effects due to the interlayer width to make a link to experimental observations. Among the transport properties, we do analyze the conductance, the magnetoresistance and finally the spin torque as a probe of spin-polarized transport. The organization of the paper is the following: In Sec.\[sec:model\] we introduce the Bogoliubov-de Gennes Hamiltonian and present the scattering field theory generalized to include spin flipping phenomena. We then derive the expression of the charge and spin current and of the respective differential conductance in the presence of an external bias. In Sec.\[sec:results\] we present the results of the linear response observables: the conductance, the magnetoresistance and the torkance for the structure shown in Fig.\[fig:fig2\]. Conclusions and perspectives are given in Sec.IV. The model and formalism {#sec:model} ======================== ![Magnetic superconducting heterostructure as described in the main text. The sequence of materials $[\alpha_1|\alpha_2|...|\alpha_n]$ can be $\alpha_n \in\{F, S, NM\}$.[]{data-label="fig:fig1"}](fig1.eps) We consider a one-dimensional magnetic superconducting heterostructure connected to normal nonmagnetic leads (see Fig.\[fig:fig1\]). The scattering region is made by magnetic and superconducting regions $\alpha_i$ arbitrarily disposed along the transport direction. 
The system is conveniently described by a 4-component Bogoliubov-de Gennes (BdG) formalism in which the quantum state of the system is described by the wave-function $|\Psi(x,t)\rangle=\sum_{\beta,\sigma}\phi_{\beta \sigma}(x,t)|\beta\rangle\otimes|\sigma\rangle$, where $\beta \in\{e,h\}$ is the particle index and $\sigma \in \{+,-\}$ represents the spin orientation along the quantization axis. Introducing the standard notation of the BdG theory we set $\phi_{e\sigma}(x,t) \rightarrow u_{\sigma}(x,t)$ and $\phi_{h\sigma}(x,t) \rightarrow v_{\sigma}(x,t)$ thus the state vector can be put in the form $|\Psi(x,t)\rangle=(u_{\uparrow}(x,t),u_{\downarrow}(x,t),v_{\uparrow}(x,t),v_{\downarrow}(x,t))^t$.\ Using a tensor product notation (see APPENDIX \[app:tensor-product\]), the quasiparticle Hamiltonian can be represented as[@nota1] $$\begin{aligned} \label{eq:bdg_ham} \hat{H} &=& P_{ee} \otimes (\hat{H}_e+ \hat{U}(x))+P_{hh} \otimes [-(\hat{H}_e+ \hat{U}(x))^{\ast}]\nonumber \\ &+&P_{eh} \otimes \hat{\Delta}(x)+P_{he} \otimes \hat{\Delta}(x)^{\dagger},\end{aligned}$$ where $P_{\alpha\beta}=|\alpha\rangle\langle\beta|$ ($\alpha,\beta=e,h$) are particle and/or hole *projectors* with $|e\rangle=(1,0)^t$ and $|h\rangle=(0,1)^t$; $\hat{H}_e$, $\hat{U}(x)$, $\hat{\Delta}(x)$ are operators written in the spin-space basis $|\sigma=\pm\rangle$, $|+\rangle=(1,0)^t$ and $|-\rangle=(0,1)^t$ ($t$ stands for the transposed vector). Specifically, $\hat{H}_e$ is kinetic energy operator defined by $$\hat{H}_e=\Bigl[-\frac{\hbar^{2} \partial^{2}_x}{2m}-E_F \Bigl]\hat{\mathbb{I}}_{sp},$$ being $E_F$ the Fermi energy and $\hat{\mathbb{I}}_{sp}$ the identity operator in the spin space. The potential energy $\hat{U}(x)$ may include spin dependent Zeeman or spin-orbit coupling terms (e.g. $\hat{U}(x)=\vec{h}(x)\cdot \hat{\vec{\sigma}}$) and finally for an s-wave superconductor, $\hat{\Delta}(x)=i\hat{\sigma_y}\Delta(x)$ where $\sigma_y$ is a Pauli matrix. In the following we characterize the superconducting regions by a constant order parameter $\Delta$ (i.e. we assume a step-like behavior of the gap at the N/S interfaces[@bozovic]). Within our tensor product representation, the charge and spin current operator in first quantization can be written in a compact form as: $$\hat{J}_\mu=\frac{i\hbar q_{\mu}}{2m}\Bigl(\overleftarrow{\partial}_x-\overrightarrow{\partial}_x \Bigl)\sum_{\beta}\eta_\beta P_{\beta\beta}\otimes\sigma_{\mu}^{\beta},$$ where the index $\mu=0,1,\ldots,3$, so that $\hat{J}_0$ is the charge current operator, while $\hat{J}_1$, $\hat{J}_2$ and $\hat{J}_3$ represent the three space-components of the spin current operator. In the above notation we introduced $q_{0}=q_e$, $q_1=q_2=q_3=\hbar/2$, $\eta_e=-\eta_h=1$ and $(\sigma^h_{\mu})^\ast=\sigma^e_{\mu}=\sigma_{\mu}$, $\sigma^{\beta}_0=\eta_{\beta}\mathbb{I}_{sp}$, being $q_e=-|e|$ the electron charge and $\sigma_{\mu}$ the $\mu$-th Pauli matrix.\ The above expression for the currents derives from the conservation laws of the charge $\hat{Q}$ and spin $\hat{S}_{\mu}$ densities, $$\begin{aligned} \hat{Q}&=&q_e\sum_\beta \eta_{\beta} P_{\beta\beta}\otimes \hat{\mathbb{I}}_{sp}\\\nonumber \hat{S}_{\mu}&=&(\hbar/2)\sum_\beta P_{\beta\beta}\otimes \hat{\sigma^{\beta}}_{\mu}.\end{aligned}$$ In particular, the charge and spin densities conservation law can be put in the form of a continuity equation with source/sink terms. 
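The tensor-product structure of Eq.(\[eq:bdg\_ham\]) translates directly into a numerical construction of the $4\times4$ BdG matrix. As a minimal illustration (not the actual calculation of the following sections), the sketch below assembles the matrix for a single uniform region with kinetic energy $\xi$ measured from $E_F$, an s-wave gap $\Delta$ and, as an assumed example of the potential $\hat{U}$, a Zeeman field $h$ along the quantization axis; its eigenvalues reproduce the Zeeman-split BCS branches $\pm\sqrt{\xi^2+\Delta^2}\pm h$.

```python
import numpy as np

# Pauli matrices and particle/hole projectors
s0 = np.eye(2)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])
P_ee = np.diag([1.0, 0.0]); P_hh = np.diag([0.0, 1.0])
P_eh = np.array([[0.0, 1.0], [0.0, 0.0]]); P_he = P_eh.T

def bdg_matrix(xi, Delta, h):
    """4x4 BdG matrix of Eq. (eq:bdg_ham) for a uniform region: kinetic energy xi
    (measured from E_F), s-wave gap Delta, illustrative Zeeman field h along z."""
    He_U = xi * s0 + h * sz              # H_e + U(x) in spin space
    gap  = 1j * sy * Delta               # s-wave pairing operator i*sigma_y*Delta
    return (np.kron(P_ee, He_U) + np.kron(P_hh, -He_U.conj())
            + np.kron(P_eh, gap) + np.kron(P_he, gap.conj().T))

xi, Delta, h = 0.3, 1.0, 0.2             # arbitrary illustrative units
ev = np.sort(np.linalg.eigvalsh(bdg_matrix(xi, Delta, h)))
expected = np.sort([s*np.sqrt(xi**2 + Delta**2) + p*h
                    for s in (+1, -1) for p in (+1, -1)])
print(ev)         # Zeeman-split BCS branches
print(expected)   # analytic check: +-sqrt(xi^2+Delta^2) +- h
```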
Concerning the charge continuity equation, the source/sink term is related to the divergence of the Cooper pairs current[@yamashita03], while in the spin density case such term is related to the spin-torque. Indeed, a spin-torque $\hat{T}_{\mu}$ can be generated by a magnetic potential of the form $\hat{U}(x)=\vec{h}(x)\cdot \hat{\vec{\sigma}}$ and its $\mu$-component is represented by the operator $$\label{eq:torque-locale} \hat{T}_{\mu}=\sum_{\beta} P_{\beta\beta} \otimes \Bigl[\vec{h}(x)\times \hat{\vec{\sigma}}^{\beta}\Bigl]_{\mu}.$$ The continuity equation for the spin density can thus be written as $\partial_t \hat{S}_{\mu}+\partial_x \hat{J_{\mu}}=\hat{T}_{\mu}$. Scattering formalism -------------------- In the following we describe the scattering field theory for magnetic/superconducting heterostructures. In constructing the theory, the Andreev approximation (which neglects the difference between the particle and hole momentum) is performed only in the external leads, while inside the scattering region the exact wave-functions are considered. This approach is performed to properly treat the phase-coherent phenomena in the scattering region and to correctly capture all the Andreev-reflections probabilities.\ The scattering field (see APPENDIX \[app:scattering-field\]) in the $j$-th lead can be written as[@nota2]: $$\begin{aligned} \label{eq:scattering fields} \hat{\Psi}_j(x,t)&=&\sum_{\beta,\sigma}\int \frac{dE\exp[-iEt]}{\sqrt{2 \pi \hbar v(E)}}|\beta \rangle \otimes |\sigma \rangle \times\\\nonumber &[& a^{\sigma}_{j\beta}(E)e^{i k_{\beta}x}+b^{\sigma}_{j\beta}(E)e^{-i k_{\beta}x}],\end{aligned}$$ where $k_{\beta}=\eta_{\beta}k(E)$, while the scattering operator $a^{\sigma}_{j\beta}(E)$ ($b^{\sigma}_{j\beta}(E)$) destroys an incoming (outgoing) particle of species $\beta \in \{e , h \}$ and spin projection $\sigma \in \{+,-\}$ in the lead $j$ and within the Andreev approximation $v_{j\beta\sigma}(E) \approx v(E)=\hbar k(E)/m$. The scattering field defined above generalizes the one introduced in Ref.\[\] to the spinful case. 
The outgoing field operators are related to the incoming field operators by the scattering matrix[@nota3; @nam_d02007]: $$b^{\sigma}_{i\beta}(t)=\sum_{i'\sigma'\beta'}S^{\beta\beta'}_{ii'\sigma\sigma'} (t) a^{\sigma'}_{i'\beta'}(t).$$ Since the scattering states given by (\[eq:scattering fields\]) form a complete set of mutually orthogonal states (completeness relation), the fields $b_i^\sigma$ satisfy the canonical commutation relations $\{b^{\sigma}_{i\beta},(b^{\sigma'}_{i'\beta'})^{\dagger}\}=\delta_{ii'}\delta_{\beta\beta'}\delta_{\sigma\sigma'}$ and the current conservation ensures the unitary condition of the $S$-matrix: $$\sum_{i'\sigma'\beta'}S^{\beta\beta'}_{ii'\sigma\sigma'}(S^{b\beta'}_{ki's\sigma'})^{\ast}=\delta_{ik}\delta_{b\beta}\delta_{s\sigma}.$$ The quantum statistical properties of the leads are defined by the expectation value $\langle a^{\sigma\dagger}_{j\alpha}(E) a^{s}_{i\beta}(E') \rangle=\delta_{ij}\delta_{s\sigma}\delta_{\alpha\beta}\delta(E-E')f_{j\alpha}(E)$, $f_{j\alpha}(E)$ being the Fermi distribution of the particle of species $\alpha$ in the electrode $j$.\ The field operator $\hat{\Psi}_j$ acts on a many-particle state and thus the expectation value $\bar{\mathcal{O}}_j$ of the generic operator $\hat{\mathcal{O}}$ in the $j$-th electrode is given by $\bar{\mathcal{O}}_j=\langle \hat{\Psi}^{\dagger}_j\hat{\mathcal{O}}\hat{\Psi}_j\rangle$, where the notation $\langle \cdot\cdot\cdot\rangle$ stands for the quantum statistical average. Charge current and differential conductance {#sec:charge current & differential conductance} ------------------------------------------- In this Section we derive the two-terminal conductance of the magnetic/superconducting heterostructure depicted in Fig.\[fig:fig1\]. For a multi-terminal device, the average charge current $\bar{J}^{i}_0$ flowing through the $i$-th lead is given by $\bar{J}^{i}_0=\langle \hat{\Psi}^{\dagger}_i(x,t) \hat{J}_0\hat{\Psi}_i(x,t)\rangle$ and using the field representation (\[eq:scattering fields\]), it can be expressed in terms of the scattering matrix as $$\label{eq:charge current} \bar{J}^{i}_0= \frac{q_e}{h}\sum_{\beta \alpha j}\eta_{\beta}\int dE \Bigl[2\delta_{ij}\delta_{\alpha \beta}-\mathcal{M}_{ij}^{\beta \alpha}(E)\Bigl]f_{j\alpha}(E),$$ where $\mathcal{M}_{ij}^{\beta \alpha}(E)=Tr[S^{\beta \alpha \dagger}_{ij}(E)S^{\beta \alpha }_{ij}(E)]$, $Tr[\cdot \cdot \cdot]$ indicates the trace on the spin indices, while $S^{\beta \alpha }_{ij}(E)$ are matrices with respect to the spin indices. When a symmetric potential drop is applied to the system, the electrochemical potential in the $i$-th lead can be written as $\mu_i=\mu+(-)^i q_eV/2$, being $\mu=(\mu_1+\mu_2)/2$ and $V$ the bias voltage. Taking as zero of the energies the electrochemical potential of the scattering region $\mu_s$, we can write $f_{j\alpha}(E)=f([E+\eta_{\alpha}(\mu_s-\mu_j)]/(K_B T))$, being $T$ the temperature. Let us note that $\mu_s=(\mu_1+\mu_2)/2$ only in the symmetric case. In the nonsymmetric case $\mu_s\neq (\mu_1+\mu_2)/2=\mu$ and thus $\mu_s$ must be determined self-consistently to conserve the charge current (i.e. $\sum_i\bar{J}^{i}_0(V,\mu_s(V))=0$) as described in the Appendix \[app:conductance-tensor\]. 
Within the linear response theory, the charge current flowing through the $i$-th lead is obtained as $I_i=\sum_j g_{ij} (\mu_j-\mu_s)=G_i V$ where $g_{ij}$ is the conductance tensor whose expression (see Appendix \[app:conductance-tensor\]) is: $$\begin{aligned} g_{ik}&=& \frac{e^2}{h}\int d\xi [-\partial_{\xi}f(\xi)]_{eq}\times \\\nonumber &[& 4\delta_{ik}+\mathcal{M}^{he}_{ik}(\xi)+\mathcal{M}^{eh}_{ik}(\xi)-\mathcal{M}^{ee}_{ik}(\xi)-\mathcal{M}^{hh}_{ik}(\xi)],\end{aligned}$$ where the sum rule $\sum_{j\alpha}\mathcal{M}^{\beta\alpha}_{ij}(E)=2$ has been used. For a generic structure the two-terminal conductance in terms of the conductance tensor is given by: $$\label{eq:two-probe cond} G=\frac{g_{22}g_{11}-g_{21}g_{12}}{\sum_{ij}g_{ij}}.$$ Let us note that in the symmetric case the above relation can be simplified as $G_{sym}=(g_{11}-g_{12})/2$ and thus: $$\begin{aligned} \label{eq:g_sym} G_{sym}&=& \frac{e^2}{h}\int d\xi [-\partial_{\xi}f(\xi)]_{eq}\times \\\nonumber &[& \mathcal{M}^{ee}_{12}(\xi)+\mathcal{M}^{hh}_{12}(\xi)+\mathcal{M}^{he}_{11}(\xi)+\mathcal{M}^{eh}_{11}(\xi)].\end{aligned}$$ Spin currents and spin-torque {#sec:spin currents & spin-torque} ----------------------------- When the heterostructure contains magnetic layers apart the superconducting ones, the application of an external bias produces a spin current coexisting with the charge flux. Differently from the charge, the spin density is not conserved and thus a spin torque is exerted along the nanostructure. In particular the spin torque results from the divergence of the spin current as discussed in Ref.\[\]. In order to derive an expression of the spin-torque, we need first to derive an expression of the spin current in terms of the scattering matrix. This is given by the quantum average of the spin current density operator $\bar{J}^i_{\mu}=\langle \hat{\Psi}_i^\dagger \hat{J}_{\mu}\hat{\Psi}_i \rangle$ ($\mu=1,2,3$) and using (6) one obtains: $$\label{eq:spin-curr} \bar{J}^i_{\mu}=-\sum_{\alpha \beta j}\int \frac{d E}{4\pi}Tr[S^{\beta \alpha \dagger}_{ij}(E)\sigma^{\beta}_{\mu}S^{\beta \alpha}_{ij}(E)]f_{j\alpha}(E),$$ where $\bar{J}^i_{\mu}$ represents the $\mu$-component of the spin current density generated in the $i$-th lead. Let us note that Eq.(\[eq:spin-curr\]) represents an expression of the spin current beyond the linear response regime. Moreover in calculating the spin current, the charge current conservation through the system must be monitored. In fact the spin current is not conserved due to the presence of a spin-transfer torque acting on the local magnetic momentum of the magnetic region and thus a violation of the charge conservation law may artificially change the spin current gradient.\ In order to relate the spin current gradient to the spin torque we can consider the continuity equation of the spin density $S_{\mu}$: $$\label{eq:cont_torque} \partial_t S_{\mu}+\vec{\nabla} \cdot \vec{J}_{\mu}=T_{\mu}.$$ Under stationary condition, i.e. $\partial_t S_{\mu}=0$, one can apply the Gauss-Green theorem to Eq.(\[eq:cont\_torque\]). 
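At zero temperature the thermal factor $[-\partial_\xi f]$ reduces to a delta function at the Fermi level, and the conductance follows from simple algebra on the spin-traced probabilities $\mathcal{M}^{\beta\alpha}_{ik}(E_F)$. The following minimal sketch implements the conductance tensor and Eq.(\[eq:two-probe cond\]); the helper functions are hypothetical and the probability values are invented for illustration. For the idealized sub-gap limit in which every incident electron is Andreev reflected at its interface, it returns $G=4e^2/h$, i.e. the series combination of two ideal Andreev contacts.

```python
import numpy as np

def conductance_tensor(M):
    """Zero-temperature conductance tensor g_ik (in units of e^2/h) from the
    spin-traced probabilities M[(beta, alpha, i, k)] evaluated at E_F."""
    g = np.zeros((2, 2))
    for i in range(2):
        for k in range(2):
            g[i, k] = (4.0 * (i == k) + M[('h', 'e', i, k)] + M[('e', 'h', i, k)]
                       - M[('e', 'e', i, k)] - M[('h', 'h', i, k)])
    return g

def two_probe_G(g):
    # Eq. (eq:two-probe cond)
    return (g[1, 1] * g[0, 0] - g[1, 0] * g[0, 1]) / g.sum()

# illustrative sub-gap limit: perfect Andreev reflection at both interfaces,
# no quasiparticle transmission (numbers made up for the example)
M = {(b, a, i, k): 0.0 for b in 'eh' for a in 'eh'
     for i in range(2) for k in range(2)}
M[('h', 'e', 0, 0)] = M[('e', 'h', 0, 0)] = 2.0   # trace over spin gives 2
M[('h', 'e', 1, 1)] = M[('e', 'h', 1, 1)] = 2.0

g = conductance_tensor(M)
print("g =\n", g, "\nG =", two_probe_G(g), "e^2/h")   # expect G = 4
```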
Let us consider a cylindrical surface of volume $\Omega$ encircling the scattering center and with axis collinear to the transport direction $\hat{x}$: $$\int_{\Omega} \vec{\nabla} \cdot \vec{J}_{\mu}dV=\int_{\Omega} T_{\mu}dV=\oint_{\Sigma}J_{\mu}\hat{x} \cdot d\vec{s}.$$ where $\Sigma$ is the cylindrical surface $\Sigma=s_{1} \cup s_{2} \cup s_{l}$ with $s_{1}$ and $s_{2}$ the areas collinear to the transport direction ($\hat{s}_1=-\hat{s}_2=\hat{x}$) and $s_l$ the lateral surface of the cylinder. Since for a quasi-one dimensional system, the physical quantities $J_{\mu}$ and $T_{\mu}$ can be considered uniform along the radial direction, one obtains: $$\label{eq:kir-spin-curr} \sum_i J^{i}_{\mu}+\int_{SR} T_{\mu}dx=0,$$ where the integral is performed over the scattering region (SR). Eq.(\[eq:kir-spin-curr\]) is the *Kirchhoff’s law* for the spin current and it can be used together with Eq.(\[eq:spin-curr\]) to derive the total spin torque $\tau_{\mu}=\int_{SR} T_{\mu}dx$ produced along the system: $$\tau_{\mu}=\sum_{\alpha \beta j i}\int \frac{d E}{4\pi}Tr[S^{\beta \alpha \dagger}_{ij}(E)\sigma^{\beta}_{\mu}S^{\beta \alpha}_{ij}(E)]f_{j\alpha}(E).$$ However, in spin valve devices it is important to resolve the spatial dependence of the spin torque density. Indeed, a spin valve is a device made by two magnetic layers separated by an interstitial nonmagnetic region (spacer). The relative volume of the magnetic layers can be taken very different: one is the pinned magnetic layer (fixed layer), having the largest volume, and the other is the thin magnetic layer called free-layer (FL). When a spin-polarized current interacts with the thin ferromagnetic layer it undergoes a spin-filtering and the result is, in general, that a spin-transfer torque is applied to the magnetic layer. The energy required to change the magnetization direction of the free-layer can be provided by the flux of spin polarized current activated by the application of an external driving field (e.g. dc voltage bias or ac modulations). In order make an explicit calculation we consider a spin valve, as the one of Fig.2, with a Zeeman potential of the form: $$\label{eq:pot-mag} U(x)=[\gamma \delta(x)\hat{n}_1+h(x)\hat{n}_2 ]\cdot \vec{\sigma},$$ where $\hat{n}_1=(\sin(\theta),0,\cos(\theta))$ represents the magnetization direction of the FL, $\hat{n}_2=(0,0,1)$ is the direction of the magnetization of the fixed layer, while $h(x)$ is a step-like function. In this model the free-layer is directly connected to the first lead (i.e. the region $x<0$) and thus the spin torque acting on this region is simply related to the non-equilibrium spin density produced by the bias in the first lead. Using Eq.(\[eq:torque-locale\]), the zero temperature spin torque components acting on the free-layer in the linear response regime can be written as follows: $$\begin{aligned} \label{eq:torque-free-layer} && T_{\parallel}=-\frac{eV \Gamma}{4\pi}\sum_{\alpha \beta j}Tr[S^{\beta\alpha\dagger}_{1j} \sigma^{\beta}_y S^{\beta\alpha}_{1j}]\eta_{\alpha}\lambda_{j}\\\nonumber && T_{\perp}=\frac{eV \Gamma}{4\pi}\sum_{\alpha \beta j}Tr[S^{\beta\alpha\dagger}_{1j}\Bigl(\sin(\theta) \sigma^{\beta}_z-\cos(\theta) \sigma^{\beta}_x \Bigl)S^{\beta\alpha}_{1j}]\eta_{\alpha}\lambda_{j},\end{aligned}$$ where $\Gamma=(k_F \gamma)/E_F$, while the coefficients $\lambda_j$ are given in APPENDIX \[app:conductance-tensor\]. 
Furthermore, $T_{\parallel}$ and $T_{\perp}$ are the components of the spin torque parallel and perpendicular to the plane of the magnetization of the fixed layer (i.e. $\vec{T}=T_{\parallel} \hat{\nu}_{\parallel}+T_{\perp} \hat{\nu}_{\perp}$), whose directions are defined by the vectors[@nostri_torque]: $$\begin{aligned} && \hat{\nu}_{\parallel}=-\hat{x}\cos(\theta)+\hat{z}\sin(\theta)\\\nonumber && \hat{\nu}_{\perp}=\hat{y}.\end{aligned}$$ ![Superconducting spin valve as described in the main text.[]{data-label="fig:fig2"}](fig2.eps) Spin-polarized transport in spin-valve systems {#sec:results} ============================================== The scattering formalism developed so far can be employed to describe the linear response properties (i.e. conductance and torkance) of the superconducting spin valve depicted in Fig.\[fig:fig2\] (see also APPENDIX \[app:boundary\]). The system is described by the BdG Hamiltonian given in Eq.(\[eq:bdg\_ham\]), where the superconducting gap operator is $\hat{\Delta}(x)=i\sigma_y \Delta \theta(x)\theta(d_1-x)$, $\theta(x)$ being the Heaviside step function, while the two magnetic regions, namely F1 and F2, are modeled by the Zeeman potential given in Eq.(\[eq:pot-mag\]), where $h(x)=E_F h_z \theta(x-d_1)\theta(d_1+d_2-x)$. Furthermore, an additional barrier potential of the form $\hat{U}_s(x)=\sum_{j=0,1,2}V_j \delta(x-x_j)\mathbb{I}_{sp}$ ($x_0=0$, $x_1=d_1$, $x_2=d_1+d_2$) is introduced at the interfaces. The $V_j$ are related to the dimensionless BTK parameters $z_j=2mV_j/(\hbar^2 k_F)$, which measure the interface transparencies and are related to the transmission and reflection probabilities $\mathcal{T}_j$, $\mathcal{R}_j$ via the relation $z_j=\sqrt{\mathcal{R}_j/\mathcal{T}_j}$. Finally, the s-wave order parameter is taken in dimensionless form as $\eta=\Delta/E_F$, and its relation to the BCS coherence length $\xi$ is given by $k_F \xi=1/\eta$, $\xi \approx \hbar v_F/(2\Delta)$. In the following we set $\eta=1/200$ and $k_F \approx 1$ Å$^{-1}$, which are suitable phenomenological values for conventional superconducting materials such as Nb[@yamashita03]. In the subsequent analysis, following the same line of reasoning as Ref.\[\], the self-consistent computation of the superconducting order parameter is neglected. In fact, we consider the low-bias regime (i.e. $eV/\Delta \ll 1$), in which the spin accumulation in the superconducting region is unable to produce a significant suppression of the superconducting gap. A different mechanism modifying the superconducting gap could be induced by the finite size of the superconducting region, as reported in Ref.\[\]. However, as shown in Fig.2 of that work, the superconducting gap saturates to the bulk value as a function of the thickness of the superconducting layer already at thicknesses of about $2 \xi$ ($\xi$ being the BCS coherence length). These features are quite generic and seem to be robust for any value of the scattering potential at the F/S interface and also for parallel or anti-parallel magnetizations in the ferromagnetic leads. Thus we conclude that neglecting the self-consistency of the gap does not induce quantitatively important changes in our analysis. Differential conductance and magnetoresistance ---------------------------------------------- ![Two-probe differential conductance $G$ (in units of $e^2/h$) as a function of $\epsilon/\Delta$ for the model parameters: $k_F d_1=600$, $k_F d_2=250$, $z_1=z_2=z_3=0.1$, $\theta=0$, $\eta=1/200$, $h_z=-0.45$.
The parameter $\Gamma$ takes values ranging from $0.9$ (top curve) up to $2.2$ (bottom curve) and is increased in constant steps of $0.1$ from the top to the bottom curve.[]{data-label="fig:fig3"}](fig3.eps) In Fig.\[fig:fig3\] we report the differential conductance $G$ as a function of the energy $\epsilon/\Delta$, with $\epsilon=eV$, computed with the model parameters $k_F d_1=600$, $k_F d_2=250$, $z_1=z_2=z_3=0.1$, $\theta=0$, $\eta=1/200$, $h_z=-0.45$. As the Zeeman interaction $\Gamma$ of the thin layer increases, the conductance below the gap is lowered. This is due to the suppression of the Andreev reflection processes that dominate the transport properties below the superconducting gap. This behavior qualitatively reproduces the experimental observations reported in Ref.\[\] obtained by the STM technique. The effect of the spin active barrier on the transport properties of the system is analyzed in Fig.\[fig:fig4ab\]. ![Two-probe differential conductance $G$ (in units of $e^2/h$) as a function of $\epsilon/\Delta$ for the model parameters: $k_F d_1=575$, $k_F d_2=350$, $z_1=z_2=z_3=0.1$, $\Gamma=0.85$, $\eta=1/200$. The full line is computed by setting $\theta=\pi/2$, while the dashed line is obtained by fixing $\theta=0$. The Zeeman energy term of the fixed layer is taken as $h_z=-0.5$ in the upper panel and $h_z=-0.75$ in the lower one.[]{data-label="fig:fig4ab"}](fig4a.eps "fig:")\ ![Two-probe differential conductance $G$ (in units of $e^2/h$) as a function of $\epsilon/\Delta$ for the model parameters: $k_F d_1=575$, $k_F d_2=350$, $z_1=z_2=z_3=0.1$, $\Gamma=0.85$, $\eta=1/200$. The full line is computed by setting $\theta=\pi/2$, while the dashed line is obtained by fixing $\theta=0$. The Zeeman energy term of the fixed layer is taken as $h_z=-0.5$ in the upper panel and $h_z=-0.75$ in the lower one.[]{data-label="fig:fig4ab"}](fig4b.eps "fig:") The panels show the differential conductance $G$ computed using the parameters $k_F d_1=575$, $k_F d_2=350$, $z_1=z_2=z_3=0.1$, $\Gamma=0.85$, $\eta=1/200$. For both the upper and the lower panel, the full line is computed by setting $\theta=\pi/2$, while the dashed line is obtained by fixing $\theta=0$. The Zeeman energy of the fixed layer is taken as $h_z=-0.5$ in the upper panel and $h_z=-0.75$ in the lower one. The analysis of the figure shows that the subgap transport is not very sensitive to the magnetization direction, while the quasi-particle transport depends on the orientation of the magnetization of the free-layer, and more harmonics appear in the oscillating behavior of $G$ above the gap. We also observe a lowering of the differential conductance as $h_z$ is varied from the upper to the lower panel. The oscillations above $\epsilon \simeq \Delta$ originate from the release of the Andreev approximation and are related to the formation of quasiparticle resonances above the gap[@dong2003].\ In order to describe the magneto-transport properties of the system, we introduce the magnetoresistance (MR) defined as $MR=[G_{P}-G_{AP}]/G_{AP}$, where $G_{P}=G(|h_z|,\Gamma,\theta=0)$ and $G_{AP}=G(-|h_z|,\Gamma,\theta=0)$; a minimal numerical sketch of these two definitions is given below. In Fig.\[fig:fig5\] we report the MR as a function of $\epsilon/\Delta$, setting the remaining parameters as follows: $k_F d_2=250$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$, $|h_z|=0.5$.
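As a minimal numerical illustration of the two definitions just introduced (the two-probe conductance of Eq.(\[eq:two-probe cond\]) and the MR), the following Python sketch evaluates them for placeholder values of the conductance tensor; the numbers and function names are ours and are not outputs of the scattering calculation.

```python
import numpy as np

def two_probe_conductance(g):
    """Two-probe conductance G = (g11*g22 - g12*g21) / sum_ij g_ij,
    with g the 2x2 conductance tensor (in units of e^2/h)."""
    g = np.asarray(g, dtype=float)
    return (g[0, 0] * g[1, 1] - g[0, 1] * g[1, 0]) / g.sum()

def magnetoresistance(G_P, G_AP):
    """MR = (G_P - G_AP) / G_AP for the two magnetic configurations."""
    return (G_P - G_AP) / G_AP

# Placeholder conductance tensors (illustrative numbers only).
g_P  = [[1.30, 0.40], [0.40, 1.30]]   # "parallel"      (h_z > 0)
g_AP = [[1.10, 0.35], [0.35, 1.10]]   # "antiparallel"  (h_z < 0)

G_P, G_AP = two_probe_conductance(g_P), two_probe_conductance(g_AP)
print(G_P, G_AP, magnetoresistance(G_P, G_AP))

# For a symmetric structure (g11 = g22) the formula reduces to (g11 - g12)/2.
assert np.isclose(G_P, (g_P[0][0] - g_P[0][1]) / 2)
```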
![Magnetoresistance MR as a function of $\epsilon/\Delta$ for the model parameters $k_F d_2=250$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$, $|h_z|=0.5$, while $k_F d_1=600$ (dashed line), $k_F d_1=800$ (full line) or $k_F d_1=1000$ (dash-dotted line). The inset contains the MR behavior within the energy range $[0,0.5]$.[]{data-label="fig:fig5"}](fig5.eps) The different curves correspond to different widths of the superconducting region: the dashed line indicates $k_F d_1=600$, the full line $k_F d_1=800$, and the dash-dotted line $k_F d_1=1000$. The analysis of the results shows a change of sign of the MR as a function of $k_F d_1$ for $\epsilon<0.5\Delta$, which indicates a change in the relative magnitudes of $G_P$ and $G_{AP}$. Furthermore, the small subgap values of the MR indicate the inefficiency of the spin-polarized transport operated by the Cooper pairs. On the other hand, above the superconducting gap the quasi-particle transport efficiently provides spin-polarized currents, and thus MR values ranging from $-10\%$ up to $20\%$ are observed. The low values of the MR below the gap are due to a thickness of the superconducting region larger than the coherence length $\xi$. Indeed, the curves in Fig.\[fig:fig5\] are obtained for $d_1 \geq 3\xi$, i.e. for a thickness such that the quasi-particle current coming from the normal leads is almost fully converted into unpolarized supercurrent. The latter point is evident in Fig.\[fig:fig6\], which presents the MR as a function of the size $k_F d_1$ of the superconducting region computed for the following set of parameters: $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $|h_z|=0.5$, $\theta=0$. As the superconducting spacer becomes larger than $3\xi$ (i.e. $k_Fd_1=600$), a strong suppression of the MR is observed for all the curves, while below this threshold the quasi-particle current is not efficiently converted into unpolarized supercurrent, leading to a residual polarization responsible for sizeable values of the MR ($\approx \pm 10\%$). ![Magnetoresistance MR as a function of the size $k_F d_1$ of the superconducting region computed for the model parameters: $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $|h_z|=0.5$, $\theta=0$, while $k_F d_2=250$ (circle, $\circ$), $k_F d_2=300$ (square, $\square$) or $k_F d_2=400$ (diamond, $\lozenge$). The inset contains the MR *vs* $k_Fd_1$ in the range $[850,1200]$. The sampling step is 25 (i.e. $2.5$nm).[]{data-label="fig:fig6"}](fig6.eps "fig:")\ The latter results imply that a competition between superconducting and magnetic properties becomes relevant for $d_1 <2\xi$, i.e. $d_1 <40$nm for Nb superconductors. This result is consistent with that found in Ref.\[\], and the above conditions can easily be met in nanostructured devices[@nb_py_nanowire_exp].\ The observed oscillations come from the formation of resonant states below the gap, and their period is of the order of the coherence length. From the analysis above it is evident that the behavior of the MR is related to the amount of polarized current transmitted to the free-layer. This quantity in turn depends on (i) the efficiency of the fixed magnetic layer in polarizing the particle current and (ii) the transmission of the polarized current produced by the polarizer (i.e. the fixed layer) through the spacer region.
Point (i) is investigated in Fig.\[fig:fig7\], where the MR is reported as a function of the size $k_F d_1$ of the superconducting region for different values of $|h_z|$, with the model parameters $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$, $k_F d_2=650$. Apart from the general aspect, similar to that of Fig.\[fig:fig6\], one observes that increasing the Zeeman energy $|h_z|$ of the fixed layer produces higher values of the MR for a superconducting spacer width smaller than $k_Fd_1=400$. ![Magnetoresistance MR as a function of the size $k_F d_1$ of the superconducting region computed for the model parameters: $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$, $k_F d_2=650$, while $|h_z|=0.1$ (circle, $\circ$), $|h_z|=0.25$ (square, $\square$) or $|h_z|=0.5$ (diamond, $\lozenge$). The inset contains the MR *vs* $k_Fd_1$ in the range $[850,1200]$. The sampling step is 25 (i.e. $2.5$nm).[]{data-label="fig:fig7"}](fig7.eps "fig:")\ Furthermore, the behavior of the MR as a function of $h_z$ is expected to be proportional to $\Gamma h_z \cos(\theta)$, i.e. to the scalar product of the magnetic moments of the ferromagnets. This is confirmed in Fig.\[fig:fig8\], where we plot the MR as a function of $|h_z|$ for the model parameters $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$, $k_F d_2=650$, $k_F d_1=200$. The analysis of the figure shows a linear behavior with respect to $|h_z|$ with a slope proportional to $\Gamma \cos(\theta)$, while an additional oscillating pattern is observed. Such superimposed oscillations depend on the interface potentials, and their amplitude increases as the barrier heights $z_j$ are increased from 0 up to 0.1. In higher dimensions (2D or 3D) we expect interface disorder to reduce the amplitude of such oscillations. ![Magnetoresistance MR as a function of $|h_z|$ computed for the model parameters: $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$, $k_F d_2=650$, $k_F d_1=200$.[]{data-label="fig:fig8"}](fig8.eps "fig:")\ Finally, quantum size effects are displayed in Fig.\[fig:fig9ab\], where a density plot of the MR in the plane $(k_Fd_1,k_Fd_2)$ is shown for the model parameters $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$ and $|h_z|=0.3$ (upper panel) or $|h_z|=0.5$ (lower panel). The dimensionless sampling step adopted in the numerical simulations is $10$ (i.e. 1 nm) for the upper panel and $25$ (i.e. 2.5 nm) for the lower panel[@nota3b]. The overall behavior of the curves presented in Fig.\[fig:fig9ab\] shows oscillating patterns and a change of sign of the MR as a function of the geometric parameters of the system. The comparison between the upper and the lower panel shows the effect of the magnetic energy $|h_z|$ of the fixed layer in rotating the wave front of the curves. This is particularly evident for the MR as a function of $k_Fd_2$ (i.e. the length of the fixed layer) at a fixed size of the superconducting spacer. This dependence can provide relevant information for experiments.\ Spin-torque ----------- Up to now we have focused our attention on the MR; however, an additional probe of the spin-polarized transport through the system is provided by the spin torque ($T_{\perp,||}$) acting on the free-layer, see Eq.(\[eq:torque-free-layer\]).
Although there are only a few experimental reports concerning the direct measurement of this observable, the difficulties in making quantitative measurements of the spin torque seem to have recently been overcome. In particular, the magnitude and direction of the spin torque have recently been measured in magnetic tunnel junctions[@sankey_nat_phys_torque08], leading to a substantial understanding of the angular momentum transfer in these systems. These devices are of primary interest for applications and represent excellent probes of the possibility of electrically controlling (using dc or ac signals) the magnetic degrees of freedom (i.e. the free-layer magnetization). Within this framework, the study of superconducting spin valves (such as the one depicted in Fig.2) can clarify the mechanism involved in the angular momentum transfer through a thin superconducting layer, thus constituting a complementary tool for investigating the interplay between superconductivity and magnetism. A systematic analysis of these structures, also including different symmetries of the superconducting order parameters, could be useful to probe exotic pairings and their ability to support spin-polarized currents. As a first step in this direction, we analyze the s-wave case here. ![Magnetoresistance MR as a function of $k_F d_1$ and $k_F d_2$ for the model parameters: $\epsilon/\Delta=0.01$, $z_1=z_2=z_3=0$, $\Gamma=0.5$, $\eta=1/200$, $\theta=0$ and $|h_z|=0.3$ (upper panel) or $|h_z|=0.5$ (lower panel). The density-plot shows the large scale structure of the oscillations.[]{data-label="fig:fig9ab"}](fig9.eps) In the following we take $eV\rightarrow 1$, so that the quantities $T_{\perp,||}$, in units of $eV$, coincide with the derivative of the spin torque with respect to the bias in the linear response regime, i.e. the so-called torkance. ![Parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of $\theta$ computed for the model parameters: $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_1=300$, $k_F d_2=300$, while $z=0.1$ for the upper panel, $z=0.3$ for the middle panel and $z=0.6$ for the lower panel.[]{data-label="fig:fig11abc"}](fig10a.eps "fig:")\ ![Parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of $\theta$ computed for the model parameters: $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_1=300$, $k_F d_2=300$, while $z=0.1$ for the upper panel, $z=0.3$ for the middle panel and $z=0.6$ for the lower panel.[]{data-label="fig:fig11abc"}](fig10b.eps "fig:")\ ![Parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of $\theta$ computed for the model parameters: $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_1=300$, $k_F d_2=300$, while $z=0.1$ for the upper panel, $z=0.3$ for the middle panel and $z=0.6$ for the lower panel.[]{data-label="fig:fig11abc"}](fig10c.eps "fig:")\ In Fig.\[fig:fig11abc\] we report the parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of $\theta$, computed with the model parameters $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_1=300$, $k_F d_2=300$, where we use $z=0.1$ for the upper panel, $z=0.3$ for the middle panel and $z=0.6$ for the lower panel.
The spin torque components exhibit an almost sinusoidal behavior as a function of the magnetization angle $\theta$ and thus vanish for $\theta=0,\pi$. The maximum values of $T_{\perp,||}$ are observed close to $\theta=\pm \pi/2$. By analyzing Fig.\[fig:fig11abc\] we observe an increase of the maximum value of $T_{\perp}$ and a change of sign of $T_{||}$ going from the upper to the lower panel (i.e. as $z$ is increased from $0.1$ up to $0.6$). The latter behavior is attributed to the difference of the spin-polarized currents at the interface. The maximum (minimum) value of $T_{\perp}$ in the lower panel (see Fig.\[fig:fig11abc\]) close to $\theta=-\pi/2$ ($\theta=\pi/2$) takes an absolute value of $0.1 \mu$eV in the presence of an applied bias of $1.5$meV. This value is of the same order of magnitude as that obtained in the case of nonsuperconducting spin valves (see for instance Ref.\[\]). This fact points out that nanostructured superconducting materials can support spin-polarized particle transport, in agreement with recent experimental findings, see e.g. Ref.\[\].\ However, the values of the spin torque strongly depend on the interface properties, i.e. on the parameters $z_j$ in our model, and thus a comparison with the experimental data can be made only by considering $z_j$ as phenomenological fitting parameters. The behavior of the spin torque $T_{\perp,||}$ as a function of $z$ (where we set $z_1=z_2=z_3=z$) for different thicknesses of the SC layer is shown in Fig.\[fig:fig12abc\] for the choice of parameters $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_2=300$. All the curves present maximum values of the torkance close to $z \sim 1$, while the maximum value of the spin torque component $T_{\perp}$ is in the range $1-2\mu$eV. For higher values of $z$, i.e. $z>1$, $T_{\perp,||}$ start to decrease as an effect of the vanishing particle flux through the interfaces. On the experimental side, our analysis can provide an efficient way of detecting spin-polarized effects in magnetic/superconducting heterostructures, despite the experimental difficulties of engineering reproducible interfaces.
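A short order-of-magnitude sketch in Python, using only the values quoted in the text ($\eta=1/200$, $k_F\approx 1$ Å$^{-1}$, a torque of $0.1\,\mu$eV at a bias of $1.5$ meV), collects the unit conversions relevant to this section; it is a back-of-the-envelope aid, not part of the transport calculation.

```python
eta = 1.0 / 200       # Delta / E_F
kF = 1.0              # k_F in 1/Angstrom

xi_nm = (1.0 / (eta * kF)) / 10.0     # k_F * xi = 1/eta  ->  xi ~ 20 nm
print("coherence length xi ~", xi_nm, "nm")

# Dimensionless lengths k_F d expressed in nanometres and in units of xi.
for kFd in (200, 400, 600, 800, 1000):
    d_nm = kFd / kF / 10.0
    print(f"k_F d = {kFd:4d}  ->  d = {d_nm:6.1f} nm  =  {d_nm / xi_nm:.1f} xi")

# Implied dimensionless torkance from a torque of ~0.1 micro-eV at 1.5 meV bias.
print("torkance T/(eV) ~", 0.1e-6 / 1.5e-3)
```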
![Parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of the interface potential $z$ ($z_1=z_2=z_3=z$) computed with the model parameters $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_2=300$, while $k_F d_1=300$ for the upper panel, $k_F d_1=380$ for the middle panel and $k_F d_1=400$ for the lower panel.[]{data-label="fig:fig12abc"}](fig11a.eps "fig:")\ ![Parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of the interface potential $z$ ($z_1=z_2=z_3=z$) computed with the model parameters $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_2=300$, while $k_F d_1=300$ for the upper panel, $k_F d_1=380$ for the middle panel and $k_F d_1=400$ for the lower panel.[]{data-label="fig:fig12abc"}](fig11b.eps "fig:")\ ![Parallel (dashed line) and perpendicular (full line) components of the spin torque $T_{\perp,||}$ as a function of the interface potential $z$ ($z_1=z_2=z_3=z$) computed with the model parameters $\epsilon/\Delta=0.01$, $h_z=0.5$, $\Gamma=0.5$, $\eta=1/200$, $\theta=\pi/2$, $k_F d_2=300$, while $k_F d_1=300$ for the upper panel, $k_F d_1=380$ for the middle panel and $k_F d_1=400$ for the lower panel.[]{data-label="fig:fig12abc"}](fig11c.eps "fig:")\ Conclusions {#sec:conclusions} =========== In this work a scattering field theory for quasi-one-dimensional magnetic heterostructures containing s-wave superconducting regions has been developed. The second quantized form of the scattering fields for the [*spinful case*]{} allows a direct link between physical observables and the scattering matrix describing the system. Our formalism fully takes into account Andreev reflections in the presence of spin-flip phenomena. We formally derived the spin and charge currents and all the quantities related to the linear response to an applied voltage bias $V$, i.e. the conductance and the torkance. In particular, it has been pointed out that, in deriving the spin current, the charge conservation through the system must be monitored in order to guarantee the conservation laws. Indeed, the spin current is not conserved due to the presence of a spin-transfer torque acting on the local magnetization of the free-layer, and thus a violation of the charge conservation law may artificially change the spin current gradient. Such a change would modify the spin torque in a quantitative way. As for the observables, in the second part of the paper we derived the conductance and the magnetoresistance of a superconducting spin valve and analyzed all the relevant quantum size and coherence effects. Our analysis showed evidence of Andreev reflections in the subgap transport as the Zeeman interaction is varied, revealing the importance of spin-flip processes. As for the magnetoresistance, we analyzed quantum size effects due to the superconducting layer thickness and showed that it displays a strongly oscillatory and non-monotonic behavior as a function of the interlayer width. A peculiar interplay between superconducting and spin-polarized transport properties becomes evident for thicknesses of the order of the superconducting coherence length.
As a probe of the spin-polarized transport we analyzed the spin-torque in the linear response regime and characterized its behavior as a function of the interface transparencies and direction of the magnetization between the fixed and the free layer. It has been found that the torque and magnetoresistance are both strongly enhanced by a non-zero barrier height at the interfaces. Our analysis can provide an efficient way of detecting spin polarized transport in experiments on magnetic/superconducting heterostructures helping some basic understanding and stimulating further studies. Tensor product {#app:tensor-product} ============== In this work the sign $\otimes$ is employed to define the Kronecker product or tensor product of matrices. Given the matrices $\mathcal{A}$ and $\mathcal{B}$ the matrix $\mathcal{C}=\mathcal{A}\otimes\mathcal{B}$ is obtained as follows: $$\mathcal{C}=\left( \begin{array}{ccc} \mathcal{A}_{11} \mathcal{B}& ... & \mathcal{A}_{1n}\mathcal{B}\\ \mathcal{A}_{m1}\mathcal{B}& ... & \mathcal{A}_{mn} \mathcal{B}\\ \end{array} \right),$$ where the size of $\mathcal{A}$ is $m \times n$. According to the above definition, provided that $|e\rangle=(1,0)^t$ and $|-\rangle=(0,1)^t$, we get, for instance, $|e\rangle \otimes |-\rangle=(0,1,0,0)^t$. Scattering field in momentum-representation {#app:scattering-field} =========================================== Within the scattering approach one assumes that far from the scattering center the particle is free and its linear momentum $p=\hbar k$ is a good quantum number for labeling the scattering states. According to this, the scattering field can be expanded in the eigenstates of the linear momentum operator $\hat{\mathcal{P}}=\sum_{\beta}P_{\beta\beta} \otimes \eta_{\beta} \mathbb{I}_{sp}(-i\hbar \partial_x)$. The eigenstates of $\hat{\mathcal{P}}$ in our tensor product notation are defined by $$\hat{\mathcal{P}}| \Psi_{\beta \sigma k}(x)\rangle =\hbar k | \Psi_{\beta \sigma k}(x)\rangle,$$ where $| \Psi_{\beta \sigma k}(x)\rangle=(\sqrt{2\pi})^{-1}|\beta\rangle \otimes |\sigma\rangle e^{i\eta_{\beta}k x}$ and $1/\sqrt{2\pi}$ is a normalization factor. These set of states satisfy the completeness relation: $$\sum_{\beta\sigma}\int dk |\Psi_{\beta \sigma k}(x)\rangle \langle \Psi_{\beta \sigma k}(x')|=\mathbb{I}_{4\times 4}\delta(x-x'),$$ where the identity operator is written as $\mathbb{I}_{4\times4}=\sum_{\beta\sigma}P_{\beta\beta}\otimes |\sigma \rangle \langle \sigma|$. The generic wave-function $|\Psi(x)\rangle=\sum_{\beta,\sigma}\phi_{\beta \sigma}(x)|\beta\rangle\otimes|\sigma\rangle$, can be written in the basis set of the eigenstates of $\hat{\mathcal{P}}$ as: $$|\Psi(x)\rangle=\sum_{\beta \sigma}\int \frac{dk }{\sqrt{2\pi}}\Phi_{\beta \sigma}(k)|\beta \rangle \otimes |\sigma \rangle e^{i \eta_{\beta}k x},$$ where the coefficients $\Phi_{\beta \sigma}(k)=\int dx' \phi_{\beta\sigma}(x') e^{-i\eta_{\beta}k x'}$, are related to the projection of $|\Psi(x)\rangle$ on the eigenvectors of $\hat{\mathcal{P}}$. Self-consistent determination of the chemical potential and the conductance tensor {#app:conductance-tensor} ================================================================================== As described in the main text, in the non-symmetric case the chemical potential of the scattering region $\mu_s\neq (\mu_1+\mu_2)/2=\mu$ and thus a self-consistent computation of $\mu_s$ is required. Its calculation follows from the charge current conservation[@dong2003], $\sum_i\bar{J}^{i}_0(V,\mu_s(V))=0$. 
Since, in principle, such a condition implies the solution of an integral equation, a great simplification follows in the linear response regime in the applied voltage bias $V$. In this case the charge current flowing through the $i$-th lead is obtained as $I_i=\sum_j g_{ij} (\mu_j-\mu_s)$, where $g_{ij}$ is the conductance tensor and charge conservation implies $\sum_i I_i=0$. Solving the latter equation (Kirchhoff’s law) with respect to $\mu_s$ we have: $$\label{eq:chem-pot-sc-region} \mu_s=\frac{\sum_{ij}g_{ij}\mu_j}{\sum_{ij}g_{ij}}.$$ From the equation above it immediately follows that in the case of a two-terminal[@nota4] symmetric system (where $g_{11}=g_{22}$) the chemical potential $\mu_s$ is bias independent, $\mu_s=\mu$. More generally, one can analyze the potential drops at the left and right junctions, i.e.: $$\mu_j-\mu_s=q_e V \lambda_j,$$ where the coefficients $\lambda_j$ are functions of $g_{ij}$ as shown below: $$\begin{aligned} &&\lambda_1=-\frac{g_{12}+g_{22}}{\sum_{ij}g_{ij}}\\\nonumber && \lambda_2=\frac{g_{21}+g_{11}}{\sum_{ij}g_{ij}}.\end{aligned}$$ Observing that $\lambda_2-\lambda_1=1$, one correctly recovers that $\mu_2-\mu_1=q_e V (\lambda_2-\lambda_1)=q_e V$, while for a symmetric system $\lambda_j=(-)^j/2$. From the definitions above one immediately infers that the electrochemical potential of the scattering region $\mu_s$ is displaced from $\mu=(\mu_1+\mu_2)/2$ according to the relation: $$\label{eq:mu-s} \mu_s=\mu+\frac{q_e V}{2}\Bigl[\frac{g_{22}-g_{11}}{\sum_{ij}g_{ij}}\Bigr].$$ Noticing that in the linear response regime $I_i=G_iV$, where $G_i=\sum_j g_{ij}\lambda_j$, and using the expression above for $\lambda_j$, one obtains[@nota5]: $$\label{eq:two-probe cond} G=\frac{g_{22}g_{11}-g_{21}g_{12}}{\sum_{ij}g_{ij}}.$$ For symmetric systems the relation above can be simplified as $$\label{eq:g_sym_bis} G_{sym}=(g_{11}-g_{12})/2.$$ Let us note that Eq.(\[eq:g\_sym\_bis\]) and Eq.(\[eq:g\_sym\]) in the main text reproduce the result given in Eq.(11) of Ref.\[\] obtained using Lambert’s method. However, since we are considering a one-dimensional structure instead of a two-dimensional one, the angular integration $\int d\theta \cos(\theta)[\cdot \cdot \cdot ]$ is not present in our result. Boundary conditions of the scattering problem {#app:boundary} ============================================= To determine the scattering matrix coefficients one has to use the mode-matching technique as formulated in the theory of quantum wave-guides. According to this method, the BdG equation is solved in each branch and the resulting eigenmodes are used to expand the scattering wave-function. Each wave-function is then determined by imposing proper boundary conditions[@boundary-conditions]. For example,
in the presence of a single particle magnetic potential $U(x)=\gamma \delta(x) \hat{n} \cdot \vec{\sigma}$, $\hat{n}=(n_x,n_y,n_z)$ being the unit vector describing the magnetization direction ($|\hat{n}|^2=1$), the BdG wavefunction $\Psi(x)=(u_{\uparrow}(x),u_{\downarrow}(x),v_{\uparrow}(x),v_{\downarrow}(x))^t$ must satisfy the following boundary conditions: $$\begin{aligned} &&\Psi(x=0^+)=\Psi(x=0^-)\\\nonumber &&\partial_x \Psi(x=0^+)-\partial_x \Psi(x=0^-)=\frac{2m \gamma}{\hbar^2}\mathcal{A}\Psi(x=0^+),\end{aligned}$$ where the $4 \times 4$ matrix $\mathcal{A}$ is defined as follows: $$\mathcal{A}=\left( \begin{array}{cc} \hat{n} \cdot \vec{\sigma} & 0 \\ 0 & \hat{n} \cdot \vec{\sigma}^{\ast} \\ \end{array} \right).$$ In the case of a non-magnetic potential $U(x)=\gamma\delta(x)\mathbb{I}_{sp}$ the previous boundary conditions must be modified substituting $\mathcal{A}$ with the $4 \times 4$ identity $\mathbb{I}_{4 \times 4}$, i.e. $\mathcal{A} \rightarrow \mathbb{I}_{4 \times 4}$. In the absence of potential, i.e. $\gamma=0$, the boundary conditions imply the continuity of the BdG wavefunction and its derivative. Acknowledgements {#acknowledgements .unnumbered} ================ The authors wish to thank G. Annunziata, C. Attanasio, M. Cuoco, A. Di Bartolomeo, F. Giubileo , G. Lambiase and A. Sorgente for helpful discussions during the preparation of the present work. [99]{} A. F. Andreev, Sov. Phys. JETP **19**, 1228 (1964) G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B **25**, 4515 (1982) M. J. M. de Jong and C. W. J. Beenakker, Phys. Rev. Lett. **74**, 1657 (1995) M. P. Anantram and S. Datta, Phys. Rev. B **53**, 16390 (1996) M. Büttiker, Phys. Rev. B **46**, 12485 (1992) C. J. Lambert, J. Phys.: Condens. Matter **3**, 6579 (1991); see also A. M. Zagoskin, *Quantum Theory of Many-Body Systems* (Springer-Verlag, New York, 1998) P. G. de Gennes, *Superconductivity of Metals and Alloys* (W. A. Benjamin, New York, 1966) For a detailed discussion of this system see, for instance, J. Linder, M. Cuoco, and A. Sudb[ø]{}, Phys. Rev. B **81**, 174526 (2010); for a discussion of F/S/F system within a modified BTK approach see J. Linder, T. Yokoyama, and A. Sudb[ø]{}, Phys. Rev. B **79**, 224504 (2009) Notice that the spinorial particle-hole notation adopted in this work follows from the Bogoliubov field transformation $\psi_{\sigma}(x)=\sum_n [u_{n\sigma}(x) \gamma_n+v_{n\sigma}^{\ast}(x)\gamma_n^{\dagger}]$. Using the Bogolons rapresentation the BCS Hamiltonian can be diagonalized assuming the form $H_{BCS}=\sum_n E_n \gamma^{\dagger}_n\gamma_n+E_g$. On this point see also Ref.\[\]. This assumption can be relaxed performing a self-consistent computation of the s-wave superconducting gap adapting the procedure described in M. Božović and Z. Radović, Europhys. Lett. **70**, 513 (2005) See Eq.(23) given by Taro Yamashita, Hiroshi Imamura, Saburo Takahashi, and Sadamichi Maekawa, Phys. Rev. B **67**, 094515 (2003) Notice that a more precise notation should be of the form $|\beta\rangle\otimes|\sigma\rangle_{\beta}$, where $|\sigma\rangle_{h}=|\sigma\rangle_{e}^{\ast}$ provided that $|\sigma\rangle_{e}$ is eigenvector of the spin dependent part of the single particle potential (proportional to $\hat{n}\cdot \vec{\sigma}$). However, in case of non-magnetic leads the scattering events can be described using the eigenstates of $\sigma_z$ and thus the notation can be simplified as done in the main text. 
In general the scattering relation takes the form $b^{\sigma}_{i\beta}(t)=\sum_{i'\sigma'\beta'}\int dt'S^{\beta\beta'}_{ii'\sigma\sigma'} (t,t') a^{\sigma'}_{i'\beta'}(t')$. Performing an istantaneous scattering approximation, i.e. $S(t,t')\approx \delta(t-t')S(t)$, a semplified scattering relation is obtained. Within this framework the static and adiabatic scatterer regime can be described. The scattering relation can be generalized to include hidden degrees of freedom as done in V. Nam Do, P. Dollfus, and V. Lien Nguyen, Phys. Rev. B **76**, 125309 (2007). The resulting theory is equivalent to the Büttiker fictitious leads method. F. Romeo and R. Citro, Phys. Rev. B **81**, 045307 (2010); A. Sorgente, F. Romeo, and R. Citro, Phys. Rev. B **82**, 064413 (2010). R. J. Soulen Jr., J. M. Byers, M. S. Osofsky, B. Nadgorny, T. Ambrose, S. F. Cheng, P. R. Broussard, C. T. Tanaka, J. Nowak, J. S. Moodera, A. Barry, J. M. D. Coey, Science **282**, 85 (1998) K. Ohnishi, T. Kimura and Y. Otani, Appl. Phys. Lett. **96**, 192509 (2010) Despite the small step considered, one should keep in mind that this sampling could not be totally suited to discern oscillations with period less that the sampling one and then reveal the true multiple frequencies contained in the curve. Jack C. Sankey, Yong-Tao Cui, Jonathan Z. Sun, John C. Slonczewski, Robert A. Buhrman and Daniel C. Ralph, Nature Physics **4**, 67 (2008) See for instance Fig.2 given in Alan Kalitsov, Mairbek Chshiev, Ioannis Theodonis, Nicholas Kioussis, and W. H. Butler, Phys. Rev. B **79**, 174416 (2009) See for instance, Z. C. Dong, R. Shen, Z. M. Zheng, D. Y. Xing, and Z. D. Wang, Phys. Rev. B **67**, 134515 (2003) Notice that for a two-terminal system the Onsager’s symmetry $g_{12}=g_{21}$ is respected. The two probe conductance formula given in Eq.(\[eq:two-probe cond\]) has been derived following the standard method used by R. Seviour and C. J. Lambert, A. F. Volkov, Phys. Rev. B **58**, 12338 (1998) and provides the same result (see Eq.(17) of the cited work). Z. C. Dong, Z. M. Zheng and D. Y. Xing, J. Phys.: Condens. Matter **16**, 6099 (2004) S. Griffith, Trans. Faraday Soc. **49**, 650 (1953)
--- abstract: 'From a determinant identity for quantum transfer matrices of a generalized quantum integrable spin chain model we deduce their generating functions. We construct an isomorphism of Clifford algebra modules between the space of sequences of transfer matrices and the boson space of symmetric functions. As an application, tau-functions of transfer matrices immediately arise from classical tau-functions of symmetric functions.' address: 'Department of Mathematics, Kansas State University, Manhattan, KS 66506, USA' author: - Natasha Rozhkovskaya title: Action of Clifford algebra on the space of sequences of transfer operators --- Introduction ============ Connections between transfer matrices of quantum integrable models and solutions of classical integrable hierarchies of non-linear partial differential equations, observed in [@K1], were later developed in a series of papers [@Z1], [@Z3], [@Z2], [@Z4], etc. The authors of [@Z1] defined a generating function of commuting quantum transfer matrices of a generalized quantum integrable spin chain. They proved that this master $T$-operator obeys Hirota bilinear equations (see e.g. [@D1], [@Hirota1], [@Miwa1]), identifying it with a tau-function of the KP hierarchy. In [@Z3], [@Z2], [@Z4] a similar identification is obtained for the quantum inhomogeneous XXX spin chain of $GL(N)$ type with twisted boundary conditions, the quantum spin chain with trigonometric $R$-matrix, and the quantum Gaudin model with twisted boundary conditions. In this note we consider another generating function for quantum transfer matrices of a generalized quantum integrable spin chain. We describe combinatorial properties of this generating function and define the action of Clifford algebra on a space spanned by sequences of quantum transfer matrices. This approach suggests another interpretation of the connections of transfer matrices to the $\tau$-function formalism, namely the one that refers to the construction of the classical Hirota bilinear equations from the vertex operator action of Clifford algebra on the boson space of symmetric functions. One can mention that the master $T$-operator of [@Z1] generalizes the right-hand side of the Cauchy-Littlewood identity $$\begin{aligned} \prod_{ij}(1-a_ix_j)^{-1}=\sum_{\lambda} s_{\lambda} ({ a})s_{\lambda} ({ x}),\end{aligned}$$ where $s_\lambda({a})$, $s_\lambda({ x})$ are symmetric Schur functions in two independent sets of variables $a=(a_1, a_2,\dots )$ and $x=(x_1, x_2,\dots )$ (see e.g. [@Md], I.4 (4.3)), while the generating function in this note is the analogue of the generating function for Schur functions of the form $$\begin{aligned} \prod_{i<j} \left(1-\frac{x_j}{x_i}\right) \prod_{i=1}^{l} H(x_i)=\sum_{l(\lambda)\le l} s_\lambda ({ a}) x_1^{\lambda_1}\cdots x_l^{\lambda_l},\label{fsym} \end{aligned}$$ where $H(x)= \sum_k h_k({ a}) x^k$ is the generating function for complete symmetric functions $h_k({ a})$ (see e.g. [@Md], I.5, Example 29 (7)). In Section 2 we introduce the necessary notations and definitions and describe the properties of the generating functions of transfer matrices. In Section 3 we construct the action of Clifford algebra on the space of sequences of $T$-operators. In Section 4 we make some remarks on the bosonisation of the action of Clifford algebra. Properties of Transfer operators ================================= Notations and definitions ------------------------- Let $\{e_{ij}\}_{i,j=1,\dots, N}$ be the set of standard generators of the universal enveloping algebra $U({\mathfrak {gl} }_N({{\mathbb C}}))$.
The action of these generators on ${{\mathbb C}}^N$ is given by the elementary matrices $\{ E_{ij}\}_{i,j=1,\dots, N}$. Let a partition $\lambda=(\lambda_1\ge \dots\ge \lambda_l>0)$ with $\lambda_i\in{{\mathbb Z}}_{\ge 0}$ and length $l\le N$ be the highest weight of an irreducible finite-dimensional $U({\mathfrak {gl} }_N({{\mathbb C}}))$-representation $\pi_\lambda$ acting on the space $ V_\lambda$. We will use the same notation for the corresponding representation of $GL_N({{\mathbb C}})$. Consider the $R$-matrix given by a linear function in the variable $u$ with coefficients in ${\operatorname{End}}\,V_\lambda \otimes{\operatorname{End}}\,{{\mathbb C}}^N $: $$\begin{aligned} \label{Rm} R(u)= 1+\frac{1}{u}\sum_{ij} \pi_\lambda(e_{ji})\otimes E_{ij} . \end{aligned}$$ Let $R_{0i} (u)$ be the operator that acts as the $R$-matrix (\[Rm\]) on the $0$-th component $V_\lambda$ and on the $i$-th component ${{\mathbb C}}^N$ of the tensor product $V_\lambda \otimes ({{\mathbb C}}^N)^{\otimes n}$. We fix a collection of complex parameters $ a= (a_i )_{i=1,2,\dots}$. Let $g$ be an invertible $N$ by $N$ matrix, called the twist matrix, with eigenvalues $(g_1,\dots, g_N)$. Consider a family of quantum transfer matrices ($T$-operators) $T_\lambda(u)=T_{\lambda, n}^N (u, a)(g)$ defined by $$\begin{aligned} \label{Tm} T_{\lambda, n}^N (u, a)(g)=\begin{cases} tr_{V_\lambda}\left(R_{01} (u-a_1)\dots R_{0n}(u-a_n) (\pi_\lambda(g)\otimes Id^{\otimes n})\right),& \text{if} \quad l\le N,\\ 0,&\text{if} \quad l> N, \end{cases} \end{aligned}$$ where the trace is taken over the $0$-th component $V_\lambda$. We think of these $T$-operators as functions from $GL_N({{\mathbb C}})$ to the space of $\text{Hom}(({{\mathbb C}}^N)^{\otimes n} )[1/u]$. We will omit the $n, g, a$ and $N$ labels and write $T_\lambda(u)$ or $T^N_\lambda(u)$ when no confusion arises. The $R$-matrix (\[Rm\]) is an image of the universal $R$-matrix of the Yangian $Y({\mathfrak {gl} }_N({{\mathbb C}}))$, and the quantum Yang-Baxter equation on the universal $R$-matrix implies that the $T$-operators (\[Tm\]) commute for a fixed twist matrix $g$, a fixed collection of parameters $a=(a_i)_{i=1,2,\dots}$ and for all $u$ and $\lambda$. \[rem1\] When $n=0$, the element $T_\lambda(u)=tr\,{\pi_\lambda (g)}$ coincides with the value of the character of $\pi_\lambda$ on $g$ given by the Schur polynomial $s_\lambda(g_1,\dots, g_N)$ of the eigenvalues of the matrix $g$. Also $T_\lambda(u) \to s_\lambda(g_1,\dots, g_N)\,Id^{\otimes n}$ when $u\to\infty$. CBR determinant --------------- Introduce the notation $ h_k(u)= T_{(k)}(u)$ for $k=1,2,\dots$. It is also convenient to set $h_k(u)=0$ for $k=-1, -2,\dots$, and $h_0(u)=1$. Then the following remarkable relation between $T$-matrices was observed in [@Baz], [@Cher] (see also [@KNS]). \[Cherednik-Bazhanov-Reshetikhin (CBR) determinant\] $$\begin{aligned} \label{Tdet1} T_{\lambda}(u)=\det_{i,j=1,\dots,l} h_{\lambda_i-i+j}(u-j+1). \end{aligned}$$ This formula is an analogue of the Jacobi–Trudi identity for Schur symmetric functions. It not only implies several important properties of the transfer matrices $T_\lambda(u)$, but also leads to an action of the Clifford algebra of fermions on the linear space spanned by sequences of $T$-matrices. For the latter goal we follow the approach developed in [@JR1], [@JR2]. Newton’s identity ----------------- For any $u$ set $e_k(u)=T_{(1^k)}(u)$ for $k=1,2,\dots$, and $e_k(u)=0$ for $k=-1,-2,\dots$, as well as $e_0(u)=1$.
Then from (\[Tdet1\]) for $k=1,2,\dots$, $$\begin{aligned} \label{edet1} e_k(u)= T_{(1)^k} (u)=\det_{i,j=1,\dots, k}[h_{1-i+j} (u-j+1)].\end{aligned}$$ a\) The following analogue of Newton’s formula relation holds: $$\begin{aligned} \label{Newt} \sum_{p=-\infty} ^{+\infty}(-1)^{a-p} h_{b+p}(u-p) e_{-p-a} (u-p-1)=\delta_{a,b} \quad \text{for any $a,b\in {{\mathbb Z}}$.}\end{aligned}$$ b) Let $\lambda ^\prime=(\lambda^\prime_1,\dots,\lambda^\prime_k)$ be the conjugate partition of the partition $\lambda$. Then the dual (equivalent) form of (\[Tdet1\]) holds: $$\begin{aligned} \label{Tdet2} T_\lambda(u)= \det_{i,j=1,\dots,k}[ e_{\lambda^\prime _i-i+j}(u+j-1)].\end{aligned}$$ Relation (\[Newt\]) follows from the recursive expansion of the determinant (\[edet1\]) for $e_{b-a} (u+b-1)$ by the first row. Then (\[Newt\]) implies that the upper triangular infinite matrices $$\begin{aligned} \mathcal{H} = (\,h_{p-b} (u-p)\,)_{b,p\,\in {{\mathbb Z}}},\quad \mathcal{E} =(\, (-1)^{a-p} e_{a-p} (u-p-1)\,)_{p,a\,\in {{\mathbb Z}}} \end{aligned}$$ satisfy the identity $\mathcal{H}\mathcal{E}=Id$, and (\[Tdet2\]) follows from the standard argument on the relations on minors of matrices that are inverse of each other (see e.g. Lemma A.42 of [@Fult]). For some applications it is convenient to write Newton’s formula in the form of an equation on generating functions: $$YH(u|t) YE(u-1|t)=1,$$ where $$YH(u|t)= \sum_{s=0}^{\infty} (te^{-\partial_u})^s h_s(u), \quad YE(u|t)= \sum_{s=0}^{\infty} (-1)^{s} e_s(u)(te^{-\partial_u})^s$$ are generating functions for [*operators*]{} acting on the space of $\text{Hom}\,({{\mathbb C}}^N)^{\otimes n}$-valued functions in variable $u$, and $e^{\partial_u} (f(u))= f(u+1)$ is the a shift operator of the variable $u$. Indeed, $$\begin{aligned} YH(u|t) YE(u-1|t)= \sum_{s,p\, =0}^{\infty} h_s(u-s)(-1)^p e_p(u-s-1)(te^{-\partial_u})^{s+p} \\ =\sum_m\left( \sum_{s=0}^{\infty} h_s(u-s)(-1)^{m-s} e_{m-s}(u-s-1) \right) (te^{-\partial_u})^{m}=1.\end{aligned}$$ From definition (\[Tm\]), it follows that infinite matrix $\mathcal{E}$ and infinite formal series $YE(u|t)$ contain only finite number of non-zero terms $e_k(u)= e^N_{k, n}(u,a ) (g)$. In [@MTV1] common eigenvalues and common eigenvectors of higher transfer matrices of an XXX-type model are studied through a certain generating function of transfer matrices with coefficients in the Yangian $Y({\mathfrak {gl} }_N({{\mathbb C}}))$ (see formula (4.16) in [@MTV1]), and earlier in [@Tal], the higher transfer matrices of the Gaudin model were constructed explicitly as a limit of that generating function. Up to a multiplication by a rational function of $u$, the generating function $YE(u|t)$ is the image of the generating function considered in [@MTV1], [@Tal] under the evaluation representation ${{\mathbb C}}_N(a_1)\otimes\dots\otimes {{\mathbb C}}_N (a_n)$ of $Y({\mathfrak {gl} }_N({{\mathbb C}}))$. Transfer matrices labeled by integer vectors {#sec_int} --------------------------------------------- Formula (\[Tdet1\]) allows us to extend the notion of $T_\alpha(u)$ for any integer vector $\alpha=(\alpha_1,\dots, \alpha_l)$, $\alpha_i\in {{\mathbb Z}}$, by literally setting $$\begin{aligned} \label{hhdet} T_{\alpha}(u)=\det [ h_{\alpha_i-i+j}(u-j+1)].\end{aligned}$$ Note that $$\begin{aligned} \label{transposition} T_{(\dots, \alpha_i, \alpha_{i+1},\dots)}(u)= - T_{(\dots, \,\alpha_{i+1} -1\,, \,\alpha_{i}+1\, ,\dots)}(u), $$ and that $T_\alpha(u)=0$ whenever $\alpha_i -i= \alpha_j-j$ for some $i, j$. 
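The straightening convention just introduced is purely combinatorial, and it may help to make it explicit. The following Python sketch (a bookkeeping aid only; the operator-valued entries $h_k(u-j+1)$ are left abstract and the function name is ours) computes the pair $(\mathrm{sign},\lambda)$ with $T_\alpha=\mathrm{sign}\cdot T_\lambda$ from an integer vector $\alpha$, returning sign $0$ when $T_\alpha(u)=0$.

```python
def straighten(alpha):
    """Return (sign, lam) with T_alpha = sign * T_lam, following
    alpha - rho = sigma(lam - rho), rho = (0, 1, ..., l-1);
    sign = 0 encodes T_alpha(u) = 0."""
    l = len(alpha)
    beta = [alpha[i] - i for i in range(l)]        # alpha - rho
    if len(set(beta)) < l:                         # repeated entry -> T_alpha = 0
        return 0, None
    inversions = sum(1 for i in range(l) for j in range(i + 1, l) if beta[i] < beta[j])
    lam = [b + i for i, b in enumerate(sorted(beta, reverse=True))]   # add rho back
    if lam[-1] < 0:                                # no partition exists -> T_alpha = 0
        return 0, None
    while lam and lam[-1] == 0:                    # drop trailing zero parts
        lam.pop()
    return (-1) ** inversions, tuple(lam)

# Adjacent exchange rule: T_(..., a_i, a_{i+1}, ...) = -T_(..., a_{i+1}-1, a_i+1, ...)
assert straighten((1, 3)) == (-1, (2, 2)) and straighten((2, 2)) == (1, (2, 2))
# Vanishing case: (1, 2) - rho = (1, 1) has a repeated entry.
assert straighten((1, 2)) == (0, None)
# A partition is returned unchanged with sign +1.
assert straighten((3, 1, 1)) == (1, (3, 1, 1))
```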
Similarly, we can use formula (\[Tdet2\]) to extend the definition of transfer matrices to integer vectors. Set $$\begin{aligned} \label{eedet1} T_{\alpha^\prime}(u)=\det_{i,j=1,\dots,k}[ e_{\alpha_i-i+j}(u+j-1)]. \end{aligned}$$ While we do not define here the conjugation on arbitrary integer vector, the notation $T_{\alpha^\prime}(u)$ for the expression (\[eedet1\]) is justified by the following lemma. Let $\alpha=(\alpha_1,\dots, \alpha_l)$ be an integer vector, let $\rho=\rho_l= (0,1,\dots,l-1)$. Let $T_\alpha(u)$ and $ T_{\alpha^\prime} (u)$ be defined by (\[hhdet\]), (\[eedet1\]). Then $$\begin{aligned} T_\alpha(u)= \begin{cases} (-1)^{\sigma} T_\lambda(u),&\quad \text{if} \quad \alpha-\rho=\sigma(\lambda-\rho)\,\text{for some partition}\, \lambda \,\text{and}\, \sigma\in S_l, \\ 0,&\text{otherwise.} \end{cases} \end{aligned}$$ $$\begin{aligned} T_{\alpha^\prime}(u)= \begin{cases} (-1)^{\sigma} T_{\lambda^\prime}(u),&\quad \text{if} \quad \alpha-\rho=\sigma(\lambda-\rho)\,\text{for some partition}\, \lambda \,\text{and}\, \sigma\in S_l, \\ 0,&\text{otherwise.} \end{cases}\end{aligned}$$ ![Non-zero values of $T_{(\alpha_1,\alpha_2)}(u)$ for $N\ge 2$.[]{data-label="fig:1"}](fig1.eps "fig:"){width="50mm"}\ Figure \[fig:1\] illustrates the distribution of non-zero values of $T$-operators for integer vectors $(\alpha_1, \alpha_2)$ for $N\ge 2$. Black points represent integer vectors $(\alpha_1,\alpha_2)$ with non-vanishing transfer matrices $T_{(\alpha_1,\alpha_2)}(u)$. Generating functions of $T$-operators ------------------------------------- Set $$\begin{aligned} H^N(x| u)= \sum_{k=0}^{\infty} h^N_k(u) x^k, \quad E^N(x| u)= \sum_{k=0}^{\infty}(-1)^k e^N_k(u) x^k. \end{aligned}$$ For any integer vector $\alpha$, the transfer matrix $T^N_\alpha(u)$ is the coefficient of $x_1^{\alpha_1}\dots x_l^{\alpha_l-l+1}$ in $$\begin{aligned} \label{H11} H^N(x_1,\dots, x_l|u)&=\det \left[{x_i^{-j+1}}H^N(x_i|u-j+1)\right]\\ &=\left.\left(\prod_{1\le i<j\le l}\left( \frac{ e^{-\partial_{u_j} } } {x_j}-\frac{ e^{-\partial_{u_i}}} {x_i}\right)\prod_{i=1}^{l}H^N(x_i|u_i)\right)\right\vert_{u_1=u_2=\dots= u_l=u}.\notag\end{aligned}$$ Similarly, $ (-1)^{A}T^N_{\alpha^\prime}(u)$ with $A=\sum(\alpha_i-i+1)$ is the coefficient of $x_1^{\alpha_1}\dots x_l^{\alpha_l-l+1}$ in $$\begin{aligned} \label {E11} E^N(x_1,\dots, x_l|u)&=\det \left[{(-x_i)^{-j+1}}E^N(x_i|u+j-1)\right]\\ &=\left.\left(\prod_{1\le i<j\le l}\left( \frac{ e^{\partial_{u_i} } } {x_i}-\frac{ e^{\partial_{u_j}}} {x_j}\right)\prod_{i=1}^{l}E^N(x_i|u_i) \right)\right\vert_{u_1=u_2=\dots= u_l}. 
\notag\end{aligned}$$ The first statement of (\[H11\]) follows from the expansion of the determinant as a sum over permutations: $$\begin{aligned} &\sum_{\alpha\in {{\mathbb Z}}^{l}}T^N_\alpha(u) x_1^{\alpha_1}\dots x_l^{\alpha_l-l+1}= \sum_{\alpha\in {{\mathbb Z}}^{l}}\det [h^N_{\alpha_i-i+j}(u-j+1)] x_1^{\alpha_1}\dots x_l^{\alpha_l-l+1}\\ &= \sum_{\alpha\in {{\mathbb Z}}^{l}}\sum_{\sigma\in S_l} (-1)^{\sigma}h^N_{\alpha_1-1+\sigma(1)}(u-\sigma(1)+1)\dots h^N_{\alpha_l-l+\sigma(l)}(u-\sigma(l)+1) x_1^{\alpha_1}\dots x_l^{\alpha_l-l+1}\\ &=\sum_{\sigma\in S_l}\sum_{(a_1,\dots, a_l)\in {{\mathbb Z}}^{l}} (-1)^{\sigma}h^N_{a_1}(u-\sigma(1)+1)\dots h^N_{a_l}(u-\sigma(l)+1) x_1^{a_1-\sigma(1)+1}\dots x_l^{a_l-\sigma(l)+1}\\ &=\sum_{\sigma\in S_l} (-1)^{\sigma}H^N(x_1|u-\sigma(1)+1)\dots H^N(x_l|u-\sigma(l)+1) x_1^{-\sigma(1)+1}\dots x_l^{-\sigma(l)+1}\\ &=\det \left[{x_i^{-j+1}}H^N(x_i|u-j+1)\right].\end{aligned}$$ For the second part of (\[H11\]), which can be viewed as a generalization of (\[fsym\]), consider a set of independent variables $(u_1,\dots, u_l)$. Define $$H^N(x_1,\dots, x_l|u_1,\dots,u_l)=\det \left[{x_i^{-j+1}}H^N(x_i|u_i-j+1)\right].$$ Then $$\begin{aligned} H^N(x_1,\dots, x_l|u_1,\dots u_l) =\sum_{\sigma\in S_l}(-1)^{\sigma}\left({x_1}{ e^{\partial_{u_1}}}\right) ^{1-\sigma(1)}H^N(x_1|u_1) \dots\left( {x_l}e^{\partial_{u_l}}\right) ^{1-\sigma(l)}H^N(x_1|u_1) \\ =\det[ (x_i e^{\partial_{u_i}}) ^{1-j}] \prod_{i=1}^{l}H^N(x_i|u_i) =\prod_{1\le i<j\le l}\left( \frac{ e^{-\partial_{u_j} } } {x_j}-\frac{ e^{-\partial_{u_i}}} {x_i}\right)\prod_{i=1}^{l}H^N(x_i|u_i),\end{aligned}$$ and the second statement follows. The proof of (\[E11\]) is exactly the same. Clifford algebra action on the space of sequences of transfer matrices ======================================================================= Clifford algebra action on symmetric functions ---------------------------------------------- Our next goal is to define the action of Clifford algebra on a space generated by transfer matrices, which will allow us to establish connections to the classical boson-fermion correspondence and $\tau$-function formalism. Clifford algebra $Cl$ is generated by elements $\{\psi^{\pm}_k\}_{k\in {{\mathbb Z}}}$ with the relations $$\begin{aligned} \psi^+_k\psi^-_l+\psi^-_l\psi^+_k=\delta_{k,l},\quad \psi^\pm_k\psi^\pm_l+\psi^\pm_l\psi^\pm_k=0. \label{pmrel}\end{aligned}$$ This infinite Clifford algebra originates from the vector space $W=(\oplus_i{{\mathbb C}}\psi^+_i)\bigoplus (\oplus_i{{\mathbb C}}\psi^-_i)$ with the bilinear form that is defined by $(\psi^+_i,\psi^-_j)=\delta_{ij}$, and the rest of the values being zeros. The crucial component of the classical boson-fermion correspondence refers to the action of $Cl$ on the space of symmetric functions, which we reproduce here in purely combinatorial terms. Let $z$ be a formal variable and let ${\mathcal{B} }^{(m)}$ be the linear span of the set of basis vectors $\{ z^ms_\lambda\}$, where $s_\lambda$ are Schur symmetric functions taken over all partitions $\lambda$. Note that each ${\mathcal{B} }^{(m)}$ is just a copy of the ring of symmetric functions. 
Set $${\mathcal{B} }= \oplus_{m\in{{\mathbb Z}}} {\mathcal{B} }^{(m)}.$$ The action of operators $\psi^\pm_{k}$ on $\{z^m s_\lambda\}$ is given by the following rules: $$\begin{aligned} \psi^+_{k} (z^ms_\lambda) &= z^{m+1}s_{(k-m-1,\lambda)} ,\label{p10}\\ \psi^-_k (z^ms_\lambda)&=z^{m-1}\sum_{t=1}^{\infty}(-1)^{t+1}\delta_{k-m-1,\,\lambda_t-t}\, s_{(\lambda_1+1,\dots, \lambda_{t-1}+1, \lambda_{t+1},\lambda_{t+2}\dots) }, \label{p100} $$ where again for any integer vector $\alpha$ of $l$ parts, one sets $s_\alpha= (-1)^{\sigma}s_\lambda$, if $\alpha-\rho= \sigma (\lambda-\rho)$ for some partition $\lambda$ and some permutation $\sigma$, and $s_\alpha=0$ otherwise. Note that only one term survives in the sum (\[p100\]). One of the easiest ways to check that operators $\psi^\pm_{k}$ satisfy the relations of Clifford algebra is through identification of basis elements $z^m s_\lambda$ with semi-infinite monomials of fermionic Fock space $$z^m s_\lambda \sim v_{m+\lambda_1}\wedge v_{m-1+\lambda_2}\wedge v_{m-2+\lambda_3}\wedge\dots.$$ Here $\{v_i\}_{i\in {{\mathbb Z}}}$ is a linear basis of a vector space $V=\oplus_{i\in {{\mathbb Z}}}{{\mathbb C}}v_i$. Operators $\psi^\pm_{k}$ are creation and annihilation operators on the linear span of such monomials (see e.g. [@D1], [@K] or [@Bom] for more details). Formulae (\[Tdet1\]), (\[Tdet2\]) suggest that we can define Clifford algebra action on the space of transfer matrices as well. This construction is described below, and first, we would like to explain the necessity of some adjustments. Recall that definition (\[Tm\]) of transfer matrix $T_\lambda(u)=T_{\lambda, n}^N (u, a)(g)$ depends on the size $N$ of an invertible matrix $g$. For consistency we are forced to set $T_{\lambda,n}^N(u,a)(g)=0$ whenever the number $l$ of parts of $\lambda$ is greater than $N$. In that sense, transfer matrices $T_{\lambda,n}^N(u, a) (g)$ create natural analogue of Schur symmetric polynomials rather than Schur symmetric functions, which is not yet enough to construct a $Cl$-module of the type ${\mathcal{B} }$. Hence, to get the analogue of Schur symmetric functions, we have to consider the sequences of the transfer matrices $\{T_{\lambda, n}^N (u, a)(g)\}_{N=1}^{\infty}$. Since we deal only with linear vector space structure, the questions of stability of these sequences, important for the ring structure of symmetric functions, seem to be of no concern for us here. Clifford algebra action on the space of sequences of transfer matrices. ----------------------------------------------------------------------- Let $n\in {{\mathbb N}}$ and $a=(a_1,\dots, a_n)$ be fixed parameters. As above, $h_k(u)=h_{k,n}^N(u,a)$ is a function from $GL_N({{\mathbb C}})$ to $\text{Hom}( {{\mathbb C}}^N)^{\otimes n}\left[1/u\right]$, defined by $$\begin{aligned} h_{k,n}^N(u,a) (g)=tr_{Sym^k{{\mathbb C}}^N} (R_{01} (u-a_1)\dots R_{0n} (u-a_n) Sym^k(g)\otimes Id^n),\end{aligned}$$ with $R$-matrix (\[Rm\]) for $V_\lambda= Sym^k{{\mathbb C}}^N$. Recall that for any partition $\lambda= (\lambda_1\ge\lambda_2\ge\dots\ge\lambda_l>0)$, $$\begin{aligned} T^N_{\lambda,n}(u,a) (g)=\begin{cases} \det_{i,j=1,\dots,l} [h^N_{\lambda_i-i+j,n}(u-j+1,a)(g)] , & \text{if} \quad l\le N,\\ 0,&\text{if} \quad l> N \end{cases} \end{aligned}$$ defines a function $T^N_{\lambda,n}(u,a)$ from $GL_N({{\mathbb C}})$ to $\text{Hom}( {{\mathbb C}}^N)^{\otimes n}\left[1/u\right]$. 
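For readers who wish to experiment with rules (\[p10\])–(\[p100\]) above, the following Python sketch realizes $\psi^{\pm}_k$ on formal basis labels $(m,\lambda)$, i.e. a charge and a partition, using the same straightening of integer vectors as before. It is only a combinatorial bookkeeping device (the names `straighten`, `psi_plus`, `psi_minus` are ours), and the final assertions check the relation $\psi^+_k\psi^-_k+\psi^-_k\psi^+_k=1$ on one basis vector for which a single term survives.

```python
def straighten(alpha):
    """(sign, lam) with s_alpha = sign * s_lam; sign = 0 means s_alpha = 0."""
    beta = [a - i for i, a in enumerate(alpha)]
    if len(set(beta)) < len(beta):
        return 0, None
    inv = sum(1 for i in range(len(beta)) for j in range(i + 1, len(beta))
              if beta[i] < beta[j])
    lam = [b + i for i, b in enumerate(sorted(beta, reverse=True))]
    if lam and lam[-1] < 0:
        return 0, None
    while lam and lam[-1] == 0:
        lam.pop()
    return (-1) ** inv, tuple(lam)

def psi_plus(k, m, lam):
    """Rule (p10): psi^+_k (z^m s_lam) = z^{m+1} s_{(k-m-1, lam)}.
    Returns (coefficient, new charge, new partition)."""
    sign, new = straighten((k - m - 1,) + tuple(lam))
    return sign, m + 1, new

def psi_minus(k, m, lam):
    """Rule (p100): the single surviving term with lam_t - t = k - m - 1
    (parts of lam beyond its length are zero)."""
    lam, c, t = list(lam), k - m - 1, 1
    while True:
        part = lam[t - 1] if t <= len(lam) else 0
        if part - t == c:
            padded = lam + [0] * max(0, t - len(lam))
            new = [padded[i] + 1 for i in range(t - 1)] + padded[t:]
            while new and new[-1] == 0:
                new.pop()
            return (-1) ** (t + 1), m - 1, tuple(new)
        if part - t < c:        # lam_t - t is strictly decreasing: no solution
            return 0, m - 1, None
        t += 1

# Anticommutation check on z^0 s_(2,1) with k = 3, where psi^-_k kills the vector:
m, lam, k = 0, (2, 1), 3
assert psi_minus(k, m, lam)[0] == 0
coeff_up, m_up, lam_up = psi_plus(k, m, lam)          # = + z^1 s_(2,2,1)
coeff_dn, m_dn, lam_dn = psi_minus(k, m_up, lam_up)   # back to + z^0 s_(2,1)
assert (coeff_up * coeff_dn, m_dn, lam_dn) == (1, m, lam)
```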
Set ${\bf t}_\lambda (u)$ to be the sequence of functions $T^N_{\lambda,n}(u,a)$ $$\begin{aligned} {\bf t}_\lambda(u)=\,(T^N_{\lambda,n}(u,a))_{N=1}^{\infty}\,= \,(0, \dots, 0,\, T^{l}_{\lambda,n}(u,a),\, T^{l+1}_{\lambda,n}(u,a),\,...).\end{aligned}$$ In the view of the Remark \[rem1\], ${\bf t}_\lambda(u)$ form a set of linearly independent elements. We set $\tilde {\mathcal{B} }^{(0)}$ to be the linear span of all ${\bf t}_\lambda(u)$ over all partitions $\lambda$. Set $ \tilde {\mathcal{B} }^{(m)} = z^m\tilde {\mathcal{B} }^{(0)} $ for $m\in{{\mathbb Z}}$ and $\tilde {\mathcal{B} }=\bigoplus_{m\in{{\mathbb Z}}}\tilde {\mathcal{B} }^{(m)} $ Along the lines of Section \[sec\_int\], we extend the definition of sequences of transfer matrices to the ones labeled by integer vectors of $l$ parts by setting $$\begin{aligned} {\bf t}_\alpha(u)= \begin{cases} (-1)^{\sigma} {\bf t}_\lambda(u),&\quad \text{if} \quad \alpha-\rho=\sigma(\lambda-\rho)\,\text{for a partition}\, \lambda \,\text{and }\, \sigma\in S_l, \\ 0,&\text{otherwise.} \end{cases} \end{aligned}$$ We have: $$\begin{aligned} \label{tr} {\bf t}_{(\dots, \alpha_i, \alpha_{i+1},\dots)}(u)= - {\bf t}_{(\dots, \,\alpha_{i+1} -1\,, \,\alpha_{i}+1\, ,\dots)}(u).\end{aligned}$$ We also extend the conjugation operation on the sequences labeled by integer vectors: $$\begin{aligned} {\bf t}_{\alpha^\prime}(u)= \begin{cases} (-1)^{\sigma} {\bf t}_{\lambda^\prime}(u),&\quad \text{if} \quad \alpha-\rho=\sigma(\lambda-\rho)\,\text{for a partition}\, \lambda \,\text{and }\, \sigma\in S_l, \\ 0,&\text{otherwise.} \end{cases} \end{aligned}$$ Then the action of $Cl$ on $\tilde {\mathcal{B} }$ can be defined exactly by the same formulae as the action of this algebra on the boson space of symmetric functions: $$\begin{aligned} \psi^+_{k} (z^m{\bf t}_\lambda (u)) &= z^{m+1}{\bf t}_{(k-m-1,\lambda)}(u) ,\label{Tcl1}\\ \psi^-_k (z^m{\bf t}_\lambda(u) )&=z^{m-1}\sum_{s=1}^{\infty}(-1)^{s+1}\delta_{k-m-1,\,\lambda_s-s}\, {\bf t}_{(\lambda_1+1,\dots, \lambda_{s-1}+1, \lambda_{s+1},\lambda_{s+2}\dots) } (u). \label{Tcl2} $$ The property (\[tr\]) implies that this is a well-defined action of $Cl$ on $\tilde{\mathcal{B} }$, and we have an isomorphism of $Cl$-modules. The map from $\varphi: {\mathcal{B} }\to \tilde {\mathcal{B} }$, defined on the basis vectors $\varphi(s_\lambda z^m)= t_\lambda(u)z^m$ defines the $Cl$-module isomorphism. These statements follow immediately. \(a) Let $ 1=(1, 1,1,\dots)\in \tilde {\mathcal{B} }^{(0)}$ be the vacuum vector of the graded component $\tilde {\mathcal{B} }^{(0)}$ of the bosonic space of sequences of transfer matrices, and let $\lambda=(\lambda_1\ge\lambda_2\ge\dots \ge \lambda_l\ge0)$ be a partition. Then $$\begin{aligned} \psi^+_{\lambda_1+l}\dots \psi^+_{\lambda_l+1} (1)&= z^{l} {\bf t}_\lambda,\quad \\ \psi^-_{-\lambda_l-l+1}\dots \psi^-_{-\lambda_1} (1)&= z^{-l} (-1)^{\sum\lambda_i} {\bf t}_{(1^{(\lambda_1-\lambda_2)},2^{(\lambda_2-\lambda_3)},\,\dots\,, l^{\lambda_l})},\end{aligned}$$ where we use $(1^{(a_1)},2^{(a_2)}\dots)$ notation for a partition that has $a_1$ parts of length $1$, $a_2$ parts of length $2$, etc. \(b) Let $\tau=\sum_\lambda c_\lambda s_\lambda $ be a solution of $$\begin{aligned} \label {H1} \sum_{k\in {{\mathbb Z}}}\psi^+_k( \tau)\otimes \psi^{-}_{k} (\tau)=0\end{aligned}$$ for the Clifford algebra action on the boson space ${\mathcal{B} }$ of symmetric functions. 
Then $ {\tilde \tau} =\sum_\lambda c_\lambda {\bf t}_\lambda (u)= \left(\sum_\lambda c_\lambda T^N_{\lambda, n} (u,a)\right)_{N=1}^{\infty}$ is a solution of the equation $$\sum_{k\in {{\mathbb Z}}}\psi^+_k( \tilde \tau (u))\otimes \psi^-_{k} (\tilde\tau(v))=0$$ for the Clifford algebra action on the boson space $\tilde {\mathcal{B} }$ of sequences of transfer matrices. (cf. [@Z1], [@Z3], [@Z2], [@Z4]). Notes on bosonisation ===================== Combine operators $\psi^{\pm}_k$ into generating functions $$\begin{aligned} \label{pp} \Psi^+(x,m)=\sum_{k\in {{\mathbb Z}}}^\infty \psi^+_k|_{ {\mathcal{B} }^{(m)}}x^{k}, \quad \Psi^-(x,m)=\sum_{k\in {{\mathbb Z}}}^\infty \psi^-_k|_{ {\mathcal{B} }^{(m)}}x^{-k}.\end{aligned}$$ Then the relations of Clifford algebra are $$\begin{aligned} &\Psi^\pm(x, m\pm1)\Psi^\pm(y, m)+\Psi^\pm(y, m\pm1)\Psi^\pm(x, m)=0,\\ &\Psi^-(x, m+1)\Psi^+(y, m)+\Psi^+(y, m-1)\Psi^-(x, m)= \delta( x^{-1}, y^{-1}),\end{aligned}$$ with formal distribution $ \delta(x^{-1},y^{-1})=\sum_{k\in {{\mathbb Z}}}\frac{y^{k+1}}{x^k}. $ It is known (see e.g. proof in [@JR2]) that the action of $\Psi^\pm(x,m)$ on vacuum vector $1$, the constant polynomial in the boson space ${\mathcal{B} }^{(0)}$ of Schur functions, produces the generating functions of symmetric functions: $$\begin{aligned} \Psi^+(x_1, l-1)\dots \Psi^+(x_l, 0) (1)&= z^{l} x_1^l\dots x_l^l \sum s_\lambda x_1^{\lambda_1}\dots x_l^{\lambda_l+1-l},\\ \Psi^-(x_1,- l+1)\dots \Psi^-(x_l, 0) (1)&= z^{-l} x_1^l\dots x_l^l \sum (-1)^A s_\lambda x_1^{\lambda_1}\dots x_l^{\lambda_l+1-l}, \end{aligned}$$ where $A=\lambda_1+\dots +\lambda_l$. Clifford algebra modules isomorphism immediately implies analogous statement for the action of these operators on $\tilde{\mathcal{B} }$. Let ${\bf 1}= (1)_{N=1}^{\infty}$ be the vacuum vector in in the boson space $\tilde {\mathcal{B} }^{(0)}$ of sequences of transfer matrices. Then $$\begin{aligned} \Psi^+(x_1, l-1)\dots \Psi^+(x_l, 0) ({\bf 1})&= \left(z^{l}\, x_1^l\dots x_l^l \,H^N(x_1,\dots,x_l|u) \right)_{N=1}^{\infty}\\ \Psi^-(x_1,- l+1)\dots \Psi^-(x_l, 0) ({\bf1})&=\left( z^{-l} \, (-x_1)^l\dots (-x_l)^l\, E^N(x_1,\dots, x_l|u)\right)_{N=1}^{\infty}, \end{aligned}$$ where $\Psi^\pm(x, m)$ are generating functions of the action of operators $\psi^{\pm}$ on the graded component $\tilde {\mathcal{B} }^{(m)}$. Recall that the transition between (\[H1\]) and Hirota bilinear equations that produce KP hierarchy is based on the bosonization of the operators $\Psi^{\pm} (x,0)$. Namely, consider the action of $\Psi^{\pm} (x,0)$ on the boson space of symmetric functions. Let $h_r= s_{(r)}$ be complete symmetric functions, let $e_r= s_{(1^r)}$ be elementary symmetric functions, and let $p_k$ be (normalized) classical power sums. One can write generating functions for complete and elementary symmetric functions in the form $H(x)=\sum_{k\ge 0} h_k x^k,$ $E(x)=\sum_{k\ge 0} (-1)^ke_k x^k$. Then the families $\{e_r\}, \{h_r\}, \{p_r\}$ are related by the identities $$\begin{aligned} H(x) E(x)=1,\quad H(x)=\exp\left(\sum_{k\ge 1}p_kx^k\right),\quad E(x)=\exp\left(-\sum_{k\ge 1} p_k x^k\right).\end{aligned}$$ The ring of symmetric functions possesses a natural scalar product, where the classical Schur functions $s_\lambda$ constitute an orthonormal basis ${\langle }s_\lambda,s_\mu{\rangle}=\delta_{\lambda, \mu}$. 
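As a small consistency check of the generating-function identities above (our illustration; the truncation order is arbitrary), the following sympy snippet verifies $H(x)E(x)=1$ order by order, together with the low-order expansions $h_2=p_1^2/2+p_2$ and $e_2=p_1^2/2-p_2$ in the normalized power sums.

```python
# Illustration: H(x) = exp(sum_k p_k x^k), E(x) = exp(-sum_k p_k x^k), with the
# (normalized) power sums p_k treated as free symbols; check H(x) E(x) = 1.
import sympy as sp

order = 6
x = sp.symbols('x')
p = sp.symbols('p1:%d' % (order + 1))                   # p1, ..., p6

S = sum(p[k - 1] * x**k for k in range(1, order + 1))
H = sp.expand(sp.series(sp.exp(S), x, 0, order + 1).removeO())
E = sp.expand(sp.series(sp.exp(-S), x, 0, order + 1).removeO())

# h_k and (-1)^k e_k are the coefficients of x^k in H(x) and E(x), respectively.
h = [H.coeff(x, k) for k in range(order + 1)]
e = [(-1)**k * E.coeff(x, k) for k in range(order + 1)]

prod = sp.expand(H * E)
assert prod.coeff(x, 0) == 1
assert all(sp.simplify(prod.coeff(x, k)) == 0 for k in range(1, order + 1))
assert sp.simplify(h[2] - (p[0]**2 / 2 + p[1])) == 0    # h_2 = p_1^2/2 + p_2
assert sp.simplify(sp.expand(e[2]) - (p[0]**2 / 2 - p[1])) == 0  # e_2 = p_1^2/2 - p_2
print("H(x)E(x) = 1 verified up to order", order)
```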
For any symmetric function $f$ one can define the adjoint operator $D_f$ acting on the ring of symmetric functions by the standard rule: $ {\langle }D_fg, w{\rangle}={\langle }g,fw{\rangle}$, where $g,w$ are any symmetric functions. The properties of adjoint operators are described e.g. in [@Md], Section I.5. Set $$\begin{aligned} DE(x)= \sum_{k\ge 0}(-1)^k D_{e_k} x^{-k},\quad DH(x)= \sum_{k\ge 0} D_{h_k} x^{-k}. \end{aligned}$$ Then the operators $ \Psi^{\pm}(x,m)$ acting on the component ${\mathcal{B} }^{(m)}$ can be written in the vertex operator form $$\begin{aligned} \Psi^+(x,m)&= x^{m+1}z \,H(x) DE\left(x\right),\quad \Psi^- (x,m)= x^{{-m}}z^{-1} \, E(x) DH\left(x\right).\label{psi11}\end{aligned}$$ Often these formulae are written through the power sums (see e.g. [@Bom] Lecture 5, or [@Md] Section I.5 Example 29). Any symmetric function $f$ can be expressed as a polynomial $ f=\varphi(p_1,2p_2, 3p_3,\dots)$ in (normalized) power sums. Then one has $D_f=\varphi ({\partial_{ p_1}}, {\partial_ {p_2}}, {\partial_{ p_3}},\dots)$. (See e.g. [@Md], I.5, Example 3). Hence $$DH(x)=\exp\left(\sum_k {\frac {\partial_{ p_k}}{k}}x^{-k}\right), \quad DE(x)= \exp\left(-\sum_k \frac{\partial_{ p_k}}{k}x^{-k}\right),$$ and we can write $$\begin{aligned} \Psi^+(x,m)= x^{m+1}z\exp \left(\sum_{j\ge 1}p_j x^j \right) \exp \left(-\sum_{j\ge 1}\frac{ \partial_{p_j}}{j} {x^{-j}}\right), \label{psi2}\\ \Psi^-(x,m)= x^{-m}z^{-1} \exp \left(-\sum_{j\ge 1} {p_j} x^j \right) \exp \left(\sum_{j\ge 1}\frac{\partial_{p_j}}{j} {x^{-j}}\right). \label{psi21}\end{aligned}$$ The formulae (\[psi2\]), (\[psi21\]) describe the operators in terms of generators $\{p_i, \partial_{p_j}\}_{j=1,2,\dots}$ of the Heisenberg algebra acting on the boson space. The substitution of variables and the Taylor expansion of these formulas produce the Hirota bilinear equations of KP hierarchy [@D1], [@Hirota1],[@Miwa1]. The constructed in Section 3 isomorphism ${\mathcal{B} }\simeq \tilde {\mathcal{B} }$ of Clifford algebra modules does not transport the ring structures of these spaces, hence decompositions (\[psi11\]), (\[psi2\]), (\[psi21\]) are not applicable to $ \Psi^\pm(u,m)$-action on $\tilde {\mathcal{B} }$. The partial analogue of (\[psi11\]) for the components of the action of $Cl$ on the sequences of transfer matrices is stated below, but at the moment we do not know further interpretation of vertex operators through Heisenberg-type generators in the spirit of (\[psi2\]), (\[psi21\]). Recall determinant formulae (\[H11\]), (\[E11\]) for generating functions $H^N(x_1, x_2\dots, x_l|u)$ and $E^N(x_1, x_2\dots, x_l|u)$. Since operators $x_i e^{-\partial_{u_i}}$ commute with each other, one can apply the expansion $$\begin{aligned} \prod _{i=1}^{m}(a_i-y)=\sum_{k=0}^{m} (-y)^{m-k} e_k(a_1,\dots, a_m)\end{aligned}$$ in terms of elementary symmetric functions $e_k(a_1,\dots, a_m)$ to the product: $$\begin{aligned} \prod _{i=1}^{l}\left(\frac{e^{-\partial_{u_1}}}{x_1}-\frac{e^{-\partial_{u}}}{x}\right)= \sum_{k=0}^{l} \left(-\frac{e^{-\partial_{u}}}{x}\right)^{l-k} e_k\left(\frac{e^{-\partial_{u_1}}}{x_1},\dots, \frac{e^{-\partial_{u_l}}}{x_l}\right).\end{aligned}$$ Hence, from (\[H11\]), $$\begin{aligned} H^N(x, x_1,\dots, x_l|u)=\sum_{k=0} ^{l} (- x)^{-l+k} H^N(x|u-l+k) DE^N_{k} \, H(x_1,\dots, x_l|u),\\ E^N(x, x_1,\dots, x_l|u)=\sum_{k=0} ^{l} (- x)^{-l+k} H^N(x|u+k-l) DH^N_{k} \, E(x_1,\dots, x_l|u),\end{aligned}$$ where $ DE^N_{k}$, $ DH^N_{k}$ are analogues of operators $D_{e_k}$, $D_{h_k}$. 
They act on the coefficients of the generating series of transfer matrices by formulae $$\begin{aligned} DE^N_{k} \, H^N(x_1,\dots, x_l|u)= e_k\left(\frac{e^{-\partial_{u_1}}}{x_1},\dots, \frac{e^{-\partial_{u_l}}}{x_l}\right)H( x_1,\dots, x_l|u_2,\dots u_l)\, |_{u_1=\dots=u_l=u},\\ DH^N_{k} \, E^N(x_1,\dots, x_l|u)= e_k\left(\frac{e^{\partial_{u_1}}}{x_1},\dots, \frac{e^{\partial_{u_l}}}{x_l}\right)E( x_1,\dots, x_l|u_2,\dots u_l)\, |_{u_1=\dots=u_l=u}.\end{aligned}$$ This sums up to the following statement of the decomposition of coordinates of the action of generating functions $\Psi^\pm(x,m)$ on the sequences of transfer matrices. Let $\Psi^\pm(x,m)$ be generating functions of the restrictions of action of $\psi^{\pm}_k$ on the graded component $\tilde {\mathcal{B} }^{(m)}$ (cf. (\[pp\])). Let $\lambda=(\lambda_1,\dots, \lambda_l)$ be a partition of at most $l$ parts. Then the coordinates of the action of $\Psi^\pm(x,m)$ on $ z^m{\bf t_\lambda}=(z^m T^N_{\lambda, n}(u,a))_{N=0}^{\infty}$ can be decomposed as $$\begin{aligned} \Psi^+(x,m)(z^m{\bf t_\lambda})= \left(z^{m+1}\sum_{k=0} ^{l} (- x)^{-l+k} H^N(x|u-l+k) DE^N_{k}\, T^N_{\lambda, n}(u,a)\right)_{N=1,...,\infty},\\ \Psi^-(x,m)(z^m{\bf t_\lambda})= \left(z^{m-1}\sum_{k=0} ^{l} (- x)^{-l+k} E^N(x|u+l-k) DH^N_{k}\, T^N_{\lambda, n}(u,a)\right)_{N=1,...,\infty}.\end{aligned}$$ Aknowledgements =============== The author would like to thank E. Mukhin for valuable discussions and the reviewer for consideration of the text. [99]{} A.  Alexandrov, V. Kazakov, S. Leurent, Z. Tsuboi, A. Zabrodin, [*Classical tau-function for quantum spin chains*]{}, J. High Energy Phys. no. 9, (2013) 064. A.  Alexandrov, S. Leurent, Z. Tsuboi, A. Zabrodin, [ *The master T-operator for the Gaudin model and the KP hierarchy*]{}, Nuclear Phys. B 883 (2014), 173 – 223. V. Bazhanov, N. Reshetikhin, [*Restricted solid-on-solid models connected with simply laced algebras and conformal field theory*]{}, J. Phys. A: Math. Gen. 23 (1990), 1477–1492. I.  Cherednik, [*An analogue of the character formula for Hekke algebras*]{}, Funct. Anal. Appl. 21 (1987), no. 2, 172 –174. E. Date, M. Kashiwara, M. Jimbo, Michio, T. Miwa, [*Transformation groups for soliton equations. Nonlinear integrable systems—classical theory and quantum theory*]{}, (Kyoto, 1981), World Sci. Publishing, Singapore, 1983, 39–119. R. Hirota, [*Discrete analogue of a generalized Toda equation*]{}, J. Phys. Soc. Japan 50 (1981), no.11, 3785 –2791. Fulton, W., Harris, J.: Representation Theory. GTM 129. A First Course. Springer, New York (1991) N. Jing, N. Rozhkovskaya, [*Vertex operators arising from Jacobi-Trudi identities*]{}, Comm. Math. Phys. 346 (2016), no. 2, 679 –701. N. Jing, N. Rozhkovskaya, [*Generating functions for symmetric and shifted symmetric functions*]{}, arXiv:1610.03396. V. G. Kac, [*Vertex algebras for beginners.*]{} 2nd ed., University Lecture Series, 10. Amer. Math. Soc., Providence, RI, 1998. V. G. Kac, A. K. Raina, N. Rozhkovskaya, [*Bombay lectures on highest weight representations of infinite dimensional Lie algebras*]{}, 2nd ed., World Scientific, Hackensack, NJ, 2013. I. Krichever, O. Lipan, O, P. Wiegmann, A. Zabrodin. [*Quantum integrable models and discrete classical Hirota equations*]{}. Comm. Math. Phys. 188 (1997), no. 2, 267–304. A. Kuniba, , T. Nakanishi, J. Suzuki [*T-systems and Y-systems in integrable systems*]{} J. Phys. A44 (2011), no.10, 103001. I. G. Macdonald, [*Symmetric functions and Hall polynomials*]{}, 2nd ed., Oxford Univ. Press, New York, 1995. T.  
Miwa, [*On Hirota’s difference equations*]{}, Proc. Japan Acad. 58 (1982), no.1, 9–12. E. Mukhin, V. Tarasov, A. Varchenko [*Bethe eigenvectors of higher transfer matrices*]{} J. Stat. Mech. (2006), no.8, P08002. D. Talalaev, [*The quantum Gaudin system*]{} Funct. Anal. Appl (2006), 40, no.1, 73 –77. A. Zabrodin, [*The master T-operator for vertex models with trigonometric $R$-matrices as a classical $\tau$-function*]{}, Theoret. and Math. Phys. 174 (2013), no. 1, 52 –67. A. Zabrodin, [*The master T-operator for inhomogeneous XXX spin chain and mKP hierarchy*]{}, SIGMA Symmetry Integrability Geom. Methods Appl. 10 (2014), Paper 006.
--- abstract: 'We construct solutions of higher-dimensional Einstein gravity coupled to a nonlinear $\sigma$-model with cosmological constant. The $\sigma$-model can be perceived as the exterior configuration of a spontaneously-broken $SO(D-1)$ global higher-codimensional “monopole". Here we allow the kinetic term of the $\sigma$-model to be noncanonical; in particular we specifically study a quadratic-power-law type. This is a possible higher-dimensional generalization of the Barriola-Vilenkin (BV) solutions with $k$-global monopole studied recently. The solutions can be perceived as the exterior solution of a black hole swallowing up noncanonical global defects. Even in the absence of cosmological constant the surrounding spacetime is asymptotically non-flat; it suffers from a deficit solid angle. We discuss the corresponding horizons. For $\Lambda>0$ in $4d$ there can exist three extremal conditions (the cold, ultracold, and Nariai black holes), while in higher-than-four dimensions the extremal black hole is only Nariai. For $\Lambda<0$ we only have black hole solutions with one horizon, save for the $4d$ case where there can exist two horizons. We give constraints on the mass and the symmetry-breaking scale for the existence of all the extremal cases. In addition, we also obtain factorized solutions, whose topology is the direct product of two-dimensional spaces of constant curvature ($M_2$, $dS_2$, or $AdS_2$) with a $(D-2)$-sphere. We study all possible factorized channels.' author: - Ilham Prasetyo - 'Handhika S. Ramadhan' title: 'Classical defects in higher-dimensional Einstein gravity coupled to nonlinear $\sigma$-models' --- Introduction ============ General relativity coupled to an $SO(3)$ global monopole was first studied by Barriola and Vilenkin [@Barriola:1989hx]. They found that the exterior monopole spacetime can be brought to “look like" Minkowski with a deficit solid angle ($\Delta=8\pi G\eta^2$), although it is not locally flat. The size of the monopole core is of the order of $\delta\sim\eta^{-1}$. When the core is smaller than its corresponding Schwarzschild radius, their solution describes a global monopole eaten up by a black hole [@Dadhich:1997mh]. The Barriola-Vilenkin (BV) monopole solution was later extended to higher-codimensional defects by Olasagasti and Vilenkin (OV) [@Olasagasti:2000gx]. They studied the spacetime around higher-dimensional $SO(n)$ global defects, where the hedgehog configuration wraps the $n$ extra dimensions. Upon rescaling, their solution (with $\Lambda=0$) for the spacetime transverse to the $p$-brane describes the generalization of the BV metric, with a deficit angle that grows with the dimension. The black hole version of this solution is described by the Tangherlini metric [@Tangherlini:1963bw]. These facts seem to support the well-known conjecture that “black holes have no scalar hair" (for example, see [@Coleman:1991ku]). In field theory there has been a recent growing interest in defect solutions having noncanonical kinetic term(s), dubbed $k$-defects [@Babichev:2006cy; @Babichev:2007tn]. One of the simplest $k$-monopole models has a power-law kinetic term. The gravitational field of such objects and the geodesics of test particles around them have been studied numerically in [@Jin:2007fz]. Recently we have investigated analytically the corresponding black hole and compactification solutions [@Prasetyo:2015bga; @Prasetyo:2016nng].
We found that the noncanonical nature of the theory allows the existence of a black hole with a genuine scalar charge that cannot be brought into the Schwarzschild form upon rescaling. This solution behaves a lot like Reissner-Nordstrom, having two horizons that can become extremal or succumb to a naked singularity, depending on the value of the ratio between the charge and the coupling to nonlinearity, ${\eta^4\over\beta^2}$. In this work we generalize our analysis to higher dimensions. We look for exact solutions of the spacetime around noncanonical $SO(D-1)$ global defects with cosmological constant in $(D+1)$ dimensions. To achieve this purpose, in Section II we consider the Einstein equations with a nonlinear $\sigma$-model as a source. In this work we limit our investigation to the case of a quadratic-power-law kinetic term. The case of a Dirac-Born-Infeld (DBI) kinetic term will be reported in a subsequent publication. The solution with the hedgehog ansatz can be perceived as the metric around a global defect. In Section III we present the black hole solution and study its various extremal conditions. Section IV is devoted to the factorized solutions, where we analyze all possible topologies and give classical conditions for how each can be achieved. We summarize our conclusions in Section V. Gravity of power-law global defects =================================== The simplest power-law global monopole theory is described by the following action $$\mathcal{S}=\int d^{D}x \; \sqrt{|g|} \; \left( {R-2\Lambda \over 16\pi G} +\mathcal{K}(\mathcal{X})-{\lambda\over 4} \left( \vec{\Phi}^2-\eta^2 \right)^2 \right),$$ with $\mathcal{X}\equiv -(1/2)\partial_M\vec{\Phi}\partial^M\vec{\Phi}$ and $\mathcal{K}$ a functional of $\mathcal{X}$ only. Here $G$ and $\Lambda$ are the $D$-dimensional Newton’s constant and cosmological constant, respectively. As is discussed in [@Babichev:2006cy; @Jin:2007fz], the form of $\mathcal{K}(\mathcal{X})$ must satisfy the following: $$\begin{aligned} \mathcal{K}(\mathcal{X})=\begin{cases} -\mathcal{X},\ \ \ |\mathcal{X}|\ll1,\\ -\mathcal{X}^{\alpha},\ \ \ |\mathcal{X}|\gg1, \end{cases}\end{aligned}$$ where $\alpha$ is some constant. The upper condition ensures that the theory goes to its canonical counterpart in its perturbative regime, to avoid the “zero-kinetic problem". In this work, we particularly choose the following specific form: $$\mathcal{K}(\mathcal{X})\equiv-\mathcal{X}-\beta^{-2} \mathcal{X}^2, \label{eq:kbeta}$$ with $\beta$ the parameter coupling to nonlinearity. In the limit of $\beta\rightarrow\infty$ the theory reduces to the canonical Einstein-$\sigma$-model case. This model can be perceived as the simplest nonlinear $\sigma$-model with noncanonical kinetic term(s), and can be thought of as a truncation of a more general nonlinear lagrangian. Note that this truncation is only valid as long as $|\mathcal{X}|\ll\beta^{2}$. Otherwise, higher-order correction terms should be included. We surely do not expect this truncated toy model to represent a realistic theory. However, we can regard this as a model that is simple enough to study while at the same time gives, as we shall see, genuine features of solutions not present in its canonical counterpart. Another type of toy model we can consider is the Dirac-Born-Infeld (DBI)[^2] global monopole coupled to gravity, $$\mathcal{K}(\mathcal{X})\equiv b^2\left(1-\sqrt{1+{2\mathcal{X}\over b^2}}\right). 
\label{eq:kdbi}$$ Note that the corresponding effective action can be written as $$\mathcal{S}=\int d^{D}x~ \sqrt{|g|} \left[ {R-2\lambda\over 16\pi G}- \beta^2\sqrt{1-{\partial_M \vec{\Phi} \partial^M \vec{\Phi} \over \beta^2}} -{\lambda\over4}\left( \vec{\Phi}^2-\eta^2\right)^2\right],$$ with $\lambda\equiv\Lambda-16\pi G\beta^2$ is the effective cosmological constant. This type of theory is at the moment being studied and shall be reported separately [@nextpub]. The scalar field part of the action admits an $SO(d)$ symmetry which is spontaneously broken to $SO(d-1)$, where $d$ is the number of degrees of freedom of the scalar field $\vec{\Phi}$. The nonlinear $\sigma$-model constraint restricts the scalar field to stay in its vacuum manifold $\mathcal{M}$, defined by $\vec{\Phi}^2=\eta^2$, which is an $S^{d-1}$. The scalar field can then be regarded as having internal coordinates, $\vec{\Phi}=\vec{\Phi}\left(\phi^i\right)$, $i=1, 2,\cdots, d-1$. Its effective action is then given by $$\mathcal{S}=\int d^{D}x \; \sqrt{|g|} \; \left[ {R-2\Lambda \over 16\pi G} -X-\beta^{-2} X^2 \right] ,$$ with $X\equiv-(1/2)\eta^2 h_{ij}\partial_M{\phi}^i\partial^M{\phi}^j$, where $h_{ij}=h_{ij} (\phi^k)\equiv{\partial\vec{\Phi}\over\partial\phi^i}\cdot{\partial\vec{\Phi}\over\partial\phi^j}$ is the internal metric on the manifold $\mathcal{M}$. The equation of motion for the scalar field and the energy-momentum tensor are, respectively, $$\label{eq:sigmamod} {1\over\sqrt{|g|}}\partial_M\left( \sqrt{|g|}(1+2\beta^{-2}X)\eta^2 h_{ij} \partial^M \phi^j \right)=(1+2\beta^{-2}X){\eta^2\over2} \partial_M\phi^m\partial^M\phi^n {\partial h_{mn}\over\partial\phi^i},$$ $$T^M_N=\delta^M_N \left[ {\Lambda\over 8\pi G} +X+\beta^{-2} X^2 \right]+(1+2\beta^{-2}X) \eta^2 h_{ij} \partial^M\phi^i \partial_N\phi^j.$$ They are to be solved along with the Einstein’s equations. In this work we seek solutions for spherically-symmetric metric $$ds^2=A^2(r)dt^2 -B^2(r)dr^2-C^2(r)d\Omega^2_{D-2}.\label{eq:metric}$$ The unit $(D-2)$-sphere are parametrized by the angular coordinates $\theta^1,\theta^2,...,\theta^{D-2}$. The Ricci tensor components are $$\begin{aligned} R^t_t&=&B^{-2}\left[ {A''\over A}-{A'B'\over AB} +(D-2) {A'C'\over AC} \right],\\ R^r_r&=&B^{-2}\left[ {A''\over A} + (D-2) {C''\over C} -{B'\over B}\left\{ {A'\over A} +(D-2) {C'\over C} \right\}\right],\\ R^\theta_\theta&=&B^{-2}\left[ {C''\over C}+{C'\over C}\left\{ {A'\over A}-{B'\over B}+(D-3){C'\over C} \right\}\right]-{(D-3)\over C^2}.\end{aligned}$$ For the sake of later purpose, we also show the components of Einstein tensor $$\begin{aligned} G^0_0&=&-{(D-2)\over B^2}{C''\over C}+{(D-2)\over B^2}{B'C'\over BC}+{(D-2)(D-3)\over2C^2}\left(1-{C'^2\over B^2}\right),\\ G^r_r&=&-{(D-2)\over B^2}{A'C'\over AC}+{(D-2)(D-3)\over2C^2}\left(1-{C'^2\over B^2}\right),\\ G^\theta_\theta&=&-{1\over B^2}{A''\over A}-{(D-3)\over B^2}{C''\over C}+{1\over B^2}{A'B'\over AB}-{(D-3)\over B^2}{A'C'\over AC}+{(D-3)\over B^2}{B'C'\over BC}\nonumber\\ &&+{(D-4)(D-3)\over2C^2}\left(1-{C'^2\over B^2}\right).\end{aligned}$$ The crucial part in this work comes when choosing the appropriate ansatz for the scalar field. Here we follow [@GellMann:1984sj] in taking the simplest ansatz that respects spherical symmetry, $$\phi^i(\theta^i)=\theta^i,$$ where now $i=1, \cdots, D-2$; the number of degrees of freedom of the internal space should equal the angular degrees of freedom of the coordinate space. It can be seen that such an ansatz satisfies Eq. 
trivially if $$h_{ij}=-{1\over C^2}g_{ij}.$$ This will give us $$T^0_0=T^r_r=\left[ {\Lambda\over 8\pi G} +X+\beta^{-2} X^2 \right],$$ $$T^\theta_\theta=T^0_0-(1+2\beta^{-2}X) {\eta^2\over C^2},$$ with $X\equiv(D-2)\eta^2/2C^2$. In the next sections we shall discuss several classes of solutions to these equations. Black hole solutions ==================== Let us first consider an ansatz: $$C(r)\equiv r.$$ The Einstein’s equations, $R^A_{B}=8\pi G\left(T^A_B-\delta^A_B {T\over (D-2)}\right)$, give us $$\begin{aligned} && R^0_0={1\over B^2}\left[ {A''\over A} - {A'B'\over AB} + (D-2) {A'\over rA} \right]=-{2\over D-2}\Lambda + {(D-2)\over 2} {8\pi G\eta^4\over \beta^2 r^4},\\ && R^r_r={1\over B^2}\left[ {A''\over A} - {A'B'\over AB} - (D-2) {B'\over rB} \right]=-{2\over D-2}\Lambda + {(D-2)\over 2} {8\pi G\eta^4\over \beta^2 r^4},\\ && R^\theta_\theta = {1\over B^2}\left[ {A'\over rA} - {B'\over rB} + {D-3\over r^2} \right] -{D-3\over r^2}\label{eq:ttheta} =-{2\over D-2}\Lambda - {8\pi G\eta^2\over r^2} - {(D-2)\over 2} {8\pi G\eta^4\over \beta^2 r^4}.\end{aligned}$$ From $R^0_0-R^r_r$ we have $${A'\over A}+{B'\over B}=0 \Rightarrow (AB)'=0.$$ Without loss of generality we can take $$A=B^{-1}.$$ Substituting into $R^\theta_\theta$ yields $$R^\theta_\theta = {1\over r^{D-2}}\left( {r^{D-3}\over B^2} \right)'- {D-3\over r^2},$$ and substituting it back to , we obtain the following solution $$\begin{aligned} B^{-2}=1-{8\pi G\eta^2\over(D-3)}-{2\Lambda r^2\over (D-2)(D-1)} -{4(D-2)\pi G\eta^4\over (D-5)\beta^2 r^2}-{2GM\over r^{(D-3)}},\end{aligned}$$ with $M$ a constant of integration. As in the case of BV and OV solutions, ours is only valid for $D>3$. The noncanonical nature of the scalar field, $X^2$, adds another constraint that $D\neq 5$. To better see what this solution tells, let us rescale[^3] $t\rightarrow t(1-\Delta)^{1/2}$ and $r\rightarrow r(1-\Delta)^{-1/2}$, where $\Delta\equiv 8\pi G\eta^2/(D-3)$ is the deficit solid angle. The metric thus becomes $$ds^2=f(r)dt^2-{dr^2\over f(r)}-(1-\Delta)r^2d\Omega^2_{(D-2)},\label{eq:metricsol}$$ with $$\begin{aligned} f(r)=1-{2\Lambda r^2\over (D-2)(D-1)} -{4(D-2)\pi G\eta^4\over (D-5)\beta^2 r^2}-{2GM\over r^{(D-3)}},\label{eq:solblackhole}\end{aligned}$$ where simultaneously we also transform $M\rightarrow M(1-\Delta)^{(1-D)/2}$ and $\beta\rightarrow\beta(1-\Delta)$. The Kretchmann scalar $K^2\equiv R_{ABCD} R^{ABCD}$ yields $$\begin{aligned} K^2&=&\frac{4 (D-2)}{B^4}\left[{\left(\frac{A'}{Ar}\right)^2}+{\left(\frac{B'}{Br}\right)^2}\right]+\frac{4 }{B^4}{\left(\frac{A''}{A}-\frac{A'B'}{AB}\right)^2} +\frac{8 (D-2) (D-3)}{r^4}\left(1-\frac{1}{B^2}\right)^2 \nonumber \\ &=&{\left[{2 (D-2)^2 \left(D^3-9 D^2+23 D-15\right) G M \over { (D-5) (D-2) (D-1)} r^{D-1}}+{ 24 (D-2)^2 (D-1) \eta ^4 \pi G +4 \beta ^2 (D-5) \Lambda r^4\over {\beta ^2 (D-5) (D-2) (D-1)} r^{4}}\right]^2}\nonumber\\ &&+\frac{8 (D-3) (D-2)}{r^4} \left(\frac{2 \Lambda r^2}{(D-2)(D-1)}+\frac{8\pi G\eta ^2 }{D-3}+{2 G M \over r^{D-3}}+\frac{8(D-2) \eta ^4\pi G }{2 \beta ^2 (D-5) r^2}\right)^2\nonumber\\ &&+\frac{2 (D-2)}{r^2} \left(-\frac{4 \Lambda r}{(D-2)(D-1)}+{2 (D-3) G M \over r^{D-2}}+\frac{8(D-2) \eta ^4\pi G }{\beta ^2 (D-5) r^3}\right)^2,\end{aligned}$$ which implies absolute singularity at $r=0$. The metric  describes a “scalarly-charged" (A)dS-Tangherlini black hole with global monopole. The charge is genuinely due to the nonlinearity of the scalar field’s kinetic term. 
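Before taking limits of this solution, a rough numerical illustration (ours; all parameter values are arbitrary sample numbers) may be useful: the sketch below locates the horizons of $f(r)$ in Eq. (\[eq:solblackhole\]) by a sign scan plus bisection, reproducing the horizon structures discussed in the following subsections.

```python
# Illustration only: horizons of f(r) in Eq. (eq:solblackhole) for sample parameters.
import numpy as np

def f(r, D, G=1.0, M=1.0, Lam=0.0, eta=0.1, beta=0.2):
    # D = 5 is excluded by the solution itself (the charge term diverges there).
    return (1.0
            - 2.0 * Lam * r**2 / ((D - 2) * (D - 1))
            - 4.0 * (D - 2) * np.pi * G * eta**4 / ((D - 5) * beta**2 * r**2)
            - 2.0 * G * M / r**(D - 3))

def horizons(D, rmax=50.0, n=200000, **params):
    r = np.linspace(1e-4, rmax, n)
    y = f(r, D, **params)
    roots = []
    for i in np.where(np.sign(y[:-1]) * np.sign(y[1:]) < 0)[0]:
        a, b = r[i], r[i + 1]
        for _ in range(60):                              # plain bisection
            m = 0.5 * (a + b)
            if f(a, D, **params) * f(m, D, **params) <= 0:
                b = m
            else:
                a = m
        roots.append(0.5 * (a + b))
    return roots

# D = 4, Lambda = 0: two horizons, cf. r_pm = GM (1 pm sqrt(1 - 8 pi eta^4/(M^2 G beta^2))).
print("D=4, Lam=0   :", horizons(4))
# D = 6, 7, Lambda = 0: a single Tangherlini-like horizon.
print("D=6, Lam=0   :", horizons(6))
print("D=7, Lam=0   :", horizons(7))
# D = 4, Lambda > 0: up to three horizons (inner, outer, cosmological).
print("D=4, Lam=0.05:", horizons(4, Lam=0.05))
```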
In the limit of $\beta\rightarrow\infty$ Eq  reduces to the ordinary (A)dS-Tangherlini black hole and the scalar charge disappears. The difference between this solution and (A)dS-Reissner-Nordstrom-Tangherlini solution is that the latter has the EM charge term that goes like $O(r^{-2(D-3)})$ while in the former the scalar charge term has a fixed polynomial order, $O(r^{-2})$, regardless of the dimensions. It is amusing that this scalar-charge term is very similar to the magnetic term in the metric solution of the generalized Nariai black hole with multiple magnetic charges [@Batista:2016qnu; @Ortaggio:2007hs]. For $D=4$ the charge term is positive, much like the case with electric (or magnetic) charge in Reissner-Nordstrom solution. This looks like a solution that violates the well-known black-hole-has-no-hair theorem, but the reader should be reminded that the spacetime around this blackhole is not asymptotically flat due to the deficit angle; it has conical solid angle. Such a metric solution has been discovered and studied (albeit without the noncanonical kinetic term), for example, in [@Dadhich:1997mh; @Jensen:1995fz; @Yu:1994fy; @Lustosa:2015hwa; @Mazharimousavi:2014uya][^4]. The noncanonical nature of this solution allows it to evade the no-scalar-hair theorem. The static black hole solution ceases to exist when $$\eta>\eta_{crit}\equiv\sqrt{D-3\over8\pi G}.\label{eq:etacrit}$$ It should be pointed out that the result  above is generic to the theory of global monopole; It is the property of the corresponding vacuum manifold (the infra-red regime), independent of the noncanonicality of the kinetic term (the UV-regime). It is, however, interesting to stress that such deficit angle only appears in the black hole having spherical topology. One of us (HSR) investigated the corresponding global monopole black hole in hyperbolic topology, and one of the peculiar properties we found is that such a black hole possesses surplus (instead of deficit) solid angle [@ramadhanpradhana]. This result shall be reported elsewhere. We study the existence of horizons by plotting this metric solution for different coupling values, cosmological constants, and dimensions. There are several interesting cases. The Case with $\Lambda=0$ ------------------------- In this limit, Eq.  becomes $$f(r)=1-{4(D-2)\pi G\eta^4\over (D-5)\beta^2 r^2}-{2GM\over r^{(D-3)}}.\label{eq:solblackholelambdanol}$$ The case for $D=4$ has been discussed extensively in [@Prasetyo:2015bga; @Prasetyo:2016nng]. This is a Reissner-Nordstrom-like black hole with two horizons $$r_{\pm}=GM\left(1\pm\sqrt{1-{8\pi\eta^4\over M^2G\beta^2}}\right).$$ ![The four-dimensional metric $f(r)$ as a function of $r$ for the case of two horizons, extremal, and naked singularity.[]{data-label="k-beta"}](k-beta--.pdf){width="0.7\linewidth"} Their behavior is shown in Figs. \[k-beta\] and \[fig:k-betatambah\]. ![$f(r)$ in $D=4$ for various $\beta$. As $\beta$ increases the local minimum gets deeper. In the limit of $\beta\rightarrow\infty$ the local minimum disappears. []{data-label="fig:k-betatambah"}](k-beta---tambahan.pdf){width="0.7\linewidth"} ![The D-dimensional metric $f(r)$ for several $D$.[]{data-label="fig:k-beta-lambda-0"}](k-beta--2.pdf "fig:"){width="0.7\linewidth"} (c) For $D>5$, it is in general difficult to obtain the exact roots of $f_D(r)=0$. From Eq.  we can see that since the second term is always negative there is no local minimum of $f_D(r)$. 
Thus there is only one real root for each $D$; the higher-dimensional black holes with noncanonical global monopole all resemble Tangherlini’s. This can be seen in Fig. \[fig:k-beta-lambda-0\]. For example, in $D=6$ we have $$f(r)=1-{2GM\over r^3}-{16\pi G\eta^4\over\beta^2 r^2}.$$ Setting $f(r)=0$ is equivalent to the cubic $r^3-ar-b=0$, with $a\equiv{16\pi G\eta^4\over\beta^2}$ and $b\equiv2GM$; only one of its roots is positive, and (for $27b^2\geq4a^3$) it is given in closed form by $$r_{H}=\left({b\over2}+\sqrt{{b^2\over4}-{a^3\over27}}\right)^{1/3}+\left({b\over2}-\sqrt{{b^2\over4}-{a^3\over27}}\right)^{1/3}.$$ In $D=7$, we have $$f(r)=1-{2GM\over r^4}-{10\pi G\eta^4\over\beta^2 r^2}.$$ The positive roots are $$r_{\pm}=\sqrt{{5G\pi\eta^4\over\beta^2}\pm{\sqrt{2GM\beta^4+25G^2\pi^2\eta^8}\over\beta^2}}.$$ But it can easily be seen that for all physical values of $\beta$ and $\eta$, only $r_+$ is real. The Case with $\Lambda\neq0$ ---------------------------- The existence and the number of roots of Eq. (\[eq:solblackhole\]) depend on the sign of $\Lambda$. For $D=4$, we have the polynomial equation $$f(r)\equiv1-{\Lambda r^2\over3} +{8\pi G\eta^4\over\beta^2 r^2}-{2GM\over r}=0.$$ Multiplied by $r^2$, this is a fourth-order polynomial equation. In general one expects four roots, though not all of them (possibly none) are real. In [@Prasetyo:2015bga] we discussed the corresponding horizons under the almost-purely de Sitter condition ($M=0$). One of the main results of this paper is the exact solution for the black hole horizons without taking any approximation; that is, by setting $M\neq0$. For the case of $\Lambda>0$, the situation resembles the dS-Reissner-Nordstrom black hole. We can, therefore, borrow the language of [@Romans:1991nq]. There exist at most three horizons: the inner ($r_-$), the outer ($r_+$), and the cosmological ($r_c$) horizons (satisfying $r_-<r_+<r_c$). Besides the non-extremal case where the black holes have all three horizons, we can also have solutions possessing some extremal conditions. They can be characterized as: $r_-=r_+$ (the cold black hole), $r_+=r_c$ (the Nariai black hole), and $r_-=r_+=r_c$ (the ultracold black hole) [@Cardoso:2004uz]. The cold black hole can be characterized in such a way that the metric function can be written as [@Romans:1991nq] $$f_{cold}(r)=\left(1-{r_0\over r}\right)^2\left[1-{\Lambda\over3}(r^2+2r_0 r+3r_0^2)\right],\label{eq:extremecold}$$ with $r_0\equiv r_-=r_+$ the common root. In this critical extremal condition, $M$, $\eta$, $\beta$, and $\Lambda$ are related through: $$\begin{aligned} M&=&{r_0\over G}\left(1-{2\over3}\Lambda r_0^2\right),\nonumber\\ {\eta^4\over\beta^2}&=&{r_0^2\over8\pi G}\left(1-\Lambda r_0^2\right).\label{eq:condextrem}\end{aligned}$$ ![Transition from cold black hole ($\Lambda=0.06$ and $\beta=0.1116$) to ultracold black hole ($\Lambda=0.215$ and $\beta=0.17$) and eventually to Nariai black hole ($\Lambda=0.14$ and $\beta=0.15$).[]{data-label="fig:k-cold-ultra-nariai"}](k-cold-ultra-nariai.pdf){width="0.7\linewidth"} In order that $\eta$ and $\beta$ have physically reasonable values we must have $r_0\leq{1\over\sqrt{\Lambda}}$. The cold black hole exists whenever $0<r_0<{1\over\sqrt{2\Lambda}}$. At $r_0={1\over\sqrt{2\Lambda}}$ all three horizons coincide. 
This is the ultracold black hole, characterized by $$f_{ultracold}(r)=-{r^2\over6 r_0}\left(1-{r_0\over r}\right)^3\left(1+{3r_0\over r}\right).$$ The mass ($M$) and the ratio of $\eta^2$ and $\beta$ are related to $\Lambda$ by $$\begin{aligned} M&=&{2\over\sqrt{18\Lambda}G},\nonumber\\ {\eta^2\over\beta}&=&{1\over32\pi G\Lambda}.\end{aligned}$$ As long as ${\beta\over\Lambda}\lesssim4$ the ultracold black hole can form before the entire solid angle is eaten up. In the range of $1/\sqrt{2\Lambda}<r_0\leq1/\sqrt{\Lambda}$ the ultracold common horizon disintegrates into $r_-$ and $r_0\equiv r_+=r_c$, and the black hole interpolates into the so-called Nariai regime [@Nariai] (see Fig. \[fig:k-cold-ultra-nariai\]). It becomes maximal (chargeless or neutral Nariai) when $r_0={1\over\sqrt{\Lambda}}$. Here, the inner horizon $r_-$ disappears and the metric can be written as $$f_{Nariai}(r)=-{1\over3r_0^2}\left(1-{r_0\over r}\right)^2\left(r^2+2r_0r\right).$$ In this case, $$\begin{aligned} M&=&{1\over3\sqrt{\Lambda}G},\nonumber\\ {\eta^2\over\beta}&=&0.\end{aligned}$$ ![Black hole transition. By varying $\Lambda$ it can be seen that we can have transition from solution with one (inner) horizon, to Nariai, to cold, and finally to naked singularity.[]{data-label="fig:k-transition-to-singular"}](k-transition-to-singular.pdf){width="0.7\linewidth"} In Fig. \[fig:k-transition-to-singular\] we can see how the black hole smoothly transitions from having one horizon to naked singularity through cold and Nariai states. For $D>5$, the third term in Eq.  is always negative and does have a fixed power, $r^{-2}$, $$f_D(r)=1-{2\Lambda r^2\over (D-2)(D-1)} -{4(D-2)\pi G\eta^4\over (D-5)\beta^2 r^2}-{2GM\over r^{(D-3)}}.$$ As a result, there exist at most two horizons only. The extremal case happens when [@Cardoso:2004uz]: $$f(r)=\left(1-{r_0\over r}\right)^2\left[1-{\Lambda\over3}\left(r^2+a+b r+{c_1\over r}+{c_2\over r^2}+\cdots+{c_{D-5}\over r^{D-5}}\right)\right],$$ where determining the values of the constants $a, b, c_1,\cdots, c_{D-5}$ by matching the above equation with Eq.  leads to the following conditions to be satisfied ![Transition of $6d$ black hole horizon from nonextremal, to extremal, and to naked singularity.[]{data-label="fig:k-trans-d6"}](k-trans-d6.pdf){width="0.7\linewidth"} $$\begin{aligned} M&=&{r_0^{D-3}\over(D-5) G}\left({4\Lambda r_0^2\over(D-1)(D-2)}-1\right),\nonumber\\ {\eta^4\over\beta^2}&=&{r_0^2\over(D-2)4\pi G}\left((D-3)-{2\Lambda\over(D-2)}r_0^2\right).\label{eq:condextremD}\end{aligned}$$ One can easily see that putting $D=4$ reduces condition  into . The positivity of mass $M$ and ${\eta^4\over\beta^2}$ ratio implies $\sqrt{(D-1)(D-2)\over4\Lambda}<r_0<\sqrt{(D-3)(D-2)\over2\Lambda}$. In Figs. \[fig:k-trans-d6\] and \[fig:k-trans-d7\] we show how transition from nonextremal black hole can happen by varying $\Lambda$ to extremal and to naked singularity in six and seven dimensions. 
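As a numerical cross-check of Eq. (\[eq:condextremD\]) (ours; $\Lambda$ and the double root $r_0$ below are arbitrary sample values inside the allowed window), one can build $M$ and $\eta^4/\beta^2$ from a chosen $r_0$ and confirm that $f(r_0)=f'(r_0)=0$:

```python
# Illustration: verify that Eq. (eq:condextremD) indeed makes r0 a double root of f(r).
import numpy as np

def extremal_data(D, Lam, r0, G=1.0):
    M = r0**(D - 3) / ((D - 5) * G) * (4 * Lam * r0**2 / ((D - 1) * (D - 2)) - 1)
    q = r0**2 / ((D - 2) * 4 * np.pi * G) * ((D - 3) - 2 * Lam * r0**2 / (D - 2))  # eta^4/beta^2
    def f(r):
        return (1 - 2 * Lam * r**2 / ((D - 2) * (D - 1))
                - 4 * (D - 2) * np.pi * G * q / ((D - 5) * r**2)
                - 2 * G * M / r**(D - 3))
    h = 1e-6 * r0
    return f(r0), (f(r0 + h) - f(r0 - h)) / (2 * h), M, q

for D, Lam in [(6, 0.05), (7, 0.05), (8, 0.1)]:
    # positivity of M and eta^4/beta^2 requires
    # sqrt((D-1)(D-2)/(4 Lam)) < r0 < sqrt((D-3)(D-2)/(2 Lam)); pick the midpoint.
    r0 = 0.5 * (np.sqrt((D - 1) * (D - 2) / (4 * Lam)) + np.sqrt((D - 3) * (D - 2) / (2 * Lam)))
    fr0, dfr0, M, q = extremal_data(D, Lam, r0)
    print(f"D={D}: f(r0)={fr0:.1e}, f'(r0)={dfr0:.1e}, M={M:.3f}, eta^4/beta^2={q:.3f}")
```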
![Transition of $7d$ black hole horizon from nonextremal, to extremal, and to naked singularity.[]{data-label="fig:k-trans-d7"}](k-trans-d7.pdf){width="0.7\linewidth"} For $\Lambda<0$, we have[^5] $$f_D(r)=1+{2|\Lambda| r^2\over (D-2)(D-1)} -{4(D-2)\pi G\eta^4\over (D-5)\beta^2 r^2}-{2GM\over r^{(D-3)}}.$$ ![Existence of horizon in $D=6, 7, 8$ black holes with negative $\Lambda$.[]{data-label="fig:k-ads-beda"}](k-ads-beda.pdf){width="0.7\linewidth"} ![Extremal black hole in $D=4$ with negative $\Lambda$.[]{data-label="fig:k-ads-ekstrem-d4"}](k-ads-ekstrem-d4.pdf){width="0.5\linewidth"} Notice that the second term is always positive. This results in the existence of only one horizon $r_+$ for $D>5$. They all reduce to the Tangherlini. This can be seen in Fig. \[fig:k-ads-beda\]. For $D=4$, however, we have a positive third term. This contribution allows the existence of two horizons, $r_-$ and $r_+$. The extremal condition can easily be deduced from Eq. . Re-writing it $$f(r)=\left(1-{r_0\over r}\right)^2\left[1+{|\Lambda|\over3}(r^2+2r_0 r+3r_0^2)\right],$$ we can see that there exists at most one extremal condition, provided conditions  holds with negative $\Lambda$, $$\begin{aligned} M&=&{r_0\over G}\left(1+{2\over3}|\Lambda| r_0^2\right),\nonumber\\ {\eta^4\over\beta^2}&=&{r_0^2\over8\pi G}\left(1+|\Lambda| r_0^2\right).\end{aligned}$$ Such a solution is shown in Fig. \[fig:k-ads-ekstrem-d4\]. Note that throughout this paper we only study black hole with spherical topology ($k=1$). It is well-known that AdS Reissner-Nordstrom black holes can exist with non-spherical topology (having $k=0,$ or $-1$), for example in  [@Birmingham:1998nr; @Chamblin:1999tk]. To the best of our knowledge the similar solutions having global monopole charge have yet been obtained. Factorized solutions ==================== Another class of solutions to consider is when we set $C(r)=C=const$[^6]. The Einstein’s equations become $$\begin{aligned} {(D-2)(D-3)\over2C^2}&=&\Lambda+{4(D-2)\pi G\eta^2\over C^2}+{2(D-2)^2 \pi G\eta^4\over C^4\beta^2},\label{eq:compacX1}\\ -{1\over B^2}{A''\over A}+{1\over B^2}{A'B'\over AB}+{(D-4)(D-3)\over2C^2}&=&\Lambda+{4(D-4)\pi G\eta^2\over C^2}+{2(D-6)(D-2)\pi G\eta^4\over C^4\beta^2}.\label{eq:compacX2}~~~~~~\end{aligned}$$ From $G^0_0$, Eq. , we can solve for the radius $$C^2={4(D-2)\pi G \eta^4 \over (D-3-8\pi G\eta^2)\beta^2}\label{eq:radiusnol}$$ for $\Lambda=0$, and $$C^2_\pm={(D-2)[(D-3-8\pi G\eta^2)\pm \sqrt{(D-3-8\pi G\eta^2)^2 -32\pi G \eta^4 \Lambda \beta^{-2}}]\over 4\Lambda}\label{eq:radiustaknol}$$ for $\Lambda\neq 0.$ One can verify that as $\beta\rightarrow\infty$, $C_+^2\rightarrow{(D-2)(D-3-8\pi G\eta^2)/2\Lambda},$ approaching the OV solution [@Olasagasti:2000gx]; when $\Lambda=0$ the compactification radius is not fixed by the theory. To ensure that $C$ is real it requires $$\eta\leq{1\over\sqrt{8}}\sqrt{\frac{2 \beta ^2 (D-3)}{2 \pi \beta ^2 G-\Lambda }- \sqrt{\frac{2\beta ^2 (D-3)^2 \Lambda }{\pi G \left(\Lambda -2 \pi \beta ^2 G\right)^2}}} \equiv\eta_{crit2} \label{eq:etacrit2}$$ for $\Lambda<2 \pi G\beta^2$, $$\eta\leq{\eta_{crit}\over\sqrt{2}}\equiv\eta_{crit3},\label{eq:etacrit3}$$ for $\Lambda=2 \pi G\beta^2$, and $$\eta \leq {1\over\sqrt{8}} \sqrt{\frac{2 \beta ^2 (D-3)}{2 \pi \beta ^2 G-\Lambda }+\sqrt{\frac{2\beta ^2 (D-3)^2 \Lambda }{\pi G \left(\Lambda -2 \pi \beta ^2 G\right)^2}}} \equiv\eta_{crit4}, \label{eq:etacrit4}$$ for $\Lambda>2\pi G\beta^2$. They satisfy $$\eta_{crit3} < \eta_{crit2} < \eta_{crit4} < \eta_{crit}.$$ Eq.  
can be re-written as $$B^{-2}\left({A'B'\over AB}-{A''\over A}\right)=\omega^2, \label{pers03}$$ where we define a constant $\omega$ $$\begin{aligned} \omega^2&\equiv& \Lambda-{(D-4)(D-3-8\pi G\eta^2)\over 2C^2}+{2(D-6)(D-2)\pi G\eta^4\over C^4\beta^2}. $$ In general, $\omega^2$ can be positive, zero, or negative. Now, Eq. (\[pers03\]) can be solved, by taking an ansatz $B\equiv A^{-1}$, to give[^7] $$\begin{aligned} ds^2= \begin{cases} (1 -\omega^2 r^2)~dt^2 -\frac{dr^2}{(1 -\omega^2 r^2)} - C^2 d\Omega_{D-2}^2, & \text{for $\omega^2>0$},\\ dt^2 -dr^2 - C^2 d\Omega_{D-2}^2, & \text{for } \omega=0,\\ (1 +\omega^2 r^2)~dt^2 -\frac{dr^2}{(1 +\omega^2 r^2)} - C^2 d\Omega_{D-2}^2, & \text{for $\omega^2<0$}. \end{cases}\label{eq:metrsol2}\end{aligned}$$ The resulting spacetimes are Naria ($dS_{2}\times S^{D-2}$) [@Nariai], Plebanski-Hacyan ($M_{2}\times S^{D-2}$) [@Plebanski], and Bertotti-Robinson ($AdS_{2}\times S^{D-2}$) [@Bertotti:1959pf; @Robinson:1959ev], respectively. Here, $\omega^2$ plays the role of the effective cosmological constant in the two-dimensional maximally-symmetric spacetimes. [cccc]{} ------------------------------------------------------------------------ & $dS_2\times S^{D-2}$ & $M_2\times S^{D-2}$ & $AdS_2\times S^{D-2}$\ ------------------------------------------------------------------------ $\Lambda>2 \pi \beta^2 G$   & $\eta^2 < \eta^2_{crit4}$    &   $\eta^2 = \eta^2_{crit4}$     &   cannot happen      \ ------------------------------------------------------------------------ $\Lambda=2 \pi \beta^2 G$   & $\eta^2<\eta^2_{crit3}$    &   $\eta^2=\eta^2_{crit3}$     &   cannot happen      \ ------------------------------------------------------------------------ $\Lambda<2 \pi \beta^2 G$   & $\eta^2 < \eta^2_{crit2}$    &   $\eta^2 = \eta^2_{crit2}$     &   cannot happen      \ ------------------------------------------------------------------------ $\Lambda=0$   & cannot happen & cannot happen      &   $\eta^2 < \eta^2_{crit}$     \ ------------------------------------------------------------------------ $\Lambda<0$   & & cannot happen      &     \ \[table:tangk\] To ensure which factorized channel can take place, we need to check whether the condition that satisfies $\omega^2$ simultaneously also holds for the positivity of $C^2$. This is done by solving the polynomial equations of $\omega^2>0$, $\omega^2=0$, or $\omega^2<0$, to obtain the allowed range of $\beta$ and $\eta$. Combining the results with the constraints of given by , , , and , we conclude that the (classically-)allowed conditions for compactifications are shown in Table \[table:tangk\]. Note that not all nine possibilities[^8] can happen. Take, for example, the case of $AdS_2\times S^{D-2}$ compactification with $\Lambda<2 \pi \beta^2 G$. Solving the polynomial $\omega^2<0$ yields $\eta^2 > \eta^2_{crit3}$. But this contradicts constraint . We therefore conclude that such compactification cannot happen. These imply the following possible channels. $$\begin{aligned} dS_D &\longrightarrow& \begin{cases} dS_2\times S^{D-2},\\ M_2\times S^{D-2}; \end{cases}\\ M_D &\longrightarrow& AdS_2\times S^{D-2}.\end{aligned}$$ Conclusions =========== We extend our previous investigation of $4d$ black hole and spacetime compactification solutions of $k$-global monopole, [@Prasetyo:2015bga], into higher dimensions. This is done by studying the $D$-dimensional Einstein-$\sigma$-model theory with cosmological constant, where the scalar fields have noncanonical kinetic term; specifically in the form of power-law. 
In this work we assume the simplest ansatz, the spherically-symmetric hedgehog. The scalar field equation is then satisfied automatically, while the Einstein equations take the form that describes the gravitational field outside the power-law global monopole. Our study reveals a rich spectrum of exact solutions, even in this simplest ansatz. The first type we obtain is a scalarly-charged black hole solution. This can be thought of as a noncanonical global monopole being eaten up by a black hole. That there exist nontrivial black hole solutions for this Einstein-scalar theory is not surprising, since they evade the no-scalar-hair theorem due to the non-flatness of our asymptotic spacetime, even for $\Lambda=0$. As in the case of the Barriola-Vilenkin (BV) or Olasagasti-Vilenkin (OV) solutions, the spacetime around the global monopole suffers from a deficit solid angle that grows with dimension, $\Delta\equiv 8\pi G\eta^2/(D-3)$. For our specific model, the solution is not valid for $D=5$ but behaves regularly otherwise. For $D=3$ it seems that the deficit angle causes the metric to blow up. But this is misleading, since this case should be treated separately. This is because for $D=3$ the internal manifold is $S^1$, which is flat, while $S^{D-2}$ (for $D>3$) is non-flat. The genuine feature of our solution is the appearance of a “scalar" charge, which cannot be rescaled away by coordinate rescaling. In four dimensions, this charge resembles the Reissner-Nordstrom electromagnetic charge. With positive cosmological constant, there are three horizons and three corresponding extremal conditions: the cold, ultracold, and Nariai black holes. Our main result in this section concerns $D>5$. In higher dimensions the charge term is negative with a fixed power of $r^{-2}$. Due to this peculiar property there are at most two horizons. We also found the higher-dimensional Nariai black hole solution with global power-law monopole within this spectrum, where the corresponding mass and scalar charge are given by Eq. (\[eq:condextremD\]). For $\Lambda<0$ all solutions, except at $D=4$, have only one horizon. For the $4d$ case, however, the positivity of the scalar charge enables the existence of two horizons, out of which we obtain one extremal black hole. It is well-known in the literature that a $(2+1)$-dimensional black hole can exist with $\Lambda<0$ [@Banados:1992wn; @Banados:1992gq]. To the best of our knowledge the only solution obtained for BTZ black holes with a global monopole is by Mazharimousavi and Halilsoy [@Mazharimousavi:2014uya]. The “monopole" (or rather vortex) has $SO(2)$ global symmetry. In the next work we shall investigate such BTZ black holes in the context of noncanonical defects. In this work we investigate the case of asymptotically anti-de Sitter black holes only in spherical topology ($k=1$). Higher-dimensional AdS black holes with noncanonical global monopole having planar ($k=0$) or hyperbolic ($k=-1$) topology are being studied at the moment. As mentioned in the text above, our investigation of the hyperbolic monopole reveals the existence of a surplus solid angle [@ramadhanpradhana]. It is well-known that a surplus angle is a generic property of Horava-Lifshitz gravity ([@Horava:2009uw; @Horava:2008ih; @Horava:2009if]) black holes (see, for example, [@Kim:2009dq; @Kim:2010vs; @Kim:2010af]). Interestingly enough, the global monopole in Horava-Lifshitz gravity appears to have a deficit solid angle [@Lee:2010er]. 
It might be worth investigating to consider these higher-dimensional noncanonical defects in the framework of Horava-Lifshitz gravity. Another type we obtain consists of factorized solutions, where the spacetime is the direct product of a two-dimensional Lorentzian manifold and a $(D-2)$-dimensional space, both of constant curvature. For $\Lambda>0$ we can have Nariai ($dS_2\times S^{D-2}$) or Plebanski-Hacyan ($M_2\times S^{D-2}$) types, while for $\Lambda=0$ we only have Bertotti-Robinson ($AdS_2\times S^{D-2}$). There is no compactification in the case of $\Lambda<0$, though this conclusion might change should we work on planar or hyperbolic topologies. This possibility is worth further investigation. We also found that all compactifications happen before the monopole reaches the super-critical stage. This is consistent with what we found previously for $D=4$ (see the erratum of [@Prasetyo:2015bga]). We do not obtain any result when $\eta>\eta_{crit}$. This means that the fate of the super-critical power-law global monopole cannot be probed analytically. It is interesting to study numerically the gravity of this higher-dimensional power-law global monopole. It might give bounds on $\eta$ for when the compactification starts to develop, as in the case of the canonical global monopole studied in [@Liebling:1999bb]. There are several things still left undiscussed in this work. Even though we claim that the exact solutions we found are black holes with global defects, we have not investigated the stability condition. It may be that some, or even all, of them are perturbatively unstable. Studying the black hole’s classical stability is a cumbersome task. We shall return to this issue in a subsequent publication. Another thing we have not said anything about is the thermodynamic properties of these solutions. We also only focus on a particular choice of $k$-monopole, a quadratic power-law. Despite the rich spectrum of solutions even in this simple theory, nothing should prevent us from considering other types of noncanonical monopole. In our previous work we studied another type of noncanonicality, namely a DBI kinetic term [@Prasetyo:2016nng]. Its generalization to higher dimensions has also been worked out, and our investigation revealed several interesting properties that are genuine. We decided to report it in a separate publication [@nextpub]. It is also interesting to see quantum tunneling in this simple landscape of vacua, as in [@BlancoPillado:2009di; @BlancoPillado:2009mi]. These issues are being worked on at the moment, and we hope to have results to report soon. Acknowledgements ================ We thank Ardian Atmaja, Jose Blanco-Pillado, Haryanto Siahaan, Anto Sulaksono, and Alexander Vilenkin for useful comments and fruitful discussions. This work is partially supported by the “PITTA" grant from Universitas Indonesia under contract no. 656/UN2.R3.1/HKP.05.00/2017. [9]{} M. Barriola and A. Vilenkin, “Gravitational Field of a Global Monopole,” Phys. Rev. Lett.  [**63**]{} (1989) 341. N. Dadhich, K. Narayan and U. A. Yajnik, “Schwarzschild black hole with global monopole charge,” Pramana [**50**]{} (1998) 307 \[gr-qc/9703034\]. I. Olasagasti and A. Vilenkin, “Gravity of higher dimensional global defects,” Phys. Rev. D [**62**]{} (2000) 044014 \[hep-th/0003300\]. F. R. Tangherlini, “Schwarzschild field in n dimensions and the dimensionality of space problem,” Nuovo Cim.  [**27**]{}, 636 (1963). S. R. Coleman, J. Preskill and F. 
Wilczek, “Quantum hair on black holes,” Nucl. Phys. B [**378**]{} (1992) 175 \[hep-th/9201059\]. E. Babichev, “Global topological k-defects,” Phys. Rev. D [**74**]{} (2006) 085004 \[hep-th/0608071\]. E. Babichev, “Gauge k-vortices,” Phys. Rev. D [**77**]{} (2008) 065021 \[arXiv:0711.0376 \[hep-th\]\]. X. H. Jin, X. Z. Li and D. J. Liu, “Gravitating global k-monopole,” Class. Quant. Grav.  [**24**]{}, 2773 (2007) \[arXiv:0704.1685 \[gr-qc\]\]. I. Prasetyo and H. S. Ramadhan, “Gravity of a noncanonical global monopole: conical topology and compactification,” Gen. Rel. Grav.  [**48**]{} (2016) no.1, 10 \[arXiv:1508.02118 \[gr-qc\]\]. I. Prasetyo and H. S. Ramadhan, “Global spacetime topology outside global $k$-monopole,” J. Phys. Conf. Ser.  [**739**]{} (2016) no.1, 012062. H. S. Ramadhan, I. Prasetyo, and G. I. B. Darman, [*in preparation.*]{} M. Gell-Mann and B. Zwiebach, “Space-time Compactification Due To Scalars,” Phys. Lett. B [**141**]{}, 333 (1984). A. Marunovic and M. Murkovic, “A novel black hole mimicker: a boson star and a global monopole nonminimally coupled to gravity,” Class. Quant. Grav.  [**31**]{} (2014) 045010 \[arXiv:1308.6489 \[gr-qc\]\]. C. Batista, “Generalized Charged Nariai Solutions in Arbitrary Even Dimensions with Multiple Magnetic Charges,” arXiv:1605.05788 \[gr-qc\]. M. Ortaggio, J. Podolsky and M. Zofka, “Robinson-Trautman spacetimes with an electromagnetic field in higher dimensions,” Class. Quant. Grav.  [**25**]{} (2008) 025006 \[arXiv:0708.4299 \[gr-qc\]\]. B. Jensen, “Thermodynamics of black holes with gauge cosmic strings and global monopoles,” Nucl. Phys. B [**453**]{} (1995) 413. H. W. Yu, “Black hole thermodynamics and global monopoles,” Nucl. Phys. B [**430**]{} (1994) 427. F. B. Lustosa, M. E. X. Guimar‹es, C. N. Ferreira and J. L. Neto, “Thermodynamical Analysis of a Black Hole with a Global Monopole Within a Class of a f(R) Gravity,” arXiv:1510.08176 \[hep-th\]. S. H. Mazharimousavi and M. Halilsoy, “Global Monopole-BTZ black hole,” arXiv:1408.3008 \[gr-qc\]. M. Aryal, L. H. Ford and A. Vilenkin, “Cosmic Strings and Black Holes,” Phys. Rev. D [**34**]{} (1986) 2263. A. Achucarro, R. Gregory and K. Kuijken, “Abelian Higgs hair for black holes,” Phys. Rev. D [**52**]{} (1995) 5729 \[gr-qc/9505039\]. H. S. Ramadhan and C. Pradhana, [*in preparation*]{}. L. J. Romans, “Supersymmetric, cold and lukewarm black holes in cosmological Einstein-Maxwell theory,” Nucl. Phys. B [**383**]{} (1992) 395 \[hep-th/9203018\]. V. Cardoso, O. J. C. Dias and J. P. S. Lemos, “Nariai, Bertotti-Robinson and anti-Nariai solutions in higher dimensions,” Phys. Rev. D [**70**]{} (2004) 024002 \[hep-th/0401192\]. H. Nariai, “On some static solutions of Einstein’s gravitational field equations in a spherically symmetric case,” Sci. Rep. Tohoku Univ.  [**34**]{}, 160 (1950). M. Banados, C. Teitelboim and J. Zanelli, “The Black hole in three-dimensional space-time,” Phys. Rev. Lett.  [**69**]{} (1992) 1849 \[hep-th/9204099\]. M. Banados, M. Henneaux, C. Teitelboim and J. Zanelli, “Geometry of the (2+1) black hole,” Phys. Rev. D [**48**]{} (1993) 1506 Erratum: \[Phys. Rev. D [**88**]{} (2013) 069902\] \[gr-qc/9302012\]. D. Birmingham, “Topological black holes in Anti-de Sitter space,” Class. Quant. Grav.  [**16**]{} (1999) 1197 \[hep-th/9808032\]. A. Chamblin, R. Emparan, C. V. Johnson and R. C. Myers, “Charged AdS black holes and catastrophic holography,” Phys. Rev. D [**60**]{} (1999) 064018 \[hep-th/9902170\]. J. F. Plebanński and S. 
Hacyan, “Some exceptional electrovac type D metrics with cosmological constant," J. Math. Phys. [**20**]{}, 1004 (1979). B. Bertotti, “Uniform electromagnetic field in the theory of general relativity,” Phys. Rev.  [**116**]{}, 1331 (1959). I. Robinson, “A Solution of the Maxwell-Einstein Equations,” Bull. Acad. Pol. Sci. Ser. Sci. Math. Astron. Phys.  [**7**]{}, 351 (1959). P. Horava, “Quantum Gravity at a Lifshitz Point,” Phys. Rev. D [**79**]{} (2009) 084008 \[arXiv:0901.3775 \[hep-th\]\]. P. Horava, “Membranes at Quantum Criticality,” JHEP [**0903**]{} (2009) 020 \[arXiv:0812.4287 \[hep-th\]\]. P. Horava, “Spectral Dimension of the Universe in Quantum Gravity at a Lifshitz Point,” Phys. Rev. Lett.  [**102**]{} (2009) 161301 \[arXiv:0902.3657 \[hep-th\]\]. S. S. Kim, T. Kim and Y. Kim, “Surplus Solid Angle: Toward Astrophysical Test of Horava-Lifshitz Gravity,” Phys. Rev. D [**80**]{} (2009) 124002 \[arXiv:0907.3093 \[hep-th\]\]. H. B. Kim and Y. Kim, “Surplus Angle and Sign-flipped Coulomb Force in Projectable Horava-Lifshitz Gravity,” Phys. Rev. D [**82**]{} (2010) 103512 \[arXiv:1009.1201 \[hep-th\]\]. S. S. Kim, T. Kim and Y. Kim, “Surplus Solid Angle in Horava-Lifshitz Gravity,” J. Korean Phys. Soc.  [**57**]{} (2010) 634. T. H. Lee, “Global monopole asymptotic solutions in Horava gravity,” Class. Quant. Grav.  [**27**]{} (2010) 247001 \[arXiv:1003.1863 \[gr-qc\]\]. S. L. Liebling, “Static gravitational global monopoles,” Phys. Rev. D [**61**]{} (2000) 024030 \[gr-qc/9906014\]. J. J. Blanco-Pillado, D. Schwartz-Perlov and A. Vilenkin, “Quantum Tunneling in Flux Compactifications,” JCAP [**0912**]{}, 006 (2009) \[arXiv:0904.3106 \[hep-th\]\]. J. J. Blanco-Pillado, D. Schwartz-Perlov and A. Vilenkin, “Transdimensional Tunneling in the Multiverse,” JCAP [**1005**]{} (2010) 005 \[arXiv:0912.4082 \[hep-th\]\]. [^2]: Note that the lagrangian (\[eq:kbeta\]) cannot be obtained by expanding lagrangian (\[eq:kdbi\]) and truncating it at second order, since the sign in front of the second-order term is different, $b^2\left(1-\sqrt{1+{2\mathcal{X}\over b^2}}\right)=-\mathcal{X}+{1\over2b^2}\mathcal{X}^2+O(\mathcal{X}^3)$. Thus we may perceive the toy model (\[eq:kdbi\]) as distinct from (\[eq:kbeta\]). [^3]: In general, an arbitrary metric solution $ds^2=A(r) dt^2-{dr^2\over A(r)}-r^2d\Omega^2_{D-2}$ in the form of $A(r)=1-\Delta-\sum^{\infty}_{i=1}\alpha_i r^i-\sum^{\infty}_{j=1}\sigma_j r^{-j}$ where $\alpha_i$ and $\sigma_j$ are constants can be brought to $ds^2=A(r) dt^2-{dr^2\over A(r)}-\left(1-\Delta\right)r^2d\Omega^2_{D-2}$ with $A(r)\rightarrow 1-\sum^{\infty}_{i=1}\alpha_i r^i-\sum^{\infty}_{j=1}\sigma_j r^{-j}$, should we simultaneously also transform $\alpha_i\rightarrow\alpha_i\left(1-\Delta\right)^{i-2\over 2}$ and $\sigma_j\rightarrow\sigma_j\left(1-\Delta\right)^{-(j+2)\over 2}$. See [@Marunovic:2013eka]. [^4]: Black hole solutions with other types of topological defects ([*e.g.,*]{} cosmic strings) can be found in [@Aryal:1986sz; @Achucarro:1995nu]. There, the solutions exhibit a similar property of conical deficit angle. [^5]: It is well-known that in $D=3$ with $\Lambda<0$ there exists a $(2+1)$-dimensional black hole solution, called the BTZ black hole [@Banados:1992wn; @Banados:1992gq]. [^6]: This ansatz is similar to, and can be regarded as a solution of, spacetime compactification due to scalars [@GellMann:1984sj]. The latter interpretation can be regarded as being generated by the nonlinearity of the scalar field’s kinetic terms. 
[^7]: Another choice of ansatz, $B=1$, will lead to $$\begin{aligned} ds^2= \begin{cases} \frac{1}{\omega^2}(\sin^2 \chi ~dt^2 - d\chi^2) -C^2 d\Omega_{D-2}^2, & \text{for $\omega^2>0$},\nonumber\\ dt^2 -dr^2 -C^2 d\Omega_{D-2}^2, & \text{for } \omega=0,\nonumber\\ \frac{1}{\omega^2}(\sinh^2 \chi ~dt^2 - d\chi^2) -C^2 d\Omega_{D-2}^2, & \text{for $\omega^2<0$},\nonumber \end{cases}\label{eq:metrsol1}\end{aligned}$$ with $\chi\equiv\omega r$, which is essentially the same Nariai, Plebanski-Hacyan, or Bertotti-Robinson spacetimes as above, written in a different gauge. [^8]: $X_D\rightarrow Y_2\times S^{D-2}$, where $X$ and $Y$ can each stand for de Sitter, Minkowski, or Anti-de Sitter.